Teach handwriting in schools!

I have begun this blogpost many times. I have erased it many times. That is because today’s subject – handwriting – is a very sensitive one for me. Most of what I wrote and subsequently erased was a rant: angry at times, full of profanity at other times. The rest were paragraphs that can be easily categorized as pleading, bargaining, imploring to teach handwriting in American schools. Or, if they already do, to do it less chaotically, more seriously, more consistently, with a LOT more practice and hopefully before the child hits puberty.

Because, contrary to most educators’ beliefs, handwriting is not the same as typing. Nor is printing / manuscript writing the same as cursive writing, but that’s another kettle of fish.

Somehow, sometime, a huge disconnect happened between scholarly researchers and educators. In medicine, the findings of researchers tend to take 10-15 years until they start to be believed and implemented in medical practice. In education… it seems that even findings cemented by Nobel prizes 100 years ago are alien to the ranks of educators. It didn’t use to be like that. I don’t know when educators became distrustful of data and science. When exactly did they start to substitute evidence with “feels right” and “it’s our school’s philosophy”? When did they start using “research shows…” every other sentence without being able to produce a single item, name, citation, paper, anything of said research? When did the educators become so… uneducated? I could write (and rant!) a lot about the subject of handwriting or about what exactly a Masters in Education teaches the educators. But I’m tired of it before I even begin, because I have been doing this for a while now and it’s exhausting. It takes an incredible amount of effort, at least for me, to bring the matter of writing so genteelly, tactfully, and non-threateningly to the attention of the fragile egos of the powers that be in charge of the education of the next generation. Yes, yes, there must be rarae aves among the educators who actually teach and who do listen to or read papers on education from peer-reviewed journals; but I haven’t found them. I wonder who the research in education is for, if neither the educators nor the policy makers have any clue about it.

Here is another piece of education research which will probably go unremarked by the ones it is intended for, i.e. educators and policy makers. Mueller & Oppenheimer (2014) took a closer look at the note-taking habits of 65 Princeton and 260 UCLA students. The students were instructed to take notes in their usual classroom style on five TED talks, each a bit over 15 minutes long, which were “interesting but not common knowledge” (p. 1160). Afterwards, the subjects completed a hard working-memory task and answered factual and conceptual questions about the content of the “lectures”.

The students who took notes in writing (I’ll call them longhanders) performed significantly better at conceptual questions about the lecture content than the ones who typed on laptops (typers). The researchers noticed that the typers tend to write verbatim what is being said, whereas the longhanders don’t do that, which corresponds directly with their performance. In their words,

“laptop note takers’ tendency to transcribe lectures verbatim rather than processing information and reframing it in their own words is detrimental to learning.” (Abstract).

Because typing is faster than writing, the typers can afford not to think about what they type and be in full scribe mode with the brain elsewhere, not listening to a single word of the lecture (believe me, I know, both as a student and as a University professor). By contrast, the longhanders cannot write verbatim and must process the information to extract what’s relevant. In the words of cognitive psychologists everywhere, present in every cognitive psychology textbook written over the last 70 years: depth of processing facilitates learning. Maybe that could be taught in a Masters of Education…

Pet peeves aside, the next step in today’s paper was to see whether forcing the typers to forgo verbatim note-taking and do some information processing might improve learning. It did not, presumably because “the instruction to not take verbatim notes was completely ineffective at reducing verbatim content (p = .97)” (p. 1163).

The laptop typers did take more notes though, by word count. So in the next study, the researchers asked the question “If allowed to study their notes, will the typers benefit from their more voluminous notes and show better performance?” This time the researchers made four 7-minute-long lectures on bats, bread, vaccines, and respiration and tested the subjects 1 week later. The results? The longhanders who studied performed the best. The verbatim typers performed the worst, particularly on conceptual versus factual questions, despite having more notes.
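
Since the core comparison boils down to two groups of students and their scores on conceptual questions, here is what that kind of analysis looks like in practice. This is a minimal sketch only: the group sizes, means, and spreads below are invented for illustration, and it is not the authors’ data or code.

```python
# Minimal sketch (not the authors' analysis): the kind of longhand vs. laptop
# comparison on conceptual-question scores. All numbers are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_per_group = 60                                                  # hypothetical sample size
longhand = rng.normal(loc=0.25, scale=1.0, size=n_per_group)      # z-scored conceptual performance
laptop   = rng.normal(loc=-0.25, scale=1.0, size=n_per_group)

t, p = stats.ttest_ind(longhand, laptop)
# Cohen's d from the pooled SD: the effect size a replication would later meta-analyze
pooled_sd = np.sqrt((longhand.var(ddof=1) + laptop.var(ddof=1)) / 2)
d = (longhand.mean() - laptop.mean()) / pooled_sd
print(f"t({2 * n_per_group - 2}) = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
```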

For the sake of truth and in the spirit of the overall objectivity of this blog, I should note that the paper is not very well done. It has many errors, some of which were statistical and were corrected in a Corrigendum, and some of which are methodological and could be addressed by a bigger study with more carefully parsed-out controls and more controlled conditions, or at least by using the same stimuli across studies. Nevertheless, at least one finding is robust, as it was replicated across all their studies:

“In three studies, we found that students who took notes on laptops performed worse on conceptual questions than students who took notes longhand” (Abstract)

Teachers, teach handwriting! No more “Of course we teach writing, just…, just not now, not today, not this year, not so soon, perhaps not until the child is a teenager, not this grade, not my responsibility, not required, not me…”.

REFERENCE: Mueller, PA & Oppenheimer, DM (2014). The Pen Is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking. Psychological Science, 25(6): 1159–1168. DOI: 10.1177/0956797614524581. ARTICLE | FULLTEXT PDF | NPR cover

By Neuronicus, 1 Sept. 2019

P. S. Some of my followers pointed me to a new preregistered study that failed to replicate this paper (thanks, followers!). Urry et al. (2019) found that the typers produce more words and take notes more verbatim, just as Mueller & Oppenheimer (2014) found, but this neither helped nor hurt the typers, as there wasn’t any difference between conditions when it came to learning without study.

The authors did not address the notion that “depth of processing facilitates learning” though, a notion which is by now theory because it has been replicated ad nauseam in countless papers. Perhaps both papers can be reconciled if a third study were to parse out the attention component of the experiments, perhaps with introspection questionnaires. What I mean is that the typers can do mindless transcription with no depth of processing, resulting in the Mueller & Oppenheimer (2014) observation, or they can actually pay attention to what they type, in which case there is depth of processing and we have the Urry et al. (2019) findings. But the longhanders have no choice but to pay attention because they cannot write verbatim, so we’re back to square one, in my mind: longhanders will do better overall. Handwriting your notes is the safer bet for retention then, because the attention component is not voluntary, but required for the task, as it were, at hand.
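
Urry et al.’s title also mentions mini-meta-analyses across similar studies. For the curious, this is the gist of that machinery: effect sizes from several studies combined by inverse-variance weighting. A toy fixed-effect sketch with made-up numbers, not the actual effect sizes from either paper:

```python
# Toy fixed-effect "mini-meta-analysis": inverse-variance weighting of
# standardized mean differences. The d values and sample sizes are invented.
import numpy as np

d  = np.array([-0.40, -0.10, 0.05])      # hypothetical longhand-vs-laptop effect sizes
n1 = np.array([30, 70, 60])              # hypothetical group sizes, condition 1
n2 = np.array([31, 72, 58])              # hypothetical group sizes, condition 2

var_d   = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))   # approximate variance of d
weights = 1 / var_d
d_pooled  = np.sum(weights * d) / np.sum(weights)
se_pooled = np.sqrt(1 / np.sum(weights))
print(f"pooled d = {d_pooled:.2f}, 95% CI = "
      f"[{d_pooled - 1.96 * se_pooled:.2f}, {d_pooled + 1.96 * se_pooled:.2f}]")
```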

REFERENCE: Urry, H. L. (2019, February 9). Don’t Ditch the Laptop Just Yet: A Direct Replication of Mueller and Oppenheimer’s (2014) Study 1 Plus Mini-Meta-Analyses Across Similar Studies. PsyArXiv. doi:10.31234/osf.io/vqyw6. FREE FULLTEXT PDF

By Neuronicus, 2 Sept. 2019

Education raises intelligence

Intelligence is a dubious concept in psychology and biology because it is difficult to define. In any science, something has a workable definition when it is described by unique testable operations or observations. But “intelligence” has eluded that workable definition, having gone through multiple transformations in the past hundred years or so, perhaps more than any other psychological construct (except “mind”). Despite Binet’s claim more than a century ago that there is such a thing as IQ and that he had a way to test for it, many psychologists and, to a lesser extent, neuroscientists are still trying to figure out what it is. Neuroscientists to a lesser extent because, once the field as a whole could not agree upon a good definition, it moved on to something it could agree upon, i.e. executive functions.

Of course, I generalize trends to entire disciplines and I shouldn’t; not all psychology has a problem with operationalizations and replicability, just as not all neuroscientists are paragons of clarity and good science. In fact, intelligence research seems to be rather vibrant, judging by the number of publications. Who knows, maybe the psychologists have reached a consensus about what the thing is. I haven’t truly kept up with the IQ research, partly because I think the tests used for assessing it are flawed (therefore you don’t know what exactly you are measuring) and tailored for a small segment of the population (Western society, culturally embedded, English-language conceptualizations etc.) and partly because of the circularity of definitions (e.g. How do I know you are highly intelligent? You scored well at IQ tests. What is IQ? What the IQ tests measure).

But the final nail in the coffin of intelligence research for me was a very popular definition by Legg & Hutter in 2007: intelligence is “the ability to achieve goals”. So the poor, sick, and unlucky are just dumb? I find this definition incredibly insulting to the sheer diversity within the human species. Also, this definition is blatantly discriminatory, particularly towards the poor, whose lack of options, access to good education or to a plain healthy meal puts a serious brake on goal achievement. Alternatively, there are people who want for nothing, having been born in opulence and fame, but whose intellectual prowess seems to be lacking, to put it mildly, and who owe their “goal achievement” to an accident of birth or circumstance. The fact that this definition is so accepted for human research soured me on the entire field. But I’m hopeful that the researchers will abandon this definition more suited for computer programs than for human beings; after all, paradigmatic shifts happen all the time.

In contrast, executive functions are more clearly defined. The one I like the most is that given by Banich (2009): “the set of abilities required to effortfully guide behavior toward a goal”. Not to achieve a goal, but to work toward a goal. With effort. Big difference.

So what are those abilities? As I said in the previous post, there are three core executive functions: inhibition/control (both behavioral and cognitive), working memory (the ability to temporarily hold information active), and cognitive flexibility (the ability to think about and switch between two different concepts simultaneously). From these three core executive functions, higher-order executive functions are built, such as reasoning (critical thinking), problem solving (decision-making) and planning.

Now I might have left you with the impression that intelligence = executive functioning and that wouldn’t be true. There is a clear correspondence between executive functioning and intelligence, but it is not a perfect correspondence, and many a paper (and a book or two) has been written to parse out what is which. For me, the most compelling argument that executive functions and whatever it is that the IQ tests measure are at least partly distinct is that brain lesions that affect one may not affect the other. It is beyond the scope of this blogpost to analyze the differences and similarities between intelligence and executive functions. But to clear up just a bit of the confusion I will make this broad statement: executive functions are the foundation of intelligence.

There is another qualm I have with the psychological research into intelligence: a large number of psychologists believe intelligence is a fixed value. In other words, you are born with a certain amount of it and that’s it. It may vary a bit, depending on your life experiences, either increasing or decreasing the IQ, but by and large you stay in the same ball-park. In contrast, most neuroscientists believe all executive functions can be drastically improved with training. All of them.

After this much semi-coherent rambling, here is the actual crux of the post: intelligence can be trained too. Or, I should say, the IQ can be raised with training. Ritchie & Tucker-Drob (2018) performed a meta-analysis looking at over 600,000 healthy participants’ IQ and their education. They confirmed a previously known observation that people who score higher at IQ tests complete more years of education. But why? Is it because highly intelligent people like to learn or because longer education increases IQ? After carefully and statistically analyzing 42 studies on the subject, the authors conclude that the more educated you are, the more intelligent you become. How much more? About 1 to 5 IQ points per additional year of education. Moreover, this effect persists for a lifetime; the gain in intelligence does not diminish with the passage of time or after exiting school.

This is a good paper; its conclusions are statistically robust and consistent. Anybody can check it out, as the article is open access, meaning that not only the text but the entire raw data, methods, everything about it is free for everybody.

For me, the conclusion is inescapable: if you think that we, as a society, or you, as an individual, would benefit from having more intelligent people around you, then you should support free access to good education. Not exactly where you thought I was going with this, eh ;)?

REFERENCE: Ritchie SJ & Tucker-Drob EM. (Aug, 2018, Epub 18 Jun 2018). How Much Does Education Improve Intelligence? A Meta-Analysis. Psychological Science, 29(8):1358-1369. PMID: 29911926, PMCID: PMC6088505, DOI: 10.1177/0956797618774253. ARTICLE | FREE FULLTEXT PDF | SUPPLEMENTAL DATA  | Data, codebooks, scripts (Mplus and R), outputs

Nota bene: I’d been asked what that “1 additional year” of education means. Is it that with every year of education you gain up to 5 IQ points? No, not quite. Assuming I started with a normal IQ, then I’d be at… 26 years of education (not counting postdoc) multiplied by, let’s say, 3 IQ points, which makes me 178. Not bad, not bad at all. :))). No, what the authors mean is that they had access to, among other datasets, a huge cohort dataset from Norway from the moment when the country increased compulsory education by 2 years. So the researchers could look at the IQ tests of people before and after the policy change, tests which were administered to all males at the same age, when they entered compulsory military service. They saw an increase of 1 to 5 IQ points per extra year of education.
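
To make the arithmetic of that natural experiment concrete, here is a back-of-the-envelope sketch. The cohort means are invented; only the logic (IQ difference divided by the number of extra compulsory years) mirrors the design described above, and the real paper used far more sophisticated quasi-experimental methods.

```python
# Back-of-the-envelope version of the policy-change comparison, with made-up numbers:
# mean conscription-age IQ of cohorts schooled before vs. after a reform that
# added 2 compulsory years of education.
mean_iq_before_reform = 99.8    # hypothetical cohort mean
mean_iq_after_reform  = 104.2   # hypothetical cohort mean
extra_years_of_school = 2

iq_points_per_extra_year = (mean_iq_after_reform - mean_iq_before_reform) / extra_years_of_school
print(f"{iq_points_per_extra_year:.1f} IQ points per additional year of education")
# Note the gain is per extra year of schooling around the reform, not a compounding
# bonus for every year of education you ever complete.
```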

By Neuronicus, 14 July 2019

Gaming can improve cognitive flexibility

It occurred to me that my blog is becoming more sanctimonious than I’d like. I have many posts about stuff that’s bad for you: stress, high fructose corn syrup, snow, playing soccer, cats, pesticides, religion, climate change, even licorice. So I thought to balance it a bit with stuff that is good for you. To wit, computer games; albeit not all, of course.

As I am an avid gamer myself, those who know me would hardly be surprised that I found a paper cheering StarCraft. A bit of an old game, but still a solid representative of the real-time strategy (RTS) genre.

About a decade ago, a series of papers emerged which showed that first-person shooters and action games in general improve various aspects of perceptual processing. It makes sense, because in these games split-second decisions and actions make the difference between winning and losing, so the games act as training experience for increased sensitivity to the cues that facilitate said decisions. But what about games where the overall strategy and micromanagement skills are a bit more important than the perceptual skills, a.k.a. RTS? Would these games improve the processes underlying strategic thinking in a changing environment?

Glass, Maddox, & Love (2013) sought to answer this question by asking a few dozen undergraduates with little gaming experience to play a slightly modified StarCraft game for 40 hours (1 hour per day). “StarCraft (published by Blizzard Entertainment, Inc. in 1998) (…) involves the creation, organization, and command of an army against an enemy army in a real-time map-based setting (…) while managing funds, resources, and information regarding the opponent” (p. 2). The participants were all female because the researchers couldn’t find enough male undergraduates who played computer games less than 2 hours per day. The control group had to play The Sims 2 for the same amount of time, a game where “participants controlled and developed a single ‘‘family household’’ in a virtual neighborhood” (p. 3). The researchers cleverly modified the StarCraft game in such a way that they replaced a perceptual component with a memory component (disabled some maps) and created two versions: one more complex (full map, two friendly and two enemy bases) and one less so (half map, one friendly and one enemy base). The difficulty for all games was set at a win rate of 50%.

Before and after the game-playing, the subjects were asked to complete a huge battery of tests designed to assess their memory and various other cognitive processes. By carefully parsing these out, the authors conclude that “forty hours of training within an RTS game that stresses rapid and simultaneous maintenance, assessment, and coordination between multiple information and action sources was sufficient” to improve cognitive flexibility. Moreover, the authors point out that playing on a full map with multiple allies and enemies is conducive to such improvement, whereas playing a less cognitively demanding game, despite similar difficulty levels, was not. Basically, the more stuff you have to juggle, the better your flexibility will be. Makes sense.
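
For the curious, the pre/post logic of such a design can be sketched in a few lines. This is my own toy illustration with simulated scores, not the authors’ analysis pipeline, which used a much larger test battery and more sophisticated modeling.

```python
# Minimal sketch of the pre/post logic: did the full-map StarCraft group improve
# more on a flexibility measure than The Sims control group? All scores are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 20                                             # roughly "a few dozen" split into groups

pre_rts   = rng.normal(100, 15, n)                 # hypothetical flexibility scores, RTS group
post_rts  = pre_rts + rng.normal(8, 10, n)         # simulated training gain
pre_sims  = rng.normal(100, 15, n)                 # control group
post_sims = pre_sims + rng.normal(1, 10, n)        # little to no gain

gain_rts, gain_sims = post_rts - pre_rts, post_sims - pre_sims
t, p = stats.ttest_ind(gain_rts, gain_sims)
print(f"mean gain RTS = {gain_rts.mean():.1f}, Sims = {gain_sims.mean():.1f}; "
      f"t = {t:.2f}, p = {p:.3f}")
```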

My favorite take from this paper though is not only that StarCraft is awesome, obviously, but that “cognitive flexibility is a trainable skill” (p. 5). Let me tell you why that is so grand.

Cognitive flexibility is an important concept in the neuroscience of executive functioning. The same year this paper was published, Diamond published an excellent review in which she neatly identified three core executive functions: inhibition/control (both behavioral and cognitive), working memory (the ability to temporarily hold information active), and cognitive flexibility (the ability to think about and switch between two different concepts simultaneously). From these three core executive functions, higher-order executive functions are built, such as reasoning (critical thinking), problem solving (decision-making) and planning.

Unlike some old views on the immutability of the inborn IQ, each one of the core and higher-order executive functions can be improved upon with training at any point in life and can suffer if something is not right in your life (stress, loneliness, sleep deprivation, or sickness). This paper adds to the growing body of evidence showing that executive functions are trainable. Intelligence, however you want to define it, relies upon executive functions, at least some of them, and perhaps boosting cognitive flexibility might result in a slight increase in the IQ, methinks.

Bottom line: real-time strategy games with huge maps and tons of stuff to do are good for you. Here you go.

The StarCraft images, both foreground and background, are copyrighted to © 1998 Blizzard Entertainment.

REFERENCES:

  1. Glass BD, Maddox WT, Love BC. (7 Aug 2013). Real-time strategy game training: emergence of a cognitive flexibility trait. PLoS One, 2;8(8):e70350. eCollection 2013. PMID: 23950921, PMCID: PMC3737212, DOI: 10.1371/journal.pone.0070350. ARTICLE | FREE FULLTEXT PDF
  2. Diamond A (2013, Epub 27 Sept. 2012). Executive Functions. Annual Review of Psychology, 64:135-68. PMID: 23020641, PMCID: PMC4084861, DOI: 10.1146/annurev-psych-113011-143750. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 15 June 2019

The FIRSTS: Lack of happy events in depression (2003)

My last post focused on depression and it reminded me of something that I keep telling my students, and they all react with disbelief. Well, I tell them a lot of things to which they react with disbelief, to be sure, but this one, I keep thinking, should not generate such incredulity. The thing is: depressed people perceive the same amount of negative events happening to them as healthy people do, but far fewer positive ones. This seems counter-intuitive to non-professionals, who believe depressed people are just generally sadder than average and that’s why they see the half-empty side of the glass of life.

So I dug out the original paper that found this… finding. It’s not as old as you might think. Peeters et al. (2003) paid $30/capita to 86 people, 46 of whom were diagnosed with Major Depressive Disorder and seeking treatment in a community mental health center or outpatient clinic (this is in the Netherlands). None were taking antidepressants or any other drugs, except low-level anxiolytics. Each participant was given a wristwatch that beeped 10 times a day at semi-random intervals of approximately 90 min. When the watch beeped, the subjects had to complete a form within a maximum of 25 min, answering questions about their mood, current events, and their appraisal of those events. The experiment took 6 days, including a weekend.
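
This kind of protocol is known as experience sampling, and the beep schedule is the heart of it. Here is a minimal sketch of how one might generate such a schedule; the waking window and the jitter size are my assumptions, not parameters taken from the paper.

```python
# Sketch of a semi-random beep schedule: 10 beeps/day at intervals of roughly
# 90 minutes, jittered so they are not predictable. Start hour and jitter are assumptions.
import random

def daily_beeps(start_hour=8.0, n_beeps=10, mean_gap_min=90, jitter_min=30, seed=None):
    """Return beep times (in hours of the day) for one day."""
    rng = random.Random(seed)
    times, t = [], start_hour
    for _ in range(n_beeps):
        gap = mean_gap_min + rng.uniform(-jitter_min, jitter_min)  # semi-random interval
        t += gap / 60.0
        times.append(round(t, 2))
    return times

print(daily_beeps(seed=42))   # e.g., [9.3, 10.9, 12.6, ...] hours of the day
```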

The results? Contrary to popular belief, people with depression “did not report more frequent negative events, although they did report fewer positive events and appraised both types of events as more stressful” (p. 208). In other words, depressed people are not seeing half-empty glasses all the time; instead, they don’t see the half-full glasses. Note that they regarded both negative and positive events as stressful. We circle back to the ‘stress is the root of all evil‘ thing.

I would have liked to see if the decrease in positive affect and perceived happy events correlates with increased sadness. The authors say that “negative events were appraised as more unpleasant, more important, and more stressful by the depressed than by the healthy participants” (p. 206), but, curiously, mood was assessed with ratings of feeling anxious, irritated, restless, tense, guilty, irritable, easily distracted, and agitated, and not a single item on depression-iconic feelings: sad, empty, hopeless, worthless.

Nevertheless, it’s a good psychological study with in-depth statistical analyses. I also found this paragraph thought-provoking: “The literature on mood changes in daily life is dominated by studies of daily hassles. The current results indicate that daily uplifts are also important determinants of mood, in both depressed and healthy people” (p. 209).

REFERENCE: Peeters F, Nicolson NA, Berkhof J, Delespaul P, & deVries M. (May 2003). Effects of daily events on mood states in major depressive disorder. Journal of Abnormal Psychology, 112(2):203-11. PMID: 12784829, DOI: 10.1037/0021-843X.112.2.203. ARTICLE

By Neuronicus, 4 May 2019

Apathy

Le Heron et al. (2018) define apathy as a marked reduction in goal-directed behavior. But in order to move, one must be motivated to do so. Therefore, a generalized form of impaired motivation is also a hallmark of apathy.

The authors compiled for us a nice mini-review combing through the literature on motivation in order to identify, if possible, the neurobiological mechanism(s) of apathy. First, they go very succinctly through the neuroscience of motivated behavior. Very succinctly, because there are literally hundreds of thousands of worthwhile pages out there on this subject. Although there are several other models proposed out there, the authors’ new model of motivation includes the usual suspects (dopamine, striatum, prefrontal cortex, anterior cingulate cortex) and you can see it in Fig. 1.

Fig. 1 from Le Heron et al. (2018). The red underlining is mine because I really liked how well and succinctly the authors put a universal truth about the brain: “A single brain region likely contributes to more than one process, but with specialisation”. © Author(s) (or their employer(s)) 2018.

After this intro, the authors go on to showcase findings from the effort-based decision-making field, which suggest that the dopamine-producing neurons from the ventral tegmental area (VTA) are fundamental in choosing an action that requires high effort for high reward versus one requiring low effort for low reward. Contrary to what Wikipedia tells you, a reduction, not an increase, in mesolimbic dopamine is associated with apathy, i.e. preferring the low-effort, low-reward activity.

Next, the authors focus on why the apathetic are… apathetic. Basically, they asked the question: “For the apathetic, is the reward too little or is the effort too high?” By looking at some cleverly designed experiments destined to parse out sensitivity to reward versus sensitivity to effort costs, the authors conclude that the problem lies with the reward: the apathetic don’t find the rewards good enough for them to move. Therefore, the answer is that the reward is too little.
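
For readers who want to see what “parsing out sensitivity to reward versus sensitivity to effort costs” can look like formally, here is a toy effort-discounting choice model. It is my own sketch, not the authors’ model: subjective value is reward scaled by a reward-sensitivity parameter minus effort scaled by an effort-sensitivity parameter, with a softmax choice rule on top.

```python
# Toy effort-discounting choice model (illustrative only): lowering reward
# sensitivity, one candidate mechanism for apathy, shifts choices toward
# the low-effort / low-reward option even though the options are unchanged.
import math

def p_choose_high(reward_hi, effort_hi, reward_lo, effort_lo,
                  k_reward=1.0, k_effort=1.0, temperature=1.0):
    sv_hi = k_reward * reward_hi - k_effort * effort_hi   # subjective value, high option
    sv_lo = k_reward * reward_lo - k_effort * effort_lo   # subjective value, low option
    return 1 / (1 + math.exp(-(sv_hi - sv_lo) / temperature))

# same options, different reward sensitivity
print(p_choose_high(10, 6, 2, 1, k_reward=1.0))   # "motivated": mostly picks the high-effort option
print(p_choose_high(10, 6, 2, 1, k_reward=0.4))   # blunted reward sensitivity: prefers low effort
```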

In a nutshell, apathetic people think “It’s not worth it, so I’m not willing to put in the effort to get it”. But if somehow they are made to judge the reward as good enough, to think “it’s worth it”, they are willing to work their darndest to get it, like everybody else.

The application of this is that in order to get people off the couch and do stuff you have to present them with a reward that they consider worth moving for, in other words to motivate them. To which any practicing psychologist or counselor would say: “Duh! We’ve been saying that for ages. Glad that neuroscience finally caught up”. Because it’s easy to say people need to get motivated, but much, much harder to figure out how.

This was a difficult write for me and even I recognize the quality of this blogpost as crappy. That’s because, more or less, this paper is within my narrow specialization field. There are points where I disagree with the authors (some definitions of terms), there are points where things are way more nuanced than presented (dopamine findings in reward), and finally there are personal preferences (the interpretation of data from Parkinson’s disease studies). Plus, Salamone (the second-to-last author) is a big name in dopamine research, meaning I’m familiar with his past 20 years or so worth of publications, so I can infer certain salient implications (one dopamine hypothesis is about saliency, get it?).

It’s an interesting paper, but it’s definitely written for the specialist. Hurray (or boo, whatever would be your preference) for another model of dopamine function(s).

REFERENCE: Le Heron C, Holroyd CB, Salamone J, & Husain M (26 Oct 2018, Epub ahead of print). Brain mechanisms underlying apathy. Journal of Neurology, Neurosurgery & Psychiatry. pii: jnnp-2018-318265. doi: 10.1136/jnnp-2018-318265. PMID: 30366958 ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 24 November 2018

The FIRSTS: the Dunning–Kruger effect (1999) or the unskilled-and-unaware phenomenon

Much talked about these days in the media, the unskilled-and-unaware phenomenon has been mused upon since, as they say, time immemorial, but was not actually seriously investigated until the ’80s. The phenomenon refers to the observation that incompetents overestimate their competence, whereas the competent tend to underestimate their skill (see Bertrand Russell’s brilliant summary of it).

Although the phenomenon has gained popularity under the name of the “Dunning–Kruger effect”, it is my understanding that whereas the phenomenon refers to the above-mentioned observation, the effect refers to the cause of the phenomenon, namely that the very skills required to make one proficient in a domain are the same skills that allow one to judge proficiency. In the words of Kruger & Dunning (1999),

“those with limited knowledge in a domain suffer a dual burden: Not only do they reach mistaken conclusions and make regrettable errors, but their incompetence robs them of the ability to realize it” (p. 1132).

Today’s paper on the Dunning–Kruger effect is the third in the cognitive biases series (the first was on depressive realism and the second on the superiority illusion).

Kruger & Dunning (1999) took a look at incompetence with the eyes of well-trained psychologists. As usual, let’s start by defining the terms so we are on the same page. The authors tell us, albeit in a footnote on p. 1122, that:

1) incompetence is a “matter of degree and not one of absolutes. There is no categorical bright line that separates ‘competent’ individuals from ‘incompetent’ ones. Thus, when we speak of ‘incompetent’ individuals we mean people who are less competent than their peers”.

and 2) The study is on domain-specific incompetents. “We make no claim that they would be incompetent in any other domains, although many a colleague has pulled us aside to tell us a tale of a person they know who is ‘domain-general’ incompetent. Those people may exist, but they are not the focus of this research”.

That being clarified, the authors chose 3 domains where they believe “knowledge, wisdom, or savvy was crucial: humor, logical reasoning, and English grammar” (p.1122). I know that you, just like me, can hardly wait to see how they assessed humor. Hold your horses, we’ll get there.

The subjects were psychology students, the ubiquitous guinea pigs of most psychology studies since the discipline started to be taught in the universities. Some people in the field even declaim with more or less pathos that most psychological findings do not necessarily apply to the general population; instead, they are restricted to the self-selected group of undergrad psych majors. Just as the biologists know far more about the mouse genome and its maladies than about humans’, so do the psychologists know more about the inner workings of the psychology undergrad’s mind than, say, the average stay-at-home mom. But I digress, as usual.

The humor was assessed thusly: students were asked to rate on a scale from 1 to 11 the funniness of 30 jokes. Said jokes had previously been rated by 8 professional comedians, and that provided the reference scale. “Afterward, participants compared their ‘ability to recognize what’s funny’ with that of the average Cornell student by providing a percentile ranking. In this and in all subsequent studies, we explained that percentile rankings could range from 0 (I’m at the very bottom) to 50 (I’m exactly average) to 99 (I’m at the very top)” (p. 1123). Since the social ability to identify humor may be less rigorously amenable to quantification (despite the comedians’ input, which did not achieve a high interrater reliability anyway), the authors also chose a task that requires more intellectual muscles: logical reasoning, whose test consisted of 20 logical problems taken from a Law School Admission Test. Afterward, the students estimated their general logical ability compared to their classmates and their test performance. Finally, another batch of students answered 20 grammar questions taken from the National Teacher Examination preparation guide.

In all three tasks,

  • Everybody thought they were above average, showing the superiority illusion.
  • But the people in the bottom quartile (the lowest 25%), dubbed incompetent (or unskilled), overestimated their abilities the most, by approximately 50 percentile points. They were also unaware that, in fact, they scored the lowest.
  • In contrast, people in the top quartile underestimated their competence, but not to the same degree as the bottom quartile overestimated theirs: only by about 10-15 percentile points (see Fig. 1).

I wish the paper showed scatter-plots with a fitted regression line instead of the quartile graphs without error bars, so I could judge the data for myself. I mean, everybody thought they were above average? Not a single one out of more than three hundred students thought they were kinda… meh? The authors did not find any gender differences in any of the experiments.
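
Since I brought it up, this is the kind of plot I mean: each student’s actual percentile against their self-estimated percentile, with a fitted regression line and the identity line for perfect calibration. The data below are simulated by me to mimic the reported pattern, not taken from the paper.

```python
# Simulated perceived-vs-actual percentile plot: the bottom of the distribution
# overestimates a lot, the top underestimates a little. Not the paper's data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
n = 65                                                      # a class-sized sample, simulated
actual = rng.uniform(0, 100, n)                             # hypothetical actual percentiles
perceived = np.clip(55 + 0.25 * (actual - 50) + rng.normal(0, 12, n), 0, 99)

slope, intercept = np.polyfit(actual, perceived, 1)         # simple linear fit
x = np.linspace(0, 100, 100)
plt.scatter(actual, perceived, alpha=0.6, label="students (simulated)")
plt.plot(x, intercept + slope * x, label="fitted regression")
plt.plot(x, x, linestyle="--", label="perfect calibration")
plt.xlabel("actual percentile")
plt.ylabel("perceived percentile")
plt.legend()
plt.show()
```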

Next, the authors tested the hypothesis about the unskilled that “the same incompetence that leads them to make wrong choices also deprives them of the savvy necessary to recognize competence, be it their own or anyone else’s” (p. 1126). And they did that by having both the competents and the incompetents see the answers that their peers gave on the tests. Indeed, the incompetents not only failed to recognize competence, but they continued to believe they had performed very well in the face of contrary evidence. In contrast, the competents adjusted their ratings after seeing their peers’ performance, so they no longer underestimated themselves. In other words, the competents learned from seeing others’ mistakes, but the incompetents did not.

Based on this data, Kruger & Dunning (1999) argue that the incompetents are so because they lack the skills to recognize competence and error in themselves or others (jargon: lack of metacognitive skills). The competents, on the other hand, underestimate themselves because they assume everybody else did as well as they did, but when shown the evidence that other people performed poorly, they become accurate in their self-evaluations (jargon: the false consensus effect, a.k.a. the social-projection error).

So, the obvious implication is: if incompetents learn to recognize competence, does that also translate into them becoming more competent? The last experiment in the paper attempted to answer just that. The authors got 70 students to complete a short (10-min) logical-reasoning training session, while another 70 students did something unrelated for 10 min. The data showed that the trained students not only improved their self-assessments (still showing the superiority illusion, though), but they also improved their performance. Yays all around, all is not lost, there is hope left in the world!

This is an extremely easy read. I totally recommend it to non-specialists. Compare Kruger & Dunning (1999) with Pennycook et al. (2017): both talk about the same subject and both sets of authors are redoubtable personages in their fields. But while the former is a pleasant, leisurely read, the latter lacks mundane operationalizations and requires serious familiarization with the literature and its jargon.

Since Kruger & Dunning (1999) is behind the paywall of the infamous APA website (infamous because they don’t even let you see the abstract, and even with institutional access it is difficult to extract the papers out of them, as if they own the darn things!), write to me at scientiaportal@gmail.com specifying that you need it for educational purposes and promise not to distribute it for financial gain, and thou shalt have its .pdf. As always. Do not, under any circumstance, use a sci-hub server to obtain this paper illegally! Actually, follow me on Twitter @Neuronicus to find out exactly which servers to avoid.

REFERENCES:

1) Kruger J, & Dunning D. (Dec. 1999). Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6):1121-1134. PMID: 10626367. ARTICLE

2) Russell, B. (1931-1935). “The Triumph of Stupidity” (10 May 1933), p. 28, in Mortals and Others: American Essays, vol. 2, published in 1998 by Routledge, London and New York, ISBN 0415178665. FREE FULLTEXT By GoogleBooks | FREE FULLTEXT of “The Triumph of Stupidity”

P.S. I personally liked this example from the paper for illustrating what lack of metacognitive skills means:

“The skills that enable one to construct a grammatical sentence are the same skills necessary to recognize a grammatical sentence, and thus are the same skills necessary to determine if a grammatical mistake has been made. In short, the same knowledge that underlies the ability to produce correct judgment is also the knowledge that underlies the ability to recognize correct judgment. To lack the former is to be deficient in the latter” (p. 1121-1122).

By Neuronicus, 10 January 2018

The superiority illusion

Following up on my promise to cover a few papers about self-deception, the second in the series is about the superiority illusion, another cognitive bias (the first was about depressive realism).

Yamada et al. (2013) sought to uncover the origins of the ubiquitous belief that oneself is “superior to average people along various dimensions, such as intelligence, cognitive ability, and possession of desirable traits” (p. 4363). The sad statistical truth is that MOST people are average; that’s the whole definition of ‘average’, really… But most people think they are superior to others, a.k.a. the ‘above-average effect’.

Twenty-four young males underwent resting-state fMRI and PET scanning. The first scanner is of the magnetic resonance type and tracks where you have most of the blood going in the brain at any particular moment. More blood flow to a region is interpreted as that region being active at that moment.

The word ‘functional’ means that the subject is performing a task while in the scanner and the resultant brain image corresponds to what the brain is doing at that particular moment in time. On the other hand, ‘resting-state’ means that the individual did not do any task in the scanner; s/he just sat nice and still on the warm pads listening to the various clicks, clacks, bangs & beeps of the scanner. The subjects were instructed to rest with their eyes open. Good instruction, given that many subjects fall asleep in resting-state MRI studies, even in the terrible racket that the coils make, which can sometimes reach 125 dB. Let me explain: an MRI is a machine that generates a huge magnetic field (60,000 times stronger than Earth’s!) by shooting rapid pulses of electricity through a coiled wire, called a gradient coil. These pulses of electricity or, in other words, the rapid on-off switchings of the electrical current, make the gradient coil vibrate very loudly.

A PET scanner functions on a different principle. The subject receives a shot of a radioactive substance (called tracer) and the machine tracks its movement through the subject’s body. In this experiment’s case, the tracer was raclopride, a D2 dopamine receptor antagonist.

The behavioral data (meaning the answers to the questionnaires) showed that, curiously, the superiority illusion belief was not correlated with anxiety or self-esteem scores, but, not curiously, it was negatively correlated with helplessness, a measure of depression. Makes sense, especially from the view of depressive realism.

The imaging data suggest that dopamine binding to its striatal D2 receptors attenuates the functional connectivity between the left sensorimotor striatum (SMST, a.k.a. postcommissural putamen) and the dorsal anterior cingulate cortex (dACC). And this state of affairs gives rise to the superiority illusion (see Fig. 1).

Fig. 1. The superiority illusion arises from the suppression of the dorsal anterior cingulate cortex (dACC) – putamen functional connection by the dopamine coming from the substantia nigra/ ventral tegmental area complex (SN/VTA) and binding to its D2 striatal receptors. Credits: brain diagram: Wikipedia, other brain structures and connections: Neuronicus, data: Yamada et al. (2013, doi: 10.1073/pnas.1221681110). Overall: Public Domain
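
For non-imaging readers, “functional connectivity” here boils down to something quite simple: the correlation between the resting-state signal time courses of two regions. A bare-bones sketch with simulated time series (real pipelines add preprocessing, motion and nuisance regression, Fisher z-transforms, and so on):

```python
# Illustration only (not the authors' pipeline): functional connectivity as the
# Pearson correlation between two regions' resting-state time series.
import numpy as np

rng = np.random.default_rng(3)
n_timepoints = 240                                   # e.g., 8 min at an assumed TR of 2 s

shared = rng.normal(size=n_timepoints)               # common fluctuation driving both regions
striatum = shared + rng.normal(scale=1.0, size=n_timepoints)   # noisy ROI time series
dacc     = shared + rng.normal(scale=1.0, size=n_timepoints)

connectivity = np.corrcoef(striatum, dacc)[0, 1]     # Pearson r = functional connectivity
print(f"striatum-dACC functional connectivity r = {connectivity:.2f}")
```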

This was a frustrating paper. I cannot tell if it has methodological issues or is just poorly written. For instance, I have to assume that the dACC they’re talking about is bilateral and not ipsilateral to their SMST, meaning left. As a non-native English speaker myself, I guess I should cut the authors a break for consistently misspelling ‘commissure’ or for other grammatical errors, for fear of being accused of hypocrisy, but here you have it: it bugged me. Besides, mine is a blog and theirs is a published peer-reviewed paper. (Full disclosure: I do get editorial help from native English speakers when I publish for real and, except for a few personal style quirks, I fully incorporate their suggestions). So a little editorial help would have gone a long way toward making the reading more pleasant. What else? Ah, the results are not clearly explained anywhere; it looks like the authors rely on obviousness, a bad move if you want to be understood by people slightly outside your field. From the first figure it looks like only 22 subjects out of 24 showed the superiority illusion, but the authors included 24 in the imaging analyses, or so it seems. The subjects were 23.5 ± 4.4 years old, meaning that not all subjects had the frontal regions of the brain fully developed: there are clear anatomical and functional differences between a 19-year-old and a 27-year-old.

I’m not saying it is a bad paper – I have covered bad papers and this is not one of them; I’m saying it was frustrating to read and it took me a while to figure out some things. Honestly, I shouldn’t even have covered it, but I spent some precious time going through it and its supplementals, what with me not being an imaging dude, so I said the hell with it, I’ll finish it; so here you have it :).

By Neuronicus, 13 December 2017

REFERENCE: Yamada M, Uddin LQ, Takahashi H, Kimura Y, Takahata K, Kousa R, Ikoma Y, Eguchi Y, Takano H, Ito H, Higuchi M, Suhara T (12 Mar 2013). Superiority illusion arises from resting-state brain networks modulated by dopamine. Proceedings of the National Academy of Sciences of the United States of America, 110(11):4363-4367. doi: 10.1073/pnas.1221681110. ARTICLE | FREE FULLTEXT PDF 

The FIRSTS: The roots of depressive realism (1979)

There is a rumor stating that depressed people see the world more realistically and the rest of us are – to put it bluntly – deluded optimists. A friend of mine asked me if this is true. It took me a while to find the origins of this claim, but after I found it and figured out that the literature has a term for the phenomenon (‘depressive realism’), I realized that there is a whole plethora of studies on the subject. So the next few posts will be centered, more or less, on the idea of self-deception.

It was 1979 when Alloy & Abramson published a paper whose title contained the phrase ‘Sadder but Wiser’, even if it was followed by a question mark. The experiments they conducted are simple, but the theoretical implications are large.

The authors divided several dozen male and female undergraduate students into a depressed group and a non-depressed group based on their Beck Depression Inventory scores (a widely used and validated questionnaire for self-assessing depression). Each subject “made one of two possible responses (pressing a button or not pressing a button) and received one of two possible outcomes (a green light or no green light)” (p. 447). Various conditions presented the subjects with various degrees of control over what the button does, from 0 to 100%. After the experiments, the subjects were asked to estimate their control over the green light, how many times the light came on regardless of their behavior, what percentage of trials the green light came on when they pressed or didn’t press the button, respectively, and how they felt. In some experiments, the subjects were winning or losing money when the green light came on.

Verbatim, the findings were that:

“Depressed students’ judgments of contingency were surprisingly accurate in all four experiments. Nondepressed students, on the other hand, overestimated the degree of contingency between their responses and outcomes when noncontingent outcomes were frequent and/or desired and underestimated the degree of contingency when contingent outcomes were undesired” (p. 441).

In plain English, it means that if you are not depressed, when you have some control and bad things are happening, you believe you have no control. And when you have no control but good things are happening, then you believe you have control. If you are depressed, it does not matter, you judge your level of control accurately, regardless of the valence of the outcome.
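
The quantity the subjects were effectively asked to judge is the contingency between response and outcome, often written as deltaP = P(outcome | response) - P(outcome | no response). A tiny worked example with made-up trial counts (my formalization of the task, not the paper’s own analysis):

```python
# Worked example of the judged contingency. With these invented counts there is
# zero actual control, yet the light comes on often: exactly the condition in which
# nondepressed subjects reported an illusion of control.
def delta_p(light_given_press, press_trials, light_given_nopress, nopress_trials):
    return light_given_press / press_trials - light_given_nopress / nopress_trials

actual_control = delta_p(light_given_press=30, press_trials=40,
                         light_given_nopress=15, nopress_trials=20)   # 0.75 - 0.75 = 0.0
print(f"actual contingency (deltaP) = {actual_control:.2f}")          # no control at all
# A nondepressed subject in this frequent-and-desired-outcome condition might still
# report substantial control; depressed subjects' estimates tended to sit near the true value.
```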

Such an illusion of control is a defensive mechanism that surely must have adaptive value by, for example, allowing the non-depressed to bypass a sense of guilt when things don’t work out and to increase self-esteem when they do. This is fascinating, particularly since it is corroborated by findings that people receiving gambling wins or life successes like landing a good job – rewards that at least in one case are demonstrably attributable to chance – nonetheless believe that they are due to some personal attributes that make them special, that make them deserving of such rewards. (I don’t remember the reference for this one so don’t quote me on it. If I find it, I’ll post it; it’s something about self-entitlement, I think). That is not to say that life successes are not largely attributable to the individual; they are. But, statistically speaking, there must be some that are due to chance alone, and yet most people feel like they are the direct agents for changes in luck.

Another interesting point is that Alloy & Abramson also tried to figure out how exactly their subjects reasoned when they asserted their level of control, through some clever post-experiment questionnaires. Long story short (the paper is 45 pages long), the illusion of control shown by nondepressed subjects in the no-control condition was the result of incorrect logic, that is, faulty reasoning.

In summary, the distilled-down version of depressive realism – that non-depressed people see the world through rose-colored glasses – is correct only in certain circumstances. The illusion of control applies only in particular conditions: overestimation of control when good things are happening and underestimation of control when bad things are happening. But, by and large, it does seem that depression clears the fog a bit.

Of course, it has been almost 40 years since the publication of this paper and of course it has its flaws. Many replications and replications-with-caveats and meta-analyses and reviews and opinions and alternative hypotheses have been confirmed and infirmed and then confirmed again with alterations, so there is still a debate out there about the causes/ functions/ ubiquity/ circumstantiality of the depressive realism effect. One thing seems to be constant though: the effect exists.

I will leave you with the ponderings of Alloy & Abramson (1979):

“A crucial question is whether depression itself leads people to be “realistic” or whether realistic people are more vulnerable to depression than other people” (p. 480).

REFERENCE: Alloy LB, & Abramson LY (Dec. 1979). Judgment of contingency in depressed and nondepressed students: sadder but wiser? Journal of Experimental Psychology: General, 108(4): 441-485. PMID: 528910. http://dx.doi.org/10.1037/0096-3445.108.4.441. ARTICLE | FULLTEXT PDF via ResearchGate

By Neuronicus, 30 November 2017

Play-based or academic-intensive?

The title of today’s post wouldn’t make any sense to anybody who isn’t a preschooler’s parent or teacher in the USA. You see, on the west side of the Atlantic there is a debate on whether a play-based curriculum for preschool is more advantageous than a more academic-based one. Preschool age is 3 to 4 years; kindergarten starts at 5.

So what does academia even look like for someone who hasn’t yet mastered the skill of wiping their own behind? I’m glad you asked. Roughly, an academic preschool program is one that emphasizes math concepts and early literacy, whereas a play-based program focuses less or not at all on these activities; instead, the children are allowed to play together in big or small groups or separately. The former kind of program has been linked with stronger cognitive benefits, the latter with nurturing social development. The supporters of each program accuse the other of neglecting one or the other aspect of the child’s development, namely cognitive or social.

The paper that I am covering today says that it “does not speak to the wider debate over learning-through-play or the direct instruction of young children. We do directly test whether greater classroom time spent on academic-oriented activities yield gains in both developmental domains” (Fuller et al., 2017, p. 2). I’ll let you be the judge.

Fuller et al. (2017) assessed the cognitive and social benefits of different programs in an impressive cohort of over 6,000 preschoolers. The authors looked at many variables:

  • children who attended any form of preschool and children who stayed home;
  • children who received more preschool education (high dosage, defined as >20 hours/week) and less (low dosage, defined as <20 hours/week);
  • children who attended academic-oriented preschools (i.e. spent time on each of the following tasks at least 3 – 4 times a week: letter names, writing, phonics and counting manipulatives) and non-academic preschools.

The authors employed a battery of tests to assess the children’s preliteracy skills, math skills and social-emotional status (i.e. the dependent variables). And then they conducted a lot of statistical analyses in the true spirit of well-trained psychologists.

The main findings were:

1) “Preschool exposure [of any form] has a significant positive effect on children’s math and preliteracy scores” (p. 6).

2) The earlier the child entered preschool, the stronger the cognitive benefits.

3) Children attending high-dose academic-oriented preschools displayed greater cognitive proficiencies than all the other children (for the actual numbers, see Table 7, pg. 9).

4) “Academic-oriented preschool yields benefits that persist into the kindergarten year, and at notably higher magnitudes than previously detected” (p. 10).

5) Children attending academic-oriented preschools displayed no social development disadvantages compared with children who attended low- or non-academic preschool programs. Nor did the non-academic-oriented preschools show an improvement in social development (except for Latino children).

Now do you think that Fuller et al. (2017) gave you any more information in the play vs. academic debate, given that their “findings show that greater time spent on academic content – focused on oral language, preliteracy skills, and math concepts – contributes to the early learning of the average child at magnitudes higher than previously estimated” (p. 10)? And remember that they did not find any significant social advantages or disadvantages for any type of preschool.

I realize (or hope, rather) that most pre-k teachers are not the Draconian thou-shall-not-play-do-worksheets type, nor are they the let-kids-play-for-three-hours-while-the-adults-gossip-in-a-corner types. Most are probably combining elements of learning-through-play and directed-instruction in their programs. Nevertheless, there are (still) programs and pre-k teachers that clearly state that they employ play-based or academic-based programs, emphasizing the benefits of one while vilifying the other. But – surprise, surprise! – you can do both. And, it turns out, a little academia goes a long way.

So, next time you choose a preschool for your kid, go with the data, not what your mommy/daddy gut instinct says and certainly be very wary of preschool officials who, when you ask them for data to support their curriculum choice, tell you that that’s their ‘philosophy’, they don’t need data. Because, boy oh boy, I know what philosophy means and it ain’t that.

By Neuronicus, 12 October 2017

Reference: Fuller B, Bein E, Bridges M, Kim, Y, & Rabe-Hesketh, S. (Sept. 2017). Do academic preschools yield stronger benefits? Cognitive emphasis, dosage, and early learning. Journal of Applied Developmental Psychology, 52: 1-11, doi: 10.1016/j.appdev.2017.05.001. ARTICLE | New York Times cover | Reading Rockets cover (offers a fulltext pdf) | Good cover and interview with the first author on qz.com

Apparently, scientists don’t know the risks & benefits of science

If you want to find out how bleach works or what keeps airplanes in the air or why the rainbow is always the same sequence of colors or whether it’s dangerous to let your kid play with snails, would you ask a scientist or your local priest?

The answer is very straightforward for most people. It’s just that what is straightforward to one portion of the people is viewed by the other portion as corkscrewedness. Or rather just plain dumb.

About 5 years ago, Cacciatore et al. (2016) asked 2806 American adults how much they trust the information provided by religious organizations, university scientists, industry scientists, and science/technology museums. They also asked them about their age, gender, race, socioeconomic status, and income, as well as about Facebook use, religiosity, ideology, and attention to science-y content.

Almost 40% of the sample described themselves as Evangelical Christians, one of the largest religious groups in the USA. These people said they trust their religious organizations more than scientists (regardless of who employs these scientists) to tell the truth about the risks and benefits of technologies and their applications.

The data yielded more information, like the fact that younger, richer, liberal, and white people tended to trust scientists more than their counterparts did. Finally, Republicans were more likely to report a religious affiliation than Democrats.

I would have thought that everybody would prefer to take advice about science from a scientist. Wow, what am I saying, I just realized what I typed… Of course people take health advice from homeopaths all the time, from politicians rather than environmental scientists, from alternative medicine quacks rather than from doctors, from the non-college-educated rather than from geneticists. From this perspective then, the results of this study are not surprising, just very, very sad… I just didn’t think that the gullible can also be grouped by political affiliation. I thought the affliction attacked both sides of the ideological aisle in a democratic manner.

Of course, this is a survey study, therefore a lot more work is needed to properly generalize these results, from expanding the survey sections (beyond the meager 1 or 2 questions per section) to validation and replication. Possibly, even addressing different aspects of science because, for instance, climate change is a much more touchy subject than, say, apoptosis. And replace or get rid of the “Scientists know best what is good for the public” item; seriously, I don’t know any scientist, including me, who would answer yes to that question. Nevertheless, the trend is, like I said, sad.

Reference:  Cacciatore MA, Browning N, Scheufele DA, Brossard D, Xenos MA, & Corley EA. (Epub ahead of print 25 Jul 2016). Opposing ends of the spectrum: Exploring trust in scientific and religious authorities. Public Understanding of Science. PMID: 27458117, DOI: 10.1177/0963662516661090. ARTICLE | NPR cover

By Neuronicus, 7 December 2016

Earliest memories

I found a rather old-ish paper that attempts to settle a curiosity regarding human memory: how far back can we remember?

MacDonald et al. (2000) got 96 participants to fill in a 15-minute questionnaire about their demographics and their earliest memories. The New Zealand subjects were in their early twenties, a third of Maori descent, a third of European descent and the last third of Asian descent.

The Maori had the earliest memories, some of them from before they turned 1 year old, though the mean was 2 years and 8 months. Next came the Europeans with a mean of 3 years and a half, followed by the Asians with a mean of 4 years and 9 months. Overall, most earliest memories seem to date from between 3 and 4 years of age. There was no gender difference except in the Asian group, where the females reported much later memories, around 6 years.

The subjects were also required to indicate the source of the memory as being personal recollection, family story or photographs. About 86% reported it as personal recollection. The authors argue that even without the remaining 14% the results look the same. I personally would have left those 14% out if they really didn’t make a difference; it would have made the results much neater.

There are a few caveats that one must keep in mind with this kind of study, the questionnaire study. One of them is the inherent veracity problem: you rely on human honesty because there is no way to check the data for truth. Whether a memory is true or false would not matter for this study, but whether it is a personal recollection or a family story would matter. So take the results at face value. Besides, human memory is extremely easy to manipulate, so some participants may actually believe that they ‘remember’ an event when in fact it was learned much later from relatives. I also have very early memories, and while one of them, I believe, was told ad nauseam by family members at every family gathering, so many times that I incorporated it as an actual recollection, there are a couple that I couldn’t tell you for the life of me whether I remember them truly or whether they too have been subjected to family re-reminiscing.

Another issue is the very small sample size of some sub-groups. The authors divided their participants into so many subgroups (whether they spoke English first, whether they were raised mainly by the mother, etc.) that some subgroups ended up having 2 or 3 members, which is not enough to make a statistical judgement. Which also leads me to multiple comparisons adjustments, which should have been more visible in the paper.
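For readers wondering what such an adjustment looks like in practice, here is a minimal sketch of the Holm-Bonferroni procedure, one common correction when you run many comparisons. The p-values below are invented purely for illustration; they are not reported by MacDonald et al. (2000), and Holm-Bonferroni is just one of several possible corrections.

```python
# Minimal sketch of the Holm-Bonferroni correction for multiple comparisons.
# The p-values below are hypothetical, NOT taken from MacDonald et al. (2000).

def holm_bonferroni(pvalues, alpha=0.05):
    """Return a list of booleans: True where the null hypothesis is rejected."""
    m = len(pvalues)
    # Sort p-values while remembering their original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    rejected = [False] * m
    for rank, idx in enumerate(order):      # rank 0 is the smallest p-value
        threshold = alpha / (m - rank)      # Holm's step-down threshold
        if pvalues[idx] <= threshold:
            rejected[idx] = True
        else:
            break                           # stop at the first non-rejection
    return rejected

# Say we ran 6 subgroup comparisons (ethnicity, gender, first language, etc.):
pvals = [0.003, 0.021, 0.048, 0.012, 0.30, 0.07]
print(holm_bonferroni(pvals))  # [True, False, False, False, False, False]
```

Notice how p-values that look “significant” one at a time (0.021, 0.048) no longer survive once you account for having made six comparisons; that is exactly the kind of bookkeeping I would have liked to see spelled out.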

So not exactly the best paper ever written. Nevertheless, it’s an interesting paper in that even if it doesn’t really establish (in my opinion) when most people have their earliest true memories, it does point to cultural differences in individuals’ earliest recollections. The authors speculate that this may be due to the emphasis put on detailed stories about personal experiences told by the mother in the early years in some cultures (here Maori) versus a lack of these stories in other cultures (here Asian).


Reference: MacDonald S, Uesiliana K, & Hayne H. (Nov 2000). Cross-cultural and gender differences in childhood amnesia. Memory, 8(6): 365-376. PMID: 11145068, DOI: 10.1080/09658210050156822. ARTICLE | FULLTEXT PDF

By Neuronicus, 28 November 2016


Video games and depression

There’s a lot of talk these days about the harm or benefit of playing video games, a lot of the time ignoring the issue of what kind of video games we’re talking about.

Merry et al. (2012) designed a game for helping adolescents with depression. The game is called SPARX (Smart, Positive, Active, Realistic, X-factor thoughts) and is based on cognitive behavioral therapy (CBT) principles.

CBT has been proven to be more efficacious than other forms of therapy, like psychoanalysis, psychodynamic therapy, transpersonal therapy, and so on, in treating (or at least alleviating) a variety of mental disorders, from depression to anxiety, from substance abuse to eating disorders. Its aim is to identify maladaptive thoughts (the ‘cognitive’ bit) and behaviors (the ‘behavior’ bit) and to change those thoughts and behaviors in order to feel better. It is more active and more focused than other therapies, in the sense that during the course of a CBT session the patient and therapist discuss one problem and tackle it.

SPARX is a simple interactive fantasy game with 7 levels (Cave, Ice, Volcano, Mountain, Swamp, Bridgeland, Canyon) in which the purpose is to fight the GNATs (Gloomy Negative Automatic Thoughts) by mastering several techniques, like breathing and progressive relaxation, and by acquiring skills, like scheduling and problem solving. You can customize your avatar, and you get a guide throughout the game that also assesses your progress and gives you real-life quests, a.k.a. therapeutic homework. If the player does not show the expected improvements after each level, s/he is directed to seek help from a real-life therapist. Luckily, the researchers also employed the help of true game designers, so the game looks at least half-decent and engaging, not the lame, worst-graphics-ever sort of thing I was kind of expecting.

To see if their game helps with depression, Merry et al. (2012) enrolled 187 adolescents (aged 12-19 years) who sought help for depression in an intervention program; half of the subjects played the game for about 4-7 weeks, and the other half did traditional CBT with a qualified therapist for the same amount of time. The patients were assessed for depression at regular intervals before, during, and after the therapy, up to 3 months post-therapy. The conclusion?

SPARX “was at least as good as treatment as usual in primary healthcare sites in New Zealand” (p. 8)

Not bad for an RPG! The remission rates were higher in the SPARX group than in the treatment-as-usual group. Also, the majority of participants liked the game and would recommend it. Additionally, SPARX was more effective than treatment as usual for the participants who were less depressed, as opposed to the ones who scored higher on the depression scales.
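For the curious, “at least as good as” has a precise meaning in a non-inferiority trial: the confidence interval for the difference in outcomes must not dip below a pre-specified margin. Here is a minimal sketch of that logic. The remission counts and the margin below are hypothetical, chosen for illustration only; they are not the numbers or the exact analysis reported by Merry et al. (2012).

```python
# Minimal sketch of a non-inferiority check on remission rates.
# All numbers below are hypothetical, NOT the actual results of Merry et al. (2012).
from math import sqrt

def noninferiority_check(x_new, n_new, x_usual, n_usual, margin, z=1.96):
    """New treatment is non-inferior if the lower bound of the 95% CI
    for (new - usual) remission proportion stays above -margin."""
    p_new, p_usual = x_new / n_new, x_usual / n_usual
    diff = p_new - p_usual
    se = sqrt(p_new * (1 - p_new) / n_new + p_usual * (1 - p_usual) / n_usual)
    lower, upper = diff - z * se, diff + z * se
    return diff, (lower, upper), lower > -margin

# Hypothetical example: 40/90 remissions with the game vs 25/90 with usual care,
# and a non-inferiority margin of 10 percentage points.
diff, ci, non_inferior = noninferiority_check(40, 90, 25, 90, margin=0.10)
print(f"difference = {diff:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), "
      f"non-inferior: {non_inferior}")
```

In this made-up example the lower bound of the interval sits well above the margin, which is the formal sense in which a new intervention gets declared “at least as good as” the old one.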

And now, coming back to my intro point, the fact that this game seems to be beneficial does not mean all of them are. There are studies showing that some games have deleterious effects on the developing brain. In the same vein, the fact that some shoddy company sells games that are supposed to boost your brain function (I always wondered which function…) doesn’t mean they are actually good for you. Without research to back up the claims, anybody can say anything and it becomes a “Buyer Beware!” game. They may call it cognitive enhancement, memory boosting, or some other brainy catchphrase, but without that research it’s nothing but placebo in the best-case scenario. So it gives me hope – and great pleasure – that some real psychologists at a real university are developing a video game and then doing the necessary research to validate it as a helping tool before marketing it.


Oh, an afterthought: this paper is 4 years old, so I wondered what happened in the meantime. Is it on the market or what? In the research databases I couldn’t find much, except that it was tested this year on a Dutch population with pretty much similar results. But Wikipedia tells us that it was released in 2013 and is free online for New Zealanders! The game’s website says it may become available in other countries as well.

Reference: Merry SN, Stasiak K, Shepherd M, Frampton C, Fleming T, & Lucassen MF. (18 Apr 2012). The effectiveness of SPARX, a computerised self help intervention for adolescents seeking help for depression: randomised controlled non-inferiority trial. The British Medical Journal, 344:e2598. doi: 10.1136/bmj.e2598. PMID: 22517917, PMCID: PMC3330131. ARTICLE | FREE FULLTEXT PDF  | Wikipedia page | Watch the authors talk about the game

By Neuronicus, 15 October 2016

The FIRSTS: Theory of Mind in non-humans (1978)

Although any farmer or pet owner throughout the ages would probably agree that animals can understand the intentions of their owners, it was not until 1978 that this knowledge was put to a scientific test.

Premack & Woodruff (1978) performed a very simple experiment in which they showed videos to an adult female chimpanzee named Sarah, involving humans facing various problems, from simple (can’t reach a banana) to complex (can’t get out of the cage). Then, the chimp was shown pictures of the human with the tool that solved the problem (a stick to reach the banana, a key for the cage), along with pictures where the human was performing actions that were not conducive to solving his predicament. The experimenter left the room while the chimp made her choice. When she did, she rang a bell to summon the experimenter back into the room, who then examined the chimp’s choice and told her whether it was right or wrong. Regardless of the choice, the chimp was rewarded with her favorite food. The chimp’s choices were almost always correct when the actor was her favorite trainer, but not so much when the actor was a person she disliked.

Because “no single experiment can be all things to all objections, but the proper combination of results from [more] experiments could decide the issue nicely” (p. 518), the researchers did some more experiments, variations of the first one, designed to figure out what the chimp was thinking. The authors then go on to discuss their findings at length in light of the two dominant theories of the time, mentalism and behaviorism, ruling in favor of the former.

Of course, the paper has some methodological flaws that would not pass the rigors of today’s reviewers. That’s why it has been replicated multiple times in more refined ways. Nor is the distinction between behaviorism and cognitivism a valid one anymore, things having turned out to be, as usual, more complex and intertwined than that. Thirty years later, the consensus was that chimps do indeed have a theory of mind in the sense that they understand the intentions of others, but they lack an understanding of false beliefs (Call & Tomasello, 2008).


References:

1. Premack D & Woodruff G (Dec. 1978). Does the chimpanzee have a theory of mind? The Behavioral and Brain Sciences, 1 (4): 515-526. DOI: 10.1017/S0140525X00076512. ARTICLE

2. Call J & Tomasello M (May 2008). Does the chimpanzee have a theory of mind? 30 years later. Trends in Cognitive Sciences, 12(5): 187-192. PMID: 18424224 DOI: 10.1016/j.tics.2008.02.010. ARTICLE  | FULLTEXT PDF

By Neuronicus, 20 August 2016

Cats and uncontrollable bursts of rage in humans

 

That many domestic cats carry the parasite Toxoplasma gondii is no news. Nor is the fact that 30-50% of the global population is infected with it, mainly as a result of contact with cat feces.

The news is that individuals with toxoplasmosis are a lot more likely to have episodes of uncontrollable rage. It was previously known that toxoplasmosis is associated with some psychological disturbances, like personality changes or cognitive impairments. In this new longitudinal study (that is, a study that spanned more than a decade), published three days ago, Coccaro et al. (2016) tested 358 adults with or without psychiatric disorders for toxoplasmosis. They also submitted the subjects to a battery of psychological tests for anxiety, impulsivity, aggression, depression, and suicidal behavior.

The results showed that all the subjects who were infected with T. gondii had higher scores on aggression, regardless of their mental status. Among the people with toxoplasmosis, the aggression scores were highest in the patients previously diagnosed with intermittent explosive disorder, a little lower in patients with non-aggressive psychiatric disorders, and finally lower (but still significantly higher than in non-infected people) in healthy people.

The authors are adamant in pointing out that this is a correlational study, therefore no causality direction can be inferred. So don’t kick out your felines just yet. However, as the CDC points out, a little more care when changing the cat litter or a little more vigorous washing of the kitchen counters would not hurt anybody and may protect against T. gondii infection.


Reference: Coccaro EF, Lee R, Groer MW, Can A, Coussons-Read M, & Postolache TT (23 March 2016). Toxoplasma gondii Infection: Relationship With Aggression in Psychiatric Subjects. The Journal of Clinical Psychiatry, 77(3): 334-341. DOI: 10.4088/JCP.14m09621. Article Abstract | FREE Full Text | The Guardian cover

By Neuronicus, 26 March 2016

Younger children in a grade are more likely to be diagnosed with ADHD

A few weeks ago I was drawing attention to the fact that some children diagnosed with ADHD do not have attention deficits. Instead, a natural propensity for seeking more stimulation may have led to overdiagnosing and overmedicating these kids.

Another reason for the dramatic increase in ADHD diagnoses over the past couple of decades may stem from the increasingly age-inappropriate demands that we place on children. Namely, children in the same grade can be as much as 1 year apart in chronological age, and at these young ages 1 year means quite a lot in terms of cognitive and behavioral development. So if we set a standard of expectations based on how the older children behave, then the younger children in the same grade will fall short of these standards simply because they are too immature to live up to them.

So what does the data say? Two studies, Morrow et al. (2012) and Chen et al. (2016), checked whether the younger children in a given grade are more likely to be diagnosed with ADHD and/or medicated for it. The first study looked at almost 1 million Canadian children aged 6-12 years, and the second investigated almost 400,000 Taiwanese children aged 4-17 years.

In Canada, the cut-off date for starting school is December 31, which means that in the first grade a child born in January is almost a year older than a child born in December. Morrow et al. (2012) concluded that the children born in December were significantly more likely to receive a diagnosis of ADHD than those born in January (30% more likely for boys and 70% for girls). Moreover, the children born in December were more likely to be given an ADHD medication prescription (41% more likely for boys and 77% for girls).
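To make the “30% more likely” phrasing concrete, here is a tiny sketch of how such a relative likelihood (a risk ratio) is computed. The counts are invented for illustration; they are not the Canadian cohort data, and the published estimates are typically adjusted for other factors rather than raw ratios like this.

```python
# Minimal sketch of a risk ratio ("X% more likely") calculation.
# Counts are invented for illustration; they are NOT the Morrow et al. (2012) data.

def risk_ratio(cases_a, total_a, cases_b, total_b):
    """Risk of diagnosis in group A relative to group B."""
    return (cases_a / total_a) / (cases_b / total_b)

# Hypothetical cohort: boys born in December vs boys born in January.
rr = risk_ratio(cases_a=650, total_a=10000,   # December-born: 6.5% diagnosed
                cases_b=500, total_b=10000)   # January-born: 5.0% diagnosed
print(f"risk ratio = {rr:.2f}")               # 1.30, i.e. "30% more likely"
```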

In Taiwan, the cut-off date for starting school is August 31. Similar to the Canadian study, Chen et al. (2016) found that the children born in August were more likely to be diagnosed with ADHD and to receive ADHD medication than the children born in September.

Now let’s be clear on one thing: ADHD is no trivial matter. It is a real disorder. It’s an incredibly debilitating disease for both children and their parents. Impulsivity, inattention, and hyperactivity are the hallmarks of almost every activity the child engages in, leading to very poor school performance (the majority cannot get a college degree) and a hard family life, plus a lifetime of stigma that brings its own “gifts”, such as marginalization, loneliness, depression, anxiety, poor eating habits, etc.

The data presented above favor the “immaturity hypothesis”, which posits that the behaviors expected of some children are not performed not because something is wrong with the children, but because they are simply too immature to perform those behaviors. That does not mean that every child diagnosed with ADHD will just grow out of it; the researchers simply point out that ignoring the chronological age of the child, coupled with prematurely entering a highly stressful and demanding system such as school, might lead to ADHD overdiagnosis.

Bottom line: ignoring the chronological age of the child might explain some of the increase in the prevalence of ADHD through overdiagnosis (in the US alone, the rise is from 6% of children diagnosed with ADHD in 2000 to 11-15% in 2015).

References:

  1. Morrow RL, Garland EJ, Wright JM, Maclure M, Taylor S, & Dormuth CR. (17 Apr 2012, Epub 5 Mar 2012). Influence of relative age on diagnosis and treatment of attention-deficit/hyperactivity disorder in children. Canadian Medical Association Journal, 184 (7), 755-762, doi: 10.1503/cmaj.111619. Article | FREE PDF 
  2. Chen M-H, Lan W-H, Bai Y-M, Huang K-L, Su T-P, Tsai S-J, Li C-T, Lin W-C, Chang W-H, Pan T-L, Chen T-J, & Hsu J-W. (10 Mar 2016). Influence of Relative Age on Diagnosis and Treatment of Attention-Deficit Hyperactivity Disorder in Taiwanese Children. The Journal of Pediatrics [Epub ahead of print]. DOI: 10.1016/j.jpeds.2016.02.012. Article | FREE PDF

By Neuronicus, 14 March 2016

Learning chess can improve math skills


Twenty-two years ago to the day, on January 30, 1994, Peter Leko became the world’s youngest chess grandmaster, at the age of 14.

A proficiency in chess is often linked with higher intelligence, that is, the more intelligent you are, the more likely you are to be good at chess. This assumption probably has its roots in the observation that chess does not allow for random chance or physical attributes, as most games do. So it follows that if you are good at it, it must be… intelligence, although there are at least as many studies, if not more, showing that practice has more of an impact on your chess ability than your native IQ score.

Personally, as one who always looks askance whenever there is talk about intelligence quotients and intelligence tests, I have serious doubts that any of these papers measured what they claimed to measure. And that is because I find the construct “intelligence” poorly defined and, as a direct consequence, hard to measure.

That being said, Sala et al. (2015) wanted to see if chess practice can enhance mathematical problem-solving abilities in young students. The authors divided 560 pupils (8 to 11 years old) into two groups: one group received chess training for 10-15 hours (1 or 2 hours per week) and an option to use a chess program, while the other group did not participate in any chess activities. The experiment took 3 months.

Both groups were tested before and after training with a mathematical problem-solving test battery and a chess ability test.

“Results show a strong correlation between chess and math scores, and a higher improvement in math in the experimental group compared with the control group. These results foster the hypothesis that even a short-time practice of chess in children can be a useful tool to enhance their mathematical abilities.” (Sala et al., 2015, Abstract).

This is all nice and well, were it not for the fact that the experimental group had significantly more pupils who already knew how to play chess (193 out of 309, 62%) compared to the control group (72 out of 251, 29%). To give credit to the authors, they acknowledge this limitation of the study, but, surprisingly, they do not rerun their stats without the “I-already-know-chess” subjects…
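Rerunning the analysis without the chess-savvy pupils would be a trivial sensitivity check. Here is a minimal sketch of what that could look like; the DataFrame is hypothetical (the raw Sala et al. data are not available to me), and a Welch t-test on score gains is just one simple way to do the comparison, not necessarily the analysis the authors used.

```python
# Minimal sketch of the sensitivity analysis the authors did not run:
# compare math-score gains between groups after excluding pupils who
# already knew chess. The data below are hypothetical, NOT from Sala et al. (2015).
import pandas as pd
from scipy.stats import ttest_ind

df = pd.DataFrame({
    "group":      ["chess", "chess", "chess", "control", "control", "control"],
    "knew_chess": [True,    False,   False,   False,     True,      False],
    "math_gain":  [4.0,     3.5,     2.8,     1.9,       2.5,       1.2],
})

# Keep only pupils with no prior chess knowledge.
naive = df[~df["knew_chess"]]

chess_gains   = naive.loc[naive["group"] == "chess",   "math_gain"]
control_gains = naive.loc[naive["group"] == "control", "math_gain"]

t, p = ttest_ind(chess_gains, control_gains, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")
```

If the group difference survives in the chess-naive subsample, the authors' conclusion gets a lot more convincing; if it doesn't, the prior-knowledge imbalance is doing the heavy lifting.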

Nevertheless, even if the robustness and the arguments are a little on the shoddy side, the paper points to a possibly fruitful line of research: that of additional tools to improve school performance by incorporating games and playtime into instructors’ and parents’ teaching arsenal.

Reference: Sala G, Gorini A, & Pravettoni G (23 July 2015). Mathematical Problem-Solving Abilities and Chess. An Experimental Study on Young Pupils. SAGE Open, 1-9. DOI: 10.1177/2158244015596050. Article | FREE FULLTEXT PDF

By Neuronicus, 30 January 2016

I am blind, but my other personality can see


This is a truly bizarre report.

A woman named BT suffered an accident when she was 20 years old and became blind. Thirteen years later she was referred to Bruno Waldvogel (one of the two authors of the paper) for psychotherapy by a psychiatric clinic that had diagnosed her with dissociative identity disorder, formerly known as multiple personality disorder.

The diagnosis of cortical blindness had been established after extensive ophthalmologic tests, in which she appeared blind but not because of damage to the eyes. So, by inference, it had to be damage to the brain. Remarkably (we shall see why later), she had no oculomotor reflexes in response to glare. Moreover, visual evoked potentials (VEPs, an EEG measure recorded over the occipital region) showed no activity in the primary visual area of the brain (V1).

During the four years of psychotherapy, BT showed more than 10 distinct personalities. One of them, a teenage male, started to see words on a magazine and pretty soon could see everything. With the help of hypnotherapeutic techniques, more and more personalities started to see.

“Sighted and blind states could alternate within seconds” (Strasburger & Waldvogel, 2015).

The VEPs showed no or very little activity when the blind personality was “on” and showed normal activity when the sighted personality was “on”. Which is extremely curious, because similar studies in people with psychogenic blindness or under anesthesia showed intact VEPs.
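For readers unfamiliar with how a VEP is obtained: it is essentially the average of many short EEG segments time-locked to a visual stimulus, so the stimulus-evoked response survives while unrelated brain activity averages out. Below is a minimal numpy sketch of that averaging step, on simulated data (obviously not the patient’s recordings), just to show why a flat average means the evoked response is absent or strongly suppressed.

```python
# Minimal sketch of how a visual evoked potential (VEP) is extracted:
# average many EEG epochs time-locked to the visual stimulus so that the
# evoked response remains while background activity cancels out.
# The data here are simulated, not recordings from the patient.
import numpy as np

rng = np.random.default_rng(0)
fs = 500                                   # sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)              # 500 ms epoch, time in seconds
n_trials = 400

# Simulated evoked response: a positive deflection around ~100 ms (a "P100"-like bump).
evoked = 10e-6 * np.exp(-((t - 0.1) ** 2) / (2 * 0.01 ** 2))

# Each epoch = evoked response + much larger background EEG noise.
epochs = evoked + 20e-6 * rng.standard_normal((n_trials, t.size))

vep = epochs.mean(axis=0)                  # averaging reveals the evoked response

print(f"peak of averaged VEP: {vep.max() * 1e6:.1f} microvolts "
      f"at {t[vep.argmax()] * 1000:.0f} ms after the stimulus")
```

In a single epoch the evoked bump is buried in noise; after averaging hundreds of epochs it stands out clearly, which is why an absent peak in the “blind” state is such a striking finding.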

There are a couple of conclusions to draw from this: 1) BT was misdiagnosed, as it is unlikely that there was any brain damage, given that some personalities could see, and 2) multiple personalities – or dissociative identities, as they are now called – are real in the sense that they can be separated at the biological level.

The visual pathway that mediates conscious visual perception. a) A side view of the human brain with the retinogeniculocortical pathway shown inside (blue). b) A horizontal section through the brain exposing the same pathway.

Fascinating! The next question is, obviously, what’s the mechanism behind this? The authors say that it’s very likely the LGN (the lateral geniculate nucleus of the thalamus), which is the only relay between the retina and V1 (see pic). It could be. It’s certainly possible. Unfortunately, so are other putative mechanisms, as 10% of the neurons in the retina also project to the superior colliculus, and some others go directly to the hypothalamus, completely bypassing the thalamus. Also, because it is impossible to time the switching between personalities precisely, even if you put the woman in a scanner it would be difficult to establish whether the switching to blindness mode is the result of bottom-up or top-down modulation (i.e., the visual information never reaches V1, it reaches V1 and is suppressed there, or some signal from other brain areas inhibits V1 completely, so it is unresponsive when the visual information arrives).

Despite the limitations, I would certainly try to get the woman into an fMRI. C’mon, people, this is an extraordinary subject and if she gave permission for the case study report, surely she would not object to the scanning.

Reference: Strasburger H & Waldvogel B (Epub 15 Oct 2015). Sight and blindness in the same person: Gating in the visual system. PsyCh Journal. doi: 10.1002/pchj.109.  Article | FULLTEXT PDF | Washington Post cover

By Neuronicus, 29 November 2015

Is religion turning perfectly normal children into selfish, punitive misanthropes? Seems like it.

Screenshot from “Children of the Corn” (Director: Fritz Kiersch, 1984)

The main argument that religious people have against atheism or agnosticism is this: without a guiding deity and a set of rules of behavior, how can one trust a non-religious person to behave morally? In other words, there is no incentive for the non-religious to behave in a societally accepted manner. Or so it seemed. Past tense. There has been some evidence showing that, contrary to expectations, non-religious people are less prone to violence and deliver more lenient punishments compared to religious people. Also, the non-religious show just as much charitable behavior as the religious folks, despite the latter self-reporting that they participate in more charitable acts. But these studies were done with adults, usually with non-ecological tests. Now a truly first-of-its-kind study finds something even more interesting, something that calls into question the fundamental basis of Christianity’s and Islam’s moral justifications.

Decety et al. (2015) administered a test of altruism and a test of moral sensitivity to 1170 children, aged 5-12, from the USA, Canada, Jordan, Turkey, and South Africa. Based on the parents’ reports about their household practices, the children were divided into 280 Christian, 510 Muslim, and 323 Not Religious (the remaining 57 children belonged to other religions but were not included in the analyses due to lack of statistical power). The altruism test consisted of letting children choose their favorite 10 out of 30 stickers to keep; but because there weren’t enough stickers for everybody, each child could give some of her/his stickers to another, less fortunate child who would not get to play the sticker game (the researcher gave the child privacy while choosing). Altruism was calculated as the number of stickers given to the fictive child. In the moral sensitivity task, children watched 10 videos of a child pushing, shoving, etc. another child, either intentionally or accidentally, and were then asked to rate the meanness of the action and to judge the amount of punishment it deserved.
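To make the measure concrete: each child’s altruism score is simply the count of stickers shared (0 to 10), and the group comparison boils down to comparing those counts across the three groups. Here is a minimal sketch with made-up scores (not the Decety et al. data); a one-way ANOVA is just one illustrative way to compare the group means, not necessarily the analysis the authors ran.

```python
# Minimal sketch of comparing the altruism measure across groups:
# each score is the number of stickers (out of 10) a child shared.
# Scores below are invented for illustration, NOT the Decety et al. (2015) data.
from scipy.stats import f_oneway

christian     = [3, 2, 4, 1, 3, 2, 5, 2]
muslim        = [2, 3, 1, 2, 4, 2, 3, 1]
not_religious = [4, 5, 3, 6, 4, 5, 3, 4]

stat, p = f_oneway(christian, muslim, not_religious)
print(f"mean shared: {sum(christian)/len(christian):.1f}, "
      f"{sum(muslim)/len(muslim):.1f}, {sum(not_religious)/len(not_religious):.1f}; "
      f"one-way ANOVA F = {stat:.2f}, p = {p:.4f}")
```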

And… the highlighted results are:

  1. “Family religious identification decreases children’s altruistic behaviors.
  2. Religiousness predicts parent-reported child sensitivity to injustices and empathy.
  3. Children from religious households are harsher in their punitive tendencies.”
From Current Biology (DOI: 10.1016/j.cub.2015.09.056). Copyright © 2015 Elsevier Ltd. NOTE: ns. means non-significant difference.

Parents’ educational level did not predict children’s behavior, but the level of religiosity did: the more religious the household, the less altruistic, the more judgmental, and the harsher in their punishments the children were. Also, in stark contrast with the actual results, the religious parents viewed their children as more empathetic and more sensitive to injustices than did the non-religious parents. This was a linear relationship: the more religious the parents, the higher the self-reports of socially desirable behavior, but the lower the child’s objective empathy and altruism scores.

Childhood is an extraordinarily sensitive period for learning desirable social behavior. So… is religion really turning perfectly normal children into selfish, vengeful misanthropes? What anybody does at home is their business, but maybe we could make a secular schooling paradigm mandatory to level the field (i.e. forbid religion teachings in school)? I’d love to read your comments on this.

Reference: Decety J, Cowell JM, Lee K, Mahasneh R, Malcolm-Smith S, Selcuk B, & Zhou X. (16 Nov 2015, Epub 5 Nov 2015). The Negative Association between Religiousness and Children’s Altruism across the World. Current Biology. DOI: 10.1016/j.cub.2015.09.056. Article | FREE PDF | Science Cover

By Neuronicus, 5 November 2015

Only the climate change scientists are interested in evidence. The rest is politics

Satellite image of clouds created by the exhaust of ship smokestacks (2005). Credit: NASA. License: PD.

Medimorec & Pennycook (2015) analyzed the language used in two prominent reports regarding climate change. Climate change is no longer a subject of scientific debate, but of political discourse. Nevertheless, it appears that there are a few scientists who are skeptical about climate change. As part of a conservative think tank, they formed the “Nongovernmental International Panel on Climate Change (NIPCC) as an alternative to the Intergovernmental Panel on Climate Change (IPCC). In 2013, the NIPCC authored Climate Change Reconsidered II: Physical Science (hereafter referred to as ‘NIPCC’; Idso et al. 2013), a scientific report that is a direct response to IPCC’s Working Group 1: The Physical Science Basis (hereafter referred to as ‘IPCC’; Stocker et al. 2013), also published in 2013” (Medimorec & Pennycook, 2015).

The authors are not climate scientists, but psychologists armed with nothing but three text analysis tools: the Coh-Metrix text analyzer, Linguistic Inquiry and Word Count, and the AntConc 3.3.5 concordancer toolkit. They do not even fully understand the two very lengthy and highly technical reports; as they put it,

“it is very unlikely that non-experts (present authors included) would have the requisite knowledge to be able to distinguish the NIPCC and IPCC reports based on the validity of their scientific arguments”.

So they proceeded to count nouns, verbs, adverbs, and the like. The results: the IPCC used more formal language, more nouns, more abstract words, more infrequent words, more complex syntax, and a lot more tentative language (‘possible’, ‘probable’, ‘might’) than the NIPCC. Which is ironic, since the climate science proponents are the ones accused of alarmism and of trumpeting catastrophes. On the contrary, their language was much more restrained, perhaps out of fear of controversy or, just as likely, because they are scientists and very afraid to put their reputations at stake by risking type I errors.
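The “tentative language” measure is conceptually simple: count hedging words per unit of text. Here is a hand-rolled, toy sketch of that idea in plain Python; the dedicated tools the authors actually used (LIWC, Coh-Metrix) are far more sophisticated, and the hedge list and sample sentences below are my own illustrative inventions, not their materials.

```python
# Minimal sketch of a "tentative language" count: hedging words per 1000 words.
# This is a toy version of what dedicated tools like LIWC measure; the hedge
# list and the sample sentences are my own, not the authors' materials.
import re

HEDGES = {"may", "might", "could", "possible", "possibly", "probable",
          "probably", "likely", "suggests", "appears", "uncertain"}

def tentativeness(text):
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in HEDGES)
    return 1000 * hits / len(words)          # hedges per 1000 words

cautious  = "Warming is likely to continue and may possibly accelerate."
assertive = "Warming will not continue and the models are simply wrong."

print(f"cautious text:  {tentativeness(cautious):.0f} hedges per 1000 words")
print(f"assertive text: {tentativeness(assertive):.0f} hedges per 1000 words")
```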

In the authors’ words (I know, I am citing them for the third time in four paragraphs, but I really enjoyed their eloquence),

“the IPCC authors used more conservative (i.e., more cautious, less explicit) language to present their claims compared to the authors of the NIPCC report […]. The language style used by climate change skeptics suggests that the arguments put forth by these groups warrant skepticism in that they are relatively less focused upon the propagation of evidence and more intent on discrediting the opposing perspective”.

And this comes just from text analysis…

Reference: Medimorec, S. & Pennycook, G. (Epub 30 August 2015). The language of denial: text analysis reveals differences in language use between climate change proponents and skeptics. Climatic Change, doi:10.1007/s10584-015-1475-2. Article | Research Gate full text PDF

By Neuronicus, 4 November 2015

Are you in love with an animal?

Sugar Candy Hearts by Petr Kratochvil taken from publicdomainpictures. License: PD

Ren et al. (2015) gave a sweet drink (Fanta), sweet food (Oreos), salty-vinegar food (Lays chips), or water to 422 people and then asked them about their romantic relationship or, if they didn’t have one, about a hypothetical relationship. For the hitched people, the foods or drinks had no effect on the evaluation of their relationship. In contrast, the singles who received sweets were more eager to initiate a relationship with a potential partner and evaluated a hypothetical relationship more favorably (how do you do that? I mean, if it’s hypothetical… why wouldn’t you evaluate it favorably from your singleton perspective?). Anyway, the singles who got sweets tended to see things a little more on the rosy side, as opposed to the taken ones.

The rationale for doing this experiment is that metaphors alter our perceptions (fair enough). Given that many terms of endearment refer to sweet taste, like “Honey”, “Sugar”, or “Sweetie”, maybe this is not accidental or just a metaphor and, if we manipulate the taste, we manipulate the perception. Wait, what? Now re-read the finding above.

The authors take their results as supporting the view that “metaphorical thinking is one fundamental way of perceiving the world; metaphors facilitate social cognition by applying concrete concepts (e.g., sweet taste) to understand abstract concepts (e.g., love)” (p. 916).

So… I am left with many questions, the first being: if the sweet appellatives in a romantic relationship stem from an extrapolation of the concrete taste of sweet to an abstract concept like love, then, I wonder, what kind of concrete concept underlies the prevalence of “baby” as a term of endearment? Do I dare speculate what the metaphor stands for? Should people who are referred to as “baby” by their partners alert the authorities to possible pedophilic ideation? And what do we do about the non-English cultures (apparently non-Germanic or non-Mandarin too) in which the lovey-dovey terms tend to cluster around various small objects (e.g. tassels), vegetables (e.g. pumpkin), cute onomatopoeia (I am at a loss for transcription here), or baby animals (e.g. chick, kitten, puppy)? Believe me, such cultures do exist and are numerous. “Excuse me, officer, I suspect my partner is in love with an animal. Oh, wait, that didn’t come out right…”

Ok, maybe I missed something with this paper, as halfway through I failed to maintain proper focus due to an intruding – and disturbing! – image of a man, a chicken, and a tassel. So make what you will of the authors’ words when they say that their study “not only contributes to the literature on metaphorical thinking but also sheds light on an understudied factor that influences relationship initiation, that of taste” (p. 918). Oh, metaphors, how sweetly misleading you are…

Please use the “Comments” section below to share the strangest metaphor used as term of endearment you have ever heard in a romantic relationship.

Reference: Ren D, Tan K, Arriaga XB, & Chan KQ (Nov 2015). Sweet love: The effects of sweet taste experience on romantic perceptions. Journal of Social and Personal Relationships, 32(7): 905 – 921. DOI: 10.1177/0265407514554512. Article | FREE FULLTEXT PDF

By Neuronicus, 21 October 2015