Chloroquine-induced psychosis

In the past few days, a new hot subject has gripped the attention of various media and worried medical doctors, as if they didn't have enough to deal with: chloroquine. That is because the President of the U.S.A., Donald Trump, endorsed chloroquine as a treatment for COVID-19, calling it a "game changer", despite his very own director of the National Institute of Allergy and Infectious Diseases (NIAID), Dr. Anthony Fauci, emphatically and vehemently denying that the promise of (hydroxy)chloroquine is anything more than anecdotal (see the White House briefing transcript here).


Many medical doctors spoke out urging caution about the drug, particularly about the combination the President endorses: hydroxychloroquine + azithromycin. As I understand it, this combo can be lethal, as it can lead to fatal arrhythmia.

As for (hydroxy)chloroquine's potential to help treat COVID-19, the jury is still out. Far out. Meaning that there have been a few interesting observations of the drugs working in a Petri dish (Liu et al., 2020, Wang et al., 2020), but, as any pharma company knows, there is a long and perilous way from Petri dishes to pharmacies. To be precise, only 1 in 5,000 drugs gets from pre-clinical trials to approval, and it takes about 12 years for this process to be completed (Kraljevic et al., 2004). The time is so long not because of red tape, as some would deplore, but because it takes time to see what the drug does in humans (Phase 0), what doses are safe and don't kill you (Phase 1), whether it works at all for the intended disease (Phase 2), how it compares with other drugs and what the long-term side effects are (Phase 3) and, finally, what the risks and benefits of the drug are after approval (Phase 4). While we could probably get rid of Phases 0 and 4 when there is such a pandemic, there is no way I would submit my family to anything that hasn't passed Phases 1, 2, and 3. And those take years. With all the money that a nation-state has, it would still take 18 months to do it semi-properly.

Luckily for all of us, chloroquine is a very old and established anti-malarial medicine, and as such we can safely dispense with Phases 0, 1, and 4. So we can start Phase 2 with (hydroxy)chloroquine. And that is exactly what the WHO and several others are doing right now. But we don't have enough data. We haven't done it yet. So one can hope as much as one wants, but that doesn't make it go faster.

Unfortunately – and here we get to the crux of the post – following the President's endorsement, many started to hoard chloroquine. Particularly the rich, who can afford to "convince" an MD to write them a script for it. In countries where chloroquine is sold without prescription, like Nigeria, where it is used for arthritis, people rushed to clear the pharmacies, and some didn't just stockpile it, they took it without reason and without knowing the dosage. And they died. [EDIT, 23 March 2020. If you think that would never happen in the land of the brave, think again, as the first death from irresponsibly taking chloroquine just happened in the USA]. In addition, the chloroquine hoarding in the US by those who can afford it (it is about $200 for 50 pills) led to a lack of supply for those who really need it, like lupus or rheumatology patients.

For those who blindly hoard or take chloroquine without prescription, I have a little morsel of knowledge to impart. Remember I am not an MD; I hold a PhD in neuroscience. So I’ll tell you what my field knows about chloroquine.

Both chloroquine and hydroxychloroquine can cause severe psychosis.

That's right. More than 7.1% of people who took chloroquine as prophylaxis or for treatment of malaria developed "mental and neurological manifestations" (Bitta et al., 2017). "Hydroxychloroquine was associated with the highest prevalence of mental neurological manifestations" (p. 12). The phenomenon is well reported, actually having its own syndrome name: "chloroquine-induced psychosis". It was observed more than 50 years ago, in 1962 (Mustakallio et al., 1962). The mechanisms are unclear, with several hypotheses being put forward, like the drugs disrupting NMDA transmission, calcium homeostasis, vacuole exocytosis, or some other mysterious immune or transport-related mechanism. The symptoms are so acute, so persistent, and so diverse that more than one brain neurotransmitter system must be affected.

Chloroquine-induced psychosis has a sudden onset, within 1-2 days of ingestion. The syndrome presents with paranoid ideation, persecutory delusions, hallucinations, fear, confusion, delirium, altered mood, personality changes, irritability, insomnia, suicidal ideation, and violence (Biswas et al., 2014, Mascolo et al., 2018). All of this at moderately low or therapeutically recommended doses (Good & Shader, 1982). One or two pills can be lethal in toddlers (Smith & Klein-Schwartz, 2005). The symptoms persist long after the drug ingestion has stopped (Maxwell et al., 2015).

Still want to take it “just in case”?


P.S. A clarification: the chemical difference between hydroxychloroquine and chloroquine is only one hydroxyl group (OH). Both are antimalarials and both have been tested in vitro against SARS-CoV-2. There are slight differences between them in terms of toxicity, safety, and even mechanisms, but for the purposes of this post I have treated them as one drug, since both produce psychosis.

REFERENCES:

1) Biswas PS, Sen D, & Majumdar R. (2014, Epub 28 Nov 2013). Psychosis following chloroquine ingestion: a 10-year comparative study from a malaria-hyperendemic district of India. General Hospital Psychiatry, 36(2): 181–186. doi: 10.1016/j.genhosppsych.2013.07.012, PMID: 24290896 ARTICLE

2) Bitta MA, Kariuki SM, Mwita C, Gwer S, Mwai L, & Newton CRJC (2 Jun 2017). Antimalarial drugs and the prevalence of mental and neurological manifestations: A systematic review and meta-analysis. Version 2. Wellcome Open Research, 2(13): 1-20. PMCID: PMC5473418, PMID: 28630942, doi: 10.12688/wellcomeopenres.10658.2 ARTICLE|FREE FULLTEXT PDF

3) Good MI & Shader RI (1982). Lethality and behavioral side effects of chloroquine. Journal of Clinical Psychopharmacology, 2(1): 40–47. doi: 10.1097/00004714-198202000-00005, PMID: 7040501. ARTICLE

4) Kraljevic S, Stambrook PJ, & Pavelic K (Sep 2004). Accelerating drug discovery. EMBO Reports, 5(9): 837–842. doi: 10.1038/sj.embor.7400236, PMID: 15470377, PMCID: PMC1299137. ARTICLE | FREE FULLTEXT PDF

5) Mascolo A, Berrino PM, Gareri P, Castagna A, Capuano A, Manzo C, & Berrino L. (Oct 2018, Epub 9 Jun 2018). Neuropsychiatric clinical manifestations in elderly patients treated with hydroxychloroquine: a review article. Inflammopharmacology, 26(5): 1141-1149. doi: 10.1007/s10787-018-0498-5, PMID: 29948492. ARTICLE

6) Maxwell NM, Nevin RL, Stahl S, Block J, Shugarts S, Wu AH, Dominy S, Solano-Blanco MA, Kappelman-Culver S, Lee-Messer C, Maldonado J, & Maxwell AJ (Jun 2015, Epub 9 Apr 2015). Prolonged neuropsychiatric effects following management of chloroquine intoxication with psychotropic polypharmacy. Clinical Case Reports, 3(6): 379-87. doi: 10.1002/ccr3.238, PMID: 26185633. ARTICLE | FREE FULLTEXT PDF

7) Mustakallio KK, Putkonen T, & Pihkanen TA (29 Dec 1962). Chloroquine psychosis? Lancet, 2(7270): 1387-1388. doi: 10.1016/s0140-6736(62)91067-x, PMID: 13936884. ARTICLE

8) Smith ER & Klein-Schwartz WJ (May 2005). Are 1-2 dangerous? Chloroquine and hydroxychloroquine exposure in toddlers. The Journal of Emergency Medicine, 28(4): 437-443. doi: 10.1016/j.jemermed.2004.12.011, PMID: 15837026. ARTICLE

Studies about chloroquine and hydroxychloroquine against SARS-CoV-2:

  • Gautret P, Lagier J-C, Parola P, Hoang VT, Meddeb L, Mailhe M, Doudier B, Courjon J, Giordanengo V, Esteves Vieira V, Tissot Dupont H, Colson SEP, Chabriere E, La Scola B, Rolain J-M, Brouqui P, Raoult D. (20 March 2020). Hydroxychloroquine and azithromycin as a treatment of COVID-19: results of an open-label non-randomized clinical trial. International Journal of Antimicrobial Agents, PII: S0924-8579(20)30099-6, https://doi.org/10.1016/j.ijantimicag.2020.105949. ARTICLE | FREE FULLTEXT PDF

These studies are also not peer reviewed, or at the very least not properly peer reviewed. I say that so you take them with a grain of salt, not to criticize in the slightest, because I do commend the speed with which they were done and published given the pandemic. Bravo to all the authors involved (except maybe the last one, if it proves to be fraudulent). And also a thumbs up to the journals which made the data freely available in record time. Unfortunately, from these papers to a treatment we still have a long way to go.

By Neuronicus, 22 March 2020

Education raises intelligence

Intelligence is a dubious concept in psychology and biology because it is difficult to define. In any science, something has a workable definition when it is described by unique testable operations or observations. But "intelligence" has eluded such a workable definition, having gone through multiple transformations in the past hundred years or so, perhaps more than any other psychological construct (except "mind"). Despite Binet's claim more than a century ago that there is such a thing as IQ and that he had a way to test for it, many psychologists and, to a lesser extent, neuroscientists are still trying to figure out what it is. Neuroscientists to a lesser extent because, once the field as a whole could not agree upon a good definition, it moved on to things it can agree upon, i.e. executive functions.

Of course, I generalize trends to entire disciplines and I shouldn't; not all psychology has a problem with operationalizations and replicability, just as not all neuroscientists are paragons of clarity and good science. In fact, intelligence research seems to be rather vibrant, judging by the number of publications. Who knows, maybe the psychologists have reached a consensus about what the thing is. I haven't truly kept up with the IQ research, partly because I think the tests used for assessing it are flawed (therefore you don't know what exactly you are measuring) and tailored for a small segment of the population (Western society, culturally embedded, English-language conceptualizations etc.), and partly because of the circularity of the definitions (e.g. How do I know you are highly intelligent? You scored well on IQ tests. What is IQ? What the IQ tests measure).

But the final nail in the coffin of intelligence research for me was a very popular definition by Legg & Hutter (2007): intelligence is "the ability to achieve goals". So the poor, sick, and unlucky are just dumb? I find this definition incredibly insulting to the sheer diversity within the human species. It is also blatantly discriminatory, particularly towards the poor, whose lack of options, access to good education or to a plain healthy meal puts a serious brake on goal achievement. Conversely, there are people who want for nothing, having been born into opulence and fame, but whose intellectual prowess seems to be lacking, to put it mildly, and who owe their "goal achievement" to an accident of birth or circumstance. The fact that this definition is so accepted for human research soured me on the entire field. But I'm hopeful that researchers will abandon a definition more suited to computer programs than to human beings; after all, paradigmatic shifts happen all the time.

In contrast, executive functions are more clearly defined. The one I like the most is that given by Banich (2009): “the set of abilities required to effortfully guide behavior toward a goal”. Not to achieve a goal, but to work toward a goal. With effort. Big difference.

So what are those abilities? As I said in the previous post, there are three core executive functions: inhibition/control (both behavioral and cognitive), working memory (the ability to temporarily hold information active), and cognitive flexibility (the ability to think about and switch between two different concepts simultaneously). From these three core executive functions, higher-order executive functions are built, such as reasoning (critical thinking), problem solving (decision-making) and planning.

Now I might have left you with the impression that intelligence = executive functioning, and that wouldn't be true. There is a clear correspondence between executive functioning and intelligence, but it is not a perfect correspondence, and many a paper (and a book or two) has been written to parse out what is which. For me, the most compelling argument that executive functions and whatever it is that the IQ tests measure are at least partly distinct is that brain lesions that affect one may not affect the other. It is beyond the scope of this blogpost to analyze the differences and similarities between intelligence and executive functions. But to clear up just a bit of the confusion I will make this broad statement: executive functions are the foundation of intelligence.

There is another qualm I have with the psychological research into intelligence: a large number of psychologists believe intelligence is a fixed value. In other words, you are born with a certain amount of it and that's it. It may vary a bit, depending on your life experiences, either increasing or decreasing the IQ, but by and large you stay in the same ballpark. In contrast, most neuroscientists believe all executive functions can be drastically improved with training. All of them.

After this much semi-coherent rambling, here is the actual crux of the post: intelligence can be trained too. Or, I should say, IQ can be raised with training. Ritchie & Tucker-Drob (2018) performed a meta-analysis looking at over 600,000 healthy participants' IQ and education. They confirmed a previously known observation that people who score higher on IQ tests complete more years of education. But why? Is it because highly intelligent people like to learn or because longer education increases IQ? After carefully and statistically analyzing 42 studies on the subject, the authors conclude that the more educated you are, the more intelligent you become. How much more? About 1 to 5 IQ points per additional year of education, to be precise. Moreover, this effect persists for a lifetime; the gain in intelligence does not diminish with the passage of time or after exiting school.

This is a good paper; its conclusions are statistically robust and consistent. Anybody can check it out, as the article is open access, meaning that not only the text but the entire raw data, methods, everything about it is freely available to everybody.


For me, the conclusion is inescapable: if you think that we, as a society, or you, as an individual, would benefit from having more intelligent people around you, then you should support free access to good education. Not exactly where you thought I was going with this, eh ;)?

REFERENCE: Ritchie SJ & Tucker-Drob EM. (Aug, 2018, Epub 18 Jun 2018). How Much Does Education Improve Intelligence? A Meta-Analysis. Psychological Science, 29(8):1358-1369. PMID: 29911926, PMCID: PMC6088505, DOI: 10.1177/0956797618774253. ARTICLE | FREE FULLTEXT PDF | SUPPLEMENTAL DATA  | Data, codebooks, scripts (Mplus and R), outputs

Nota bene: I'd been asked what that "1 additional year" of education means. Is it that with every year of education you gain up to 5 IQ points? No, not quite. Assuming I started with a normal IQ of 100, then I'd be at… 26 years of education (not counting the postdoc) multiplied by, let's say, 3 IQ points each, which makes me 178. Not bad, not bad at all. :))). No, what the authors mean is that they had access to, among other datasets, a huge cohort dataset from Norway from the moment when the country increased compulsory education by 2 years. So the researchers could look at the IQ tests of people before and after the policy change, tests which were administered to all males at the same age, when they entered compulsory military service. They saw an increase of 1 to 5 IQ points per extra year of education.
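
If the natural-experiment logic above sounds abstract, here is a toy illustration in Python. The cohort means below are hypothetical numbers of my own, not the paper's; only the shape of the calculation (difference in mean IQ divided by the extra years of schooling) reflects what Ritchie & Tucker-Drob describe.

```python
# Toy sketch of the natural-experiment estimate described above.
# The cohort means are hypothetical; only the calculation's shape matters.

mean_iq_before_reform = 100.0   # hypothetical cohort mean before the reform
mean_iq_after_reform = 103.6    # hypothetical cohort mean after the reform
extra_years = 2                 # Norway added two compulsory years of schooling

gain_per_year = (mean_iq_after_reform - mean_iq_before_reform) / extra_years
print(f"Estimated gain: {gain_per_year:.1f} IQ points per extra year of education")
# -> 1.8, i.e. within the 1-5 points per year range reported in the meta-analysis
```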

By Neuronicus, 14 July 2019

Gaming can improve cognitive flexibility

It occurred to me that my blog is becoming more sanctimonious than I’d like. I have many posts about stuff that’s bad for you: stress, high fructose corn syrup, snow, playing soccer, cats, pesticides, religion, climate change, even licorice. So I thought to balance it a bit with stuff that is good for you. To wit, computer games; albeit not all, of course.

I am an avid gamer myself, so those who know me would hardly be surprised that I found a paper cheering for StarCraft. A bit of an old game, but still a solid representative of the real-time strategy (RTS) genre.

About a decade ago, a series of papers emerged showing that first-person shooters and action games in general improve various aspects of perceptual processing. It makes sense, because in these games split-second decisions and actions make the difference between winning and losing, so the games act as training experience for increased sensitivity to the cues that facilitate said decisions. But what about games where the overall strategy and micromanagement skills are a bit more important than the perceptual skills, a.k.a. RTS? Would these games improve the processes underlying strategic thinking in a changing environment?

Glass, Maddox, & Love (2013) sought to answer this question by asking a few dozen undergraduates with little gaming experience to play a slightly modified StarCraft game for 40 hours (1 hour per day). "StarCraft (published by Blizzard Entertainment, Inc. in 1998) (…) involves the creation, organization, and command of an army against an enemy army in a real-time map-based setting (…) while managing funds, resources, and information regarding the opponent" (p. 2). The participants were all female because the researchers couldn't find enough male undergraduates who played computer games less than 2 hours per day. The control group had to play The Sims 2 for the same amount of time, a game where "participants controlled and developed a single ''family household'' in a virtual neighborhood" (p. 3). The researchers cleverly modified the StarCraft game in such a way that they replaced a perceptual component with a memory component (disabled some maps) and created two versions: one more complex (full map, two friendly, two enemy bases) and one less so (half map, one friendly, one enemy base). The difficulty for all games was set at a win rate of 50%.

Before and after the game-playing, the subjects were asked to complete a huge battery of tests designed to assess their memory and various other cognitive processes. By carefully parsing these out, the authors conclude that "forty hours of training within an RTS game that stresses rapid and simultaneous maintenance, assessment, and coordination between multiple information and action sources was sufficient" to improve cognitive flexibility. Moreover, the authors point out that playing on a full map with multiple allies and enemies is conducive to such improvement, whereas playing a less cognitively demanding game, despite similar difficulty levels, was not. Basically, the more stuff you have to juggle, the better your flexibility will be. Makes sense.

My favorite take from this paper though is not only that StarCraft is awesome, obviously, but that “cognitive flexibility is a trainable skill” (p. 5). Let me tell you why that is so grand.

Cognitive flexibility is an important concept in the neuroscience of executive functioning. The same year that this paper was published, Diamond published an excellent review paper in which she neatly identified three core executive functions: inhibition/control (both behavioral and cognitive), working memory (the ability to temporarily hold information active), and cognitive flexibility (the ability to think about and switch between two different concepts simultaneously). From these three core executive functions, higher-order executive functions are built, such as reasoning (critical thinking), problem solving (decision-making) and planning.

Unlike some old views on the immutability of inborn IQ, each one of the core and higher-order executive functions can be improved upon with training at any point in life and can suffer if something is not right in your life (stress, loneliness, sleep deprivation, or sickness). This paper adds to the growing body of evidence showing that executive functions are trainable. Intelligence, however you want to define it, relies upon executive functions, at least some of them, and perhaps boosting cognitive flexibility might result in a slight increase in IQ, methinks.

Bottom line: real-time strategy games with huge maps and tons of stuff to do are good for you. Here you go.

The StarCraft images, both foreground and background, are copyrighted to © 1998 Blizzard Entertainment.

REFERENCES:

  1. Glass BD, Maddox WT, Love BC. (7 Aug 2013). Real-time strategy game training: emergence of a cognitive flexibility trait. PLoS One, 8(8): e70350. eCollection 2013. PMID: 23950921, PMCID: PMC3737212, DOI: 10.1371/journal.pone.0070350. ARTICLE | FREE FULLTEXT PDF
  2. Diamond A (2013, Epub 27 Sept. 2012). Executive Functions. Annual Review of Psychology, 64:135-68. PMID: 23020641, PMCID: PMC4084861, DOI: 10.1146/annurev-psych-113011-143750. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 15 June 2019

The FIRSTS: the Dunning–Kruger effect (1999) or the unskilled-and-unaware phenomenon

Much talked about these days in the media, the unskilled-and-unaware phenomenon has been mused upon since, as they say, time immemorial, but was not actually seriously investigated until the '80s. The phenomenon refers to the observation that incompetents overestimate their competence, whereas the competent tend to underestimate their skill (see Bertrand Russell's brilliant summary of it).


Although the phenomenon has gained popularity under the name of the "Dunning–Kruger effect", it is my understanding that whereas the phenomenon refers to the above-mentioned observation, the effect refers to the cause of the phenomenon, namely that the very skills required to make one proficient in a domain are the same skills that allow one to judge proficiency. In the words of Kruger & Dunning (1999),

“those with limited knowledge in a domain suffer a dual burden: Not only do they reach mistaken conclusions and make regrettable errors, but their incompetence robs them of the ability to realize it” (p. 1132).

Today’s paper on the Dunning–Kruger effect is the third in the cognitive biases series (the first was on depressive realism and the second on the superiority illusion).

Kruger & Dunning (1999) took a look at incompetence with the eyes of well-trained psychologists. As usual, let’s start by defining the terms so we are on the same page. The authors tell us, albeit in a footnote on p. 1122, that:

1) incompetence is a “matter of degree and not one of absolutes. There is no categorical bright line that separates ‘competent’ individuals from ‘incompetent’ ones. Thus, when we speak of ‘incompetent’ individuals we mean people who are less competent than their peers”.

and 2) The study is on domain-specific incompetents. “We make no claim that they would be incompetent in any other domains, although many a colleague has pulled us aside to tell us a tale of a person they know who is ‘domain-general’ incompetent. Those people may exist, but they are not the focus of this research”.

That being clarified, the authors chose 3 domains where they believe “knowledge, wisdom, or savvy was crucial: humor, logical reasoning, and English grammar” (p.1122). I know that you, just like me, can hardly wait to see how they assessed humor. Hold your horses, we’ll get there.

The subjects were psychology students, the ubiquitous guinea pigs of most psychology studies since the discipline started to be taught in universities. Some people in the field even declaim with more or less pathos that most psychological findings do not necessarily apply to the general population; instead, they are restricted to the self-selected group of undergrad psych majors. Just as the biologists know far more about the mouse genome and its maladies than about humans', so do the psychologists know more about the inner workings of the psychology undergrad's mind than about, say, the average stay-at-home mom's. But I digress, as usual.

The humor was assessed thusly: students were asked to rate on a scale from 1 to 11 the funniness of 30 jokes. Said jokes were previously rated by 8 professional comedians and that provided the reference scale. "Afterward, participants compared their 'ability to recognize what's funny' with that of the average Cornell student by providing a percentile ranking. In this and in all subsequent studies, we explained that percentile rankings could range from 0 (I'm at the very bottom) to 50 (I'm exactly average) to 99 (I'm at the very top)" (p. 1123). Since the social ability to identify humor may be less rigorously amenable to quantification (despite the comedians' input, which did not achieve a high interrater reliability anyway), the authors chose a task that requires more intellectual muscle: logical reasoning, whose test consisted of 20 logical problems taken from a Law School Admission Test. Afterward the students estimated both their general logical ability compared to their classmates and their test performance. Finally, another batch of students answered 20 grammar questions taken from the National Teacher Examination preparation guide.

In all three tasks,

  • Everybody thought they were above average, showing the superiority illusion.
  • But the people in the bottom quartile (the lowest 25%), dubbed incompetents (or unskilled), overestimated their abilities the most, by approx. 50 percentile points. They were also unaware that, in fact, they scored the lowest.
  • In contrast, people in the top quartile underestimated their competence, though not by the same degree as the bottom quartile – by about 10-15 percentile points (see Fig. 1).

[Figure 1: the Dunning–Kruger effect]

I wish the paper showed scatter-plots with a fitted regression line instead of the quartile graphs without error bars, so I could judge the data for myself. I mean, everybody thought they were above average? Not a single one out of more than three hundred students thought they were kinda… meh? The authors did not find any gender differences in any of the experiments.
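
For the record, the analysis I am wishing for is only a few lines of code away, if one had the raw scores. Below is a sketch in Python with made-up numbers (the compression of self-estimates toward "above average" is simulated by me, not taken from the paper), just to show what the quartile means and the fitted regression line would look like.

```python
# Sketch of perceived vs. actual percentile: quartile means (as in the
# paper's figure) plus one fitted regression line (the scatter-plot version
# I wish they had shown). All scores below are simulated, not real data.
import numpy as np

rng = np.random.default_rng(0)
n = 300
actual = rng.uniform(0, 100, n)   # simulated actual test percentile
# simulated self-estimates, compressed toward ~65 (everybody "above average")
perceived = np.clip(65 + 0.3 * (actual - 50) + rng.normal(0, 10, n), 0, 99)

# quartile means of actual vs. perceived percentile
quartile = np.digitize(actual, np.percentile(actual, [25, 50, 75]))
for q in range(4):
    mask = quartile == q
    print(f"Quartile {q + 1}: actual {actual[mask].mean():5.1f}, "
          f"perceived {perceived[mask].mean():5.1f}")

# the regression-line version: one slope instead of four bars
slope, intercept = np.polyfit(actual, perceived, 1)
print(f"perceived ≈ {intercept:.1f} + {slope:.2f} × actual")
```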

Next, the authors tested the hypothesis about the unskilled that "the same incompetence that leads them to make wrong choices also deprives them of the savvy necessary to recognize competence, be it their own or anyone else's" (p. 1126). And they did that by having both the competents and the incompetents see the answers that their peers gave on the tests. Indeed, the incompetents not only failed to recognize competence, but they continued to believe they had performed very well in the face of contrary evidence. In contrast, the competents adjusted their ratings after seeing their peers' performance, so they no longer underestimated themselves. In other words, the competents learned from seeing others' mistakes, but the incompetents did not.

Based on these data, Kruger & Dunning (1999) argue that the incompetents are so because they lack the skills to recognize competence and error in themselves or others (jargon: lack of metacognitive skills). The competents, on the other hand, underestimate themselves because they assume everybody else does as well as they did; but when shown the evidence that other people performed poorly, they become accurate in their self-evaluations (jargon: the false consensus effect, a.k.a. the social-projection error).

So, the obvious implication is: if incompetents learn to recognize competence, does that also translate into them becoming more competent? The last experiment in the paper attempted to answer just that. The authors got 70 students to complete a short (10 min) logical-reasoning training session while another 70 students did something unrelated for 10 min. The data showed that the trained students not only improved their self-assessments (still showing the superiority illusion, though), but they also improved their performance. Yays all around, all is not lost, there is hope left in the world!

This is an extremely easy read. I totally recommend it to non-specialists. Compare Kruger & Dunning (1999) with Pennycook et al. (2017): both papers talk about the same subject and both sets of authors are redoubtable personages in their fields. But while the former is a pleasant, leisurely read, the latter lacks mundane operationalizations and requires serious familiarization with the literature and its jargon.

Since Kruger & Dunning (1999) is under the paywall of the infamous APA website (infamous because they don't even let you see the abstract, and even with institutional access it is difficult to extract the papers out of them, as if they own the darn things!), write to me at scientiaportal@gmail.com specifying that you need it for educational purposes and promise not to distribute it for financial gain, and thou shalt have its .pdf. As always. Do not, under any circumstance, use a sci-hub server to obtain this paper illegally! Actually, follow me on Twitter @Neuronicus to find out exactly which servers to avoid.

REFERENCES:

1) Kruger J, & Dunning D. (Dec. 1999). Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6):1121-1134. PMID: 10626367. ARTICLE

2) Russell, B. (1931-1935). “The Triumph of Stupidity” (10 May 1933), p. 28, in Mortals and Others: American Essays, vol. 2, published in 1998 by Routledge, London and New York, ISBN 0415178665. FREE FULLTEXT By GoogleBooks | FREE FULLTEXT of ‘The Triumph of Stupidity”

P.S. I personally liked this example from the paper for illustrating what lack of metacognitive skills means:

“The skills that enable one to construct a grammatical sentence are the same skills necessary to recognize a grammatical sentence, and thus are the same skills necessary to determine if a grammatical mistake has been made. In short, the same knowledge that underlies the ability to produce correct judgment is also the knowledge that underlies the ability to recognize correct judgment. To lack the former is to be deficient in the latter” (p. 1121-1122).

By Neuronicus, 10 January 2018

The superiority illusion

Following up on my promise to cover a few papers about self-deception, the second in the series is about the superiority illusion, another cognitive bias (the first was about depressive realism).

Yamada et al. (2013) sought to uncover the origins of the ubiquitous belief that oneself is "superior to average people along various dimensions, such as intelligence, cognitive ability, and possession of desirable traits" (p. 4363). The sad statistical truth is that MOST people are average; that's the whole definition of 'average', really… But most people think they are superior to others, a.k.a. the 'above-average effect'.

Twenty-four young males underwent resting-state fMRI and PET scanning. The first scanner is of the magnetic resonance type and tracks where most of the blood is going in the brain at any particular moment. More blood flow to a region is interpreted as that region being active at that moment.

The word 'functional' means that the subject is performing a task while in the scanner and the resultant brain image corresponds to what the brain is doing at that particular moment in time. On the other hand, 'resting-state' means that the individual did not do any task in the scanner; s/he just sat nice and still on the warm pads listening to the various clicks, clacks, bangs & beeps of the scanner. The subjects were instructed to rest with their eyes open. Good instruction, given that many subjects fall asleep in resting-state MRI studies, even in the terrible racket that the coils make, which sometimes can reach 125 dB. Let me explain: an MRI machine generates a huge static magnetic field (60,000 times stronger than Earth's!) and, on top of that, shoots rapid pulses of electricity through a coiled wire, called a gradient coil. These pulses of electricity or, in other words, the rapid on-off switchings of the electrical current make the gradient coil vibrate very loudly.

A PET scanner functions on a different principle. The subject receives a shot of a radioactive substance (called tracer) and the machine tracks its movement through the subject’s body. In this experiment’s case, the tracer was raclopride, a D2 dopamine receptor antagonist.

The behavioral data (meaning the answers to the questionnaires) showed that, curiously, the superiority illusion belief was not correlated with anxiety or self-esteem scores, but, not curiously, it was negatively correlated with helplessness, a measure of depression. Makes sense, especially from the view of depressive realism.

The imaging data suggest that dopamine binding to its striatal D2 receptors attenuates the functional connectivity between the left sensorimotor striatum (SMST, a.k.a. postcommissural putamen) and the dorsal anterior cingulate cortex (dACC). And this state of affairs gives rise to the superiority illusion (see Fig. 1).

Fig. 1. The superiority illusion arises from the suppression of the dorsal anterior cingulate cortex (dACC) – putamen functional connection by the dopamine coming from the substantia nigra/ventral tegmental area complex (SN/VTA) and binding to its D2 striatal receptors. Credits: brain diagram: Wikipedia, other brain structures and connections: Neuronicus, data: Yamada et al. (2013, doi: 10.1073/pnas.1221681110). Overall: Public Domain
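
For readers unfamiliar with the term, 'functional connectivity' in resting-state fMRI usually boils down to correlating the activity time courses of two regions, and that per-subject connectivity value is then related to other per-subject measures (here, D2 binding and the illusion score). The sketch below shows only that generic computation with random stand-in arrays; it is not the authors' actual pipeline or data.

```python
# Minimal sketch of resting-state functional connectivity and of the kind of
# across-subject correlations reported here. All arrays are random stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_timepoints = 24, 200

# per subject: mean BOLD time course extracted from each region of interest
smst_signal = rng.normal(size=(n_subjects, n_timepoints))   # left sensorimotor striatum
dacc_signal = rng.normal(size=(n_subjects, n_timepoints))   # dorsal anterior cingulate

# functional connectivity = Pearson correlation of the two time courses, per subject
connectivity = np.array([
    np.corrcoef(smst_signal[s], dacc_signal[s])[0, 1] for s in range(n_subjects)
])

# stand-ins for the subject-level measures the connectivity is then related to
d2_binding = rng.normal(size=n_subjects)        # raclopride binding potential (PET)
illusion_score = rng.normal(size=n_subjects)    # superiority-illusion questionnaire score

print("connectivity vs. D2 binding: r =",
      round(np.corrcoef(connectivity, d2_binding)[0, 1], 2))
print("connectivity vs. illusion score: r =",
      round(np.corrcoef(connectivity, illusion_score)[0, 1], 2))
```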

This was a frustrating paper. I cannot tell if it has methodological issues or is just poorly written. For instance, I have to assume that the dACC they're talking about is bilateral and not ipsilateral to their SMST, meaning left. As a non-native English speaker myself, I guess I should cut the authors a break for consistently misspelling 'commissure' or for other grammatical errors, for fear of being accused of hypocrisy, but here you have it: it bugged me. Besides, mine is a blog and theirs is a published peer-reviewed paper. (Full disclosure: I do get editorial help from native English speakers when I publish for real and, except for a few personal style quirks, I fully incorporate their suggestions.) So a little editorial help would have gone a long way to make the reading more pleasant. What else? Ah, the results are not clearly explained anywhere; it looks like the authors rely on obviousness, a bad move if you want to be understood by people slightly outside your field. From the first figure it looks like only 22 subjects out of 24 showed the superiority illusion, but the authors included 24 in the imaging analyses, or so it seems. The subjects were 23.5 ± 4.4 years old, meaning that not all subjects had the frontal regions of the brain fully developed: there are clear anatomical and functional differences between a 19-year-old and a 27-year-old.

I’m not saying it is a bad paper because I have covered bad papers; I’m saying it was frustrating to read it and it took me a while to figure out some things. Honestly, I shouldn’t even have covered it, but I spent some precious time going through it and its supplementals, what with me not being an imaging dude, so I said the hell with it, I’ll finish it; so here you have it :).

By Neuronicus, 13 December 2017

REFERENCE: Yamada M, Uddin LQ, Takahashi H, Kimura Y, Takahata K, Kousa R, Ikoma Y, Eguchi Y, Takano H, Ito H, Higuchi M, Suhara T (12 Mar 2013). Superiority illusion arises from resting-state brain networks modulated by dopamine. Proceedings of the National Academy of Sciences of the United States of America, 110(11):4363-4367. doi: 10.1073/pnas.1221681110. ARTICLE | FREE FULLTEXT PDF 

The FIRSTS: The roots of depressive realism (1979)

There is a rumor stating that depressed people see the world more realistically and the rest of us are – to put it bluntly – deluded optimists. A friend of mine asked me if this is true. It took me a while to find the origins of this claim, but after I found it and figured out that the literature has a term for the phenomenon ('depressive realism'), I realized that there is a whole plethora of studies on the subject. So the next few posts will be centered, more or less, on the idea of self-deception.

It was 1979 when Alloy & Abramson published a paper whose title contained the phrase 'Sadder but Wiser', even if it was followed by a question mark. The experiments they conducted are simple, but the theoretical implications are large.

The authors divided several dozen male and female undergraduate students into a depressed group and a non-depressed group based on their Beck Depression Inventory scores (a widely used and validated questionnaire for self-assessing depression). Each subject "made one of two possible responses (pressing a button or not pressing a button) and received one of two possible outcomes (a green light or no green light)" (p. 447). Various conditions gave the subjects various degrees of control over what the button does, from 0 to 100%. After the experiments, the subjects were asked to estimate their control over the green light, how many times the light came on regardless of their behavior, what percentage of trials the green light came on when they pressed or did not press the button, respectively, and how they felt. In some experiments, the subjects were winning or losing money when the green light came on.
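
A technical aside, since "degree of control" can sound vague: in this kind of task, objective control is commonly quantified as ΔP, the difference between the probability of the light coming on when the button is pressed and when it is not, and the subjects' estimates are compared against it. The sketch below is a generic illustration of that measure; the function name and the trial counts are mine, not the paper's.

```python
# Sketch of the standard contingency measure (delta P) for judgment-of-control
# tasks: compare how often the outcome follows a response with how often it
# occurs without one. Trial counts below are made up for illustration.

def delta_p(light_with_press, press_trials, light_without_press, no_press_trials):
    """Objective control = P(light | press) - P(light | no press)."""
    return light_with_press / press_trials - light_without_press / no_press_trials

# a "25% control" condition: light on 75% of press trials, 50% of no-press trials
print(delta_p(30, 40, 20, 40))   # 0.25

# a "no control but frequent wins" condition: light on 75% of trials either way
print(delta_p(30, 40, 30, 40))   # 0.0 - yet nondepressed subjects judged it as > 0
```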

Verbatim, the findings were that:

“Depressed students’ judgments of contingency were surprisingly accurate in all four experiments. Nondepressed students, on the other hand, overestimated the degree of contingency between their responses and outcomes when noncontingent outcomes were frequent and/or desired and underestimated the degree of contingency when contingent outcomes were undesired” (p. 441).

In plain English, it means that if you are not depressed, when you have some control and bad things are happening, you believe you have no control. And when you have no control but good things are happening, then you believe you have control. If you are depressed, it does not matter, you judge your level of control accurately, regardless of the valence of the outcome.

Such an illusion of control is a defensive mechanism that surely must have adaptive value by, for example, allowing the non-depressed to bypass a sense of guilt when things don't work out and to increase their self-esteem when they do. This is fascinating, particularly since it is corroborated by findings that people receiving gambling wins or life successes like landing a good job – rewards that at least in one case are demonstrably attributable to chance – believe, nonetheless, that they are due to some personal attributes that make them special, that make them deserving of such rewards. (I don't remember the reference for this one, so don't quote me on it. If I find it, I'll post it; it's something about self-entitlement, I think.) That is not to say that life successes are not largely attributable to the individual; they are. But, statistically speaking, there must be some that are due to chance alone, and yet most people feel like they are the direct agents of changes in luck.

Another interesting point is that Alloy & Abramson also tried to figure out how exactly their subjects reasoned when they asserted their level of control, through some clever post-experiment questionnaires. Long story short (the paper is 45 pages long), the illusion of control shown by the nondepressed subjects in the no-control condition was the result of incorrect logic, that is, faulty reasoning.

In summary, the distilled-down version of depressive realism – that non-depressed people see the world through rose-colored glasses – is correct only in certain circumstances, because the illusion of control applies only in particular conditions: overestimation of control when good things are happening and underestimation of control when bad things are happening. But, by and large, it does seem that depression clears the fog a bit.

Of course, it has been almost 40 years since the publication of this paper and of course it has its flaws. Many replications, replications with caveats, meta-analyses, reviews, opinions, and alternative hypotheses have been confirmed and disconfirmed and then confirmed again with alterations, so there is still a debate out there about the causes/functions/ubiquity/circumstantiality of the depressive realism effect. One thing seems to be constant though: the effect exists.

I will leave you with the ponders of Alloy & Abramson (1979):

“A crucial question is whether depression itself leads people to be “realistic” or whether realistic people are more vulnerable to depression than other people” (p. 480).


REFERENCE: Alloy LB, & Abramson LY (Dec. 1979). Judgment of contingency in depressed and nondepressed students: sadder but wiser? Journal of Experimental Psychology: General, 108(4): 441-485. PMID: 528910. http://dx.doi.org/10.1037/0096-3445.108.4.441. ARTICLE | FULLTEXT PDF via ResearchGate

By Neuronicus, 30 November 2017

Play-based or academic-intensive?

The title of today's post wouldn't make any sense for anybody who isn't a preschooler's parent or teacher in the USA. You see, on the west side of the Atlantic there is a debate on whether a play-based curriculum for a preschool is more advantageous than a more academic-based one. Preschool age is 3 to 4 years; kindergarten starts at 5.

So what does academia even look like for someone who hasn't yet mastered the skill of wiping their own behind? I'm glad you asked. Roughly, an academic preschool program is one that emphasizes math concepts and early literacy, whereas a play-based program focuses less or not at all on these activities; instead, the children are allowed to play together in big or small groups or separately. The former kind of program has been linked with stronger cognitive benefits, the latter with nurturing social development. The supporters of each program accuse the other of neglecting one or the other aspect of the child's development, namely cognitive or social.

The paper that I am covering today says that it “does not speak to the wider debate over learning-through-play or the direct instruction of young children. We do directly test whether greater classroom time spent on academic-oriented activities yield gains in both developmental domains” (Fuller et al., 2017, p. 2). I’ll let you be the judge.

Fuller et al. (2017) assessed the cognitive and social benefits of different programs in an impressive cohort of over 6,000 preschoolers. The authors looked at many variables:

  • children who attended any form of preschool and children who stayed home;
  • children who received more preschool education (high dosage, defined as >20 hours/week) and less (low dosage, defined as <20 hours/week);
  • children who attended academic-oriented preschools (where they spent time at least 3 – 4 times a week on each of the following tasks: letter names, writing, phonics, and counting manipulatives) and non-academic preschools.

The authors employed a battery of tests to assess the children's preliteracy skills, math skills and social-emotional status (i.e. the dependent variables). And then they conducted a lot of statistical analyses, in the true spirit of well-trained psychologists.

The main findings were:

1) "Preschool exposure [of any form] has a significant positive effect on children's math and preliteracy scores" (p. 6).

2) The earlier the child entered preschool, the stronger the cognitive benefits.

3) Children attending high-dose academic-oriented preschools displayed greater cognitive proficiencies than all the other children (for the actual numbers, see Table 7, pg. 9).

4) “Academic-oriented preschool yields benefits that persist into the kindergarten year, and at notably higher magnitudes than previously detected” (p. 10).

5) Children attending academic-oriented preschools displayed no social development disadvantages compared to children who attended low- or non-academic preschool programs. Nor did the non-academic-oriented preschools show an improvement in social development (except for Latino children).

Now do you think that Fuller et al. (2017) gave you any more information in the play vs. academic debate, given that their "findings show that greater time spent on academic content – focused on oral language, preliteracy skills, and math concepts – contributes to the early learning of the average child at magnitudes higher than previously estimated" (p. 10)? And remember that they did not find any significant social advantages or disadvantages for any type of preschool.

I realize (or hope, rather) that most pre-k teachers are not the Draconian thou-shall-not-play-do-worksheets type, nor are they the let-kids-play-for-three-hours-while-the-adults-gossip-in-a-corner types. Most are probably combining elements of learning-through-play and directed-instruction in their programs. Nevertheless, there are (still) programs and pre-k teachers that clearly state that they employ play-based or academic-based programs, emphasizing the benefits of one while vilifying the other. But – surprise, surprise! – you can do both. And, it turns out, a little academia goes a long way.


So, next time you choose a preschool for your kid, go with the data, not what your mommy/daddy gut instinct says and certainly be very wary of preschool officials who, when you ask them for data to support their curriculum choice, tell you that that’s their ‘philosophy’, they don’t need data. Because, boy oh boy, I know what philosophy means and it ain’t that.

By Neuronicus, 12 October 2017

Reference: Fuller B, Bein E, Bridges M, Kim, Y, & Rabe-Hesketh, S. (Sept. 2017). Do academic preschools yield stronger benefits? Cognitive emphasis, dosage, and early learning. Journal of Applied Developmental Psychology, 52: 1-11, doi: 10.1016/j.appdev.2017.05.001. ARTICLE | New York Times cover | Reading Rockets cover (offers a fulltext pdf) | Good cover and interview with the first author on qz.com

Aging and its 11 hippocampal genes

Aging is being quite extensively studied these days and here is another advance in the field. Pardo et al. (2017) looked at what happens in the hippocampus of 2-month-old (young) and 28-month-old (old) female rats. The hippocampus is a seahorse-shaped structure no more than 7 cm in length and 4 g in weight, situated at the level of your temples, deep in the brain, and absolutely necessary for memory.

First the researchers tested the rats in a classical maze test (Barnes maze) designed to assess their spatial memory performance. Not surprisingly, the old performed worse than the young.

Then they dissected the hippocampi, looked at neurogenesis, and saw that the young rats had more newborn neurons than the old. Also, the old rats had more reactive microglia, a sign of inflammation. Microglia are small non-neuronal cells in the brain that serve very important functions, including immune defense.

After that, the researchers looked at the hippocampal transcriptome, meaning they looked at what proteins are being expressed there (I know, transcription is not translation, but the general assumption of transcriptome studies is that the amount of protein X corresponds to the amount of RNA X). They found 210 genes that were differentially expressed in the old: 81 were upregulated and 129 were downregulated. Most of these genes are to be found in humans too, 170 to be exact.

But after looking at male versus female data, and at human and mouse aging data, the authors came up with 11 genes that are deregulated (7 up- and 4 down-regulated) in the aging hippocampus, regardless of species or sex. These genes are involved in the immune response to inflammation. In more detail, the immune system activates microglia, which stay activated, and this "prolonged microglial activation leads to the release of pro-inflammatory cytokines that exacerbate neuroinflammation, contributing to neuronal loss and impairment of cognitive function" (p. 17). Moreover, these 11 genes have been associated with neurodegenerative diseases and brain cancers.


These are the 11 genes: C3 (up), Cd74 (up), Cd4 (up), Gpr183 (up), Clec7a (up), Gpr34 (down), Gapt (down), Itgam (down), Itgb2 (up), Tyrobp (up), Pld4 (down). "Up" and "down" indicate the direction of deregulation: upregulation or downregulation.

I wish the sentence above were stated as explicitly in the paper as I wrote it here, so I didn't have to comb through their supplemental Excel files to figure it out. Other than that, good paper, good work. It gets us closer to unraveling and maybe undoing some of the burdens of aging because, as the actress Bette Davis said, "growing old isn't for sissies".
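
Conceptually, the cross-dataset step that produces a conserved signature like this is a direction-aware intersection of differentially expressed gene lists. Here is a toy sketch of that idea; the per-dataset memberships (and the filler genes GeneX and GeneY) are invented for illustration, and only the four gene names and their directions are reused from the list above.

```python
# Toy sketch of a direction-aware intersection across datasets (sexes, species).
# The per-dataset memberships are invented; only a few real gene names from the
# paper's list are reused to make the output readable.

rat_female = {"C3": "up", "Cd74": "up", "Gpr34": "down", "Pld4": "down", "GeneX": "down"}
rat_male   = {"C3": "up", "Cd74": "up", "Gpr34": "down", "Pld4": "down", "GeneY": "up"}
mouse      = {"C3": "up", "Cd74": "up", "Gpr34": "down", "Pld4": "down"}
human      = {"C3": "up", "Cd74": "up", "Gpr34": "down", "Pld4": "down", "GeneX": "up"}

datasets = [rat_female, rat_male, mouse, human]

# keep a gene only if it is deregulated in every dataset, in the same direction
conserved = {
    gene: direction
    for gene, direction in datasets[0].items()
    if all(d.get(gene) == direction for d in datasets[1:])
}
print(conserved)   # {'C3': 'up', 'Cd74': 'up', 'Gpr34': 'down', 'Pld4': 'down'}
```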

Reference: Pardo J, Abba MC, Lacunza E, Francelle L, Morel GR, Outeiro TF, Goya RG. (13 Jan 2017, Epub ahead of print). Identification of a conserved gene signature associated with an exacerbated inflammatory environment in the hippocampus of aging rats. Hippocampus, doi: 10.1002/hipo.22703. ARTICLE

By Neuronicus, 25 January 2017

Amusia and stroke

Although I am a complete musical anti-talent myself, that doesn't prohibit me from fully enjoying the works of the masters of the art. When my family is out of earshot, I even bellow – because it cannot be called music – from the top of my lungs alongside the most famous tenors ever recorded. A couple of days ago I loaded one of my most eclectic playlists. While remembering my younger days as an Iron Maiden concert-goer (I never said I listen only to classical music :D) and screaming the "Fear of the Dark" chorus, I wondered what's new on the front of music processing in the brain.

And I found an interesting recent paper about amusia. Amusia is, as those of you with ancient Greek proclivities might have surmised, a deficit in the perception of music, mainly of pitch but sometimes of rhythm and other aspects of music. A small percentage of the population is born with it, but a whopping 35 to 69% of stroke survivors exhibit the disorder.

So Sihvonen et al. (2016) decided to take a closer look at this phenomenon with the help of 77 stroke patients. These patients had an MRI scan within the first 3 weeks following stroke and another one 6 months poststroke. They also completed a behavioral test for amusia within the first 3 weeks following stroke and again 3 months later. For reasons undisclosed, and thus raising my eyebrows, the behavioral assessment was not performed at 6 months poststroke, nor was an MRI performed at the 3-month follow-up. It would have been nice to have the behavioral assessments and the brain images at the same time points, because a lot can happen in weeks, let alone months, after a stroke.

Nevertheless, the authors used a novel way to look at the brain pictures, called voxel-based lesion-symptom mapping (VLSM). Well, it is not really novel; it's been around for 15 years or so. Basically, to ascertain the function of a brain region, researchers either get people with a specific brain lesion and then look for a behavioral deficit, or get a symptom and then look for a brain lesion. Both approaches have distinct advantages but also disadvantages (see Bates et al., 2003). To overcome the disadvantages of these methods, enter the scene VLSM, which is a mathematical/statistical gimmick that allows you to explore the relationship between brain and function without forming preconceived ideas, i.e. without forcing dichotomous categories. They also looked at voxel-based morphometry (VBM), which is a fancy way of saying they checked whether the grey and white matter differ over time in the brains of their subjects.
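
In case "mathematical/statistical gimmick" is too hand-wavy: at its core (Bates et al., 2003), VLSM compares, for every voxel, the behavioral scores of patients who do versus do not have a lesion at that voxel, and keeps the voxels where the difference is significant. Below is a bare-bones sketch with random stand-in data; a real analysis adds preprocessing and a correction for the thousands of simultaneous tests.

```python
# Bare-bones sketch of voxel-based lesion-symptom mapping (VLSM): for each
# voxel, compare the symptom scores of patients lesioned vs. not lesioned
# at that voxel. Data are random stand-ins; a real analysis also needs
# correction for the thousands of tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_patients, n_voxels = 77, 1000

lesion_maps = rng.integers(0, 2, size=(n_patients, n_voxels))   # 1 = voxel lesioned
amusia_score = rng.normal(size=n_patients)                      # behavioral deficit score

t_values = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    lesioned = amusia_score[lesion_maps[:, v] == 1]
    spared = amusia_score[lesion_maps[:, v] == 0]
    if len(lesioned) > 1 and len(spared) > 1:
        t_values[v], _ = stats.ttest_ind(lesioned, spared)

# voxels where lesion status is most strongly associated with the scores
print("strongest candidate voxels:", np.argsort(np.abs(t_values))[-5:])
```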

After much analysis, Sihvonen et al. (2016) conclude that damage to the right hemisphere is more likely to lead to amusia, as opposed to aphasia, which is due mainly to damage to the left hemisphere. More specifically,

“damage to the right temporal areas, insula, and putamen forms the crucial neural substrate for acquired amusia after stroke. Persistent amusia is associated with further [grey matter] atrophy in the right superior temporal gyrus (STG) and middle temporal gyrus (MTG), locating more anteriorly for rhythm amusia and more posteriorly for pitch amusia.”

The more we know, the better chances we have to improve treatments for people.

(Unless you're left-handed, in which case things are reversed.)

References:

1. Sihvonen AJ, Ripollés P, Leo V, Rodríguez-Fornells A, Soinila S, & Särkämö T. (24 Aug 2016). Neural Basis of Acquired Amusia and Its Recovery after Stroke. Journal of Neuroscience, 36(34):8872-8881. PMID: 27559169, DOI: 10.1523/JNEUROSCI.0709-16.2016. ARTICLE  | FULLTEXT PDF

2. Bates E, Wilson SM, Saygin AP, Dick F, Sereno MI, Knight RT, & Dronkers NF (May 2003). Voxel-based lesion-symptom mapping. Nature Neuroscience, 6(5):448-50. PMID: 12704393, DOI: 10.1038/nn1050. ARTICLE

By Neuronicus, 9 November 2016

Transcranial direct current stimulation & cognitive enhancement

There's so much research out there… So much that some time ago I learned that in science, as probably in other fields too, one has only to choose a side of an argument and then, provided that s/he has some good academic search engine skills and institutional access to journals, get the articles that support that side. Granted, that works for relatively small questions restricted to narrow domains, like "is that brain structure involved in x" or something like that; I doubt you would be able to find any paper that invalidates theories like gravity or the central dogma of molecular biology (DNA to RNA to protein).

If you're a scientist trying to answer a question, you'll probably comb through a few dozen papers and form an opinion of your own after weeding out the papers with small sample sizes, the ones with shoddy methodology, or simply the bad ones (yes, they do exist; even scientists are people and hence prone to mistakes). And if you're not a scientist, or the question you're trying to find an answer for is not from your field, then you'll probably go for reviews or meta-analyses.

Meta-analyses are studies that look at several papers (dozens or hundreds), pool their data together and then apply some complicated statistics to see the overall results. One such meta-analysis concerns the benefits, if any, of transcranial direct current stimulation (tDCS) on working memory (WM) in healthy people.
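
For the curious, the "complicated statistics" at the heart of most meta-analyses is an inverse-variance weighted average of the individual studies' effect sizes. Here is a minimal fixed-effect sketch with invented numbers; real analyses such as Mancuso et al.'s use random-effects models and bias corrections on top of this core.

```python
# Minimal fixed-effect meta-analysis sketch: pool study effect sizes with
# inverse-variance weights. Effect sizes and standard errors are invented;
# real meta-analyses (random-effects models, bias corrections) build on this.
import math

# (effect size g, standard error) for a handful of hypothetical tDCS-vs-sham studies
studies = [(0.35, 0.20), (0.10, 0.15), (-0.05, 0.25), (0.20, 0.18), (0.02, 0.22)]

weights = [1 / se**2 for _, se in studies]                     # inverse-variance weights
pooled = sum(w * g for (g, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled g = {pooled:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
# if the confidence interval straddles 0, the pooled effect is not reliable
```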

tDCS is a method of applying electrical current through electrodes placed on the scalp to your neurons, to change how they work and thus change some brain functions. It is similar to repetitive transcranial magnetic stimulation (rTMS), only in the latter case the change in neuronal activity is due to the application of a magnetic field.

Some people look at these methods not only as possible treatments for a variety of disorders, but also as cognitive enhancement tools. And not only researchers, but also various companies which sell the relatively inexpensive equipment to gamers and others. But does tDCS work in the first place?


Mancuso et al. (2016) say that there have been 3 recent meta-analyses done on this issue and they found that “the effects [of tDCS on working memory in healthy volunteers] are reliable though small (Hill et al., 2016), partial (Brunoni & Vanderhasselt, 2014), or nonexistent (Horvath et al., 2015)” (p. 2). But they say these studies are somewhat flawed and that’s why they conducted their own meta-analysis, which concludes that “the true enhancement potential of tDCS for WM remains somewhat uncertain” (p.19). Maybe it works a little bit if used during the training phase of a working memory task, like n-back, and even then that’s a maybe…

Boring, you may say. I'll grant you that. So… all that work and it revealed virtually nothing new! I'll grant you that too. But what this meta-analysis brings that is new, besides some interesting statistics like controlling for publication bias, is a nice discussion as to why they didn't find much, exploring possible causes like the small sample and effect sizes, which seem to plague many behavioral studies. Another explanation, which, to tell you the truth, the authors do not seem to be too enamored with, is that maybe, just maybe, tDCS simply doesn't have any effect on working memory, period.

Besides, papers with seemingly boring findings do not catch the media eye, so I had to give it a little attention, didn’t I 😉 ?

Reference: Mancuso LE, Ilieva IP, Hamilton RH, & Farah MJ. (Epub 7 Apr 2016, Aug 2016) Does Transcranial Direct Current Stimulation Improve Healthy Working Memory?: A Meta-analytic Review. Journal of Cognitive Neuroscience, 28(8):1063-89. PMID: 27054400, DOI: 10.1162/jocn_a_00956. ARTICLE

 By Neuronicus, 2 August 2016

Mu suppression and the mirror neurons

A few decades ago, Italian researchers from the University of Parma discovered neurons in the monkey brain that were active not only when the monkey performed an action, but also when it watched the same action performed by someone else. This kind of neuron, or rather this particular neuronal behavior, was subsequently identified in humans, scattered mainly within the frontal and parietal cortices (the front and top of your head), and named the mirror neuron system (MNS). Its presumed role is to understand the intentions of others and thus facilitate learning. Mind you, there are, as there should be in any healthy, vigorous scientific endeavor, those who challenge this role and even the existence of the MNS.

Hobson & Bishop (2016) do not question the existence of the mirror neurons or their roles, but something else. You see, the proper understanding of the intentions, actions, and emotions of others is severely impaired in autism and some schizophrenias. Correspondingly, there have been reports that MNS function is abnormal in these disorders. So if we can manipulate the neurons that help us understand others, then we may be able to study them better and – who knows? – maybe even ‘switch them on’ and ‘off’ when needed (Ha! That’s a scary thought!).

Human EEG waves (from Wikipedia, under CC BY-SA 3.0 license)

Anyway, previous work held that recording a weak Mu rhythm over the brain regions containing mirror neurons shows that these neurons are active. This rhythm (between 8 and 13 Hz) is recorded through electroencephalography (EEG). The assumption is as follows: when resting, neurons fire synchronously; when busy, each fires on its own schedule, so they desynchronize, which leads to a reduction in Mu intensity.

All well and good, but there is a problem. There is another frequency band that overlaps with the Mu rhythm, namely the Alpha band. Alpha activity is highest when a person is awake with eyes closed, but diminishes when the person is drowsy or, importantly, when making a mental effort, like paying close attention to something. So, if I see a weak Mu/Alpha signal while the subject is watching someone grab a pencil, is that because the mirror neurons are active or because the subject is sleepy? There are a few gimmicks to disentangle the two, from setting up the experiment so that all conditions place the same attentional demands on the subject, to carefully localizing the origin of the two waves (Mu is said to arise from sensorimotor regions, whereas Alpha comes from more posterior regions).
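
To make those two tricks concrete, here is a toy sketch of a suppression index computed at a central (sensorimotor) electrode versus an occipital one. The electrode names, sampling rate, and log-ratio index are my assumptions about a typical pipeline, not the actual analysis of Hobson & Bishop (2016), and the “EEG” here is just random noise.

```python
# Toy sketch of a mu-suppression index plus the alpha-confound check: compare the 8-13 Hz
# power change at a central (sensorimotor) electrode with the same change at an occipital
# electrode. Electrode names, sampling rate, and the log-ratio index are assumptions about
# a typical pipeline, not the actual analysis of Hobson & Bishop (2016).
import numpy as np
from scipy.signal import welch

fs = 250                       # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)
# stand-ins for 4 s of EEG during a baseline and an action-observation condition
baseline_c3, observe_c3 = rng.standard_normal(1000), rng.standard_normal(1000)  # central (C3)
baseline_o1, observe_o1 = rng.standard_normal(1000), rng.standard_normal(1000)  # occipital (O1)

def band_power(signal, fs, lo=8.0, hi=13.0):
    """Average power in the 8-13 Hz (mu/alpha) band from Welch's periodogram."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

def suppression_index(task, baseline, fs):
    """Log ratio of task to baseline band power; negative values mean suppression."""
    return np.log(band_power(task, fs) / band_power(baseline, fs))

mu_c3    = suppression_index(observe_c3, baseline_c3, fs)   # the candidate mirror-neuron index
alpha_o1 = suppression_index(observe_o1, baseline_o1, fs)   # the attention/drowsiness confound
print(f"C3 mu suppression: {mu_c3:.2f}, O1 alpha suppression: {alpha_o1:.2f}")
```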

But Hobson & Bishop (2016) argue that this disentangling is more difficult than previously thought. They carried out a series of experiments in which they varied the baseline, so that some baselines were more attentionally demanding than others. After carefully analyzing various EEG waves and electrode positions across these conditions, they conclude that “mu suppression can be used to index the human MNS, but the effect is weak and unreliable and easily confounded with alpha suppression”.


What makes this paper interesting to me, besides its empirical findings, is the way the experiment was conducted and published. This is a true hypothesis-driven study, following the scientific method step by step, a credit to us all scientists. In other words, a rare gem. A lot of other papers try to make a pretty story out of crappy data, or weave a story around the results as if that’s what they were after all along, when in fact they did a bunch of stuff and chose whatever looked good on paper.

Let me explain. As a consequence of the incredible pressure put on researchers to publish or perish (which, believe me, is more than just a metaphor; your livelihood and career depend on it), there is an alarming increase in bad papers, which means

  • papers with inappropriate statistical analyses (p threshold curse, lack of multiple comparisons corrections, like the one brilliantly exposed here),
  • papers with huge databases in which some correlations are bound to appear by chance alone and are presented as meaningful (p-hacking or data fishing; the little simulation after this list shows how easily that happens),
  • papers without enough data to make a meaningful conclusion (lack of statistical power),
  • papers that report only good-looking results (only positive results required by journals),
  • papers that seek only to provide data to reinforce previously held beliefs (confirmation bias)
  • and so on.
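
And here is that quick simulation of the “huge database” problem: the data are pure noise, yet testing every pairwise correlation without correction still yields a respectable pile of “significant” findings. The variable counts and thresholds are arbitrary choices for illustration.

```python
# Simulate the 'huge database' problem: no real effects anywhere, yet many pairwise
# correlations come out 'significant' if tested without correction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects, n_variables = 100, 40
data = rng.standard_normal((n_subjects, n_variables))   # pure noise, no real effects

p_values = []
for i in range(n_variables):
    for j in range(i + 1, n_variables):
        _, p = stats.pearsonr(data[:, i], data[:, j])
        p_values.append(p)

p_values = np.array(p_values)
n_tests = len(p_values)   # 40 * 39 / 2 = 780 correlations
print(f"'significant' at p < .05, uncorrected: {np.sum(p_values < 0.05)} of {n_tests}")
print(f"surviving Bonferroni correction (p < .05/{n_tests}): {np.sum(p_values < 0.05 / n_tests)}")
```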

For these reasons (and more), there is a high rate of rejection of papers submitted to journals (about 90%), which means more than just a lack of publication in a good journal; it means wasted time, money, and resources, shattered career prospects for the grad students who did the experiments, and threatened job security for everybody involved, not to mention a promotion of distrust in science and a disservice to the scientific endeavor in general. So some journals, like Cortex, are moving toward a system called Registered Reports, which asks for the rationale and the plan of the experiment before it is conducted, and which should protect against many of the above-mentioned plagues. If the plan is approved, the chances of getting the results published in that journal are 90%.

This is one of those Registered Report papers. Good for you, Hobson & Bishop!

REFERENCE: Hobson HM & Bishop DVM (Epub April 2016). Mu suppression – A good measure of the human mirror neuron system?. Cortex, doi: 10.1016/j.cortex.2016.03.019 ARTICLE | FREE FULLTEXT PDF | RAW DATA

By Neuronicus, 14 July 2016

Not all children diagnosed with ADHD have attention deficits

Given the alarming increase in the diagnosis of attention deficit/hyperactivity disorder (ADHD) over the last 20 years, I thought it pertinent to feature today an older paper, from the year 2000.

Dopamine, one of the chemicals that neurons use to communicate, has been heavily implicated in ADHD. So heavily, in fact, that Ritalin, the main drug used for the treatment of ADHD, exerts its main effects by boosting the amount of dopamine in the brain.

Swanson et al. (2000) reasoned that people with a particular genetic abnormality that makes their dopamine receptors work less optimally may be more likely to have ADHD. The specialist reader may want to know that the genetic abnormality in question is the 7-repeat allele of a 48-bp variable number of tandem repeats in exon 3 of the dopamine receptor D4 gene, located on chromosome 11, whose expression results in a weaker dopamine receptor. We’ll call it DRD4,7-present, as opposed to DRD4,7-absent (i.e. people without this genetic abnormality).

They had access to 96 children diagnosed with ADHD according to the DSM-IV diagnostic criteria and 48 matched controls (children of the same gender, age, school affiliation, socio-economic status, etc., but without ADHD). About half of the children diagnosed with ADHD were DRD4,7-present.

The authors tested the children on 3 tasks:

(i) a color-word task to probe the executive function network linked to anterior cingulate brain regions and to conflict resolution;
(ii) a cued-detection task to probe the orienting and alerting networks linked to posterior parietal and frontal brain regions and to shifting and maintenance of attention; and
(iii) a go-change task to probe the alerting network (and the ability to initiate a series of rapid response in a choice reaction time task), as well as the executive network (and the ability to inhibit a response and re-engage to make another response) (p. 4756).

Invalidating the authors’ hypothesis, the results showed that the controls and the DRD4,7-present children performed similarly on these tasks, in contrast to the DRD4,7-absent children, who showed “clear abnormalities in performance on these neuropsychological tests of attention” (p. 4757).

This means two things:
1) Half of the children diagnosed with ADHD did not have an attention deficit.
2) These same children had the DRD4,7-present genetic abnormality, which has been previously linked with novelty seeking and risky behaviors. So it may be just possible that these children do not suffer from ADHD, but “may be easily bored in the absence of highly stimulating conditions, may show delay aversion and choose to avoid waiting, may have a style difference that is adaptive in some situations, and may benefit from high activity levels during childhood” (p. 4758).

Great paper and highly influential. The last author of the article (meaning the chief of the laboratory) is none other than Michael I. Posner, whose attentional networks, models, and tests feature in every psychology and neuroscience textbook. If he doesn’t know about attention, then I don’t know who does.

One of the reasons I chose this paper is that it seems to me a lot of teachers, nurses, social workers, and even pediatricians feel qualified to scare the living daylights out of parents by suggesting that their unruly child may have ADHD. In fairness to the above-mentioned professions, most of their members recognize their limits and tell the concerned parents to have the child tested by a qualified psychologist. And, unfortunately, even that may result in needlessly dosing your child with Ritalin, when the child’s propensity toward a sensation-seeking temperament and an extravert personality may instead require a different approach to learning, with a higher level of stimulation (after all, the children from the above study had been diagnosed by qualified people using their latest diagnostic manual).

Bottom line: beware of any psychologist or psychiatrist who does not employ a battery of attention tests when diagnosing your child with ADHD.


Reference: Swanson J, Oosterlaan J, Murias M, Schuck S, Flodman P, Spence MA, Wasdell M, Ding Y, Chi HC, Smith M, Mann M, Carlson C, Kennedy JL, Sergeant JA, Leung P, Zhang YP, Sadeh A, Chen C, Whalen CK, Babb KA, Moyzis R, & Posner MI. (25 April 2000). Attention deficit/hyperactivity disorder children with a 7-repeat allele of the dopamine receptor D4 gene have extreme behavior but normal performance on critical neuropsychological tests of attention. Proceedings of the National Academy of Sciences of the United States of America, 97(9):4754-4759. doi: 10.1073/pnas.080070897. Article | FREE FULLTEXT PDF

P.S. If you think “weeell, this research happened 16 years ago, surely something came out of it”, then think again. The newer DSM-5 criteria for diagnosis are likely to cause an increase in the prevalence of ADHD diagnoses.

By Neuronicus, 26 February 2016

“Stop” in the brain

Neural correlates of stopping. License: PD

A lot of neuroscience focuses on “what happens in the brain when/before/after the subject does/thinks/feels x“. A lot, but not all. So what happens in the brain when the subject is specifically told NOT to do/think/feel x?

Jha et al. (2015) used magnetoencephalography (MEG), a non-invasive method that records the magnetic fields produced by the electrical activity of neurons, to see what the brain does during several variants of the Stop-Signal Task. The task is very simple: a right- or left-pointing arrow appears on a screen, telling the subject to press a button on, correspondingly, their left or their right. In 50% of the trials, immediately after the arrow, a vertical red line appears, telling the subject to stop, i.e. not to press the button. The variants that the authors developed allowed them to modulate the context of the stopping signal, as well as assess the duration of the stopping process.
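
As an aside, the duration of the stopping process is usually estimated behaviorally as the stop-signal reaction time (SSRT), via the race model. Below is a hedged sketch of the common “mean method”; the numbers are invented, and this is not necessarily how Jha et al. (2015) derived their MEG-based measure of stopping duration.

```python
# Sketch of the 'mean method' estimate of stop-signal reaction time (SSRT).
# Numbers are invented; the stop-signal delays are assumed to have been staircased
# to roughly 50% successful stopping, which is what the mean method requires.
import numpy as np

go_rt = np.array([420, 450, 430, 470, 440, 460])  # reaction times on go trials, in ms (made up)
ssd   = np.array([180, 200, 220, 200, 180, 210])  # stop-signal delays, in ms (made up)

# Race-model logic: the stop process starts at the stop-signal delay and must finish
# before the go process does. With ~50% successful stopping, SSRT ~ mean go RT - mean SSD.
ssrt = go_rt.mean() - ssd.mean()
print(f"estimated stop-signal reaction time: {ssrt:.0f} ms")
```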

The main findings of the paper are the involvement of the right inferior frontal gyrus in the duration of stopping (meaning the time it takes to execute the Stop process) and of the pre-supplementary motor area in context manipulation (meaning the more complex the context, the more activity in this region). Curiously, the activation of the right, but not the left, inferior frontal gyrus was independent of the hand used for stopping.

Reference: Jha A, Nachev P, Barnes G, Husain M, Brown P, & Litvak V (Nov 2015, Epub 9 Mar 2015). The Frontal Control of Stopping. Cerebral Cortex, 25: 4392–4406. doi: 10.1093/cercor/bhv027. Article | FREE PDF

By Neuronicus, 2 November 2015

Is it what I like or what you like? I don’t know anymore…

The plasticity in medial prefrontal cortex (mPFC) underlies the changes in self preferences to match another’s, through learning. Modified from Fig. 2B from Garvert et al. (2015), which is an open access article under the CC BY license.

One obvious consequence of being a social mammal is that each individual wants to be accepted. Nobody likes rejection, be it from a family member, a friend or colleague, a job application, or even a stranger. So we try to mould our beliefs and behaviors to fit the social norms, a process called social conformity. But how does that happen?

Garvert et al. (2015) shed some light on the mechanism(s) underlying the malleability of personal preferences in response to information about other people’s preferences. Twenty-seven people had 48 chances to choose between gaining a small amount of money now or more money later, with “later” meaning from 1 day to 3 months later. Then the subjects were shown a partner’s choices, no strings attached, just so they knew them. Then they were made to choose again. Then they got into the fMRI, and there things got complicated, as the subjects had to choose as they themselves would choose, as their partner would choose, or as an unknown person would choose. I skipped a few steps; the procedure is complicated and the paper is full of cumbersome verbiage (e.g. “We designed a contrast that measured the change in repetition suppression between self and novel other from block 1 to block 3, controlled for by the change in repetition suppression between self and familiar other over the same blocks”, p. 422).
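
For those wondering how such now-versus-later choices get turned into a “preference” that can shift, the usual move is to fit each person a discount rate, most often with a hyperbolic model, V = A / (1 + kD). The sketch below is my illustrative take on that standard approach (made-up amounts, a softmax choice rule), not Garvert et al.’s actual fitted model.

```python
# Minimal sketch of hyperbolic discounting for 'smaller-sooner vs larger-later' choices.
# The discount parameter k is the kind of preference that could drift toward a partner's
# after learning. Amounts and the softmax rule are illustrative assumptions.
import numpy as np

def discounted_value(amount, delay_days, k):
    """Subjective value of `amount` delivered after `delay_days`, for discount rate k."""
    return amount / (1.0 + k * delay_days)

def p_choose_later(now_amount, later_amount, delay_days, k, temperature=1.0):
    """Softmax probability of picking the delayed option over the immediate one."""
    v_now = discounted_value(now_amount, 0, k)
    v_later = discounted_value(later_amount, delay_days, k)
    return 1.0 / (1.0 + np.exp(-(v_later - v_now) / temperature))

# A patient chooser (small k) vs a more impulsive one (large k), facing the same choice:
for label, k in [("patient, k=0.01", 0.01), ("impulsive, k=0.05", 0.05)]:
    p = p_choose_later(now_amount=10, later_amount=15, delay_days=30, k=k)
    print(f"{label}: P(take 15 in 30 days over 10 now) = {p:.2f}")
```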

Anyway, long story short, the behavioral results showed that the subjects tended to alter their preferences to match their partner’s (although they were not told to do so, it had no impact on their own monetary gain, there were no time constraints, and sometimes they were even told that the “partner” was a computer).

These behavioral changes were matched by changes in the activation pattern of the medial prefrontal cortex (mPFC), in the sense that learning the preferences of another person, which you can imagine as a specific neural pattern in your brain, changes the way your own preferences are encoded in that same neural pattern.

Reference: Garvert MM, Moutoussis M, Kurth-Nelson Z, Behrens TE, & Dolan RJ (21 January 2015). Learning-induced plasticity in medial prefrontal cortex predicts preference malleability. Neuron, 85(2):418-28. doi: 10.1016/j.neuron.2014.12.033. Article + FREE PDF

By Neuronicus, 11 October 2015

The FIRSTS: brain active before conscious intent (1983)

Actor Jim Carrey pretending to be attacked by his own hand in the movie Liar Liar. Dir. Tom Shadyac. Universal Pictures, 1997.

Free will. And with these two words I just opened a can of worms, didn’t I? Modern neuroscience has poked its fingers at the eternal problem of whether humans have free will or not, usually with the help of fMRI and, more recently, by trying (and succeeding) to manipulate it with rTMS. But before these fancy techniques, there was the old-fashioned EEG.

In 1983, Libet et al. had 5 subjects sit comfortably in a chair and watch a clock. The subjects were instructed to move their right hand whenever they wanted AND to remember the position of the clock hand at the moment they felt the urge to move. During the experiments, the subjects wore electrodes on the scalp that measured their cortical activity and electrodes on their hand that measured muscle activity.

The brain activity began roughly half a second to a full second before the hand movement (the earlier onsets occurring when the movement was preplanned), and Libet et al. called this activity the “readiness potential”. The reported time of the conscious urge to move preceded the muscle activity by only about 200 milliseconds, which means the readiness potential preceded the conscious urge by several hundred milliseconds. In other words, the brain starts preparing the movement well before you become aware of wanting to move your hand. “Brain activity therefore causes conscious intention rather than the other way around: there is no ‘ghost in the machine’.” (Haggard, 2008).
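
For concreteness, here is the timing arithmetic, using the commonly cited approximate averages for spontaneous acts; the exact values vary across subjects and reports.

```python
# Timing arithmetic of Libet's result, with commonly cited approximate averages for
# spontaneous (non-preplanned) acts; exact values vary across subjects and reports.
rp_onset  = -550   # readiness potential begins, in ms relative to muscle (EMG) onset
w_report  = -200   # reported time of the conscious urge to move, relative to EMG onset
emg_onset =    0   # the hand actually starts to move

print(f"brain activity precedes the conscious urge by {w_report - rp_onset} ms")  # 350 ms
print(f"the conscious urge precedes the movement by {emg_onset - w_report} ms")   # 200 ms
```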

Reference: Libet B, Gleason CA, Wright EW, Pearl DK. (September 1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential). The unconscious initiation of a freely voluntary act. Brain, 106 (Pt 3):623-42. DOI: dx.doi.org/10.1093/brain/106.3.623. Article

By Neuronicus, 10 October 2015