No Link Between Mass Shootings & Mental Illness

On Valentine's Day another horrifying school mass shooting happened in the USA, leaving 17 people dead. Just like after the other mass shootings, a lot of people – from media to bystanders, from gun lovers to gun critics, from parents to grandparents, from police to politicians – talk about the link between mental illness and mass shootings. As one with advanced degrees in both psychology and neuroscience, I am tired of explaining over and over again that there is no significant link between the two! Mass shootings happen because an angry person has had enough sorrow, stress, rejection and/or disappointment and HAS ACCESS TO A MASS KILLING WEAPON. Yeah, I needed the caps. Sometimes scientists too need to shout to be heard.

So here is the abstract of a book chapter called straightforwardly “Mass Shootings and Mental Illness”. The entire text is available at the links in the reference below.

From Knoll & Annas (2015):

“Common Misperceptions

  • Mass shootings by people with serious mental illness represent the most significant relationship between gun violence and mental illness.
  • People with serious mental illness should be considered dangerous.
  • Gun laws focusing on people with mental illness or with a psychiatric diagnosis can effectively prevent mass shootings.
  • Gun laws focusing on people with mental illness or a psychiatric diagnosis are reasonable, even if they add to the stigma already associated with mental illness.

Evidence-Based Facts

  • Mass shootings by people with serious mental illness represent less than 1% of all yearly gun-related homicides. In contrast, deaths by suicide using firearms account for the majority of yearly gun-related deaths.
  • The overall contribution of people with serious mental illness to violent crimes is only about 3%. When these crimes are examined in detail, an even smaller percentage of them are found to involve firearms.
  • Laws intended to reduce gun violence that focus on a population representing less than 3% of all gun violence will be extremely low yield, ineffective, and wasteful of scarce resources. Perpetrators of mass shootings are unlikely to have a history of involuntary psychiatric hospitalization. Thus, databases intended to restrict access to guns and established by gun laws that broadly target people with mental illness will not capture this group of individuals.
  • Gun restriction laws focusing on people with mental illness perpetuate the myth that mental illness leads to violence, as well as the misperception that gun violence and mental illness are strongly linked. Stigma represents a major barrier to access and treatment of mental illness, which in turn increases the public health burden”.

REFERENCE: Knoll, James L. & Annas, George D. (2015). Mass Shootings and Mental Illness. In book: Gun Violence and Mental Illness, Edition: 1st, Chapter: 4, Publisher: American Psychiatric Publishing, Editors: Liza H. Gold, Robert I. Simon. ISBN-10: 1585624985, ISBN-13: 978-1585624980. FULLTEXT PDF via ResearchGate | FULLTEXT PDF via Psychiatry Online

The book chapter is not a peer-reviewed document, even though both authors are Professors of Psychiatry. To quiet putative voices raising concerns about that, here is a peer-reviewed, open-access paper that says basically the same thing:

Swanson et al. (2015) looked at large-scale data (thousands to tens of thousands of individuals) to see if there is any relationship between violence, gun violence, and mental illness. They concluded that "epidemiologic studies show that the large majority of people with serious mental illnesses are never violent. However, mental illness is strongly associated with increased risk of suicide, which accounts for over half of US firearms–related fatalities". The last sentence is reminiscent of the finding that stricter gun control laws lower suicide rates.

REFERENCE: Swanson JW, McGinty EE, Fazel S, Mays VM (May 2015). Mental illness and reduction of gun violence and suicide: bringing epidemiologic research to policy. Annals of Epidemiology, 25(5): 366–376. doi: 10.1016/j.annepidem.2014.03.004, PMCID: PMC4211925. FULLTEXT | FULLTEXT PDF.

Further peer-reviewed bibliography (links to fulltext pdfs):

  1. Guns, anger, and mental disorders: Results from the National Comorbidity Survey Replication (NCS-R): “a large number of individuals in the United States have anger traits and also possess firearms at home (10.4%) or carry guns outside the home (1.6%).”
  2. News Media Framing of Serious Mental Illness and Gun Violence in the United States, 1997-2012: “most news coverage occurred in the wake of mass shootings, and “dangerous people” with serious mental illness were more likely than “dangerous weapons” to be mentioned as a cause of gun violence.”
  3. The Link Between Mental Illness and Firearm Violence: Implications for Social Policy and Clinical Practice: “Firearm violence is a significant and preventable public health crisis. Mental illness is a weak risk factor for violence despite popular misconceptions reflected in the media and policy”.
  4. Using Research Evidence to Reframe the Policy Debate Around Mental Illness and Guns: Process and Recommendations: “restricting firearm access on the basis of certain dangerous behaviors is supported by the evidence; restricting access on the basis of mental illness diagnoses is not”.
  5. Mental Illness, Mass Shootings, and the Politics of American Firearms: “notions of mental illness that emerge in relation to mass shootings frequently reflect larger cultural stereotypes and anxieties about matters such as race/ethnicity, social class, and politics. These issues become obscured when mass shootings come to stand in for all gun crime, and when “mentally ill” ceases to be a medical designation and becomes a sign of violent threat”.


By Neuronicus, 25 February 2018

The FIRSTS: the Dunning–Kruger effect (1999) or the unskilled-and-unaware phenomenon

Much talked about these days in the media, the unskilled-and-unaware phenomenon has been mused upon since, as they say, time immemorial, but was not seriously investigated until the '80s. The phenomenon refers to the observation that incompetents overestimate their competence, whereas the competent tend to underestimate their skill (see Bertrand Russell's brilliant summary of it).


Although the phenomenon has gained popularity under the name of the "Dunning–Kruger effect", it is my understanding that whereas the phenomenon refers to the above-mentioned observation, the effect refers to the cause of the phenomenon, namely that the very skills required to make one proficient in a domain are the same skills that allow one to judge proficiency. In the words of Kruger & Dunning (1999),

“those with limited knowledge in a domain suffer a dual burden: Not only do they reach mistaken conclusions and make regrettable errors, but their incompetence robs them of the ability to realize it” (p. 1132).

Today’s paper on the Dunning–Kruger effect is the third in the cognitive biases series (the first was on depressive realism and the second on the superiority illusion).

Kruger & Dunning (1999) took a look at incompetence with the eyes of well-trained psychologists. As usual, let’s start by defining the terms so we are on the same page. The authors tell us, albeit in a footnote on p. 1122, that:

1) incompetence is a “matter of degree and not one of absolutes. There is no categorical bright line that separates ‘competent’ individuals from ‘incompetent’ ones. Thus, when we speak of ‘incompetent’ individuals we mean people who are less competent than their peers”.

and 2) The study is on domain-specific incompetents. “We make no claim that they would be incompetent in any other domains, although many a colleague has pulled us aside to tell us a tale of a person they know who is ‘domain-general’ incompetent. Those people may exist, but they are not the focus of this research”.

That being clarified, the authors chose 3 domains where they believe “knowledge, wisdom, or savvy was crucial: humor, logical reasoning, and English grammar” (p.1122). I know that you, just like me, can hardly wait to see how they assessed humor. Hold your horses, we’ll get there.

The subjects were psychology students, the ubiquitous guinea pigs of most psychology studies since the discipline started to be taught in universities. Some people in the field even declaim with more or less pathos that most psychological findings do not necessarily apply to the general population; instead, they are restricted to the self-selected group of undergrad psych majors. Just as biologists know far more about the mouse genome and its maladies than about humans', so do psychologists know more about the inner workings of the psychology undergrad's mind than about, say, the average stay-at-home mom's. But I digress, as usual.

The humor was assessed thusly: students were asked to rate on a scale from 1 to 11 the funniness of 30 jokes. Said jokes were previously rated by 8 professional comedians, and that provided the reference scale. "Afterward, participants compared their 'ability to recognize what's funny' with that of the average Cornell student by providing a percentile ranking. In this and in all subsequent studies, we explained that percentile rankings could range from 0 (I'm at the very bottom) to 50 (I'm exactly average) to 99 (I'm at the very top)" (p. 1123). Since the social ability to identify humor may be less rigorously amenable to quantification (despite the comedians' input, which did not achieve high interrater reliability anyway), the authors chose a task that requires more intellectual muscle: logical reasoning, tested with 20 logical problems taken from a Law School Admission Test. Afterward the students estimated their general logical ability compared to their classmates, as well as their test performance. Finally, another batch of students answered 20 grammar questions taken from the National Teacher Examination preparation guide.

In all three tasks,

  • Everybody thought they were above average, showing the superiority illusion.
  • But the people in the bottom quartile (the lowest 25%), dubbed incompetents (or unskilled), overestimated their abilities the most, by approx. 50%. They were also unaware that, in fact, they scored the lowest.
  • In contrast, people in the top quartile underestimated their competence, but not by the same degree as the bottom quartile, only by about 10%–15% (see Fig. 1).


I wish the paper showed scatter-plots with a fitted regression line instead of the quartile graphs without error bars, so I could judge the data for myself. I mean, everybody thought they were above average? Not a single one out of more than three hundred students thought they were kinda… meh? The authors did not find any gender differences in any of the experiments.
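
For readers who want to see what such a quartile summary hides, here is a minimal simulated sketch (my own toy numbers, not the paper's data) of how self-estimated percentiles can sit almost flat and inflated across the whole range of actual performance, which is exactly the pattern the quartile means in Fig. 1 compress:

```python
# Toy simulation of the Kruger & Dunning quartile summary; all numbers are
# made up for illustration and do not reproduce the paper's data.
import numpy as np

rng = np.random.default_rng(0)
n = 320                                    # roughly the scale of their samples
actual = rng.normal(0, 1, n)               # latent test performance
# self-estimates: weakly tied to actual skill, inflated on average
perceived_pct = np.clip(55 + 8 * actual + rng.normal(0, 18, n), 0, 99)

# convert actual performance to percentile ranks on the paper's 0-99 scale
actual_pct = 99 * np.argsort(np.argsort(actual)) / (n - 1)

# the quartile summary, i.e. the format the paper reports
quartile = np.digitize(actual_pct, np.percentile(actual_pct, [25, 50, 75]))
for q in range(4):
    m = quartile == q
    print(f"Q{q + 1}: actual {actual_pct[m].mean():5.1f}, "
          f"perceived {perceived_pct[m].mean():5.1f}")

# the regression line that the quartile plot hides
slope, intercept = np.polyfit(actual_pct, perceived_pct, 1)
print(f"perceived ≈ {intercept:.1f} + {slope:.2f} * actual")
```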

Next, the authors tested the hypothesis about the unskilled that "the same incompetence that leads them to make wrong choices also deprives them of the savvy necessary to recognize competence, be it their own or anyone else's" (p. 1126). They did that by having both the competents and the incompetents see the answers that their peers gave on the tests. Indeed, the incompetents not only failed to recognize competence, but they continued to believe they performed very well in the face of contrary evidence. In contrast, the competents adjusted their ratings after seeing their peers' performance, so they did not underestimate themselves anymore. In other words, the competents learned from seeing others' mistakes, but the incompetents did not.

Based on these data, Kruger & Dunning (1999) argue that the incompetents are so because they lack the skills to recognize competence and error in themselves or in others (jargon: lack of metacognitive skills), whereas the competents underestimate themselves because they assume everybody else does as well as they did; when shown the evidence that other people performed poorly, they become accurate in their self-evaluations (jargon: the false consensus effect, a.k.a. the social-projection error).

So, the obvious implication is: if incompetents learn to recognize competence, does that also translate into them becoming more competent? The last experiment in the paper attempted to answer just that. The authors had 70 students complete a short (10-minute) logical reasoning training session while another 70 students did something unrelated for 10 minutes. The data showed that the trained students not only improved their self-assessments (still showing the superiority illusion, though), but they also improved their performance. Yays all around, all is not lost, there is hope left in the world!

This is an extremely easy read. I totally recommend it to non-specialists. Compare Kruger & Dunning (1999) with Pennycook et al. (2017): they both talk about the same subject and they both are redoubtable personages in their fields. But while the former is a pleasant leisurely read, the latter lacks mundane operationalizations and requires serious familiarization with the literature and its jargon.

Since Kruger & Dunning (1999) is behind the paywall of the infamous APA website (infamous because they don't even let you see the abstract, and even with institutional access it is difficult to extract the papers out of them, as if they own the darn things!), write to me specifying that you need it for educational purposes and promising not to distribute it for financial gain, and thou shalt have its .pdf. As always. Do not, under any circumstance, use a sci-hub server to obtain this paper illegally! Actually, follow me on Twitter @Neuronicus to find out exactly which servers to avoid.

REFERENCE: Kruger J, & Dunning D. (Dec. 1999). Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6):1121-1134. PMID: 10626367. ARTICLE

P.S. I personally liked this example from the paper for illustrating what lack of metacognitive skills means:

“The skills that enable one to construct a grammatical sentence are the same skills necessary to recognize a grammatical sentence, and thus are the same skills necessary to determine if a grammatical mistake has been made. In short, the same knowledge that underlies the ability to produce correct judgment is also the knowledge that underlies the ability to recognize correct judgment. To lack the former is to be deficient in the latter” (p. 1121-1122).

By Neuronicus, 10 January 2018

The superiority illusion

Following up on my promise to cover a few papers about self-deception, the second in the series is about the superiority illusion, another cognitive bias (the first was about depressive realism).

Yamada et al. (2013) sought to uncover the origins of the ubiquitous belief that oneself is "superior to average people along various dimensions, such as intelligence, cognitive ability, and possession of desirable traits" (p. 4363). The sad statistical truth is that MOST people are average; that's the whole definition of 'average', really… But most people think they are superior to others, a.k.a. the 'above-average effect'.

Twenty-four young males underwent resting-state fMRI and PET scanning. The first scanner is of the magnetic resonance type and tracks where you have most of the blood going in the brain at any particular moment. More blood flow to a region is interpreted as that region being active at that moment.

The word 'functional' means that the subject is performing a task while in the scanner and the resultant brain image corresponds to what the brain is doing at that particular moment in time. On the other hand, 'resting-state' means that the individual did not do any task in the scanner; s/he just sat nice and still on the warm pads, listening to the various clicks, clacks, bangs & beeps of the scanner. The subjects were instructed to rest with their eyes open. Good instruction, given that many subjects fall asleep in resting-state MRI studies, even in the terrible racket that the coils make, which can sometimes reach 125 dB. Let me explain: an MRI is a machine that generates a huge magnetic field (60,000 times stronger than Earth's!) by shooting rapid pulses of electricity through a coiled wire, called a gradient coil. These pulses of electricity or, in other words, the rapid on-off switchings of the electrical current, make the gradient coil vibrate very loudly.
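
For the curious, the "60,000 times" figure checks out if we assume a standard 3-tesla research scanner and a typical mid-latitude geomagnetic field of about 50 microtesla (the real value varies roughly between 25 and 65 µT):

```python
# Back-of-the-envelope check of the "60,000 times stronger than Earth's" claim.
# Both values are assumptions: a 3 T scanner and a ~50 µT geomagnetic field.
earth_field_tesla = 50e-6
scanner_field_tesla = 3.0
print(scanner_field_tesla / earth_field_tesla)   # 60000.0
```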

A PET scanner functions on a different principle. The subject receives a shot of a radioactive substance (called tracer) and the machine tracks its movement through the subject’s body. In this experiment’s case, the tracer was raclopride, a D2 dopamine receptor antagonist.

The behavioral data (meaning the answers to the questionnaires) showed that, curiously, the superiority illusion belief was not correlated with anxiety or self-esteem scores, but, not curiously, it was negatively correlated with helplessness, a measure of depression. Makes sense, especially from the view of depressive realism.

The imaging data suggest that dopamine binding to its striatal D2 receptors attenuates the functional connectivity between the left sensorimotor striatum (SMST, a.k.a. postcommissural putamen) and the dorsal anterior cingulate cortex (dACC), and that this state of affairs gives rise to the superiority illusion (see Fig. 1).

Fig. 1. The superiority illusion arises from the suppression of the dorsal anterior cingulate cortex (dACC) – putamen functional connection by the dopamine coming from the substantia nigra/ventral tegmental area complex (SN/VTA) and binding to its D2 striatal receptors. Credits: brain diagram: Wikipedia; other brain structures and connections: Neuronicus; data: Yamada et al. (2013, doi: 10.1073/pnas.1221681110). Overall: Public Domain.
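
To make "functional connectivity" a bit more concrete: at its simplest, it is just the correlation between the resting-state BOLD time series of two regions of interest. Here is a minimal sketch with simulated time series; the ROI names follow the paper, but nothing below reproduces the authors' actual preprocessing, nuisance regression, or PET modeling:

```python
# Minimal illustration of resting-state functional connectivity between two
# regions of interest (ROIs): the Pearson correlation of their BOLD time
# series. The data are simulated placeholders, not Yamada et al.'s.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints = 240                      # e.g., a ~10-minute scan at TR = 2.5 s (assumed)

shared = rng.normal(size=n_timepoints)  # fluctuation common to both regions
smst = shared + rng.normal(scale=1.0, size=n_timepoints)   # left sensorimotor striatum ROI
dacc = shared + rng.normal(scale=1.0, size=n_timepoints)   # dorsal anterior cingulate ROI

connectivity = np.corrcoef(smst, dacc)[0, 1]
print(f"SMST-dACC functional connectivity r = {connectivity:.2f}")

# The paper's claim is then a between-subject one: the higher a subject's
# striatal D2 binding (raclopride PET), the lower this r, and the lower the r,
# the stronger the superiority illusion.
```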

This was a frustrating paper. I cannot tell if it has methodological issues or is just poorly written. For instance, I have to assume that the dACC they're talking about is bilateral and not ipsilateral to their SMST, meaning left. As a non-native English speaker myself, I guess I should cut the authors a break for consistently misspelling 'commissure' or for other grammatical errors, for fear of being accused of hypocrisy, but here you have it: it bugged me. Besides, mine is a blog and theirs is a published peer-reviewed paper. (Full disclosure: I do get editorial help from native English speakers when I publish for real and, except for a few personal style quirks, I fully incorporate their suggestions.) So a little editorial help would have gone a long way toward making the reading more pleasant. What else? Ah, the results are not clearly explained anywhere; it looks like the authors rely on obviousness, a bad move if you want to be understood by people slightly outside your field. From the first figure it looks like only 22 subjects out of 24 showed the superiority illusion, but the authors included 24 in the imaging analyses, or so it seems. The subjects were 23.5 ± 4.4 years old, meaning that not all subjects had the frontal regions of the brain fully developed: there are clear anatomical and functional differences between a 19-year-old and a 27-year-old.

I’m not saying it is a bad paper because I have covered bad papers; I’m saying it was frustrating to read it and it took me a while to figure out some things. Honestly, I shouldn’t even have covered it, but I spent some precious time going through it and its supplementals, what with me not being an imaging dude, so I said the hell with it, I’ll finish it; so here you have it :).

By Neuronicus, 13 December 2017

REFERENCE: Yamada M, Uddin LQ, Takahashi H, Kimura Y, Takahata K, Kousa R, Ikoma Y, Eguchi Y, Takano H, Ito H, Higuchi M, Suhara T (12 Mar 2013). Superiority illusion arises from resting-state brain networks modulated by dopamine. Proceedings of the National Academy of Sciences of the United States of America, 110(11):4363-4367. doi: 10.1073/pnas.1221681110. ARTICLE | FREE FULLTEXT PDF 

The FIRSTS: The roots of depressive realism (1979)

There is a rumor stating that depressed people see the world more realistically and the rest of us are – to put it bluntly – deluded optimists. A friend of mine asked me if this is true. It took me a while to find the origins of this claim, but after I found it and figured out that the literature has a term for the phenomenon ('depressive realism'), I realized that there is a whole plethora of studies on the subject. So the next few posts will be centered, more or less, on the idea of self-deception.

It was 1979 when Alloy & Abramson published a paper whose title contained the phrase 'Sadder but Wiser', even if it was followed by a question mark. The experiments they conducted are simple, but the theoretical implications are large.

The authors divided several dozen male and female undergraduate students into a depressed group and a non-depressed group based on their Beck Depression Inventory scores (a widely used and validated questionnaire for self-assessing depression). Each subject "made one of two possible responses (pressing a button or not pressing a button) and received one of two possible outcomes (a green light or no green light)" (p. 447). Various conditions presented the subjects with various degrees of control over what the button does, from 0 to 100%. After the experiments, the subjects were asked to estimate their control over the green light, how many times the light came on regardless of their behavior, what percentage of trials the green light came on when they pressed or didn't press the button, respectively, and how they felt. In some experiments, the subjects were winning or losing money when the green light came on.
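
A note on what "degree of control" means quantitatively. The conventional index in this literature is ΔP, the probability of the outcome given a response minus the probability of the outcome given no response; I believe this is essentially what Alloy & Abramson manipulated, but treat the sketch below, with its made-up trial counts, as an illustration of the metric rather than their exact procedure:

```python
# Contingency (ΔP) between a response and an outcome:
#   ΔP = P(light | press) - P(light | no press)
# The trial counts below are placeholders, not Alloy & Abramson's data.

def delta_p(light_given_press, no_light_given_press,
            light_given_no_press, no_light_given_no_press):
    p_light_press = light_given_press / (light_given_press + no_light_given_press)
    p_light_no_press = light_given_no_press / (light_given_no_press + no_light_given_no_press)
    return p_light_press - p_light_no_press

# 75% control: pressing makes the light much more likely
print(delta_p(30, 10, 0, 40))    # 0.75
# 0% control: the light comes on half the time no matter what you do
print(delta_p(20, 20, 20, 20))   # 0.0
```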

Verbatim, the findings were that:

“Depressed students’ judgments of contingency were surprisingly accurate in all four experiments. Nondepressed students, on the other hand, overestimated the degree of contingency between their responses and outcomes when noncontingent outcomes were frequent and/or desired and underestimated the degree of contingency when contingent outcomes were undesired” (p. 441).

In plain English, it means that if you are not depressed, when you have some control and bad things are happening, you believe you have no control. And when you have no control but good things are happening, then you believe you have control. If you are depressed, it does not matter, you judge your level of control accurately, regardless of the valence of the outcome.

Such an illusion of control is a defensive mechanism that surely must have adaptive value by, for example, allowing the non-depressed to bypass a sense of guilt when things don't work out and to increase self-esteem when they do. This is fascinating, particularly since it is corroborated by findings that people receiving gambling wins or life successes like landing a good job, rewards that at least in one case are demonstrably attributable to chance, believe, nonetheless, that they are due to some personal attributes that make them special, that make them deserving of such rewards. (I don't remember the reference for this one, so don't quote me on it. If I find it, I'll post it; it's something about self-entitlement, I think.) That is not to say that life successes are not largely attributable to the individual; they are. But, statistically speaking, there must be some that are due to chance alone, and yet most people feel like they are the direct agents for changes in luck.

Another interesting point is that Alloy & Abramson also tried to figure out how exactly their subjects reasoned when they asserted their level of control, through some clever post-experiment questionnaires. Long story short (the paper is 45 pages long), the illusion of control shown by non-depressed subjects in the no-control condition was the result of incorrect logic, that is, faulty reasoning.

In summary, the distilled-down version of depressive realism, that non-depressed people see the world through rose-colored glasses, is slightly incorrect, because the illusion of control applies only in particular conditions: overestimation of control when good things are happening and underestimation of control when bad things are happening. But, by and large, it does seem that depression clears the fog a bit.

Of course, it has been almost 40 years since the publication of this paper and of course it has its flaws. Many replications and replications-with-caveats and meta-analyses and reviews and opinions and alternative hypotheses have been confirmed and disconfirmed and then confirmed again with alterations, so there is still a debate out there about the causes/functions/ubiquity/circumstantiality of the depressive realism effect. One thing seems to be constant though: the effect exists.

I will leave you with the ponderings of Alloy & Abramson (1979):

“A crucial question is whether depression itself leads people to be “realistic” or whether realistic people are more vulnerable to depression than other people” (p. 480).


REFERENCE: Alloy LB, & Abramson LY (Dec. 1979). Judgment of contingency in depressed and nondepressed students: sadder but wiser? Journal of Experimental Psychology: General, 108(4): 441-485. PMID: 528910. ARTICLE | FULLTEXT PDF via ResearchGate

By Neuronicus, 30 November 2017

Play-based or academic-intensive?

The title of today's post wouldn't make any sense for anybody who isn't a preschooler's parent or teacher in the USA. You see, on the west side of the Atlantic there is a debate on whether a play-based curriculum for a preschool is more advantageous than a more academic-based one. Preschool age is 3 to 4 years; kindergarten starts at 5.

So what does academia even look like for someone who hasn't yet mastered the skill of wiping their own behind? I'm glad you asked. Roughly, an academic preschool program is one that emphasizes math concepts and early literacy, whereas a play-based program focuses less or not at all on these activities; instead, the children are allowed to play together in big or small groups or separately. The first kind of program has been linked with stronger cognitive benefits, while the latter with nurturing social development. The supporters of each program accuse the other of neglecting one or the other aspect of the child's development, namely cognitive or social.

The paper that I am covering today says that it “does not speak to the wider debate over learning-through-play or the direct instruction of young children. We do directly test whether greater classroom time spent on academic-oriented activities yield gains in both developmental domains” (Fuller et al., 2017, p. 2). I’ll let you be the judge.

Fuller et al. (2017) assessed the cognitive and social benefits of different programs in an impressive cohort of over 6,000 preschoolers. The authors looked at many variables:

  • children who attended any form of preschool and children who stayed home;
  • children who received more preschool education (high dosage, defined as >20 hours/week) and less preschool education (low dosage, defined as <20 hours/week);
  • children who attended academic-oriented preschools (spending time on each of the following tasks at least 3 to 4 times a week: letter names, writing, phonics, and counting manipulatives) and children who attended non-academic preschools.

The authors employed a battery of tests to assess the children's preliteracy skills, math skills, and social-emotional status (i.e., the outcome variables). And then they conducted a lot of statistical analyses, in the true spirit of well-trained psychologists.
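
To give a flavor of what "a lot of statistical analyses" looks like in its most stripped-down form, here is a toy regression of an outcome score on preschool-exposure variables. Everything in it, the variable names, the simulated data, the model, is a hypothetical sketch, not Fuller et al.'s actual (and much richer) models:

```python
# Bare-bones sketch of regressing a child outcome on preschool exposure
# variables. Column names and data are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "attended_preschool": rng.integers(0, 2, n),   # any preschool vs. stayed home
    "high_dosage": rng.integers(0, 2, n),          # >20 hours/week
    "academic_oriented": rng.integers(0, 2, n),    # academic vs. non-academic program
    "age_months": rng.integers(48, 61, n),
})
# simulated outcome with small positive effects, just so the code runs end to end
df["math_score"] = (
    50 + 3 * df.attended_preschool + 2 * df.high_dosage
    + 4 * df.academic_oriented + 0.1 * df.age_months + rng.normal(0, 10, n)
)

model = smf.ols(
    "math_score ~ attended_preschool + high_dosage + academic_oriented + age_months",
    data=df,
).fit()
print(model.params)
```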

The main findings were:

1) "Preschool exposure [of any form] has a significant positive effect on children's math and preliteracy scores" (p. 6).

2) The earlier the child entered preschool, the stronger the cognitive benefits.

3) Children attending high-dose academic-oriented preschools displayed greater cognitive proficiencies than all the other children (for the actual numbers, see Table 7, pg. 9).

4) “Academic-oriented preschool yields benefits that persist into the kindergarten year, and at notably higher magnitudes than previously detected” (p. 10).

5) Children attending academic-oriented preschools displayed no social development disadvantages compared with children who attended low- or non-academic preschool programs. Nor did the non-academic-oriented preschools show an improvement in social development (except for Latino children).

Now, do you think that Fuller et al. (2017) gave you any more information in the play-versus-academic debate, given that their "findings show that greater time spent on academic content – focused on oral language, preliteracy skills, and math concepts – contributes to the early learning of the average child at magnitudes higher than previously estimated" (p. 10)? And remember that they did not find any significant social advantages or disadvantages for any type of preschool.

I realize (or hope, rather) that most pre-k teachers are not the Draconian thou-shall-not-play-do-worksheets type, nor are they the let-kids-play-for-three-hours-while-the-adults-gossip-in-a-corner types. Most are probably combining elements of learning-through-play and directed-instruction in their programs. Nevertheless, there are (still) programs and pre-k teachers that clearly state that they employ play-based or academic-based programs, emphasizing the benefits of one while vilifying the other. But – surprise, surprise! – you can do both. And, it turns out, a little academia goes a long way.


So, next time you choose a preschool for your kid, go with the data, not with what your mommy/daddy gut instinct says, and certainly be very wary of preschool officials who, when you ask them for data to support their curriculum choice, tell you that that's their 'philosophy' and they don't need data. Because, boy oh boy, I know what philosophy means and it ain't that.

By Neuronicus, 12 October 2017

Reference: Fuller B, Bein E, Bridges M, Kim, Y, & Rabe-Hesketh, S. (Sept. 2017). Do academic preschools yield stronger benefits? Cognitive emphasis, dosage, and early learning. Journal of Applied Developmental Psychology, 52: 1-11, doi: 10.1016/j.appdev.2017.05.001. ARTICLE | New York Times cover | Reading Rockets cover (offers a fulltext pdf) | Good cover and interview with the first author on

Aging and its 11 hippocampal genes

Aging is being quite extensively studied these days and here is another advance in the field. Pardo et al. (2017) looked at what happens in the hippocampus of 2-month-old (young) and 28-month-old (old) female rats. The hippocampus is a seahorse-shaped structure no more than 7 cm in length and 4 g in weight, situated at the level of your temples, deep in the brain, and absolutely necessary for memory.

First the researchers tested the rats in a classical maze test (Barnes maze) designed to assess their spatial memory performance. Not surprisingly, the old performed worse than the young.

Then, they dissected the hippocampi and looked at neurogenesis and they saw that the young rats had more newborn neurons than the old. Also, the old rats had more reactive microglia, a sign of inflammation. Microglia are small cells in the brain that are not neurons but serve very important functions.

After that, the researchers looked at the hippocampal transcriptome, meaning they looked at what proteins are being expressed there (I know, transcription is not translation, but the general assumption of transcriptome studies is that the amount of protein X corresponds to the amount of RNA X). They found 210 genes that were differentially expressed in the old: 81 were upregulated and 129 were downregulated. Most of these genes are to be found in humans too, 170 to be exact.

But after also looking at male-versus-female data and at human and mouse aging data, the authors came up with 11 genes that are deregulated (7 up- and 4 down-) in the aging hippocampus, regardless of species or gender. These genes are involved in the immune response to inflammation. In more detail, the immune system activates microglia, which stay activated, and this "prolonged microglial activation leads to the release of pro-inflammatory cytokines that exacerbate neuroinflammation, contributing to neuronal loss and impairment of cognitive function" (p. 17). Moreover, these 11 genes have been associated with neurodegenerative diseases and brain cancers.


These are the 11 genes: C3 (up), Cd74 (up), Cd4 (up), Gpr183 (up), Clec7a (up), Gpr34 (down), Gapt (down), Itgam (down), Itgb2 (up), Tyrobp (up), Pld4 (down). "Up" and "down" indicate the direction of deregulation: upregulation or downregulation.
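
If you ever want to check your own expression data against this signature, the gene-to-direction mapping is small enough to encode by hand. The gene names and directions below come straight from the list above; the "my_log2fc" table is a made-up placeholder:

```python
# The 11-gene aging signature from the post, as gene -> direction of change in
# the aged hippocampus. The fold-changes below are hypothetical example data.
signature = {
    "C3": "up", "Cd74": "up", "Cd4": "up", "Gpr183": "up", "Clec7a": "up",
    "Itgb2": "up", "Tyrobp": "up",
    "Gpr34": "down", "Gapt": "down", "Itgam": "down", "Pld4": "down",
}

# hypothetical log2 fold-changes (old vs. young) from one's own dataset
my_log2fc = {"C3": 1.8, "Cd74": 2.1, "Gpr34": -0.9, "Pld4": -1.2, "Cd4": 0.3}

for gene, direction in signature.items():
    if gene not in my_log2fc:
        continue
    observed = "up" if my_log2fc[gene] > 0 else "down"
    verdict = "matches" if observed == direction else "does NOT match"
    print(f"{gene}: expected {direction}, observed {observed} -> {verdict}")
```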

I wish the gene list above were stated as explicitly in the paper as I wrote it here, so I didn't have to comb through their supplemental Excel files to figure it out. Other than that, good paper, good work. It gets us closer to unraveling and maybe undoing some of the burdens of aging because, as the actress Bette Davis said, "growing old isn't for sissies".

Reference: Pardo J, Abba MC, Lacunza E, Francelle L, Morel GR, Outeiro TF, Goya RG. (13 Jan 2017, Epub ahead of print). Identification of a conserved gene signature associated with an exacerbated inflammatory environment in the hippocampus of aging rats. Hippocampus, doi: 10.1002/hipo.22703. ARTICLE

By Neuronicus, 25 January 2017



Soccer and brain jiggling

It is no news or surprise that strong hits to the head produce transient or permanent brain damage. But what about mild hits produced by light objects like, say, a volleyball or a soccer ball?

During a game of soccer, a player is allowed to touch the ball with any part of his/her body minus the hands. Therefore, hitting the ball with the head, a.k.a. soccer heading, is a legal move, and goals scored with such a move are thought to be most spectacular by the refined connoisseur.

A year back, in 2015, the United States Soccer Federation forbade the heading of the ball by children 10 years old and younger after a class-action lawsuit against them. There have been some data showing that soccer players display loss of brain matter associated with cognitive impairment, but such studies were correlational in nature.

Now, Di Virgilio et al. (2016) conducted a study designed to explore the consequences of soccer heading in more detail. They recruited 19 young amateur soccer players, mostly male, who were instructed to perform 20 rotational headings as if responding to corner kicks in a game. The ball was delivered by a machine at a speed of approximately 38 kph. The mean force of impact for the group was 13.1 ± 1.9 g. Immediately after the heading session and at 24 h, 48 h, and 2 weeks post-heading, the authors performed a series of tests, among which were a transcranial magnetic stimulation (TMS) recording, a cognitive function assessment (using the Cambridge Neuropsychological Test Automated Battery), and a postural control test.

Not being a TMS expert myself, I was wondering how you record with a stimulator. TMS stimulates, it doesn't measure anything. Or so I thought. The authors delivered brief (1 ms) stimulating impulses to the brain area that controls the leg (the primary motor cortex). Then they placed an electrode over the corresponding muscle (the rectus femoris, part of the quadriceps femoris group) and recorded how the muscle responded. Pretty neat. Moreover, the authors believe that they can make inferences about levels of inhibitory chemicals in the brain from the way the muscle responds. Namely, if the muscle is sluggish in responding to stimulation, then the brain released an inhibitory chemical, like GABA (gamma-aminobutyric acid), hence the name of the process: corticomotor inhibition. Personally, I find this GABA inference a bit of a leap of faith but, like I said, I am not fully versed in TMS studies, so it may be well documented. Whether or not GABA is responsible for the muscle sluggishness, one thing is well documented: this sluggishness is the most consistent finding in concussions.
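
For the concretely minded, what the EMG electrode yields after each TMS pulse is a motor evoked potential (MEP), and its amplitude and latency are the actual measurements, with inhibition showing up as smaller or later responses, or a longer post-MEP silent period. Here is a minimal sketch on a simulated trace; it is not the authors' pipeline, and the sampling rate and analysis window are assumptions:

```python
# Toy extraction of a motor evoked potential (MEP) from a simulated EMG trace.
# Everything here (sampling rate, window, the trace itself) is an assumption
# for illustration, not data or methods from Di Virgilio et al.
import numpy as np

fs = 5000                                    # sampling rate in Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)                # 100 ms after the TMS pulse at t = 0
rng = np.random.default_rng(3)

emg = rng.normal(0, 0.02, t.size)            # baseline EMG noise, in mV
emg += 1.0 * np.exp(-((t - 0.025) / 0.004) ** 2)   # fake MEP peaking ~25 ms post-pulse

window = (t > 0.015) & (t < 0.05)            # typical MEP search window (assumed)
amplitude = emg[window].max() - emg[window].min()          # peak-to-peak amplitude
latency_ms = 1000 * t[window][np.argmax(np.abs(emg[window]))]
print(f"MEP amplitude ≈ {amplitude:.2f} mV, latency ≈ {latency_ms:.1f} ms")
```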

The subjects had impaired short-term and long-term memory function immediately after the ball heading, but not 24 h or more later. Also transient was the corticomotor inhibition. In other words, soccer ball heading results in measurable changes in brain function. Changes for the worse.

Even if these changes are transient, there is no knowing (as of yet) what prolonged ball heading might do. There is ample evidence that successive concussions have devastating effects on the brain. Granted, soccer heading does not produce concussions, at least in this paper's setting, but I cannot imagine that even sub-concussive brain disruption can be good for you.

On a lighter note, although the title of the paper features the word "soccer", the rest of the paper refers to the game as "football". I'll let you guess the authors' nationality or at least the continent of provenance ;).


Reference: Di Virgilio TG, Hunter A, Wilson L, Stewart W, Goodall S, Howatson G, Donaldson DI, & Ietswaart M. (Nov 2016, Epub 23 Oct 2016). Evidence for Acute Electrophysiological and Cognitive Changes Following Routine Soccer Heading. EBioMedicine, 13:66-71. PMID: 27789273, DOI: 10.1016/j.ebiom.2016.10.029. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 20 December 2016

Amusia and stroke

Although a complete musical anti-talent myself, that doesn't prohibit me from fully enjoying the works of the masters of the art. When my family is out of earshot, I even bellow – because it cannot be called music – at the top of my lungs alongside the most famous tenors ever recorded. A couple of days ago I loaded one of my most eclectic playlists. While remembering my younger days as an Iron Maiden concert-goer (I never said I listen only to classical music :D) and screaming the "Fear of the Dark" chorus, I wondered what's new on the front of music processing in the brain.

And I found an interesting recent paper about amusia. Amusia is, as those of you with ancient Greek proclivities might have surmised, a deficit in the perception of music, mainly of pitch but sometimes of rhythm and other aspects of music. A small percentage of the population is born with it, but a whopping 35 to 69% of stroke survivors exhibit the disorder.

So Sihvonen et al. (2016) decided to take a closer look at this phenomenon with the help of 77 stroke patients. These patients had an MRI scan within the first 3 weeks following stroke and another one 6 months poststroke. They also completed a behavioral test for amusia within the first 3 weeks following stroke and again 3 months later. For reasons undisclosed, and thus raising my eyebrows, the behavioral assessment was not performed at 6 months poststroke, nor was an MRI performed at the 3-month follow-up. It would have been nice to have the behavioral assessment and the brain images at the same time points, because a lot can happen in weeks, let alone months, after a stroke.

Nevertheless, the authors used a novel way to look at the brain pictures, called voxel-based lesion-symptom mapping (VLSM). Well, it is not really novel; it's been around for 15 years or so. Basically, to ascertain the function of a brain region, researchers either get people with a specific brain lesion and then look for a behavioral deficit, or get a symptom and then look for a brain lesion. Both approaches have distinct advantages but also disadvantages (see Bates et al., 2003). To overcome the disadvantages of these methods, enter the scene VLSM, which is a mathematical/statistical gimmick that allows you to explore the relationship between brain and function without forming preconceived ideas, i.e. without forcing dichotomous categories. They also looked at voxel-based morphometry (VBM), which is a fancy way of saying they looked to see if the grey and white matter differ over time in the brains of their subjects.
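
Stripped of the neuroimaging machinery, the core of VLSM is easy to state in code: for every voxel, split the patients into "lesion includes this voxel" versus "lesion spares it" and test whether their behavioral scores differ. The sketch below uses simulated lesion masks and scores and a plain t-test, so it is an illustration of the idea rather than Sihvonen et al.'s implementation:

```python
# Toy voxel-based lesion-symptom mapping: per voxel, compare behavioral scores
# of patients lesioned vs. spared at that voxel. All data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_patients, n_voxels = 77, 1000
lesion = rng.random((n_patients, n_voxels)) < 0.15    # binary lesion masks
score = rng.normal(100, 15, n_patients)               # e.g., an amusia test score
# plant an effect: damage anywhere in voxels 100-109 lowers the score
score -= 20 * lesion[:, 100:110].any(axis=1)

t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    hit, spared = score[lesion[:, v]], score[~lesion[:, v]]
    if hit.size > 5 and spared.size > 5:              # skip rarely lesioned voxels
        t_map[v] = stats.ttest_ind(hit, spared).statistic

print("voxel with the most negative t-value:", np.nanargmin(t_map))
# a real analysis then corrects for the thousands of tests, e.g. by permutation
```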

After much analysis, Sihvonen et al. (2016) conclude that damage to the right hemisphere is more likely to produce amusia, as opposed to aphasia, which is due mainly to damage to the left hemisphere*. More specifically,

“damage to the right temporal areas, insula, and putamen forms the crucial neural substrate for acquired amusia after stroke. Persistent amusia is associated with further [grey matter] atrophy in the right superior temporal gyrus (STG) and middle temporal gyrus (MTG), locating more anteriorly for rhythm amusia and more posteriorly for pitch amusia.”

The more we know, the better chances we have to improve treatments for people.


*Unless you're left-handed, in which case things are reversed.


1. Sihvonen AJ, Ripollés P, Leo V, Rodríguez-Fornells A, Soinila S, & Särkämö T. (24 Aug 2016). Neural Basis of Acquired Amusia and Its Recovery after Stroke. Journal of Neuroscience, 36(34):8872-8881. PMID: 27559169, DOI: 10.1523/JNEUROSCI.0709-16.2016. ARTICLE  | FULLTEXT PDF

2. Bates E, Wilson SM, Saygin AP, Dick F, Sereno MI, Knight RT, & Dronkers NF (May 2003). Voxel-based lesion-symptom mapping. Nature Neuroscience, 6(5):448-50. PMID: 12704393, DOI: 10.1038/nn1050. ARTICLE

By Neuronicus, 9 November 2016


Video games and depression

There's a lot of talk these days about the harm or benefit of playing video games, much of it ignoring the issue of what kind of video games we're talking about.

Merry et al. (2012) designed a game for helping adolescents with depression. The game is called SPARX (Smart, Positive, Active, Realistic, X-factor thoughts) and is based on the cognitive behavioral therapy (CBT) principles.

CBT has been proven to be more efficacious than other forms of therapy, like psychoanalysis, psychodynamic, transpersonal and so on, in treating (or at least alleviating) a variety of mental disorders, from depression to anxiety, from substance abuse to eating disorders. Its aim is to identify maladaptive thoughts (the 'cognitive' bit) and behaviors (the 'behavior' bit) and to change those thoughts and behaviors in order to feel better. It is more active and more focused than other therapies, in the sense that during the course of a CBT session, the patient and therapist discuss one problem and tackle it.

SPARX is a simple interactive fantasy game with 7 levels (Cave, Ice, Volcano, Mountain, Swamp, Bridgeland, Canyon), and the purpose is to fight the GNATs (Gloomy Negative Automatic Thoughts) by mastering several techniques, like breathing and progressive relaxation, and by acquiring skills, like scheduling and problem solving. You can customize your avatar, and you get a guide throughout the game that also assesses your progress and gives you real-life quests, a.k.a. therapeutic homework. If the player does not show the expected improvements after each level, s/he is directed to seek help from a real-life therapist. Luckily, the researchers also employed the help of true game designers, so the game looks at least half-decent and engaging, not the lame-worst-graphics-ever-bleah sort of thing I was kind of expecting.

To see if their game helps with depression, Merry et al. (2012) enrolled 187 adolescents (aged 12 to 19 years) who sought help for depression in an intervention program; half of the subjects played the game for about 4 to 7 weeks, and the other half did traditional CBT with a qualified therapist for the same amount of time. The patients were assessed for depression at regular intervals before, during, and after the therapy, up to 3 months post-therapy. The conclusion?

SPARX “was at least as good as treatment as usual in primary healthcare sites in New Zealand” (p. 8)

Not bad for an RPG! The remission rates were higher in the SPARX group than in the treatment-as-usual group. Also, the majority of participants liked the game and would recommend it. Additionally, SPARX was more effective than CBT for the people who were less depressed, compared with those who scored higher on the depression scales.
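
Since this was a non-inferiority trial, the logic of "at least as good as" can be sketched in a few lines: estimate the difference in remission rates between the two arms and check that the lower bound of its confidence interval stays above a pre-specified margin. The counts and the margin below are placeholders, not the trial's actual numbers:

```python
# Rough non-inferiority check on remission rates. All counts and the margin
# are hypothetical placeholders, not Merry et al.'s data.
import math

def remission_diff_ci(x_new, n_new, x_usual, n_usual, z=1.96):
    """Difference in remission proportions (new - usual) with a Wald 95% CI."""
    p_new, p_usual = x_new / n_new, x_usual / n_usual
    diff = p_new - p_usual
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_usual * (1 - p_usual) / n_usual)
    return diff, (diff - z * se, diff + z * se)

diff, (lo, hi) = remission_diff_ci(x_new=40, n_new=90, x_usual=30, n_usual=90)
margin = -0.10                      # hypothetical non-inferiority margin
print(f"difference = {diff:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
print("non-inferior" if lo > margin else "inconclusive")
```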

And now, coming back to my intro point, the fact that this game seems to be beneficial does not mean all of them are. There are studies showing that some games have deleterious effects on the developing brain. In the same vein, the fact that some shoddy company sells games that are supposed to boost your brain function (I always wondered which function…) doesn't mean they are actually good for you. Without the research to back up the claims, anybody can say anything and it becomes a "Buyer Beware!" game. They may call it cognitive enhancement, memory boosting, or some other brainy catchphrase, but without the research to back up the claims, it's nothing but placebo in the best-case scenario. So it gives me hope – and great pleasure – that some real psychologists at a real university developed a video game and then did the necessary research to validate it as a helping tool before marketing it.


Oh, an afterthought: this paper is 4 years old, so I wondered what happened in the meantime. Is it on the market or what? On the research databases I couldn't find much, except that it was tested this year on a Dutch population with pretty much similar results. But Wikipedia tells us that it was released in 2013 and is free online for New Zealanders! The game's website says it may become available to other countries as well.

Reference: Merry SN, Stasiak K, Shepherd M, Frampton C, Fleming T, & Lucassen MF. (18 Apr 2012). The effectiveness of SPARX, a computerised self help intervention for adolescents seeking help for depression: randomised controlled non-inferiority trial. The British Medical Journal, 344:e2598. doi: 10.1136/bmj.e2598. PMID: 22517917, PMCID: PMC3330131. ARTICLE | FREE FULLTEXT PDF  | Wikipedia page | Watch the authors talk about the game

By Neuronicus, 15 October 2016

Drink before sleep

Among the many humorous sayings, puns, and jokes that one inevitably encounters on any social medium account, one that was popular this year was about the similarity between putting a 2-year-old to bed and putting your drunk friend to bed, which went like this: they both sing to themselves, request water, mumble and blabber incoherently, do some weird yoga poses, cry, hiccup, and then they pass out. The joke manages to steal a smile only if someone has been through both situations; otherwise it loses its appeal.

Having been exposed to both situations, I thought that while the water request from the drunk friend is a response to the dehydrating effects of alcohol, the water request from the toddler is probably nothing more than a delaying tactic to postpone bedtime. Whether or not there is some truth to my assumption in the case of the toddler, here is a paper to show that there is definitely more to the water request than meets the eye.

Generally, thirst is generated by the hypothalamus when its neurons, together with neurons from the organum vasculosum of the lamina terminalis (OVLT), a small sensory structure in the anterior wall of the third ventricle, sense that the blood is either too low in volume (hypovolaemia) or too salty (hyperosmolality), both phenomena indicating a need for water. Ingesting water brings these indices back to homeostatic values.

More than a decade ago, researchers observed that rodents take a good gulp of water just before going to sleep. This surge was not motivated by thirst, because the mice were not feverish, were not hungry, and their blood was neither too low in volume nor too salty. So why do it, then? If the rodents are restricted from drinking this water, they get dehydrated, so obviously the behavior has a function. But it is not motivated by thirst, at least not the way we know it. Huh… The authors call this "anticipatory thirst", because it keeps the animal from becoming dehydrated later on.

Since the behavior occurs with regularity, maybe the neurons that control circadian rhythms have something to do with it. So Gizowski et al. (2016) took a closer look at the activity of clock neurons from the suprachiasmatic nucleus (SCN), a well-known hypothalamic nucleus heavily involved in circadian rhythms. The authors did a lot of work on SCN and OVLT neurons: fluorescent labeling, c-fos expression, anatomical tracing, optogenetics, genetic knockouts, pharmacological manipulations, electrophysiological recordings, and behavioral experiments. All this to come to this conclusion:

SCN neurons release vasopressin and that excites the OVLT neurons via V1a receptors. This is necessary and sufficient to make the animal drink the water, even if it’s not thirsty.

That’s a lot of techniques used in a lot of experiments for only three authors. Ten years ago, you needed only one, maybe two techniques to prove the same point. Either there have been a lot of students and technicians who did not get credit (there isn’t even an Acknowledgements section. EDIT: yes, there is, see the comments below or, if they’re missing, the P.S.) or these three authors are experts in all these techniques. In this day and age, I wouldn’t be surprised by either option. No wonder small universities have difficulty publishing in Big Name journals; they don’t have the resources to compete. And without publishing, no tenure… And without tenure, less research… And thus shall the gap widen.

Musings about workload aside, this is a great paper, shedding light on yet another mysterious behavior and elucidating the mechanism behind it. There's still work to be done, though, like answering how accurate the SCN is in predicting bedtime so it can activate the drinking behavior. Does it take its cues from light only? Does ambient temperature play a role? And so on. This line of work can help people who work in shifts to prevent certain health problems. Their SCN is out of rhythm, and that can deleteriously influence the activity of a whole slew of organs.

Summary of the doi: 10.1038/nature19756 findings. 1) Light is a cue for the suprachiasmatic nucleus (SCN) that bedtime is near. 2) The SCN vasopressin neurons that project to the organum vasculosum of the lamina terminalis (OVLT) are activated. 3) The OVLT generates the anticipatory thirst. 4) The animal drinks fluids.

Reference: Gizowski C, Zaelzer C, & Bourque CW (28 Sep 2016). Clock-driven vasopressin neurotransmission mediates anticipatory thirst prior to sleep. Nature, 537(7622): 685-688. PMID: 27680940. DOI: 10.1038/nature19756. ARTICLE

By Neuronicus, 5 October 2016

EDIT (12 Oct 2016): P.S. The blog comments are automatically deleted after a period of time. In case of this post that would be a pity because I have been fortunate to receive comments from at least one of the authors of the paper, the PI, Dr. Charles Bourque and, presumably under pseudonym, but I don’t know that for sure, also the first author, Claire Gizowski. So I will include here, in a post scriptum, the main idea of their comments. Here is an excerpt from Dr. Bourque’s comment:

“Let me state for the record that Claire accomplished pretty much ALL of the work in this paper (there is a description of who did what at the end of the paper). More importantly, there were no “unthanked” undergraduates, volunteers or other parties that contributed to this work.”

My hat, Ms. Gizowski. It is tipped. To you. Congratulations! With such an impressive work I am sure I will hear about you again and that pretty soon I will blog about Dr. Gizowski.