Gaming can improve cognitive flexibility

It occurred to me that my blog is becoming more sanctimonious than I’d like. I have many posts about stuff that’s bad for you: stress, high fructose corn syrup, snow, playing soccer, cats, pesticides, religion, climate change, even licorice. So I thought I’d balance it a bit with stuff that is good for you. To wit, computer games; though not all of them, of course.

Those who know me, an avid gamer, would hardly be surprised that I found a paper cheering StarCraft. A bit of an old game, but still a solid representative of the real-time strategy (RTS) genre.

About a decade ago, a series of papers emerged showing that first-person shooters and action games in general improve various aspects of perceptual processing. It makes sense: in these games, split-second decisions and actions make the difference between winning and losing, so the games act as training experience, increasing sensitivity to the cues that facilitate said decisions. But what about games where overall strategy and micromanagement skills matter a bit more than perceptual skills, a.k.a. RTS? Would these games improve the processes underlying strategic thinking in a changing environment?

Glass, Maddox, & Love (2013) sought to answer this question by asking a few dozen undergraduates with little gaming experience to play a slightly modified StarCraft game for 40 hours (1 hour per day). “StarCraft (published by Blizzard Entertainment, Inc. in 1998) (…) involves the creation, organization, and command of an army against an enemy army in a real-time map-based setting (…) while managing funds, resources, and information regarding the opponent” (p. 2). The participants were all female because the researchers couldn’t find enough male undergraduates who played computer games less than 2 hours per day. The control group had to play The Sims 2 for the same amount of time, a game in which “participants controlled and developed a single ‘‘family household’’ in a virtual neighborhood” (p. 3). The researchers cleverly modified the StarCraft game so that a perceptual component was replaced with a memory component (they disabled some maps) and created two versions: one more complex (full map, two friendly, two enemy bases) and one less so (half map, one friendly, one enemy base). The difficulty for all games was set at a win rate of 50%.

Before and after the game-playing, the subjects completed a huge battery of tests designed to assess their memory and various other cognitive processes. By carefully parsing these out, the authors conclude that “forty hours of training within an RTS game that stresses rapid and simultaneous maintenance, assessment, and coordination between multiple information and action sources was sufficient” to improve cognitive flexibility. Moreover, the authors point out that playing on a full map with multiple allies and enemies is conducive to such improvement, whereas playing a less cognitively demanding game, despite similar difficulty levels, was not. Basically, the more stuff you have to juggle, the better your flexibility will be. Makes sense.

My favorite takeaway from this paper, though, is not that StarCraft is awesome (obviously), but that “cognitive flexibility is a trainable skill” (p. 5). Let me tell you why that is so grand.

Cognitive flexibility is an important concept in the neuroscience of executive functioning. The same year this paper was published, Diamond published an excellent review in which she neatly identified three core executive functions: inhibitory control (both behavioral and cognitive), working memory (the ability to temporarily hold information active), and cognitive flexibility (the ability to hold two different concepts in mind and switch between them). From these three core executive functions, higher-order executive functions are built, such as reasoning (critical thinking), problem solving (decision-making), and planning.

Unlike some old views on the immutability of inborn IQ, each of the core and higher-order executive functions can be improved with training at any point in life, and each can suffer when something is not right in your life (stress, loneliness, sleep deprivation, or sickness). This paper adds to the growing body of evidence showing that executive functions are trainable. Intelligence, however you want to define it, relies upon executive functions, at least some of them, and perhaps boosting cognitive flexibility might result in a slight increase in IQ, methinks.

Bottom line: real-time strategy games with huge maps and tons of stuff to do are good for you. Here you go.

[Image: StarCraft. The StarCraft images, both foreground and background, are copyrighted to © 1998 Blizzard Entertainment.]

REFERENCES:

  1. Glass BD, Maddox WT, Love BC. (7 Aug 2013). Real-time strategy game training: emergence of a cognitive flexibility trait. PLoS One, 2;8(8):e70350. eCollection 2013. PMID: 23950921, PMCID: PMC3737212, DOI: 10.1371/journal.pone.0070350. ARTICLE | FREE FULLTEXT PDF
  2. Diamond A (2013, Epub 27 Sept. 2012). Executive Functions. Annual Review of Psychology, 64:135-68. PMID: 23020641, PMCID: PMC4084861, DOI: 10.1146/annurev-psych-113011-143750. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 15 June 2019

Epigenetics of BDNF in depression

Depression is the leading cause of disability worldwide, says the World Health Organization. The. The. I knew it was bad, but… ‘the’? More than 300 million people suffer from it worldwide and in many places fewer than 10% of these receive treatment. Lack of treatment is due to many things, from lack of access to healthcare to lack of proper diagnosis; and not least due to social stigma.

To complicate matters, the etiology of depression is still not fully elucidated, despite the hundreds of thousands of experimental articles published out there. Perhaps millions. But because so many articles have been published, we know a helluva lot more about it than, say, 50 years ago. The enormous puzzle is being painstakingly assembled as we speak by scientists all over the world. I daresay we have a lot of pieces already, if not all then at least 3 out of 4 corners, so we have managed to build a not-so-foggy view of the general picture on the box lid. Here is one of the hottest pieces of the puzzle, one of those central pieces that bring the rabbit into focus.

Before I get to the rabbit, let me tell you about the corners. In the fifties, people thought that depression is due to having too few neurotransmitters of the monoamine class in the brain. This thought did not arise willy-nilly, but from the observation that drugs which increase monoamine levels in the brain alleviate depression symptoms and, correspondingly, drugs which deplete monoamines induce depression symptoms. A bit later on, the monoamine most culpable was found to be serotonin. All well and good, with plenty of evidence, observational, correlational, causational, and mechanistic, supporting the monoamine hypothesis of depression. But two more pieces of evidence kept nagging the researchers. The first was that the monoamine-enhancing drugs take days to weeks to start working. If low serotonin is the culprit, then a selective serotonin reuptake inhibitor (SSRI) should elevate serotonin levels within an hour of ingestion at most and lower symptom severity, so how come it takes weeks? The second was even more eyebrow-raising: these monoamine-enhancing drugs work in about 50% of cases. Why not all? Or, more pragmatically put, why not most, if the underlying cause is the same?

It took decades to address these problems. The problem of having to wait weeks until the beneficial effects of antidepressants show up has been explained away, at least partly, by issues in serotonin regulation in the brain (e.g. autoreceptor sensitization, serotonin transporter abnormalities). As for the second problem, the most parsimonious answer is that that archeological site called the DSM (Diagnostic and Statistical Manual of Mental Disorders), which psychologists, psychiatrists, and scientists all over the world have to use to make a diagnosis, is nothing but a garbage bag of last-century relics with little to no resemblance to this century’s understanding of the brain and its disorders. In other words, what the DSM calls major depressive disorder (MDD) may well be more than one disorder, and then no wonder the antidepressants work in only half of the people diagnosed with it. As Goldberg put it in 2011, “the DSM diagnosis of major depression is made when a patient has any 5 out of 9 symptoms, several of which are opposites [emphasis added]”! He was referring to DSM-4, not that DSM-5 is much different. I mean, paraphrasing Goldberg, you really don’t need much of a degree other than some basic intro class in the physiology of whatever, anything really, to suspect that someone who’s sleeping a lot, gains weight, has increased appetite, appears tired or slow to others, and feels worthless might have a different cause for these symptoms than someone who has daily insomnia, lost weight recently, has decreased appetite, is hyperagitated, irritable, and feels excessive guilt. Imagine how much more understanding we would have about depression if scientists didn’t use the DSM for research. No wonder there’s a lot of head scratching when your hypothesis, which is logically correct, paradigmatically coherent, internally consistent, and flawlessly tested, turns out to be true only sometimes, because your ‘depressed’ subjects are as homogeneous a group as a pack of trail mix.
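Just to put a number on that heterogeneity, here is a minimal sketch counting how many distinct symptom profiles the “any 5 out of 9” rule allows. It deliberately ignores the DSM’s additional requirement that at least one core symptom be present, so treat it as an illustration of the combinatorics, not of the full diagnostic criteria.

```python
# Count the distinct symptom combinations that satisfy "any 5 out of 9".
# Illustration only: the real criteria also require at least one of the
# two core symptoms, which this sketch ignores.
from math import comb

qualifying = sum(comb(9, k) for k in range(5, 10))
print(qualifying)  # 256 different symptom profiles, one diagnostic label
```

Two patients can share the MDD label while overlapping on as little as a single symptom.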

I got sidetracked again, this time ranting against the DSM. No matter, I’m back on track. So. The good thing is that while trying to figure out how antidepressants work and how psychiatrists’ minds work (the DSM is written overwhelmingly by psychiatrists), scientists uncovered other things about depression. Some of the findings became clumped under the name ‘the neurotrophic hypothesis of depression’ in the early aughts. It stems from the finding that some chemicals needed by neurons for their cellular happiness are in short supply in depression. Almost two decades later, the hypothesis became mainstream theory, as it explains some other findings in depression and is not incompatible with the monoamines’ behavior. Another piece of the puzzle found.

One of these neurotrophins is called brain-derived neurotrophic factor (BDNF), which promotes cell survival and growth. Crucially, it also regulates synaptic plasticity, without which there would be no learning and no memory. The idea is that exposure to adverse events generates stress. Stress is managed differently by different people, largely due to genetic factors. In those not so lucky at the genetic lottery (how hard they take a stressor, how they deal with it), and in those lucky enough at genetics but not so lucky in life (intense and/or many stressors hit the organism hard regardless of how well you take them or how good you are at dealing with them), stress kills a lot of neurons, literally, prevents new ones from being born, and prevents the remaining ones from learning well. Including learning how to deal with stressors, present and future, so the next time an adverse event happens, even a minor stressor, the person is far more drastically affected. In other words, stress makes you more vulnerable to stressors. One of the ways stress does all this is by suppressing BDNF synthesis. Without BDNF, the individual exposed to stress that is exacerbated either by genes or environment ends up unable to self-regulate mood successfully. The more that mood goes unregulated, the worse the brain becomes at self-regulating, because the elements required for self-regulation, which include learning from experience, are busted. And so the vicious circle continues.

Maintaining this vicious circle is the ability of stressors to change the patterns of DNA expression and, not surprisingly, one of the most common findings is that the BDNF gene is hypermethylated in depression. Hypermethylation is an epigenetic change (a change around the DNA, not in the DNA itself), meaning that the gene in question is less expressed. This means lower amounts of BDNF are produced in depression.

After this long introduction, today’s paper is a systematic review of one of the epigenetic changes in depression: methylation. The 67 articles that investigated the role of methylation in depression were too heterogeneous to make a meta-analysis out of, so Li et al. (2019) made a systematic review instead.

The main finding was that, overall, depression is associated with DNA methylation modifications. Two genes stood out as being hypermethylated: our friend BDNF and SLC6A4, a gene involved in the serotonin cycle. Now the question is which causes which: does stress methylate your DNA, or does your methylated DNA make you more vulnerable to stress? There’s evidence both ways. Vicious circle, as I said. I doubt that it matters to the sufferer who started it, but it does matter to the researchers.


A little disclaimer: the picture I painted above offers a non-exclusive view on the causes of depression(s). There’s more. There’s always more. Gut microbes are in the picture too. And circulatory problems. And more. But the picture is more than half done, I daresay. Continuing my puzzle metaphor, we got the rabbit by the ears. Now what to do with it…

Well, one thing we can do with it, even with only half the rabbit done, is shout loud and clear that depression is a physical disease. And those who claim it can be cured by a positive attitude and blame the sufferers for not ‘trying hard enough’ or not ‘smiling more’ or not ‘being more positive’ can bloody well shut up and crawl back into the medieval cave they came from.

REFERENCES:

1. Li M, D’Arcy C, Li X, Zhang T, Joober R, & Meng X (4 Feb 2019). What do DNA methylation studies tell us about depression? A systematic review. Translational Psychiatry, 9(1):68. PMID: 30718449, PMCID: PMC6362194, DOI: 10.1038/s41398-019-0412-y. ARTICLE | FREE FULLTEXT PDF

2. Goldberg D (Oct 2011). The heterogeneity of “major depression”. World Psychiatry, 10(3):226-8. PMID: 21991283, PMCID: PMC3188778. ARTICLE | FREE FULLTEXT PDF

3. World Health Organization Depression Fact Sheet

By Neuronicus, 23 April 2019

Milk-producing spider

In biology, organizing living things in categories is called taxonomy. Such categories are established based on shared characteristics of the members. These characteristics were usually visual attributes. For example, a red-footed booby (it’s a bird, silly!) is obviously different than a blue-footed booby, so we put them in different categories, which Aristotle called in Greek something like species.

Biological taxonomy is very useful, not only for providing countless hours of fighting (both verbal and physical!) for biologists, but for informing us of all sorts of unexpected relationships between living things. These relationships, in turn, can give us insights into our own evolution, but also into the evolution of things inimical to us, like diseases, and, perhaps, their cures. Also extremely important, it allows scientists from all over the world to have a common language, thus maximizing information sharing and minimizing misunderstandings.


All well and good. And it was all well and good since Carl Linnaeus introduced his famous taxonomy system in the 18th century, the one we still use today with species, genus, family, order, and kingdom. Then we figured out how to map the DNA of things around us, and this information threw a lot of Linnaean classifications out the window. Because it turns out that some things that look similar are not genetically similar; likewise, some living things that we thought were very different from one another turned out, genetically speaking, to be not so different after all.

You will say, then, alright, out with visual taxonomy, in with phylogenetic taxonomy. This would be absolutely peachy for a minority of the organisms on the planet, like animals and plants, but a nightmare for the more promiscuous organisms that have no problem swapping bits of DNA back and forth, like some bacteria, so you don’t know anymore who’s who. And don’t even get me started on the viruses, which we are still trying to figure out whether or not they are alive in the first place.

When I was growing up, there were 5 regna or kingdoms in our tree of life – Monera, Protista, Fungi, Plantae, Animalia – each with very distinctive characteristics. Likewise, the class Mammalia of the Animal Kingdom was characterized by the females feeding their offspring with milk from mammary glands. Period. No confusion. But now I have no idea (nor do many other biologists, rest assured) how many domains or kingdoms or empires we have, nor even what the definition of a species is anymore.

As if that’s not enough, even those Linnaean characteristics that we thought were set in stone are amenable to change. Which is good, it shows the progress of science. But I didn’t think that something like the definition of a mammal would change. Mammals are organisms whose females feed their offspring with milk from mammary glands, as I vouchsafed above. Pretty straightforward. And not spiders. Let me be clear on this: spiders did not feature in my – or anyone’s! – definition of mammals.

Until Chen et al. (2018) published their weird article a couple of weeks ago. The abstract is free for all to see and states that the females of a jumping spider species feed their young with milk secreted by their body until the age of subadulthood. Mothers continue to offer parental care past the maturity threshold. The milk is necessary for the spiderlings because without it they die. That’s all.

I read the whole paper, since it was only 4 pages, and here are some more details about the discovery. The species they looked at is Toxeus magnus, a jumping spider that looks like an ant. The mother produces milk from her epigastric furrow and deposits it on the nest floor and walls, from where the spiderlings ingest it (days 0-7). After this first week, the spiderlings suck the milk directly from the mother’s body and continue to do so for the next two weeks (days 7-20), when they start leaving the nest to forage for themselves. But they return, and for the next period (days 20-40) they get their food both from the mother’s milk and from independent foraging. Spiderlings get weaned by day 40, but they still come home to sleep at night. At day 52 they are officially considered adults. Interestingly, “although the mother apparently treated all juveniles the same, only daughters were allowed to return to the breeding nest after sexual maturity. Adult sons were attacked if they tried to return. This may reduce inbreeding depression, which is considered to be a major selective agent for the evolution of mating systems (p. 1053).”

During all this time, including during the offspring’s emergence into adulthood, the mother also supplied house maintenance, carrying out her children’s exuviae (shed exoskeletons) and repairing the nest.

The authors then did a series of experiments to see what role the nursing and other maternal care at different stages play in the fitness and survival of the offspring. Blocking the mother’s milk production with correction fluid immediately after hatching killed all the spiderlings, showing that they are completely dependent on the mother’s milk. Removing the mother after the spiderlings start foraging (day 20) drastically reduced survivorship and body size, showing that the mother’s care is essential for her offspring’s success. Moreover, the mother taking care of the nest and keeping it clean reduced the occurrence of parasite infections in the juveniles.

The authors analyzed the milk and it’s highly nutritious: “spider milk total sugar content was 2.0 mg/ml, total fat 5.3 mg/ml, and total protein 123.9 mg/ml, with the protein content around four times that of cow’s milk (p. 1053)”.
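If you want to see where that “four times” figure comes from, here is a back-of-the-envelope check; the cow’s milk protein content (~33 mg/mL) is an assumption from typical composition tables, not a number from the paper.

```python
# Sanity check on the "around four times that of cow's milk" claim.
spider_milk_protein = 123.9  # mg/mL, from Chen et al. (2018)
cow_milk_protein = 33.0      # mg/mL, assumed typical value for cow's milk

print(f"ratio: {spider_milk_protein / cow_milk_protein:.1f}x")
# -> ratio: 3.8x, roughly the fourfold figure quoted in the paper
```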

Speechless I am. Good for the spider, I guess. Spider milk will have exorbitant costs (apparently, a slight finger pressure on the milk-secreting region makes the mother spider secrete the milk, not at all unlike the human mother). Spiderlings die without the mother’s milk. Responsible farming? Spider milker qualifications? I’m gonna lie down, I got a headache.


REFERENCE: Chen Z, Corlett RT, Jiao X, Liu SJ, Charles-Dominique T, Zhang S, Li H, Lai R, Long C, & Quan RC (30 Nov. 2018). Prolonged milk provisioning in a jumping spider. Science, 362(6418):1052-1055. PMID: 30498127, DOI: 10.1126/science.aat3692. ARTICLE | Supplemental info (check out the videos)

By Neuronicus, 13 December 2018

Pic of the day: Dopamine from a non-dopamine place

[Image: the locus coeruleus driving disinhibition in the midline thalamus via dopamine; see the reference below.]

Reference: Beas BS, Wright BJ, Skirzewski M, Leng Y, Hyun JH, Koita O, Ringelberg N, Kwon HB, Buonanno A, & Penzo MA (Jul 2018, Epub 18 Jun 2018). The locus coeruleus drives disinhibition in the midline thalamus via a dopaminergic mechanism. Nature Neuroscience, 21(7):963-973. PMID: 29915192, PMCID: PMC6035776, DOI: 10.1038/s41593-018-0167-4. ARTICLE

Pooping Legos

Yeah, alright… uhm… how exactly should I approach this paper? I’d better just dive into it (oh boy! I shouldn’t have said that).

The authors of this paper were adult health-care professionals in the pediatric field. These three males and three females were also the participants in the study. They kept a poop diary noting the frequency and volume of bowel movements (Did they poop directly on a scale or did they have to scoop it out in a bag?). The researchers/subjects developed a Stool Hardness and Transit (SHAT) metric to… um… “standardize bowel habit between participants” (p. 1). In other words, to put the participants’ bowel movements on the same level (please, no need to visualize, I am still stuck at the poop-on-a-scale phase), the authors looked – quite literally – at the consistency of the poop and gave it a rating. I wonder if they checked for inter-rater reliability… meaning did they check each other’s poops?…

Then the researchers/subjects ingested a Lego figurine head, on purpose, somewhere between 7 and 9 a.m. Then they timed how long it took to exit. The FART score (Found and Retrieved Time) averaged 1.71 days. “There was some evidence that females may be more accomplished at searching through their stools than males, but this could not be statistically validated” due to the small sample size, if not the poops’. It took 1 to 3 stools for the object to be found, although poor subject B had to search through his 13 stools over a period of 2 weeks to no avail. I suppose that’s what you get if you miss the target, even if you have a PhD.

The pre-SHAT and SHAT scores of the participants did not differ, suggesting that the Lego head did not alter poop consistency (I got nothin’ here; the authors’ acronyms are sufficient scatological allusion). From a statistical standpoint, the one who couldn’t find his head in his poop (!) should not have been included in the pre-SHAT score group. Serves him right.

I wonder how they searched through the poop… A knife? A sieve? A squashing spatula? Gloved hands? Were they floaters or did the poop sink to the base of the toilet? Then how was it retrieved? Did the researchers have to poop in a bucket so no loss of data would occur? Upon direct experimentation 1 minute ago, I vouchsafe that a Lego head is completely buoyant. Would that affect the floatability of the stool in question? That’s what I’d like to know. Although, to be fair, no, that’s not what I want to know; what I desire the most is a far larger sample size so some serious stats can be conducted. With different Lego parts. So they can poop bricks. Or, as suggested by the authors, “one study arm including swallowing a Lego figurine holding a coin” (p. 3) so one can draw parallels between Lego ingestion and coin ingestion research, the latter being, apparently, far more prevalent. So many questions that still need to be answered! More research is needed, if only grants were as… regular as the raw data.

The paper, albeit short and to the point, fills a gap in our scatological knowledge database (Oh dear Lord, stop me!). The aim of the paper was to show that objects ingested by children tend to pass without a problem. Also of value, the paper asks pediatricians to counsel parents not to search for the object in the faeces to prove object retrieval, because “if an experienced clinician with a PhD is unable to adequately find objects in their own stool, it seems clear that we should not be expecting parents to do so” (p. 3). Seems fair.


REFERENCE: Tagg, A., Roland, D., Leo, G. S., Knight, K., Goldstein, H., Davis, T. and Don’t Forget The Bubbles (22 November 2018). Everything is awesome: Don’t forget the Lego. Journal of Paediatrics and Child Health, doi: 10.1111/jpc.14309. ARTICLE

By Neuronicus, 27 November 2018

Raising a child costs 13 million calories

That’s right. Somebody actually did the math on that. Kaplan in 1994, to be exact.

The anthropologist and his colleague, Kate Kopischke, looked at how three semi-isolated populations from South America live. Between September 1988 and May 1989, the researchers analyzed several variables meant to shed light mainly on fertility rate and wealth flow. They measured the amount of time spent taking care of children. They estimated the best time to have a second child. They weighed the food of these communities. And then they estimated the caloric intake and expenditure per day per individual.

Human children are unable to provision for themselves until about the age of 18. So most of their caloric intake requirements are provided by their parents. Long story (39 pages) short, Kaplan (1994) concluded that a child relies on 13 million calories provided by the adults. Granted, these are mostly hunter-gatherer communities, so the number may be a bit off from your average American child. The question is: which way? Do American kids “cost” more or less?


P.S. I was reading a paper, Kohl (2018), in last week’s issue of Science that quoted this number, 13 million. When I went to the cited source, Hrdy (2016), that one was citing yet another one, the above-mentioned Kaplan (1994) paper. Luckily for Kohl, Hrdy cited Kaplan correctly. But I must tell you from my own experience, half of the time when people cite other people citing other people citing original research, they are wrong. Meaning that somewhere in the chain somebody got it wrong or twisted the original research finding for their own purposes. Half of the time, I tell you. People don’t go to the original material because it can be a hassle to dig out, or it’s hard to read, or because citing a more recent paper looks better in the review process. But that comes with the risk of being flat wrong. The moral: always, always, go to the source material.

P.P.S. To be clear, I’m not accusing Kohl of not reading Kaplan, because accusing an academic of citing without reading or of being unfamiliar with seminal research in their field (that is, seminal in somebody else’s opinion) is a tremendous insult not to be wielded lightly by bystanders, but to be viciously used only for in-house fights on a regular basis. No. I’m saying that Kohl got that number second-hand and that’s frowned upon. The moral: always, always, go to the source material. I can’t emphasize this enough.

P.P.P.S. Ah, forget it. P.S. 3. Upon reading my blog, my significant other’s first question was: “Well, how much is that in potatoes?” I had to do the math on a Post-It and the answer is: 50,288 large skinless potatoes, boiled without salt. That’s 15,116 kg of potatoes, more than 15 metric tonnes. Here you go. Happy now? Why are we talking about potatoes?! No, I don’t know how many potatoes would fit into a house. Jeez!
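For those who want to redo the Post-It math, here is a minimal sketch. The per-potato values are my assumptions (a large boiled potato at roughly 300 g and ~258 kcal), so the output lands near, not exactly on, the numbers above.

```python
# Back-of-the-Post-It check: 13 million calories expressed in potatoes.
total_kcal = 13_000_000       # calories to raise one child (Kaplan, 1994)
kcal_per_potato = 258.5       # assumed: one large potato, boiled, no salt
grams_per_potato = 300        # assumed average weight of a large potato

potatoes = total_kcal / kcal_per_potato
tonnes = potatoes * grams_per_potato / 1e6
print(f"{potatoes:,.0f} potatoes, ~{tonnes:.1f} metric tonnes")
# -> 50,290 potatoes, ~15.1 metric tonnes
```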

REFERENCE: Kaplan, H. (Dec. 1994). Evolutionary and Wealth Flows Theories of Fertility: Empirical Tests and New Models. Population and Development Review, Vol. 20, No. 4, pp. 753-791. DOI: 10.2307/2137661. ARTICLE

By Neuronicus, 22 October 2018

Locus Coeruleus in mania

Of all the mental disorders, bipolar disorder, a.k.a. manic-depressive disorder, has the highest risk of suicide attempt and completion. If the thought of suicide crosses your mind, stop reading this, it’s not that important; what’s important is for you to call the toll-free National Suicide Prevention Lifeline at 1-800-273-TALK (8255).

Bipolar disorder is defined by alternating manic episodes of elevated mood, activity, excitation, and energy with episodes of depression characterized by feelings of deep sadness, hopelessness, worthlessness, low energy, and decreased activity. It is also a more common disease than people usually expect, affecting about 1% or more of the world population. That means almost 80 million people! Therefore, it’s imperative to find out what’s causing it so we can treat it.

Unfortunately, the disease is very complex, with many brain parts, brain chemicals, and genes involved in its pathology. We don’t even fully comprehend how the best medication we have to lower the risk of suicide, lithium, works. The good news is the neuroscientists haven’t given up, they are grinding at it, and with every study we get closer to subduing this monster.

One such study, freshly published last month, is Cao et al. (2018), which looked at a semi-obscure membrane protein, ErbB4. The protein is a tyrosine kinase receptor, which is a bit unfortunate because it means the protein is involved in ubiquitous cellular signaling, making it harder to find its exact role in a specific disorder. Indeed, ErbB4 has been found to play a role in neural development, schizophrenia, epilepsy, even ALS (Lou Gehrig’s disease).

Given that ErbB4 is found in some neurons that are involved in bipolar disorder, and that mutations in its gene are also found in some people with the disorder, Cao et al. (2018) sought to find out more about it.

First, they produced mice that lacked the gene coding for ErbB4 in neurons of the locus coeruleus, the part of the brain that produces norepinephrine out of dopamine, better known to the European audience as noradrenaline. The mutant mice had a lot more norepinephrine and dopamine in their brains, which correlated with mania-like behaviors. You might have noticed that the term used was ‘mania-like’ and not ‘manic’ because we don’t know for sure how the mice feel; instead, we can see how they behave and from that infer how they feel. So the researchers put the mice through a battery of behavioral tests and observed that the mutant mice were hyperactive, showed fewer anxious and depressed behaviors, and liked their sugary drink more than their normal counterparts, which, taken together, are indices of mania.

Next, through a series of electrophysiological experiments, the scientists found that the mechanism through which the absence of ErbB4 leads to mania is making another receptor, called NMDA, in that brain region more active. When this receptor is hyperactive, it causes neurons to fire, releasing their norepinephrine. But if given lithium, the mutant mice behaved like normal mice. Correspondingly, they also had a normal-behaving NMDA receptor, which led to normal firing of the noradrenergic neurons.

So the mechanism looks like this (Jargon alert!):

No ErbB4 → ↑ NR2B NMDAR subunit → hyperactive NMDAR → ↑ neuron firing → ↑ catecholamines → mania.

In conclusion, another piece of the bipolar puzzle has been uncovered. The next obvious step will be for the researchers to figure out a medicine that targets ErbB4 and see if it could treat bipolar disorder. Good paper!


P.S. If you’re not familiar with the journal eLife, go and check it out. For every study, the journal offers a half-page summary of the findings destined for the lay audience, called the eLife digest. I’ve seen this practice in other journals, but this one is generally very well written and truly for the lay audience and the non-specialist. Somewhat like what I try to do here, minus the personal remarks and in-parentheses metacognitions that you’ll find in most of my posts. In short, the eLife digest is masterfully done. As my continuous struggles on this blog show, it is tremendously difficult for a scientist to write concisely, precisely, and jargonlessly at the same time. But eLife is doing it. Check it out. Plus, if you care to take a look at how science is done and published, eLife publishes all the editor’s rejection notes, all the reviewers’ comments, and all the author responses for a particular paper. Reading those is truly a teaching moment.

REFERENCE: Cao SX, Zhang Y, Hu XY, Hong B, Sun P, He HY, Geng HY, Bao AM, Duan SM, Yang JM, Gao TM, Lian H, Li XM (4 Sept 2018). ErbB4 deletion in noradrenergic neurons in the locus coeruleus induces mania-like behavior via elevated catecholamines. Elife, 7. pii: e39907. doi: 10.7554/eLife.39907. PMID: 30179154 ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 14 October 2018

The Global Warming IPCC 2018 Report

The Special Report on Global Warming of 1.5ºC (SR15) was published two days ago, on October 8th, 2018. The Report was written by The Intergovernmental Panel on Climate Change (IPCC), “which is the UN body for assessing the science related to climate change. It was established by the United Nations Environment Programme (UN Environment) and the World Meteorological Organization (WMO) in 1988 to provide policymakers with regular scientific assessments concerning climate change, its implications and potential future risks, as well as to put forward adaptation and mitigation strategies.” (IPCC Special Report on Global Warming of 1.5ºC, Press Release).

The Report’s findings are very bad. Its Summary for Policymakers starts with:

“Human activities are estimated to have caused approximately 1.0°C of global warming above pre-industrial levels, with a likely range of 0.8°C to 1.2°C. Global warming is likely to reach 1.5°C between 2030 and 2052 if it continues to increase at the current rate.”

That’s 12 years from now.

[Figure] Extract from the IPCC (2018), Global Warming of 1.5 ºC, Summary for Policymakers: “Observed monthly global mean surface temperature (GMST) change (grey line up to 2017, from the HadCRUT4, GISTEMP, Cowtan–Way, and NOAA datasets) and estimated anthropogenic global warming (solid orange line up to 2017, with orange shading indicating assessed likely range). Orange dashed arrow and horizontal orange error bar show respectively the central estimate and likely range of the time at which 1.5°C is reached if the current rate of warming continues. The grey plume on the right shows the likely range of warming responses, computed with a simple climate model, to a stylized pathway (hypothetical future) in which net CO2 emissions decline in a straight line from 2020 to reach net zero in 2055 and net non-CO2 radiative forcing increases to 2030 and then declines.”

Which means that we have warmed the world by 1.0°C (1.8°F) since 1850-1900. Continuing the way we have been going, we will add another 0.5°C (0.9°F) to the world temperature sometime between 2030 and 2052, bringing the total human-made global warming to 1.5°C (2.7°F).

That’s 12 years from now.
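(In case the Fahrenheit numbers look odd: temperature differences convert with the 9/5 factor only, without the +32 offset used for absolute temperatures. A quick sketch reproducing the conversions used throughout this post:)

```python
# Convert temperature *differences* from Celsius to Fahrenheit.
# No +32 offset: that applies to absolute temperatures, not to deltas.
def delta_c_to_f(delta_c: float) -> float:
    return delta_c * 9 / 5

for dc in (0.5, 1.0, 1.5, 2.0, 3.0):
    print(f"+{dc}°C = +{delta_c_to_f(dc)}°F")
# +0.5°C = +0.9°F, +1.0°C = +1.8°F, +1.5°C = +2.7°F, +2.0°C = +3.6°F, +3.0°C = +5.4°F
```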

Half a degree Celsius doesn’t sound so bad until you look at the highly confident model predictions saying that gaining that extra 0.5°C (0.9°F) will result in terrible, never-before-seen superstorms and precipitation in some regions while others suffer prolonged droughts, along with extreme heat waves and sea level rise due to the melting of Antarctica. From a biota point of view, if we reach the 1.5°C (2.7°F) threshold, most of the coral reefs will become extinct, as well as thousands of other species (6% of insects, 8% of plants, and 4% of vertebrates).

That’s 12 years from now.

All these will end up increasing famine, homelessness, disease, inequality, poverty, and refugee numbers to unprecedented levels. And, for those inclined to be more concerned by finances, huge spending on infrastructure, rebuilding, relief efforts, irrigation, water supplies, and so on. To put it bluntly, a 1.5°C (2.7°F) increase in global warming costs us about $54 trillion.

That’s 12 years from now.

These effects will persist for centuries to millennia. To stay at the 1.5°C (2.7°F) limit we need to reduce carbon emissions by 50% by 2030 and achieve 0 emissions by 2050.

That’s 12 years from now.

The Report emphasizes that a 1.5°C (2.7°F) increase is not as bad as a 2°C (3.6°F) one, where we will lose double the biota, the storms will be worse, the droughts longer, and altogether a more catastrophic scenario will unfold.

Technically, we ARE ABLE to limit the warming to 1.5°C (2.7°F). If, by 2050, we rely on renewable energy, like solar and wind, to supply 70-85% of our energy, we will be able to stay at 1.5°C (2.7°F). Lower coal use as an energy source to single-digit percentages. Expanding forests and implementing large CO2 capture programs would help tremendously. Drastically reduce carbon emissions by, for example, hitting polluters with crippling fines. But all this requires rapid implementation of heavy laws and regulations, which will come only from a concentrated effort of our leaders.

Therefore, politically, we ARE UNABLE to limit the warming to 1.5°C (2.7°F). Instead, it’s very likely that we will warm the planet by 2°C (3.6°F) in the next decades. If we do nothing, by the end of the century the world will be even hotter, warmed up by 3°C (5.4°F), and there are no happy scenarios then, as the climate change will be beyond our control. That is, our children’s control.


There are conspiracy theorists out there claiming that there are nefarious or hidden reasons behind this report, or that its conclusions are not credible, or that it’s not legit, or it’s bad science, or that it represents the view of a fringe group of scientists and does not reflect a scientific consensus. I would argue that people who claim such absurdities are either the ones with a hidden agenda or plain idiots. Not ignorant, because ignorance is curable and whoever seeks to learn new things is to be admired. Not honestly questioning either, because that is as necessary to science as water to fish. Willful ignorance, on the other hand, I call idiocy, and it is remarkably resistant to the presentation of facts. FYI, the Report was produced by a Panel commissioned by an organization comprising 195 countries, is authored by 91 scientists with an additional 133 contributing authors, all spanning 40 countries and analyzing over 6,000 scientific studies. Oh, and the Panel received the 2007 Nobel Peace Prize. I daresay it looks legit. The next full climate assessment will be released in 2021.


REFERENCES:

  1. The Intergovernmental Panel on Climate Change (IPCC) (2018). Global Warming of 1.5 ºC, an IPCC Special report on the impacts of global warming of 1.5 ºC above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty. Retrieved 10 October 2018. Website | Covers: New York Times | Nature | The Washington Post | The Guardian | The Economist | ABC News | Deutsche Welle | CNN | HuffPost Canada | Los Angeles Times | BBC | Time.
  2. The IPCC Summary for Policymakers PDF
  3. The IPCC Press Release PDF
  4. The 2007 Nobel Peace Prize.

By Neuronicus, 10 October 2018

Pic of the day: Total amount of DNA on Earth

[Image: an estimate of the total amount of DNA on Earth; see the reference below.]

Approximately… give or take…

REFERENCE: Landenmark HKE, Forgan DH, & Cockell CS (11 Jun 2015). An Estimate of the Total DNA in the Biosphere. PLoS Biology, 13(6): e1002168. PMCID: PMC4466264, PMID: 26066900, DOI: 10.1371/journal.pbio.1002168. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 1 September 2018

The FIRSTS: The cause(s) of dinosaur extinction

A few days ago, a follower of mine gave me an interesting read from The Atlantic regarding the dinosaur extinction. Like many of my generation, I was taught in school that dinosaurs died because an asteroid hit the Earth. That led to a nuclear winter (or a few years of ‘nuclear winters’) which killed the photosynthetic organisms, and then the herbivores didn’t have anything to eat so they died and then the carnivores didn’t have anything to eat and so they died. Or, as my 4-year-old puts it, “[in a solemn voice] after the asteroid hit, big dusty clouds blocked the sun; [in an ominous voice] each day was colder than the previous one and so, without sunlight to keep them alive [sad face, head cocked sideways], the poor dinosaurs could no longer survive [hands spread sideways, hung head] “. Yes, I am a proud parent. Now I have to do a sit-down with the child and explain that… What, exactly?

Well, The Atlantic article showcases the struggles of a scientist – paleontologist and geologist Gerta Keller – who doesn’t believe the mainstream asteroid hypothesis; rather, she thinks there is enough evidence to point out that extreme volcano eruptions, like really extreme, thousands of times more powerful than anything we know of in recorded history, put out so much poison (soot, dust, hydrofluoric acid, sulfur, carbon dioxide, mercury, lead, and so on) into the atmosphere that, combined with the consequent dramatic climate change, they killed the dinosaurs. The volcanoes were located in India and they erupted for hundreds of thousands of years, but the most violent eruptions, Keller thinks, were in the last 40,000 years before the extinction. This hypothesis is called Deccan volcanism, from the region in India where these nasty volcanoes are located, and was first proposed by Vogt (1972) and Courtillot et al. (1986).


So which is true? Or, rather, because this is science we’re talking about, which hypothesis is more supported by the facts: the volcanism or the impact?

The impact hypothesis was put forward in 1980 when Walter Alvarez, a geologist, noticed a thin layer of clay in rocks that were about 65 million years old, which coincided with the time when the dinosaurs disappeared. This layer is at the KT boundary (sometimes called K-T, K-Pg, or KPB; looks like the biologists are not the only ones with acronym problems) and marks the boundary between the Cretaceous and Paleogene geological periods (T is for Tertiary, the old name for the span covering the Paleogene, yeah, I know). Walter asked his father, the famous Nobel Prize physicist Luis Alvarez, to take a look at it and see what it is. Alvarez Sr. analyzed it and found that the clay contains a lot of iridium, dozens of times more than expected. After gathering more samples from Europe and New Zealand, they published a paper (Alvarez et al., 1980) in which the scientists reasoned that because Earth’s iridium is buried deep in its bowels and not in its crust, the iridium at the K-Pg boundary must be of extraterrestrial origin, which could be brought here only by an asteroid/comet. This is also the paper in which the conjecture that the asteroid impact killed the dinosaurs was put forth for the first time, based on the uncanny coincidence of timing.


The discovery of the Chicxulub crater in Mexico followed a more sinuous path because the geophysicists who first discovered it in the ’70s were working for an oil company, looking for places to drill. Once the dinosaurs-died-due-to-asteroid-impact hypothesis gained popularity outside academia, the geologists and the physicists put two and two together, acquired more data, and published a paper (Hildebrand et al., 1991) in which the Chicxulub crater was for the first time linked with the dinosaur extinction. Although the crater had not yet been radiometrically dated, they had enough geophysical, stratigraphic, and petrologic evidence to believe it was as old as the iridium layer and the dinosaur die-out.


But, the devil is in the details, as they say. Keller published a paper in 2007 saying the Chicxulub event predates the extinction by some 300,000 years (Keller et al., 2007). She looked at geological samples from Texas and found the glass granule layer (an indicator of the Chicxulub impact) way below the K-Pg boundary. So what’s up with the iridium then? Keller (2014) believes it is not of extraterrestrial origin and might well have been spewed up by a particularly nasty eruption, or the sediments got shifted. Schulte et al. (2010), on the other hand, found high levels of iridium in the K-Pg layer in 85 samples from all over the world. Keller says that some other 260 samples don’t have iridium anomalies. As a response, Esmeray-Senlet et al. (2017) used some fancy mass spectrometry to show that the iridium profiles could have come only from Chicxulub, at least in North America. They argue that the variability in iridium profiles around the world is due to regional geochemical processes. And so on, and so on; the controversy continues.

Actual radioisotope dating was done a bit later, in 2013: the date of the K-Pg boundary is 66.043 ± 0.043 Ma (millions of years ago), and the date of the Chicxulub crater is 66.038 ± 0.025/0.049 Ma. Which means that the researchers “established synchrony between the Cretaceous-Paleogene boundary and associated mass extinctions with the Chicxulub bolide impact to within 32,000 years” (Renne et al., 2013), which is a blink of an eye in geological times.
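To see just how tight that match is, here is a minimal sketch comparing the two dates; the simple quadrature combination of uncertainties is my own illustration, not the paper’s actual error analysis (hence the difference from their 32,000-year figure).

```python
# Compare the two radioisotope dates from Renne et al. (2013).
kpg_age, kpg_err = 66.043, 0.043        # Ma, K-Pg boundary
crater_age, crater_err = 66.038, 0.049  # Ma, Chicxulub (outer uncertainty)

gap_yr = abs(kpg_age - crater_age) * 1e6
combined_yr = (kpg_err**2 + crater_err**2) ** 0.5 * 1e6  # naive quadrature

print(f"gap: {gap_yr:,.0f} years; combined uncertainty: {combined_yr:,.0f} years")
# -> gap: 5,000 years, far smaller than the ~65,000-year combined uncertainty
```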


Now I want you to understand that often in science, though by far not always, matters are not so simple as she is wrong, he is right. In geology, what matters most is the sample. If the sample is corrupted, so will be your conclusions. Maybe Keller’s or Renne’s samples were affected by a myriad of possible variables, some as simple as the dirt being shifted from here to there by who knows what event. After all, it’s been 66 million years since. Also, the methods used are just as important, and dating something that happened so long ago is extremely difficult due to intrinsic physical methodological limitations. Keller (2014), for example, claims that Renne couldn’t possibly have gotten such an exact estimation because he used argon isotopes, when only U-Pb isotope dilution–thermal ionization mass spectrometry (ID-TIMS) zircon geochronology could be so accurate. But yet again, it looks like he did use both, so… I dunno. As the over-used, always-trite, but nevertheless extremely important saying goes: more data is needed.

Even if the dating puts Chicxulub at the KPB, the volcanologists say that the asteroid, by itself, couldn’t have produced a mass extinction because there have been other impacts of its size which did not have such dire effects, being barely noticeable at the biota scale. Besides, most of the other mass extinctions on the planet have already been associated with extreme volcanism (Archibald et al., 2010). On the other hand, the circumstances of this particular asteroid could have made it deadly: it landed in the hydrocarbon-rich areas that occupied only 13% of the Earth’s surface at the time, which resulted in a lot of “stratospheric soot and sulfate aerosols and causing extreme global cooling and drought” (Kaiho & Oshima, 2017). Food for thought: this means that the chances of us, humans, being here today were 13%!…

I hope that you do notice that these are very recent papers, so the issue is hotly debated as we speak.

It is possible, nay probable, that the Deccan volcanism, which was going on long before and after the extinction, was exacerbated by the impact. This is exactly what Renne’s team postulated in 2015 after dating the lava plains in the Deccan Traps: the eruptions intensified about 50,000 years before the KT boundary, shifting from “high-frequency, low-volume eruptions to low-frequency, high-volume eruptions”, which is about when the asteroid hit. Also, the Deccan eruptions continued for about half a million years after the KPB, “which is comparable with the time lag between the KPB and the initial stage of ecological recovery in marine ecosystems” (Renne et al., 2015, p. 78).

Since we cannot get much more accurate dating than we already have, perhaps the fossils can tell us whether the dinosaurs died abruptly or slowly. Because if they got extinct in a few years instead of over 50,000 years, that would point to a cataclysmic event. Yes, but which one, big asteroid or violent volcano? Aaaand, we’re back to square one.

Actually, the latest papers on the matter point to two extinctions: the Deccan extinction and the Chicxulub extinction. Petersen et al. (2016) went all the way to Antarctica to find pristine samples. They noticed a sharp increase in global temperatures, by about 7.8 ºC, at the onset of Deccan volcanism. This climate change would surely lead to some extinctions, and this is exactly what they found: out of 24 species of marine animals investigated, 10 died out at the onset of Deccan volcanism and the remaining 14 died out when Chicxulub hit.

In conclusion, because this post is already verrrry long and is becoming a proper college review, to me, a not-a-geologist/paleontologist/physicist-but-still-a-scientist, things happened thusly: first the Deccan Traps erupted, and that led to dramatic global warming coupled with poison spewing into the atmosphere. Which resulted in a massive die-out (about 200,000 years before the bolide impact, says a corroborating paper, Tobin, 2017). The surviving species (maybe half or more of the biota?) carried on as best they could for the next few hundred thousand years in the hostile environment. Then the Chicxulub meteorite hit, and the resulting megatsunami, the cloud of super-heated dust and soot, colossal wildfires and earthquakes, acid rain and climate cooling, not to mention the intensification of the Deccan Traps eruptions, finished off the surviving species. It took Earth 300,000 to 500,000 years to recover its ecosystem. “This sequence of events may have combined into a ‘one-two punch’ that produced one of the largest mass extinctions in Earth history” (Petersen et al., 2016, p. 6).


By Neuronicus, 25 August 2018

P. S. You, high school and college students who will use this for some class assignment or other, give credit thusly: Neuronicus (Aug. 26, 2018). The FIRSTS: The cause(s) of dinosaur extinction. Retrieved from https://scientiaportal.wordpress.com/2018/08/26/the-firsts-the-causes-of-dinosaur-extinction/ on [date]. AND READ THE ORIGINAL PAPERS. Ask me for .pdfs if you don’t have access, although with sci-hub and all… not that I endorse any illegal and fraudulent use of the above mentioned server for the purpose of self-education and enlightenment in the quest for knowledge that all academics and scientists praise everywhere around the Globe!

EDIT March 29, 2019. An astounding, one-of-a-kind discovery is being brought to print soon. It’s about a site in North Dakota that, reportedly, has preserved the day of the Chicxulub impact in amazing detail, with tons of fossils of all kinds (flora, mammals, dinosaurs, fish), which seems to put the entire extinction of the dinosaurs within one day, thus favoring the asteroid impact hypothesis. The data is not out yet. Can’t wait til it is! Actually, I’ll have to wait some more after it’s out for the experts to examine it, and then I’ll find out. Until then, check the story of the discovery here and here.

REFERENCES:

1. Alvarez LW, Alvarez W, Asaro F, & Michel HV (6 Jun 1980). Extraterrestrial cause for the Cretaceous-Tertiary extinction. Science, 208(4448):1095-1108. PMID: 17783054, DOI: 10.1126/science.208.4448.1095. ABSTRACT | FULLTEXT PDF

2. Archibald JD, Clemens WA, Padian K, Rowe T, Macleod N, Barrett PM, Gale A, Holroyd P, Sues HD, Arens NC, Horner JR, Wilson GP, Goodwin MB, Brochu CA, Lofgren DL, Hurlbert SH, Hartman JH, Eberth DA, Wignall PB, Currie PJ, Weil A, Prasad GV, Dingus L, Courtillot V, Milner A, Milner A, Bajpai S, Ward DJ, Sahni A. (21 May 2010). Cretaceous extinctions: multiple causes. Science, 328(5981):973; author reply 975-6. PMID: 20489004, DOI: 10.1126/science.328.5981.973-a. FULL REPLY

3. Courtillot V, Besse J, Vandamme D, Montigny R, Jaeger J-J, & Cappetta H (1986). Deccan flood basalts at the Cretaceous/Tertiary boundary? Earth and Planetary Science Letters, 80(3-4), 361–374. doi: 10.1016/0012-821x(86)90118-4. ABSTRACT

4. Esmeray-Senlet, S., Miller, K. G., Sherrell, R. M., Senlet, T., Vellekoop, J., & Brinkhuis, H. (2017). Iridium profiles and delivery across the Cretaceous/Paleogene boundary. Earth and Planetary Science Letters, 457, 117–126. doi:10.1016/j.epsl.2016.10.010. ABSTRACT

5. Hildebrand AR, Penfield GT, Kring DA, Pilkington M, Camargo AZ, Jacobsen SB, & Boynton WV (1 Sept. 1991). Chicxulub Crater: A possible Cretaceous/Tertiary boundary impact crater on the Yucatán Peninsula, Mexico. Geology, 19 (9): 867-871. DOI: https://doi.org/10.1130/0091-7613(1991)019<0867:CCAPCT>2.3.CO;2. ABSTRACT

6. Kaiho K & Oshima N (9 Nov 2017). Site of asteroid impact changed the history of life on Earth: the low probability of mass extinction. Scientific Reports, 7(1):14855. PMID: 29123110, PMCID: PMC5680197, DOI: 10.1038/s41598-017-14199-x. ARTICLE | FREE FULLTEXT PDF

7. Keller G, Adatte T, Berner Z, Harting M, Baum G, Prauss M, Tantawy A, Stueben D (30 Mar 2007). Chicxulub impact predates K–T boundary: New evidence from Brazos, Texas, Earth and Planetary Science Letters, 255(3–4): 339-356. DOI: 10.1016/j.epsl.2006.12.026. ABSTRACT

8. Keller, G. (2014). Deccan volcanism, the Chicxulub impact, and the end-Cretaceous mass extinction: Coincidence? Cause and effect? Geological Society of America Special Papers, 505:57–89. doi:10.1130/2014.2505(03) ABSTRACT

9. Petersen SV, Dutton A, & Lohmann KC. (5 Jul 2016). End-Cretaceous extinction in Antarctica linked to both Deccan volcanism and meteorite impact via climate change. Nature Communications, 7:12079. PMID: 27377632, PMCID: PMC4935969, DOI: 10.1038/ncomms12079. ARTICLE | FREE FULLTEXT PDF

10. Renne PR, Deino AL, Hilgen FJ, Kuiper KF, Mark DF, Mitchell WS 3rd, Morgan LE, Mundil R, & Smit J (8 Feb 2013). Time scales of critical events around the Cretaceous-Paleogene boundary. Science, 339(6120):684-687. PMID: 23393261, DOI: 10.1126/science.1230492. ABSTRACT

11. Renne PR, Sprain CJ, Richards MA, Self S, Vanderkluysen L, Pande K. (2 Oct 2015). State shift in Deccan volcanism at the Cretaceous-Paleogene boundary, possibly induced by impact. Science, 350(6256):76-8. PMID: 26430116. DOI: 10.1126/science.aac7549 ABSTRACT

12. Schoene B, Samperton KM, Eddy MP, Keller G, Adatte T, Bowring SA, Khadri SFR, & Gertsch B (2014). U-Pb geochronology of the Deccan Traps and relation to the end-Cretaceous mass extinction. Science, 347(6218), 182–184. doi:10.1126/science.aaa0118. ARTICLE

13. Schulte P, Alegret L, Arenillas I, Arz JA, Barton PJ, Bown PR, Bralower TJ, Christeson GL, Claeys P, Cockell CS, Collins GS, Deutsch A, Goldin TJ, Goto K, Grajales-Nishimura JM, Grieve RA, Gulick SP, Johnson KR, Kiessling W, Koeberl C, Kring DA, MacLeod KG, Matsui T, Melosh J, Montanari A, Morgan JV, Neal CR, Nichols DJ, Norris RD, Pierazzo E, Ravizza G, Rebolledo-Vieyra M, Reimold WU, Robin E, Salge T, Speijer RP, Sweet AR, Urrutia-Fucugauchi J, Vajda V, Whalen MT, Willumsen PS. (5 Mar 2010). The Chicxulub asteroid impact and mass extinction at the Cretaceous-Paleogene boundary. Science, 327(5970):1214-8. PMID: 20203042, DOI: 10.1126/science.1177265. ABSTRACT

14. Tobin TS (24 Nov 2017). Recognition of a likely two phased extinction at the K-Pg boundary in Antarctica. Scientific Reports, 7(1):16317. PMID: 29176556, PMCID: PMC5701184, DOI: 10.1038/s41598-017-16515-x. ARTICLE | FREE FULLTEXT PDF 

15. Vogt, PR (8 Dec 1972). Evidence for Global Synchronism in Mantle Plume Convection and Possible Significance for Geology. Nature, 240(5380), 338–342. doi:10.1038/240338a0 ABSTRACT

How to wash SOME pesticides off produce

While the EU is moving on with legislation to curtail harmful chemicals in our food, water, and air, the USA is taking a few steps backwards. The most recent de-regulation concerns chlorpyrifos (CFP), a horrible pesticide banned in the EU in 2008 (and in most of the world; China also prohibited its use on produce in 2016). CFP is associated with serious neurodevelopmental defects in humans and is highly toxic to wildlife, particularly bees.

The paper that I’m covering today wanted to see if there is anything the consumer can do about pesticides in their produce. Unfortunately, they did not look at CFP. And why would they? At the time this study was conducted they probably thought, like the rest of us, that CFP is over and done with [breathe, slowly, inhale, exhale, repeat, focus].

Yang et al. (2017) bought organic Gala apples and then exposed them to two common pesticides, thiabendazole and phosmet (an organophosphate), at doses commonly used by farmers (125 ng/cm²). Then they washed the apples in three solutions: sodium bicarbonate (baking soda, NaHCO3, at a concentration of 10 mg/mL), Clorox (germicidal bleach, at 25 mg/L available chlorine), and tap water.

Before and after the washes the researchers used surface-enhanced Raman spectroscopy (basically, a laser-based technique for detecting trace chemicals on and just beneath a surface) to take a closer look at the apples.

They found out that:

1) “Surface pesticide residues were most effectively removed by sodium bicarbonate (baking soda, NaHCO3) solution when compared to either tap water or Clorox bleach” (abstract).

2) The more you wash, the more pesticide you remove. If you immerse apples in baking soda solution for 12 minutes (for thiabendazole) or 15 minutes (for phosmet) and then rinse with water, there will be no detectable residue of these pesticides on the surface.

3) “20% of applied thiabendazole and 4.4% of applied phosmet penetrated into apples” (p. 9751), and this portion cannot be removed by washing. Thiabendazole penetrates into the apple up to 80 μm, four times deeper than phosmet (which goes up to 20 μm).

4) “the standard postharvest washing method with Clorox bleach solution for 2 min did not effectively remove surface thiabendazole” (p. 9748).

5) Phosmet is completely degraded by baking soda, whereas thiabendazole appears to be only partially so.

True to my nitpicking nature, I wish the authors had washed the apples in tap water for 8 minutes, not 2, as they did for Clorox and baking soda in the internal pesticide residue removal experiment. Nevertheless, the results stand, as they are robust and the detection method is ultrasensitive, able to detect thiabendazole down to 2 μg/L and phosmet down to 10 μg/L.

Thiabendazole is a pesticide that works by interfering with a basic enzymatic reaction in anaerobic respiration. I’m an aerobe, so I shouldn’t worry about this pesticide too much unless I get a huge dose of it, in which case it is poisonous and carcinogenic, like most things in high doses. Phosmet, on the other hand, is an acetylcholinesterase (AChE) inhibitor (AChEI), meaning its effects in humans are akin to cholinergic poisoning. Normally, acetylcholine (ACh) binds to its muscarinic and nicotinic receptors in your muscles and brain for the proper functioning of same. AChE breaks down ACh when it is not needed anymore by said muscles and brain. Therefore, an AChEI stops AChE from breaking down ACh, resulting in more ACh overall than is good for you. Meaning it can kill you. Phosmet’s effects, in addition to, well…, death from acute poisoning, include trouble breathing, muscle weakness or tension, convulsions, anxiety, paralysis, and quite possibly memory, attention, and thinking impairments. Needless to say, it’s not so great for child development either. Think nerve gas, which is also an AChEI, and you’ll get a pretty good picture. Oh, it’s also a hormone mimicker.

I guess I’m back to buying organic again. Long ago I was duped for a short while into buying organic produce for my family believing, like many others, that it is pesticide-free. And, like many others, I was wrong. A quick PubMed search told me that some of the “organic” pesticides are quite unpleasant. But I’ll take copper sulfate over chlorpyrifos any day. The choice is not between healthy and unhealthy but between bad and worse. I know, I know, the paper is not about CFP. I have a lot of pet peeves, alright?

Meanwhile, I gotta go make a huge batch of baking soda solution. Thanks, Yang et al. (2017)!
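(If, like me, you’re heading to the kitchen: 10 mg/mL is numerically the same as 10 g per liter. Here’s a minimal sketch of the arithmetic; the concentration comes from the paper, but the ~4.8 g-per-level-teaspoon conversion is my own rough approximation.)

```python
# Kitchen arithmetic for the wash solution used by Yang et al. (2017).
# The 10 mg/mL concentration is from the paper; the teaspoon conversion
# (~4.8 g of NaHCO3 per level teaspoon) is my own rough approximation.

def baking_soda_grams(volume_liters: float, conc_mg_per_ml: float = 10.0) -> float:
    """Grams of baking soda needed: mg/mL is numerically the same as g/L."""
    return conc_mg_per_ml * volume_liters

liters = 2.0
grams = baking_soda_grams(liters)
print(f"{grams:.0f} g (~{grams / 4.8:.1f} level teaspoons) for {liters:.0f} L of water")
# -> 20 g (~4.2 level teaspoons) for 2 L of water
```

Soak for 12–15 minutes, rinse, and remember that whatever already penetrated the peel stays put.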


REFERENCE: Yang T, Doherty J, Zhao B, Kinchla AJ, Clark JM, & He L (8 Nov 2017, Epub 25 Oct 2017). Effectiveness of Commercial and Homemade Washing Agents in Removing Pesticide Residues on and in Apples. Journal of Agricultural and Food Chemistry, 65(44):9744-9752. PMID: 29067814, doi: 10.1021/acs.jafc.7b03118. ARTICLE

By Neuronicus, 19 May 2018

NASA, not media, is to blame for the Twin Study 7% DNA change misunderstanding

In case the title threw you out of the loop, let me pull you back in. In 2015, NASA sent Scott Kelly to the International Space Station while his twin brother, Mark, stayed on the ground. When Scott came back, NASA ran a bunch of tests on them to see how space affects the human body. Some of the findings were published a few weeks ago. Among the findings, one caught the eye of the media, which ran stories like: Astronaut Scott Kelly now has different DNA to his identical twin brother after spending just a year in space (Daily Mail), Astronaut’s DNA no longer matches identical twin’s after time in space, NASA finds (Channel 3), Astronaut Scott Kelly’s genes show long-term changes after a year in space (NBC), Astronaut Scott Kelly is no longer an identical twin: How a year in space altered his DNA (Fox News), Scott Kelly Spent a Year in Space and Now His DNA Is Different From His Identical Twin’s (Time), Nasa astronaut twins Scott and Mark Kelly no longer genetically identical after space trip (Telegraph), Astronaut’s DNA changes after spending year in space when compared to identical twin brother (The Independent), Astronaut Scott Kelly’s DNA No Longer Matches Identical Twin’s After a Year in Space (People), NASA study: Astronaut’s DNA no longer identical to his identical twin’s after year in space (The Hill), NASA astronaut who spent a year in space now has different DNA from his twin (Yahoo News), Scott Kelly: NASA Twins Study Confirms Astronaut’s DNA Actually Changed in Space (Newsweek), If you go into space for a long time, you come back a genetically different person (Quartz), Space can change your DNA, we just learned (Salon), NASA Confirms Scott Kelly’s Genes Have Been Altered By Space Travel (Tech Times), even ScienceAlert 😦 ran Scott Kelly’s DNA Is No Longer Identical to His Twin’s After a Year in Space. And dozens and dozens more…

Even the astronauts themselves said their DNA is different and they are no longer twins:

[Images: Mark Kelly’s and Scott Kelly’s tweets]

Alas, dear Scott & Mark Kelly, rest assured that despite these titles and their accompanying stories, you two share the same DNA, still & forever. You are still identical twins until one of you changes species. Because that is what a 7% alteration in human DNA would mean: you’re not human anymore.

So what gives?

Here is the root of all this misunderstanding:

“Another interesting finding concerned what some call the “space gene”, which was alluded to in 2017. Researchers now know that 93% of Scott’s genes returned to normal after landing. However, the remaining 7% point to possible longer term changes in genes related to his immune system, DNA repair, bone formation networks, hypoxia, and hypercapnia” (excerpt from NASA’s press release on the Twin Study on Jan 31, 2018, see reference).

If I didn’t know any better, I too would think that yes, the genes were the ones that changed, such is NASA’s verbiage. As a matter of actual fact, it is the gene expression which changed. Remember that DNA makes RNA and RNA makes protein? That’s the central dogma of molecular biology. A sequence of DNA that codes for a protein is called a gene. Those sequences do not change. But when to make a protein, how much protein, in what way, where to make this protein, which subtly different kinds of protein to make (alternative splicing), when not to make that protein, etc. – all that is called the expression of that gene. And every one of these aspects of gene expression is controlled or influenced by a whole variety of factors, some of them environmental, as drastic as going to space or as insignificant as going to bed.
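If the distinction is still fuzzy, here is a toy sketch of mine – made-up sequence, made-up numbers, only the gene categories borrowed from NASA’s list above – showing how the DNA can stay identical while the expression differs:

```python
# Toy illustration of DNA vs. gene expression. Everything here is made up;
# only the gene categories are borrowed from NASA's press release.
genome_scott = "ATGGTGCACCTGACTCCTGAGGAG"  # hypothetical sequence
genome_mark  = "ATGGTGCACCTGACTCCTGAGGAG"  # identical twin: same sequence

# Expression = how much mRNA each gene is making right now (arbitrary units).
expression_preflight  = {"DNA_repair": 10, "bone_formation": 8, "hypoxia": 2}
expression_postflight = {"DNA_repair": 14, "bone_formation": 5, "hypoxia": 7}

print(genome_scott == genome_mark)  # True -- the DNA itself did not change
for gene, before in expression_preflight.items():
    after = expression_postflight[gene]
    print(f"{gene}: expression {before} -> {after}; sequence unchanged")
```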

Some more scientifically inclined writers understood that the word “expression” was conspicuously missing from the above-mentioned paragraph and either ran clarification titles like After A Year In Space, NASA Astronaut’s Gene Expression Has Changed. Possibly Forever. (Huffington Post) or up-front rebukes like No, space did not permanently alter 7 percent of Scott Kelly’s DNA (The Verge) or No, Scott Kelly’s Year in Space Didn’t Mutate His DNA (National Geographic).

Now, I’d love, LOVE, I tell you, to jump at the throat of the media on this one so I can smugly show how superior my meager blog is when it comes to accuracy. But, I have to admit, this time it is NASA’s fault. Although it is not NASA’s job to teach the central dogma of molecular biology to the media, they are, nonetheless, responsible for their own press releases. In this case, Monica Edwards and Laurie Abadie from NASA Human Research Strategic Communications did a booboo, in the words of the sitcom character Sheldon Cooper. Luckily for these two employees, the editor Timothy Gushanas published this little treat yesterday, right at the top of the press release:

“Editor’s note: NASA issued the following statement updating this article on March 15, 2018:

Mark and Scott Kelly are still identical twins; Scott’s DNA did not fundamentally change. What researchers did observe are changes in gene expression, which is how your body reacts to your environment. This likely is within the range for humans under stress, such as mountain climbing or SCUBA diving.

The change related to only 7 percent of the gene expression that changed during spaceflight that had not returned to preflight after six months on Earth. This change of gene expression is very minimal.  We are at the beginning of our understanding of how spaceflight affects the molecular level of the human body. NASA and the other researchers collaborating on these studies expect to announce more comprehensive results on the twins studies this summer.”
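To see just how different “7% of your DNA” is from “7% of the expression changes persisted”, here is a back-of-the-envelope calculation; NASA’s statement gives no gene counts, so the numbers below are hypothetical round figures of my own choosing:

```python
# Back-of-the-envelope numbers, NOT NASA's: the press release does not say
# how many genes changed expression in flight, so 1,000 is a placeholder.
total_genes       = 20_000                    # rough human protein-coding gene count
changed_in_flight = 1_000                     # hypothetical
still_changed     = 0.07 * changed_in_flight  # the "7%" from the press release

print(f"Genes with lingering expression changes: {still_changed:.0f}")   # 70
print(f"As a fraction of all genes: {still_changed / total_genes:.2%}")  # 0.35%
# And the DNA sequence itself? 0% changed -- which is the whole point.
```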

Good for you for rectifying your mistake, NASA! And good for you too, the few media outlets that corrected their story, like CNN, which changed its title from Astronaut’s DNA no longer same as his identical twin, NASA finds to Astronaut’s gene expression no longer same as his identical twin, NASA finds.

But, seriously, NASA, what’s up with you guys repeatedly screwing up molecular biology stuff?! Remember the arsenic-loving bacteria debacle? That paper is still not retracted and that press release is still up on your website! Ntz, ntz, for shame… NASA, you need a better understanding of basic science and/or better #Scicomm in your press releases. Hiring? I’m offering!


REFERENCE: NASA. Edwards, M. & Abadie, L. (Jan. 31, 2018). NASA Twins Study Confirms Preliminary Findings, Ed. Timothy Gushanas, retrieved on March 14, 15, & 16, 2018. Address: https://www.nasa.gov/feature/nasa-twins-study-confirms-preliminary-findings

By Neuronicus, 16 March 2018

P.S. Sometimes it is a pain to be obsessed with accuracy (cue the smallest violins). For example, I cannot stop myself from adding something just to be scrupulously correct. Since the day they were conceived, identical twins’ DNAs have been diverging. There are all sorts of things that do change the actual sequence of DNA. DNA can be damaged by radiation (which you can get a lot of in space) or exposure to some chemicals. Other changes are simply due to random mutations. So no twins are exactly identical, but the changes are so minuscule, nowhere near 1%, let alone 7%, that it is safe to say that their DNA is identical.

P.P.S. With all this hullabaloo about the 7% DNA change, everybody glossed over – and even I forgot to mention – the one finding that is truly weird: the elongation of telomeres for Scott, the one that was in space. Telomeres are interesting things: they are repetitive sequences of DNA (TTAGGG/AATCCC) at the ends of the chromosomes, repeated thousands of times. The telomere’s job is to protect the ends of the chromosomes. You see, every time a cell divides, the DNA copying machinery cannot copy the last bits of the chromosome (blame it on physics or chemistry, one of them things), and so some of it is lost. So evolution came up with a solution: telomeres, bits of unusable DNA that can be safely ignored and left behind. Or so we think at the moment. The length of telomeres has been implicated in some curious things, like cancer and life span (immortality thoughts, anyone?). The most common finding is the shortening of telomeres associated with stress, but Scott’s were elongated, so that’s the first weird thing. I didn’t even know telomeres could get elongated in living humans. But wait, there is more: NASA said that “the majority of those telomeres shortened within two days of Scott’s return to Earth”. Now that is the second odd thing! If I were NASA, that’s where I would put my money, not on the gene expression patterns.

In Memoriam: Stephen Hawking

Yesterday, March 14, 2018, we lost a great mind and a decent human being. Thank you Dr. Stephen Hawking for showing us the Universe, the small and the big.


I added his seminal doctoral thesis on the Free Resources page.

By Neuronicus, 15 March 2018

 

No Link Between Mass Shootings & Mental Illness

On Valentine’s Day another horrifying school mass shooting happened in the USA, leaving 17 people dead. Just like after the other mass shootings, a lot of people – from media to bystanders, from gun lovers to gun critics, from parents to grandparents, from police to politicians – talk about the link between mental illness and mass shootings. As one with advanced degrees in both psychology and neuroscience, I am tired of explaining over and over again that there is no significant link between the two! Mass shootings happen because an angry person has had enough sorrow, stress, rejection and/or disappointment that leads to hating the ones they think are responsible for it and HAS ACCESS TO A MASS KILLING WEAPON. Yeah, I needed the caps. Sometimes scientists too need to shout to be heard.

So here is the abstract of a book chapter called straightforwardly “Mass Shootings and Mental Illness”. The entire text is available at the links in the reference below.

From Knoll & Annas (2015):

“Common Misperceptions

  • Mass shootings by people with serious mental illness represent the most significant relationship between gun violence and mental illness.
  • People with serious mental illness should be considered dangerous.
  • Gun laws focusing on people with mental illness or with a psychiatric diagnosis can effectively prevent mass shootings.
  • Gun laws focusing on people with mental illness or a psychiatric diagnosis are reasonable, even if they add to the stigma already associated with mental illness.

Evidence-Based Facts

  • Mass shootings by people with serious mental illness represent less than 1% of all yearly gun-related homicides. In contrast, deaths by suicide using firearms account for the majority of yearly gun-related deaths.
  • The overall contribution of people with serious mental illness to violent crimes is only about 3%. When these crimes are examined in detail, an even smaller percentage of them are found to involve firearms.
  • Laws intended to reduce gun violence that focus on a population representing less than 3% of all gun violence will be extremely low yield, ineffective, and wasteful of scarce resources. Perpetrators of mass shootings are unlikely to have a history of involuntary psychiatric hospitalization. Thus, databases intended to restrict access to guns and established by guns laws that broadly target people with mental illness will not capture this group of individuals.
  • Gun restriction laws focusing on people with mental illness perpetuate the myth that mental illness leads to violence, as well as the misperception that gun violence and mental illness are strongly linked. Stigma represents a major barrier to access and treatment of mental illness, which in turn increases the public health burden”.

REFERENCE: Knoll, James L. & Annas, George D. (2015). Mass Shootings and Mental Illness. In book: Gun Violence and Mental Illness, Edition: 1st, Chapter: 4, Publisher: American Psychiatric Publishing, Editors: Liza H. Gold, Robert I. Simon. ISBN-10: 1585624985, ISBN-13: 978-1585624980. FULLTEXT PDF via ResearchGate | FULLTEXT PDF via Psychiatry Online

The book chapter is not a peer-reviewed document, even if both authors are Professors of Psychiatry. To quiet putative voices raising concerns about that, here is a peer-reviewed paper with open access that says basically the same thing:

Swanson et al. (2015) looked at large-scale (thousands to tens of thousands of individuals) data to see if there is any relationship between violence, gun violence, and mental illness. They concluded that “epidemiologic studies show that the large majority of people with serious mental illnesses are never violent. However, mental illness is strongly associated with increased risk of suicide, which accounts for over half of US firearms–related fatalities”. The last sentence is reminiscent of the finding that stricter gun control laws lower the suicide rate.

REFERENCE: Swanson JW, McGinty EE, Fazel S, Mays VM (May 2015). Mental illness and reduction of gun violence and suicide: bringing epidemiologic research to policy. Annals of Epidemiology, 25(5): 366–376. doi: 10.1016/j.annepidem.2014.03.004, PMCID: PMC4211925. FULLTEXT | FULLTEXT PDF.

Further peer-reviewed bibliography (links to fulltext pdfs):

  1. Guns, anger, and mental disorders: Results from the National Comorbidity Survey Replication (NCS-R): “a large number of individuals in the United States have anger traits and also possess firearms at home (10.4%) or carry guns outside the home (1.6%).”
  2. News Media Framing of Serious Mental Illness and Gun Violence in the United States, 1997-2012: “most news coverage occurred in the wake of mass shootings, and “dangerous people” with serious mental illness were more likely than “dangerous weapons” to be mentioned as a cause of gun violence.”
  3. The Link Between Mental Illness and Firearm Violence: Implications for Social Policy and Clinical Practice: “Firearm violence is a significant and preventable public health crisis. Mental illness is a weak risk factor for violence despite popular misconceptions reflected in the media and policy”.
  4. Using Research Evidence to Reframe the Policy Debate Around Mental Illness and Guns: Process and Recommendations: “restricting firearm access on the basis of certain dangerous behaviors is supported by the evidence; restricting access on the basis of mental illness diagnoses is not”.
  5. Mental Illness, Mass Shootings, and the Politics of American Firearms: “notions of mental illness that emerge in relation to mass shootings frequently reflect larger cultural stereotypes and anxieties about matters such as race/ethnicity, social class, and politics. These issues become obscured when mass shootings come to stand in for all gun crime, and when “mentally ill” ceases to be a medical designation and becomes a sign of violent threat”.


By Neuronicus, 25 February 2018

Tomato transcriptome

Like most children, growing up I showed little appreciation for what I had, coveting instead what I did not. Now I realize how fortunate I have been to have grown up half the time in a metropolis and the other half in the countryside. At the farm. A subsistence farm, although I truly loathe the term because we were not just subsisting but thriving off the land, as we planted and harvested a bit of everything and we had a specimen or four of almost all the farm animals, from bipeds to quadrupeds.

I got on this memory lane after reading the paper of Shinozaki et al. (2018) on tomatoes. It was a difficult read for me, punctuated by many term lookups, since botany has evolved quite a bit since the last time I checked, about 25 years ago.

Briefly, the scientists grew tomato plants in a greenhouse at Cornell, NY. They harvested the fruit from 60 plants about 5 to 50 days after the flower was at its peak (DPA, days post anthesis), following this chart:

  • Expanding [fruit] stage (harvested at 5, 10, 20, or 30 DPA)
  • Mature Green stage (full-size green fruit, ≈ 39 DPA),
  • Breaker stage (definite break in color from green to tannish-yellow on less than 10% of the surface, ≈ 42 DPA),
  • Pink stage (50% pink or red color, ≈ 44 DPA),
  • Light red stage (100% light red, ≈ 46 DPA),
  • Red ripe stage (full red for 8 days, ≈ 50 DPA).

(simplified from the Methods section, p. 10, see pic)

Fig. 1 (partial from Shinozaki et al., 2018). A tissue/cell-based transcript profiling of developing tomato fruit. a Traced image of six targeted fruit tissues. Shaded areas of the total pericarp and the placenta were not harvested. b Traced image of five pericarp cells. c Representative pictures of harvested fruit spanning ten developmental stages. d Representative pictures of the stylar end of MG and Br stage fruit. DPA, days post anthesis; MG, mature green; Br, breaker; Pk, pink; LR, light red; RR, red ripe. Credit: DOI: 10.1038/s41467-017-02782-9. License: CC BY 4.0.

Immediately after harvesting, each tomato was scanned with a micro-computed tomograph (micro-CT) to generate a 3D image of the fruit, including its internal structures. Then the fruit was dissected by hand or laser, depending on its size, divided into various tissue types, and preserved either via snap-freezing in liquid nitrogen or standard tissue fixation for light or transmission electron microscopy. Finally, the researchers used kits to extract and analyze the RNA from their samples. And, last but not least, a lot of math & stats.

This is what I got out of it:

  1. A total of 24,660 genes were uniquely expressed in various tomato cell types and at various stages of development.
  2. The tomato ripens from within, meaning from the interior to the exterior and not the other way around.
  3. The ripening seems to be a continuous process, starting before the ‘Breaker’ stage.
  4. The ripening signals originate in the locular tissues (the goo around the seeds; it’s possible that the seeds themselves send the signals to the locular tissue to start the ripening process).
  5. The flesh of the fruit is only one part of the tomato and the most investigated, but the other types of tissue are also important. For example, some genes responsible for aroma and flavor (CTOMT1, TOMLOXC) are predominantly or even exclusively expressed in the flesh, but some genes that improve the nutritional value (SlGAD3) are expressed mostly in the placenta.
  6. The fruit can do photosynthesis, probably for the benefit of its seeds.
  7. Each developmental stage is characterized by a distinct transcriptome profile (by inference, also a distinct proteomic profile, although not necessarily in exact correspondence).
  8. Botany, like any serious science, is complicated.

Ah, I have been vindicated. By science, no less! You see, in my pursuit to recapture the tomato taste of my childhood I sample various homegrown exemplars of Solanum lycopersicum, derived both from more or less failed personal attempts with pots on the balcony and from various farmer’s market vendors. While I can understand – though not approve of – the industrial-scale agro-growers’ practice of picking the tomatoes green and unripe and then treating them with ethylene to prolong shelf life, I completely fail to understand green-picking by the sellers in the farmer’s markets. I had many surreal conversations with such vendors (I cannot call them farmers for the life of me) who more than once attempted to reassure me that 1) everybody’s picking tomatoes green off the vine because that’s how it’s done and 2) ripening happens on the window sill. In vain have I tried to explain the difference between ripened and rotten; in vain have I pointed out that color is only one indicator of ripening; in vain did I explain that during ripening on the vine the plant delivers certain substances to the fruit that change the flesh composition to make it more nutritious for the future seedling, a process that the aforesaid window sill does not partake in. Alas, ultimately, my arguments (and my family’s last 400 years of farming experience) hit the wall of “I’ve been growing tomatoes for three years now and I know what I’m doing. Are you buying or not?” As you might imagine, I end up going home frustrated, staring at some exorbitantly expensive greenish tomatoes that look as sad as I feel.

For me, this is what Shinozaki et al. (2018) validated: ripening is a complex process that involves a lot of physiological changes in the fruit, not merely some extra production of ethylene that can be conveniently supplied externally by a syringe, or rotting on the window sill. Of course, nowhere in the paper do Shinozaki et al. (2018) say that. What they do say is this: “The ripening program is revealed as comprising gradients of gene expression, initiating in internal tissues then radiating outward, and basipetally along a latitudinal axis. We also identify spatial variations in the patterns of epigenetic control superimposed on ripening gradients” (Abstract). Tomayto, tomahto…

Now we know that… simply put, I’m right. Sometimes it is good to be right. I am old enough to prefer happiness and tranquility over rightness & righteousness, but still young enough that sometimes, just sometimes, it feels good to be right. Yes, the Shinozaki et al. (2018) paper exists only for my vindication in my farmer’s market squabbles and not for providing a huge comprehensive atlas of the tomato transcriptome, along with an awesome spatiotemporal map showing the place and time of the expression of genes responsible for fruit ripening, quality traits, and so on.

Good job, Shinozaki et al. (2018)!


REFERENCE: Shinozaki Y, Nicolas P, Fernandez-Pozo N, Ma Q, Evanich DJ, Shi Y, Xu Y, Zheng Y, Snyder SI, Martin LBB, Ruiz-May E, Thannhauser TW, Chen K, Domozych DS, Catalá C, Fei Z, Mueller LA, Giovannoni JJ, & Rose JKC (25 Jan 2018). High-resolution spatiotemporal transcriptome mapping of tomato fruit development and ripening. Nature Communications, 9(1):364. PMID: 29371663, PMCID: PMC5785480, DOI: 10.1038/s41467-017-02782-9. ARTICLE | FREE FULLTEXT PDF | The Tomato Expression Atlas database

By Neuronicus, 7 February 2018

Interview with Jason D. Shepherd, PhD

During its first week of publication, a Cell paper that I covered a couple of weeks ago received a lot of attention from media outlets, like The Atlantic, Scicasts, and Neuroscience News/University of Utah Press Release. It is not my intention to duplicate here their wonderfully done summaries and interviews; rather, to provide answers to some geeky questions arisen in the minds of nerdy scientists like me.

Dr. Shepherd, you are the corresponding author of a paper published on Jan. 11 in Cell about a protein heavily involved in memory formation, called Arc. Your team and another team, from the University of Massachusetts, who published in the same issue of Cell, simultaneously discovered that Arc looks like and behaves like a virus. The protein “infects” nearby cells, in this case neurons, with instructions on how to make more of itself, i.e. it shuttles its own mRNA from one cell to another.

Neuronicus: Why is this discovery so important?

​Jason D. Shepherd: I think there’s a couple of big implications of this work:

  1. ​The so called “junk” DNA in our genomes that come from viruses and transposable elements actually provide source material for new genes. Arc isn’t the first example, but it’s the first prominent brain gene to have these kinds of origins.

  2. This is the first demonstration that cellular proteins are capable of assembling into capsid-like structures. This is a completely new way of thinking about communication between cells.

  3. We think there may be other genes that can also form capsids, suggesting this method of signaling is fairly common in organisms.

N: 2) When you and your colleagues compared Arc’s genetic sequence across species you concluded Arc comes from a virus that infected four-legged animals some time ago. A little time later the virus infected the flies too. When did these events occur?

​JDS: So we think the origins are from a retrotransposon not a virus. These are DNA sequences or elements that “jump” into the host genome. Think of them as primitive viruses. Indeed, these elements are thought to be the ancestors of retroviruses like HIV. The mammalian Arc gene seems to have originated ~400 million years ago, the fly about 150 million years ago. ​

N: 3) So, if Arc has been so successfully repurposed by the tetrapod and fly ancestors to aid memory formation, what does that mean for the animals and insects before the infection? I understand that we move now into the realm of speculation, but who better to speculate on these things than the people who work on Arc? The question is: did these pre-infection creatures have bad and short memories? The alternate view would be that they had similar memory abilities due to a different mechanism that was replaced by Arc. Which one do you think is more likely?

​JDS: Good question. It’s certainly the case that memory capacity improved in tetrapods, but unclear if Arc is the sole reason. I suspect that Arc confers some unique aspects to brains, otherwise it would not have been so conserved after the initial insertion event, but I also think there are probably other Arc-like genes in other organisms that do not have Arc. I will also note that we are not even sure, yet, that the fly Arc is important for fly memory/learning.

N: 4) Remaining in the realm of speculation, if this intercellular mRNA transport proves to be ubiquitous for a variety of mRNAs, what does that say about the transcriptome of a cell at any given time? From a practical point of view, a cell is what it is made of, meaning the ensemble of all its enzymes and proteins and so on, which ultimately reflects its transcriptome. So if a cell can just alter its neighbor’s transcriptome, does that mean it’s possible to alter its function too? Even more outrageously speculative, perhaps even its type? Can we make cancer cells commit suicide by shooting Arc capsules of mRNA at them?

​JDS: Yes! Cool ideas. I think this is quite likely, that these signaling extracellular vesicles can dramatically alter the state of a cell. We are obviously looking into this. ​

N: 5) Finally, in the paper, the Arc capsules containing mRNA are referred to as ACBAR (Arc Capsid Bearing Any RNA). At first I thought it was a reference to “Allahu akbar” which is Arabic for ‘God is greatest’, the allusion being “ACBAR! Our exosome is the greatest!” or “Arc Acbar! Our Arc is the greatest!”. Is this where the naming is coming from?

​JDS: No no. As I said on twitter, my lab came up with this acronym because we are all Star Wars nerds and the classic “It’s a trap!” line from general Ackbar seemed apt for something that was trapping RNA. ​

Below is the Twitter exchange Dr. Shepherd refers to:

[Image: Twitter exchange with Dr. Shepherd about the ACBAR acronym]

Dr. Shepherd, thank you for your time! And congratulations on a well done paper and a well told story. Your Methods section is absolutely great; anybody can follow the instructions and replicate your data. Somebody in your lab must have kept great records. Congratulations again!

The ACBAR graphic is from the Cell abstract (©2017 Elsevier Inc.) but since it’s for comedic purposes, I’d say it is fair use. Same for the Lego Ackbar.

By Neuronicus, 28 January 2018

P. S. Since I have obviously managed to annoy the #StarWars universe and twitterverse because I depicted General Ackbar using a Jedi sword when he’s not a Jedi, I thought only fair to annoy the other half of the world, the #trekkies. So here you go:

[Image: the Star Trek version of the joke]

 

Arc: mRNA & protein from one neuron to another

EDIT 1 [Jan 17, 2018]: I promised four days ago that I would post this while it was still hot, but my Internet was down, thanks to the only behemoth provider in the USA. Rated the worst company in the nation, too. You definitely know by now whom I’m talking about. Grrrr… Anyway, here is the paper:

As promised, today’s paper talks about mRNA transfer between neurons.

Pastuzyn et al. (2018) looked at the gene Arc in neurons because they thought its Gag sequence looked suspiciously similar to that of some retroviruses. Could it be possible that it also behaves like a virus?

Arc is heavily involved in the immune system, is essential for the formation of long-term memories, and is involved in all sorts of diseases, like schizophrenia and Alzheimer’s, among other things.

Pastuzyn et al. (2018) is a relatively long and dense paper, albeit well written. So I thought that this time, instead of giving you a summary of their research, it would be better to give you the authors’ story directly, in their own words, written as subtitles in the Results section (bold letters – the authors’ words; normal font – mine). Warning: this is a much more jargon-dense blog post than my previous one on the same topic and, because there is so much material, I will not explain every term.

  • Fly and Tetrapod (us) Arc Genes Independently Originated from Distinct Lineages of Ty3/gypsy Retrotransposons, the phylogenomic analyses tell us, meaning the authors have done a lot of computer-assisted comparisons of similar forms of the gene in hundreds of species.
  • Arc Proteins Self-Assemble into Virus-like Capsids. Arc likes to oligomerize spontaneously (dimers and trimers). The oligomers resemble virus-like capsids, similar to HIV.
  • Arc Binds and Encapsulates RNA. Although it loves its own RNA about 10 times more than other RNAs, it’s a promiscuous protein (it doesn’t care which RNA, as long as the rules of stoichiometry are followed). Arc capsids encapsulate the Arc protein (maybe other proteins too?), its mRNA, and whatever mRNA happened to be in the vicinity at the time of encapsulation. Arc capsids are able to protect the mRNA from RNases.
  • Arc Capsid Assembly Requires RNA. If there is no RNA around, the capsids are few and poorly formed.
  • Arc Protein and Arc mRNA Are Released by Neurons in Extracellular Vesicles. The Arc capsid packages Arc protein & Arc mRNA into extracellular vesicles (EVs). The size of these EVs is < 100 nm, putting them in the exosome category. This exosome, which the authors gave the unfortunate name of ACBAR (Arc Capsid Bearing Any RNA), is expelled from cortical neurons in an activity-dependent manner. In other words, when neurons are stimulated, they release ACBARs.
  • Arc Mediates Intercellular Transfer of mRNA in Extracellular Vesicles. ACBARs dock onto the host cell and then undergo clathrin-dependent endocytosis, meaning they release their cargo inside the host cell. The levels of Arc protein and Arc mRNA peak in a host hippocampal cell within four hours of incubation. The ACBARs tend to congregate around the donor cells’ dendrites.
  • Transferred Arc mRNA Can Undergo Activity-Dependent Translation. Activating the group 1 metabotropic glutamate receptor (mGluR1/5) by application of the agonist DHPG induces a significant increase in the amount of Arc protein in the host neurons.

This is a veritable tour de force paper. The Results section has 7 subsections, each with multiple experiments to dot every i and cross every t. I’m eyeballing about 40 experiments. It is true that there are 13 authors on the paper from different institutions – yeay for collaboration! – but c’mon! Is this what you need to get into Cell these days? Apparently so. Don’t get me wrong, this is an outstanding paper. But in the end it is still only one paper, which means only one first author. The rest are there for the ride, because for a tenure-track application nobody cares about your papers in CNS (Cell, Nature, Science = The Central Nervous System of the scientific community, har, har) if you’re not the first author. The increasing amount of work you need to get published in top-tier journals these days is becoming a pet peeve of mine, as I keep mentioning it (for example, here).

My pet peeves aside, Pastuzyn et al. (2018) is an excellent paper that opens interesting practical (drug delivery) and theoretical (biological repurposing of ancient invaders) gates. Kudos!


REFERENCE: Pastuzyn ED, Day CE, Kearns RB, Kyrke-Smith M, Taibi AV, McCormick J, Yoder N, Belnap DM, Erlendsson S, Morado DR, Briggs JAG, Feschotte C, & Shepherd JD. (11 Jan 2018). The Neuronal Gene Arc Encodes a Repurposed Retrotransposon Gag Protein that Mediates Intercellular RNA Transfer. Cell, 172(1-2):275-288.e18. PMID: 29328916. doi: 10.1016/j.cell.2017.12.024. ARTICLE | FULLTEXT PDF via ResearchGate

P.S. I said that ACBAR is an unfortunate acronym because, I don’t know about you, but I for one wouldn’t want my discovery to be linked either with a religion or with terrorist cries, even if that link is made only by a small fraction of the population. Although I can totally see the naming-by-committee going: “ACBAR! Our exosome is the greatest! Yeay!” or “Arc Acbar! Our Arc is the greatest. Double yeay!”. On second thought, it’s kindda nerdy geeky neat. I still wouldn’t have done it though…

By Neuronicus, 14 January 2018

EDIT 2 [Jan 22, 2018]: There is another paper that discovered that Arc forms capsids that encapsulate RNA and then shuttle it across the neuromuscular junction in Drosophila (fly). To their credit, Cell published these papers back-to-back so neither group gets scooped of their discovery. From what I can see, the discoveries really happened simultaneously, so I modified my infopic to reflect that (both papers were submitted in January 2017, received in revised version on August 15, 2017, and published in the same issue on January 11, 2018). Here is the reference to the other article:

Ashley J, Cordy B, Lucia D, Fradkin LG, Budnik V, & Thomson T (11 Jan 2018). Retrovirus-like Gag Protein Arc1 Binds RNA and Traffics across Synaptic Boutons, Cell. 172(1-2): 262-274.e11. PMID: 29328915. doi: 10.1016/j.cell.2017.12.022. ARTICLE

EDIT 3 [Jan 29, 2018]: Dr. Shepherd, the last author of the paper I featured, was kind enough to answer a few of my questions about the implications of his and his team’s findings, answers which you will find here.

By Neuronicus, 22 January 2018

The FIRSTS: mRNA from one cell can travel to another cell and be translated there (2006)

I’m interrupting the series on cognitive biases (unskilled-and-unaware, superiority illusion, and depressive realism) to tell you that I admit it, I’m old. -Ish. Well, ok, I’m not that old. But the following paper made me feel that old. Because it invalidates some stuff I thought I knew about molecular cell biology. Mind totally blown.

It all started with a paper freshly published two days ago, which I’ll cover tomorrow. It’s about what the title says: mRNA can travel between cells, packaged nicely in vesicles, and once in a target cell it can be made into protein there. I’ll explain – briefly! – why this is such a mind-blowing thing.

Fig. 1. Illustration of the central dogma of biology: information transfer between DNA, RNA, and protein. Courtesy of Wikipedia, PD

We’ll start with the central dogma of molecular biology (specialists, please bear with me): the DNA is transcribed into RNA and the RNA is translated into protein (see Fig. 1). It is an oversimplification of the complexity of information flow in a biological system, but it’ll do for our purposes.

DNA needs to be transcribed into RNA because RNA is a much more flexible molecule and thus can do many things. So RNA is the traveling mule between DNA and the place where its information becomes protein, i.e. the ribosome. Hence the name mRNA. Just kidding; m stands for messenger RNA (not that I will ever be able to call it that again: muleRNA is stuck in my brain now).

There are many kinds of RNA: some don’t even get out of the nucleus, some are chopped and re-glued (alternative splicing), some decide which bits of DNA (genes) are to be expressed, some are busy housekeepers, and so on. Once an RNA has finished its business, it is degraded in many inventive ways. It cannot leave the cell because it cannot cross the cell membrane. And that was that. Or so I’d been taught.

Exceptions to the above were viruses, whose ways of going from cell to cell are very clever. A virus is a stretch of nucleic acids (DNA and/or RNA) and some proteins encapsulated in a blob (capsid). Not a cell!

In the ’90s several groups were looking at some blobs (yes, most stuff in biology can be defined by the all-encompassing and enlightening term of ‘blob’) that cells spew out every now and then. These were termed extracellular vesicles (EVs) for obvious reasons. It turned out that many kinds of cells were doing it, and on a much more regular basis than previously thought. The contents of these EVs varied quite a bit, based on the type of cells studied. Proteins, mostly, and maybe some cytoplasmic debris. In the ’80s it was thought that this was one way for a cell to get rid of trash. But in 1982, Stegmayr & Ronquist had shown that prostate cells release some EVs that increase sperm cell motility (Raposo & Stoorvogel, 2013), so, clearly, the EVs were more than trash. Soon it became evident that EVs were another way of cell-to-cell communication. (Note to self: the first time intercellular communication by EVs was demonstrated was in 1982, Stegmayr & Ronquist. Maybe I’ll dig out the paper to cover it sometime.)

So. In 2005, Baj-Krzyworzeka et al. (2006) looked at some human cancer cells to see what they spew out and for what purpose. They saw that the cancer cells were transferring some of the tumor proteins, packaged in EVs, to monocytes. For devious purposes, probably. And then they made what looks to me like a serious leap in reasoning: since the EVs contain tumor proteins, why wouldn’t they also contain the mRNA for those proteins? My first answer to that would have been: “because it would be rapidly degraded”. And I would have been wrong. To my credit, if the experiment didn’t take up too many resources I still would have done it, especially if I had some random primers lying around the lab. Luckily for the world, I was not in charge of this particular experiment, and Baj-Krzyworzeka et al. (2006) proceeded with real-time PCR (polymerase chain reaction), which showed them that the EVs released by the tumor cells also contained mRNA.
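For those who never met real-time PCR: the standard way its raw output becomes a “how much mRNA” number is the 2^–ΔΔCt method of Livak & Schmittgen (2001). A minimal sketch with hypothetical Ct values – Baj-Krzyworzeka et al. used the technique for detection, so this is an illustration of the general method, not their analysis:

```python
# Real-time PCR 101: the 2^-ddCt relative quantification of Livak &
# Schmittgen (2001). Ct = the amplification cycle at which fluorescence
# crosses threshold; fewer cycles means more starting mRNA.
# The Ct values below are hypothetical, for illustration only.

def fold_change(ct_target_sample: float, ct_ref_sample: float,
                ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression of a target mRNA, normalized to a reference gene."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** -dd_ct

# Hypothetical Ct values: tumor mRNA in EVs vs. a control preparation
print(fold_change(22.0, 18.0, 26.0, 18.0))  # 16.0 -> ~16x more target mRNA
```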

Now the million-dollar, stare-you-in-the-face question was: is this mRNA functional? Meaning, once delivered to the host cell, would it be translated into protein?

Six months later the group answered it. Ratajczak et al. (2006) used embryonic stem cells as the donor cells and hematopoietic progenitor cells as host cells. First, they found out that if you let the donors spit EVs at the hosts, the hosts fare much better (better survival, upregulated good genes, phosphorylated MAPK to induce proliferation, etc.). Next, they looked at the contents of the EVs and found out that they contained proteins and mRNA that promote those good things (Wnt-3 protein, mRNA for transcription factors, etc.). Next, to make sure that the host cells don’t show this enrichment all of a sudden out of the goodness of their little hearts but rather due to the mRNA from the donor cells, the authors looked at the expression of one of the transcription factors (Oct-4) in the hosts. They used as host a cell line (SKL) that does not express the pluripotency marker Oct-4. So if the hosts express this protein, it must have come only from outside. Lo and behold, they did. This means that the mRNA carried by the EVs is functional (Fig. 2).

Fig. 2. Cell-to-cell mRNA transfer via extracellular vesicles (EVs). DNA is transcribed into RNA. A portion of the RNA is translated into protein and another portion remains untranslated. Both the resultant protein and the mRNA can get packaged into a vesicle: either repackaged into a microvesicle (a budding-off of the cell membrane that shuttles cargo back and forth, about 100–300 nm in size) or packaged into a newly formed exosome (<100 nm) inside a multivesicular endosome (the yellow circle). The cell releases these vesicles into the intercellular space. The vesicles dock onto the host cell’s membrane and empty their cargo.

What bugs me is that these papers came out in a period when I was doing some heavy reading. How did I miss this?! Probably because they were published in cancer journals, not my field. But this is big enough that you’d think others would mention it. (If you’re a recurrent reader of my blog, by now you should be familiar with my stream-of-consciousness writing and my admittedly sometimes annoying in-parenthesis meta-cognitions :D). So how did I miss this? How many more great discoveries have I missed? Am I the only one to discover such fundamental gaps in my knowledge? And thus the imposter syndrome takes root.

Just kidding, I don’t have the imposter syndrome. If anything, I got a superiority illusion complex. And I am absolutely sure that many, many scientists read things they consider fundamental to their way of thinking about the world all the time and wonder what other truly great discoveries are out there already that they missed.

Frankly, I should probably be grateful to this blog – and my friend GT who made me do it – because without nosing outside my field in search of material for it I would have probably remained ignorant of this awesome discovery. So, even if this is a decade-old discovery for you, for me it is one day old and I am a bit giddy about it.

This is a big deal because of the theoretical implications: a cell’s transcriptome (all the mRNA expressed in that cell) varies not only with its needs, activity, and experiences, but also with its neighbors’! A cell is, more or less, its transcriptome. Soooo… if we can change that at will, does that mean we can change the type or function of the cell too? There are so many questions that such a discovery raises! And possibilities.

This is also a big deal because it opens up not a new therapy, or a new therapy direction, or a new drug class, but a new DELIVERY METHOD, the Holy Grail of Pharmacopeia. You just put your drug in one of these vesicles and let nature take its course. Of course, there are all sorts of roadblocks to overcome, like specificity, toxicity, etc. Looks like some are already conquered as there are several clinical trials out there that take advantage of this mechanism and I bet there will be more.

Stop by tomorrow for a freshly published paper on this mechanism in neurons.


REFERENCES:

1) Baj-Krzyworzeka M, Szatanek R, Weglarczyk K, Baran J, Urbanowicz B, Brański P, Ratajczak MZ, & Zembala M. (Jul. 2006, Epub 9 Nov 2005). Tumour-derived microvesicles carry several surface determinants and mRNA of tumour cells and transfer some of these determinants to monocytes. Cancer Immunology, Immunotherapy, 55(7):808-818. PMID: 16283305, DOI: 10.1007/s00262-005-0075-9. ARTICLE

2) Ratajczak J, Miekus K, Kucia M, Zhang J, Reca R, Dvorak P, & Ratajczak MZ (May 2006). Embryonic stem cell-derived microvesicles reprogram hematopoietic progenitors: evidence for horizontal transfer of mRNA and protein delivery. Leukemia, 20(5):847-856. PMID: 16453000, DOI: 10.1038/sj.leu.2404132. ARTICLE | FREE FULLTEXT PDF 

Bibliography:

Raposo G & Stoorvogel W. (18 Feb. 2013). Extracellular vesicles: exosomes, microvesicles, and friends. The Journal of Cell Biology, 200(4):373-383. PMID: 23420871, PMCID: PMC3575529, DOI: 10.1083/jcb.201211138. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 13 January 2018

The FIRSTS: the Dunning–Kruger effect (1999) or the unskilled-and-unaware phenomenon

Much talked about these days in the media, the unskilled-and-unaware phenomenon has been mused upon since, as they say, time immemorial, but was not actually seriously investigated until the ’80s. The phenomenon refers to the observation that incompetents overestimate their competence whereas the competent tend to underestimate their skill (see Bertrand Russell’s brilliant summary of it).

[Image: Bertrand Russell quote from “The Triumph of Stupidity” (1933)]

Although the phenomenon has gained popularity under the name of the “Dunning–Kruger effect”, it is my understanding that whereas the phenomenon refers to the above-mentioned observation, the effect refers to the cause of the phenomenon, namely that the exact same skills required to make one proficient in a domain are the same skills that allow one to judge proficiency. In the words of Kruger & Dunning (1999),

“those with limited knowledge in a domain suffer a dual burden: Not only do they reach mistaken conclusions and make regrettable errors, but their incompetence robs them of the ability to realize it” (p. 1132).

Today’s paper on the Dunning–Kruger effect is the third in the cognitive biases series (the first was on depressive realism and the second on the superiority illusion).

Kruger & Dunning (1999) took a look at incompetence with the eyes of well-trained psychologists. As usual, let’s start by defining the terms so we are on the same page. The authors tell us, albeit in a footnote on p. 1122, that:

1) incompetence is a “matter of degree and not one of absolutes. There is no categorical bright line that separates ‘competent’ individuals from ‘incompetent’ ones. Thus, when we speak of ‘incompetent’ individuals we mean people who are less competent than their peers”.

and 2) The study is on domain-specific incompetents. “We make no claim that they would be incompetent in any other domains, although many a colleague has pulled us aside to tell us a tale of a person they know who is ‘domain-general’ incompetent. Those people may exist, but they are not the focus of this research”.

That being clarified, the authors chose 3 domains where they believe “knowledge, wisdom, or savvy was crucial: humor, logical reasoning, and English grammar” (p.1122). I know that you, just like me, can hardly wait to see how they assessed humor. Hold your horses, we’ll get there.

The subjects were psychology students, the ubiquitous guinea pigs of most psychology studies since the discipline started to be taught in the universities. Some people in the field even declaim with more or less pathos that most psychological findings do not necessarily apply to the general population; instead, they are restricted to the self-selected group of undergrad psych majors. Just as the biologists know far more about the mouse genome and its maladies than about humans’, so do the psychologists know more about the inner workings of the psychology undergrad’s mind than, say, the average stay-at-home mom. But I digress, as usual.

The humor was assessed thusly: students were asked to rate on a scale from 1 to 11 the funniness of 30 jokes. Said jokes were previously rated by 8 professional comedians and that provided the reference scale. “Afterward, participants compared their ‘ability to recognize what’s funny’ with that of the average Cornell student by providing a percentile ranking. In this and in all subsequent studies, we explained that percentile rankings could range from 0 (I’m at the very bottom) to 50 (I’m exactly average) to 99 (I’m at the very top)” (p. 1123). Since the social ability to identify humor may be less rigorously amenable to quantification (despite comedians’ input, which did not achieve a high interrater reliability anyway) the authors chose a task that requires more intellectual muscles. Like logical reasoning, whose test consisted of 20 logical problems taken from a Law School Admission Test. Afterward the students estimated their general logical ability compared to their classmates and their test performance. Finally, another batch of students answered 20 grammar questions taken from the National Teacher Examination preparation guide.

In all three tasks,

  • Everybody thought they were above average, showing the superiority illusion.
  • But the people in the bottom quartile (the lowest 25%), dubbed incompetents (or unskilled), overestimated their abilities the most, by approximately 50 percentile points. They were also unaware that, in fact, they scored the lowest.
  • In contrast, people in the top quartile underestimated their competence, but not by the same degree as the bottom quartile overestimated theirs, only by about 10–15 percentile points (see Fig. 1).

[Fig. 1: perceived ability vs. actual test performance, by performance quartile, from Kruger & Dunning (1999)]

I wish the paper showed scatter-plots with a fitted regression line instead of the quartile graphs without error bars, so I could judge the data for myself. I mean, everybody thought they were above average? Not a single one out of more than three hundred students thought they were kindda… meah? The authors did not find any gender differences in any of the experiments.
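In that spirit, here is a toy simulation – entirely mine, not the paper’s data – of what quartile averaging does when everyone anchors their self-estimate above average and adjusts only weakly toward their true standing; the anchor, slope, and noise values are arbitrary:

```python
import random

random.seed(1)
N = 320  # roughly the number of students across the studies

# Toy model (mine, not Kruger & Dunning's): self-estimates anchor near the
# 65th percentile and move only weakly toward the true percentile, plus noise.
actual = [random.uniform(0, 100) for _ in range(N)]
perceived = [min(99, max(0, 0.25 * a + 0.75 * 65 + random.gauss(0, 12)))
             for a in actual]

pairs = sorted(zip(actual, perceived))  # sort by actual performance
for q in range(4):
    chunk = pairs[q * N // 4:(q + 1) * N // 4]
    mean_actual = sum(a for a, _ in chunk) / len(chunk)
    mean_perceived = sum(p for _, p in chunk) / len(chunk)
    print(f"Quartile {q + 1}: actual {mean_actual:5.1f}, perceived {mean_perceived:5.1f}")
# The bottom quartile "overestimates" by ~40 points and the top quartile
# "underestimates" by ~15 -- the paper's signature pattern.
```

Quartile means alone cannot tell genuine metacognitive blindness apart from anchored guessing plus noise, which is exactly why I want the scatter-plots.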

Next, the authors tested the hypothesis about the unskilled that “the same incompetence that leads them to make wrong choices also deprives them of the savvy necessary to recognize competence, be it their own or anyone else’s” (p. 1126). And they did that by having both the competents and the incompetents see the answers that their peers gave on the tests. Indeed, the incompetents not only failed to recognize competence, but they continued to believe they had performed very well in the face of contrary evidence. In contrast, the competents adjusted their ratings after seeing their peers’ performance, so they no longer underestimated themselves. In other words, the competents learned from seeing others’ answers, but the incompetents did not.

Based on this data, Kruger & Dunning (1999) argue that the incompetents are so because they lack the skills to recognize competence and error in themselves or others (jargon: lack of metacognitive skills). The competents, meanwhile, underestimate themselves because they assume everybody else does as well as they did; when shown the evidence that other people performed poorly, they become accurate in their self-evaluations (jargon: the false-consensus effect, a.k.a. the social-projection error).

So, the obvious implication is: if incompetents learn to recognize competence, does that also translate into them becoming more competent? The last experiment in the paper attempted to answer just that. The authors had 70 students complete a short (10 min) logical-reasoning training session while another 70 students did something unrelated for 10 min. The data showed that the trained students not only improved their self-assessments (still showing the superiority illusion, though), but also improved their performance. Yeays all around, all is not lost, there is hope left in the world!

This is an extremely easy read. I totally recommend it to non-specialists. Compare Kruger & Dunning (1999) with Pennycook et al. (2017): they both talk about the same subject and they both are redoubtable personages in their fields. But while the former is a pleasant leisurely read, the latter lacks mundane operationalizations and requires serious familiarization with the literature and its jargon.

Since Kruger & Dunning (1999) is behind the paywall of the infamous APA website (infamous because they don’t even let you see the abstract, and even with institutional access it is difficult to extract the papers out of them, as if they own the darn things!), write to me at scientiaportal@gmail.com specifying that you need it for educational purposes and promise not to distribute it for financial gain, and thou shalt have its .pdf. As always. Do not, under any circumstances, use a sci-hub server to obtain this paper illegally! Actually, follow me on Twitter @Neuronicus to find out exactly which servers to avoid.

REFERENCES:

1) Kruger J, & Dunning D. (Dec. 1999). Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6):1121-1134. PMID: 10626367. ARTICLE

2) Russell, B. (1931-1935). “The Triumph of Stupidity” (10 May 1933), p. 28, in Mortals and Others: American Essays, vol. 2, published in 1998 by Routledge, London and New York, ISBN 0415178665. FREE FULLTEXT By GoogleBooks | FREE FULLTEXT of ‘The Triumph of Stupidity”

P.S. I personally liked this example from the paper for illustrating what lack of metacognitive skills means:

“The skills that enable one to construct a grammatical sentence are the same skills necessary to recognize a grammatical sentence, and thus are the same skills necessary to determine if a grammatical mistake has been made. In short, the same knowledge that underlies the ability to produce correct judgment is also the knowledge that underlies the ability to recognize correct judgment. To lack the former is to be deficient in the latter” (p. 1121-1122).

By Neuronicus, 10 January 2018

The FIRSTS: The roots of depressive realism (1979)

There is a rumor stating that depressed people see the world more realistically and the rest of us are – to put it bluntly – deluded optimists. A friend of mine asked me if this is true. It took me a while to find the origins of this claim, but after I found it and figured out that the literature has a term for the phenomenon (‘depressive realism’), I realized that there is a whole plethora of studies on the subject. So the next few posts will be centered, more or less, on the idea of self-deception.

It was 1979 when Alloy & Abramson published a paper whose title contained the phrase ‘Sadder but Wiser’, albeit followed by a question mark. The experiments they conducted are simple, but the theoretical implications are large.

The authors divided several dozen male and female undergraduate students into a depressed group and a non-depressed group based on their Beck Depression Inventory scores (a widely used and validated questionnaire for self-assessing depression). Each subject “made one of two possible responses (pressing a button or not pressing a button) and received one of two possible outcomes (a green light or no green light)” (p. 447). Various conditions gave the subjects various degrees of actual control over the green light, from 0 to 100%. After the experiments, the subjects were asked to estimate their control over the green light, how many times the light came on regardless of their behavior, what percentage of trials the green light came on when they pressed or did not press the button, respectively, and how they felt. In some experiments, the subjects were winning or losing money when the green light came on.
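For the quantitatively inclined: in contingency-judgment tasks of this kind, the objective degree of control is standardly computed as the difference between the probability of the outcome when you respond and when you don’t (the so-called delta-P rule). Below is a minimal Python sketch of that computation; the function name and the trial counts are mine, for illustration only, not taken from the paper.

    # Objective control as delta-P: the difference between the probability
    # of the light given a button press and given no press, in percent.
    def delta_p(lights_with_press, press_trials, lights_without_press, no_press_trials):
        p_press = lights_with_press / press_trials
        p_no_press = lights_without_press / no_press_trials
        return 100 * (p_press - p_no_press)

    # Hypothetical zero-control condition: the light comes on 75% of the
    # time whether or not the button is pressed (frequent, desired outcome).
    print(delta_p(30, 40, 30, 40))    # prints 0.0 -> no actual control, yet
                                      # this is exactly where nondepressed
                                      # subjects report having some

In a hypothetical 100% control condition, the light would follow every press and never appear otherwise, so the same computation would return 100.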

Verbatim, the findings were that:

“Depressed students’ judgments of contingency were surprisingly accurate in all four experiments. Nondepressed students, on the other hand, overestimated the degree of contingency between their responses and outcomes when noncontingent outcomes were frequent and/or desired and underestimated the degree of contingency when contingent outcomes were undesired” (p. 441).

In plain English, it means that if you are not depressed, when you have some control but bad things are happening, you believe you have no control; and when you have no control but good things are happening, you believe you have control. If you are depressed, it does not matter: you judge your level of control accurately, regardless of the valence of the outcome.

Such an illusion of control is a defensive mechanism that surely must have adaptive value by, for example, allowing the non-depressed to bypass a sense of guilt when things don’t work out and to boost self-esteem when they do. This is fascinating, particularly since it is corroborated by findings that people receiving gambling wins or life successes like landing a good job, rewards that at least in one case are demonstrably attributable to chance, believe nonetheless that they are due to some personal attributes that make them special, that make them deserving of such rewards. (I don’t remember the reference for this one, so don’t quote me on it. If I find it, I’ll post it; it’s something about self-entitlement, I think.) That is not to say that life successes are not largely attributable to the individual; they are. But, statistically speaking, some must be due to chance alone, and yet most people feel like they are the direct agents of changes in luck.

Another interesting point is that Alloy & Abramson also tried to figure out, through some clever post-experiment questionnaires, how exactly their subjects reasoned when they estimated their level of control. Long story short (the paper is 45 pages long), the illusion of control shown by the nondepressed subjects in the no-control condition was the result of incorrect logic, that is, faulty reasoning.

In summary, the distilled-down version of depressive realism, namely that non-depressed people see the world through rose-colored glasses, is correct only in certain circumstances, because the illusion of control shows up only under particular conditions: overestimation of control when good things are happening and underestimation of control when bad things are happening. But, by and large, it does seem that depression clears the fog a bit.

Of course, it has been almost 40 years since the publication of this paper, and of course it has its flaws. Many replications, replications with caveats, meta-analyses, reviews, opinions, and alternative hypotheses have been confirmed and disconfirmed and then confirmed again with alterations, so there is still a debate out there about the causes, functions, ubiquity, and circumstantiality of the depressive realism effect. One thing seems to be constant, though: the effect exists.

I will leave you with this pondering from Alloy & Abramson (1979):

“A crucial question is whether depression itself leads people to be “realistic” or whether realistic people are more vulnerable to depression than other people” (p. 480).


REFERENCE: Alloy LB, & Abramson LY (Dec. 1979). Judgment of contingency in depressed and nondepressed students: sadder but wiser? Journal of Experimental Psychology: General, 108(4):441-485. PMID: 528910. http://dx.doi.org/10.1037/0096-3445.108.4.441. ARTICLE | FULLTEXT PDF via ResearchGate

By Neuronicus, 30 November 2017