The FIRSTS: The Dunning–Kruger effect (1999) or the unskilled-and-unaware phenomenon

Much talked about these days in the media, the unskilled-and-unaware phenomenon has been mused upon since, as they say, time immemorial, but it was not seriously investigated until the ’80s. The phenomenon refers to the observation that the incompetent overestimate their competence, whereas the competent tend to underestimate their skill (see Bertrand Russell’s brilliant summary of it).


Although the phenomenon has gained popularity under the name of the “Dunning–Kruger effect”, it is my understanding that whereas the phenomenon refers to the above-mentioned observation, the effect refers to the cause of the phenomenon, namely that the skills required to be proficient in a domain are the very same skills that allow one to judge proficiency. In the words of Kruger & Dunning (1999),

“those with limited knowledge in a domain suffer a dual burden: Not only do they reach mistaken conclusions and make regrettable errors, but their incompetence robs them of the ability to realize it” (p. 1132).

Today’s paper on the Dunning–Kruger effect is the third in the cognitive biases series (the first was on depressive realism and the second on the superiority illusion).

Kruger & Dunning (1999) took a look at incompetence with the eyes of well-trained psychologists. As usual, let’s start by defining the terms so we are on the same page. The authors tell us, albeit in a footnote on p. 1122, that:

1) incompetence is a “matter of degree and not one of absolutes. There is no categorical bright line that separates ‘competent’ individuals from ‘incompetent’ ones. Thus, when we speak of ‘incompetent’ individuals we mean people who are less competent than their peers”.

and 2) the study is about domain-specific incompetents. “We make no claim that they would be incompetent in any other domains, although many a colleague has pulled us aside to tell us a tale of a person they know who is ‘domain-general’ incompetent. Those people may exist, but they are not the focus of this research”.

That being clarified, the authors chose 3 domains where they believed “knowledge, wisdom, or savvy was crucial: humor, logical reasoning, and English grammar” (p. 1122). I know that you, just like me, can hardly wait to see how they assessed humor. Hold your horses, we’ll get there.

The subjects were psychology students, the ubiquitous guinea pigs of most psychology studies since the discipline started to be taught in universities. Some people in the field even declaim, with more or less pathos, that most psychological findings do not necessarily apply to the general population; instead, they are restricted to the self-selected group of undergrad psych majors. Just as biologists know far more about the mouse genome and its maladies than about humans’, so do psychologists know more about the inner workings of the psychology undergrad’s mind than about, say, the average stay-at-home mom’s. But I digress, as usual.

Humor was assessed thusly: students were asked to rate the funniness of 30 jokes on a scale from 1 to 11. Said jokes had previously been rated by 8 professional comedians, and that provided the reference scale. “Afterward, participants compared their ‘ability to recognize what’s funny’ with that of the average Cornell student by providing a percentile ranking. In this and in all subsequent studies, we explained that percentile rankings could range from 0 (I’m at the very bottom) to 50 (I’m exactly average) to 99 (I’m at the very top)” (p. 1123). Since the social ability to identify humor may be less rigorously amenable to quantification (despite the comedians’ input, which did not achieve high interrater reliability anyway), the authors also chose a task that requires more intellectual muscle: logical reasoning, whose test consisted of 20 logical problems taken from a Law School Admission Test. Afterward, the students estimated both their general logical ability compared to their classmates and their test performance. Finally, another batch of students answered 20 grammar questions taken from the National Teacher Examination preparation guide.

In all three tasks,

  • Everybody thought they were above average, showing the superiority illusion.
  • But the people in the bottom quartile (the lowest 25%), dubbed incompetents (or unskilled), overestimated their abilities the most, by approx. 50%. They were also unaware that, in fact, they scored the lowest.
  • In contrast, people in the top quartile underestimated their competence, but not to the same degree as the bottom quartile overestimated theirs: only by about 10%–15% (see Fig. 1).

[Image: Dunning–Kruger effect (Fig. 1)]

I wish the paper had shown scatter-plots with a fitted regression line instead of the quartile graphs without error bars, so I could judge the data for myself. I mean, everybody thought they were above average? Not a single one out of more than three hundred students thought they were kinda… meh? The authors did not find any gender differences in any of the experiments.
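
Out of curiosity, here is a minimal sketch of the kind of quartile summary and scatter-plot-with-regression I have in mind. The data are fabricated for illustration; these are not Kruger & Dunning’s numbers.

```python
# Hypothetical illustration only: fabricated scores, not Kruger & Dunning's data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 300
actual = rng.uniform(0, 100, n)                      # actual test percentile
# self-estimates regress toward ~65th percentile (everyone "above average")
perceived = np.clip(65 + 0.2 * (actual - 65) + rng.normal(0, 10, n), 0, 99)

# quartile summary, in the style of the paper's Fig. 1
quartile = np.digitize(actual, np.percentile(actual, [25, 50, 75]))
for q in range(4):
    mask = quartile == q
    print(f"Q{q + 1}: actual = {actual[mask].mean():5.1f}, perceived = {perceived[mask].mean():5.1f}")

# the scatter-plot with a fitted regression line I wish the paper had shown
slope, intercept = np.polyfit(actual, perceived, 1)
plt.scatter(actual, perceived, s=10, alpha=0.5)
plt.plot([0, 100], [intercept, intercept + slope * 100], color="red")
plt.xlabel("Actual percentile")
plt.ylabel("Perceived percentile")
plt.show()
```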

Next, the authors tested the hypothesis about the unskilled that “the same incompetence that leads them to make wrong choices also deprives them of the savvy necessary to recognize competence, be it their own or anyone else’s” (p. 1126). They did that by having both the competents and the incompetents see the answers that their peers gave on the tests. Indeed, the incompetents not only failed to recognize competence, but they continued to believe they had performed very well in the face of contrary evidence. In contrast, the competents adjusted their ratings after seeing their peers’ performance, so they no longer underestimated themselves. In other words, the competents learned from seeing others’ mistakes, but the incompetents did not.

Based on these data, Kruger & Dunning (1999) argue that the incompetents are so because they lack the skills to recognize competence and error in themselves or in others (jargon: lack of metacognitive skills). The competents, on the other hand, underestimate themselves because they assume everybody else did as well as they did; but when shown the evidence that other people performed poorly, they become accurate in their self-evaluations (jargon: the false consensus effect, a.k.a. the social-projection error).

So the obvious question is: if incompetents learn to recognize competence, does that also translate into them becoming more competent? The last experiment in the paper attempted to answer just that. The authors had 70 students complete a short (10 min) logical-reasoning training session while another 70 students did something unrelated for 10 min. The data showed that the trained students not only improved their self-assessments (though still showing the superiority illusion), but also improved their performance. Yays all around; all is not lost, there is hope left in the world!

This is an extremely easy read. I totally recommend it to non-specialists. Compare Kruger & Dunning (1999) with Pennycook et al. (2017): both tackle the same subject and both sets of authors are redoubtable personages in their fields. But while the former is a pleasant, leisurely read, the latter lacks mundane operationalizations and requires serious familiarity with the literature and its jargon.

Since Kruger & Dunning (1999) is behind the paywall of the infamous APA website (infamous because they don’t even let you see the abstract and, even with institutional access, it is difficult to extract the papers out of them, as if they own the darn things!), write to me at scientiaportal@gmail.com specifying that you need it for educational purposes and promising not to distribute it for financial gain, and thou shalt have its .pdf. As always. Do not, under any circumstance, use a Sci-Hub server to obtain this paper illegally! Actually, follow me on Twitter @Neuronicus to find out exactly which servers to avoid.

REFERENCES:

1) Kruger J, & Dunning D. (Dec. 1999). Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6):1121-1134. PMID: 10626367. ARTICLE

2) Russell, B. (1931-1935). “The Triumph of Stupidity” (10 May 1933), p. 28, in Mortals and Others: American Essays, vol. 2, published in 1998 by Routledge, London and New York, ISBN 0415178665. FREE FULLTEXT by GoogleBooks | FREE FULLTEXT of “The Triumph of Stupidity”

P.S. I personally liked this example from the paper for illustrating what lack of metacognitive skills means:

“The skills that enable one to construct a grammatical sentence are the same skills necessary to recognize a grammatical sentence, and thus are the same skills necessary to determine if a grammatical mistake has been made. In short, the same knowledge that underlies the ability to produce correct judgment is also the knowledge that underlies the ability to recognize correct judgment. To lack the former is to be deficient in the latter” (p. 1121-1122).

By Neuronicus, 10 January 2018

The FIRSTS: The roots of depressive realism (1979)

There is a rumor stating that depressed people see the world more realistically and the rest of us are – to put it bluntly – deluded optimists. A friend of mine asked me if this is true. It took me a while to find the origins of this claim, but after I found it and figured out that the literature has a term for the phenomenon (‘depressive realism’), I realized that there is a whole plethora of studies on the subject. So the next few posts will be centered, more or less, on the idea of self-deception.

It was 1979 when Alloy & Abramson published a paper whose title contained the phrase ‘Sadder but Wiser’, even if it was followed by a question mark. The experiments they conducted are simple, but the theoretical implications are large.

The authors divided several dozen male and female undergraduate students into a depressed group and a non-depressed group based on their Beck Depression Inventory scores (a widely used and validated questionnaire for self-assessing depression). Each subject “made one of two possible responses (pressing a button or not pressing a button) and received one of two possible outcomes (a green light or no green light)” (p. 447). Various conditions presented the subjects with various degrees of control over what the button did, from 0 to 100%. After the experiments, the subjects were asked to estimate their control over the green light, how many times the light came on regardless of their behavior, the percentage of trials on which the green light came on when they pressed or didn’t press the button, respectively, and how they felt. In some experiments, the subjects won or lost money when the green light came on.
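
To make the task concrete: the objective degree of control in this kind of experiment is usually quantified as the difference between the probability of the light coming on after a press and after no press. Below is a minimal sketch of that arithmetic with made-up trial counts; it is my toy example, not the authors’ data or code.

```python
# Toy example with made-up numbers; not Alloy & Abramson's data.
def delta_p(light_after_press, press_trials, light_after_no_press, no_press_trials):
    """Objective contingency: P(light | press) - P(light | no press)."""
    return light_after_press / press_trials - light_after_no_press / no_press_trials

# A 0%-control condition: the light comes on ~75% of the time no matter what you do
control = delta_p(light_after_press=30, press_trials=40,
                  light_after_no_press=30, no_press_trials=40)
print(f"objective control: {control:.2f}")   # 0.00 -> pressing makes no difference

# A 50%-control condition: pressing raises the chance of the light from 25% to 75%
control = delta_p(light_after_press=30, press_trials=40,
                  light_after_no_press=10, no_press_trials=40)
print(f"objective control: {control:.2f}")   # 0.50
```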

Verbatim, the findings were that:

“Depressed students’ judgments of contingency were surprisingly accurate in all four experiments. Nondepressed students, on the other hand, overestimated the degree of contingency between their responses and outcomes when noncontingent outcomes were frequent and/or desired and underestimated the degree of contingency when contingent outcomes were undesired” (p. 441).

In plain English, it means that if you are not depressed, when you have some control but bad things are happening, you believe you have no control; and when you have no control but good things are happening, you believe you have control. If you are depressed, it does not matter: you judge your level of control accurately, regardless of the valence of the outcome.

Such an illusion of control is a defensive mechanism that surely must have adaptive value by, for example, allowing the non-depressed to bypass a sense of guilt when things don’t work out and to boost self-esteem when they do. This is fascinating, particularly since it is corroborated by findings that people receiving gambling wins or life successes like landing a good job – rewards that, at least in one of these cases, are demonstrably attributable to chance – believe, nonetheless, that they are due to some personal attributes that make them special, that make them deserving of such rewards. (I don’t remember the reference for this one, so don’t quote me on it. If I find it, I’ll post it; it’s something about self-entitlement, I think.) That is not to say that life successes are not largely attributable to the individual; they are. But, statistically speaking, there must be some that are due to chance alone, and yet most people feel like they are the direct agents of their changes in luck.

Another interesting point is that Alloy & Abramson also tried to figure out how exactly their subjects reasoned when they asserted their level of control, through some clever post-experiment questionnaires. Long story short (the paper is 45 pages long), the illusion of control shown by nondepressed subjects in the no-control condition was the result of incorrect logic, that is, faulty reasoning.

In summary, the distilled-down version of depressive realism – that non-depressed people see the world through rose-colored glasses – is correct only in certain circumstances, because the illusion of control applies only in particular conditions: overestimation of control when good things are happening and underestimation of control when bad things are happening. But, by and large, it does seem that depression clears the fog a bit.

Of course, it has been almost 40 years since the publication of this paper and of course it has its flaws. Many replications, replications with caveats, meta-analyses, reviews, opinions, and alternative hypotheses have been confirmed and infirmed and then confirmed again with alterations, so there is still a debate out there about the causes, functions, ubiquity, and circumstantiality of the depressive realism effect. One thing seems to be constant though: the effect exists.

I will leave you with the ponderings of Alloy & Abramson (1979):

“A crucial question is whether depression itself leads people to be “realistic” or whether realistic people are more vulnerable to depression than other people” (p. 480).


REFERENCE: Alloy LB, & Abramson LY (Dec. 1979). Judgment of contingency in depressed and nondepressed students: sadder but wiser? Journal of Experimental Psychology: General, 108(4): 441-485. PMID: 528910. http://dx.doi.org/10.1037/0096-3445.108.4.441. ARTICLE | FULLTEXT PDF via ResearchGate

By Neuronicus, 30 November 2017

The FIRSTS: Dinosaurs and reputation (1842)

‘Dinosaur’ is a common noun in most languages of the globe and, in its weak sense, it means “extinct huge reptile-like animal that lived a long time ago”. The word has been in use for so long that it can also be used to describe something “impractically large, out-of-date, or obsolete” (Merriam-Webster dictionary). “Dinosaur” is a composite of two ancient Greek words (“deinos”, “sauros”) and it means “terrible lizard”.

But it turns out that the word hasn’t been in use for that long, just for a mere 175 years. Sir Richard Owen, a paleontologist who dabbled in many disciplines, coined the term in 1842. Owen introduced the taxon Dinosauria as if it had always been called thus, no fuss: “The present and concluding part of the Report on British Fossil Reptiles contains an account of the remains of the Crocodilian, Dinosaurian, Lacertian, Pterodactylian, Chelonian, Ophidian and Batrachian reptiles.” (p. 60). Only later in the Report does he tell us his paleontological reasons for the baptism, namely some anatomical features that distinguish dinosaurs from crocodiles and other reptiles.

“…The combination of such characters, some, as the sacral ones, altogether peculiar among Reptiles, others borrowed, as it were, from groups now distinct from each other, and all manifested by creatures far surpassing in size the largest of existing reptiles, will, it is presumed, be deemed sufficient ground for establishing a distinct tribe or sub-order of Saurian Reptiles, for which I would propose the name of Dinosauria.” (p.103)

At the time he was presenting this report to the British Association for the Advancement of Science, other giants of biology were running around the same halls, like Charles Darwin and Thomas Henry Huxley. Indisputably, Owen had a keen observational eye and a strong background in comparative anatomy that resulted in hundreds of published works, some of them excellent. That, in addition to establishing the British Museum of Natural History.

Therefore, Owen had reasons to be proud of his accomplishments and secure in his influence and legacy, and yet his contemporaries tell us that he was an absolutely vicious man, spiteful to the point of obsession, vengeful and extremely jealous of other people’s work. Apparently, he would steal the work of the younger people around him, never give credit, lie and cheat at every opportunity, and even write lengthy anonymous letters to various printed media to denigrate his contemporaries. He seemed to love his natal city of Lancaster and his family though (Wessels & Taylor, 2015).

Sir Richard Owen (20 July 1804 – 18 December 1892). PD, courtesy of Wikipedia.

Owen had a particular hatred for Darwin. They had been close friends for 20 years, and then Darwin published the “Origin of Species”. The book quickly became widely read and talked about and then, poof: vitriol and hate. Darwin himself said the only reason he could think of for Owen’s hatred was the popularity of the book.

Various biographies and monographs seem to agree on his unpleasant personality (see his entries in The Telegraph, Encyclopedia.com, Encyclopaedia Britannica, BBC). On a side note, should you be concerned about your legacy and have the means to persuade The Times to write you an obituary, by all means, do so. In all the 8 pages of the obituary written in 1896 you will not find a single blemish on the portrait of Sir Richard Owen.

This makes me ponder on the judgement of history based not on your work, but on your personality. As I said, the man contributed to science in more ways than just naming the dinosaur and having spats with Darwin. And yet it seems that his accomplishments are somewhat diminished by the way he treated others.

This reminded me of Nicolae Constantin Paulescu, a Romanian scientist who discovered insulin in 1916 (published in 1921). Yes, yes, I know all about the controversy with the Canadians who extracted and purified insulin in 1922 and got the Nobel for it in 1923. Paulescu did the same; even if his “pancreatic extract” from a few years earlier was insufficiently purified, it still successfully lowered blood glucose in dogs. He even obtained a patent for the “fabrication of pancrein” (his name for insulin, because he obtained it from the pancreas) in April 1922 from the Romanian Government (patent no. 6255). The Canadian team was aware of his work, but because it was published in French they had a poor translation and misunderstood his findings, so, technically, they didn’t steal anything. Or so they say. Feel free to feed the conspiracy mill. I personally don’t know; I haven’t looked at the original work to form an opinion because it is in French and my French is non-existent.

Annnywaaaay, whether or not Paulescu was the first to discover insulin is debatable, but few doubt that he should have at least shared the Nobel.

Rumor has it that Paulescu did not share the Nobel because he was a devout Nazi. His antisemitic writings are remarkably horrifying, even by the standards of the extreme right. That’s also why you won’t hear about him in medical textbooks or at various diabetes associations and gatherings. Yet millions of people worldwide may be alive today because of his work, at least partly.

How should we remember? Just the discoveries and accomplishments with no reference to the people behind them? Is remembering the same as honoring? “Clara cells” were lung cells discovered by the infamous Nazi anatomist Max Clara by dissecting prisoners without consent. They were renamed by the lung community “club cells” in 2013. We cannot get rid of the discovery, but we can rename the cells, so it doesn’t look like we honor him. I completely understand that. And yet I also don’t want to lose important pieces of history because of the atrocities (in the case of Nazis) or unsavory behavior (in the case of Owen) committed by our predecessors. I understand why the International Federation of Diabetes does not wish to give awards in the name of Paulescu or have a Special Paulescu lecture. Perhaps the Romanians should take down his busts and statues, too. But I don’t understand why (medical) history books should exclude him.

In other words, don’t honor the unsavories of history, but don’t forget them either. You never know what we – or the future generations – may learn by looking back at them and their actions.


By Neuronicus, 19 October 2017

References:

1) Owen, R (1842). “Report on British Fossil Reptiles”. Part II. Report of the Eleventh Meeting of the British Association for the Advancement of Science; Held at Plymouth in July 1841. London: John Murray. p. 60–204. Google Books Fulltext 

2) “Eminent persons: Biographies reprinted from the Times, Vol V, 1891–1892 – Sir Richard Owen (Obituary)” (1896). Macmillan & Co., p. 291–299. Google Books Fulltext

3) Wessels Q & Taylor AM (28 Oct 2015). Anecdotes to the life and times of Sir Richard Owen (1804-1892) in Lancaster. Journal of Medical Biography. pii: 0967772015608053. PMID: 26512064, DOI: 10.1177/0967772015608053. ARTICLE

Play-based or academic-intensive?

The title of today’s post wouldn’t make any sense to anybody who isn’t a preschooler’s parent or teacher in the USA. You see, on the west side of the Atlantic there is a debate on whether a play-based curriculum for preschool is more advantageous than a more academic-based one. Preschool age is 3 to 4 years; kindergarten starts at 5.

So what does academia even look like for someone who hasn’t yet mastered the skill of wiping their own behind? I’m glad you asked. Roughly, an academic preschool program is one that emphasizes math concepts and early literacy, whereas a play-based program focuses less or not at all on these activities; instead, the children are allowed to play together in big or small groups or separately. The first kind of program has been linked with stronger cognitive benefits, the latter with nurturing social development. The supporters of each program accuse the other of neglecting one or the other aspect of the child’s development, namely cognitive or social.

The paper that I am covering today says that it “does not speak to the wider debate over learning-through-play or the direct instruction of young children. We do directly test whether greater classroom time spent on academic-oriented activities yield gains in both developmental domains” (Fuller et al., 2017, p. 2). I’ll let you be the judge.

Fuller et al. (2017) assessed the cognitive and social benefits of different programs in an impressive cohort of over 6,000 preschoolers. The authors looked at many variables:

  • children who attended any form of preschool and children who stayed home;
  • children who received more preschool education (high dosage, defined as >20 hours/week) and less (low dosage, defined as <20 hours/week);
  • children who attended academic-oriented preschools (spending time at least 3–4 times a week on each of the following tasks: letter names, writing, phonics, and counting manipulatives) and children who attended non-academic preschools.

The authors employed a battery of tests to assess the children’s preliteracy skills, math skills, and social-emotional status (i.e. the dependent variables). And then they conducted a lot of statistical analyses, in the true spirit of well-trained psychologists.
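
The paper’s actual models are more involved than I can do justice to here, but just to make “a lot of statistical analyses” concrete, below is a hypothetical sketch of the simplest kind of model one could run on such data: an ordinary regression of a kindergarten outcome on dosage and academic orientation plus a couple of covariates. The variable names and data are invented; this is not Fuller et al.’s code or analysis.

```python
# Hypothetical sketch; variable names and data are invented, not Fuller et al.'s.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "preliteracy": rng.normal(50, 10, n),     # outcome score at kindergarten entry
    "high_dose": rng.integers(0, 2, n),       # 1 = more than 20 h/week of preschool
    "academic": rng.integers(0, 2, n),        # 1 = academic-oriented program
    "age_months": rng.normal(52, 4, n),
    "family_income": rng.normal(60, 20, n),
})

# Plain OLS with covariates; the real study also has to deal with how families
# select into preschool types, which a simple regression like this ignores.
model = smf.ols("preliteracy ~ high_dose * academic + age_months + family_income",
                data=df).fit()
print(model.summary().tables[1])
```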

The main findings were:

1) “Preschool exposure [of any form] has a significant positive effect on children’s math and preliteracy scores” (p. 6).

2) The earlier the child entered preschool, the stronger the cognitive benefits.

3) Children attending high-dose academic-oriented preschools displayed greater cognitive proficiencies than all the other children (for the actual numbers, see Table 7, pg. 9).

4) “Academic-oriented preschool yields benefits that persist into the kindergarten year, and at notably higher magnitudes than previously detected” (p. 10).

5) Children attending academic-oriented preschools displayed no social-development disadvantages compared with children who attended low-academic or non-academic preschool programs. Nor did the non-academic-oriented preschools show an improvement in social development (except for Latino children).

Now do you think that Fuller et al. (2017) gave you any more information in the play vs. academic debate, given that their “findings show that greater time spent on academic content – focused on oral language, preliteracy skills, and math concepts – contributes to the early learning of the average child at magnitudes higher than previously estimated” (p. 10)? And remember that they did not find any significant social advantages or disadvantages for any type of preschool.

I realize (or hope, rather) that most pre-K teachers are not the Draconian thou-shalt-not-play-do-worksheets type, nor the let-kids-play-for-three-hours-while-the-adults-gossip-in-a-corner type. Most are probably combining elements of learning-through-play and directed instruction in their programs. Nevertheless, there are (still) programs and pre-K teachers who clearly state that they employ play-based or academic-based programs, emphasizing the benefits of one while vilifying the other. But – surprise, surprise! – you can do both. And, it turns out, a little academia goes a long way.


So, next time you choose a preschool for your kid, go with the data, not with what your mommy/daddy gut instinct says, and certainly be very wary of preschool officials who, when you ask them for data to support their curriculum choice, tell you that that’s their ‘philosophy’ and they don’t need data. Because, boy oh boy, I know what philosophy means and it ain’t that.

By Neuronicus, 12 October 2017

Reference: Fuller B, Bein E, Bridges M, Kim, Y, & Rabe-Hesketh, S. (Sept. 2017). Do academic preschools yield stronger benefits? Cognitive emphasis, dosage, and early learning. Journal of Applied Developmental Psychology, 52: 1-11, doi: 10.1016/j.appdev.2017.05.001. ARTICLE | New York Times cover | Reading Rockets cover (offers a fulltext pdf) | Good cover and interview with the first author on qz.com

Old chimpanzees get Alzheimer’s pathology

Alzheimer’s Disease (AD) is the most common type of dementia, with a progression that can span decades. Its prevalence is increasing steadily, particularly in Western countries and Australia. So some researchers speculated that this particular disease might be specific to humans, for various reasons, either genetic, social, or environmental.

A fresh e-pub brings new evidence that Alzheimer’s might plague other primates as well. Edler et al. (2017) studied the brains of 20 old chimpanzees (Pan troglodytes) for a whole slew of Alzheimer’s pathology markers. More specifically, they looked for these markers in brain regions commonly affected by AD, like the prefrontal cortex, the midtemporal gyrus, and the hippocampus.

Alzheimer’s markers, like Tau and Aβ lesions, were present in the chimpanzees in an age-dependent manner. In other words, the older the chimp, the more severe the pathology.

Interestingly, all 20 animals displayed some form of Alzheimer’s pathology. This finding points to another speculation in the field, namely that dementia is just part of normal aging. Meaning we would all get it, eventually, if we lived long enough; some people age younger and some age older, as it were. This hypothesis, however, is not favored by most researchers, not least because it is currently unfalsifiable: the longest-living humans do not show signs of dementia, so how long is long enough, exactly? But, as the authors suggest, “Aβ deposition may be part of the normal aging process in chimpanzees” (p. 24).

Unfortunately, “the chimpanzees in this study did not participate in formal behavioral or cognitive testing” (p. 6). So we cannot say whether the animals had AD. They had the pathological markers, yes, but we don’t know if they exhibited the disease, as it is not uncommon to find these markers in humans who did not display any behavioral or cognitive symptoms (Driscoll et al., 2006). In other words, one might have tau deposits but no dementia symptoms. Hence the title of my post: “Old chimpanzees get Alzheimer’s pathology” and not “Old chimpanzees get Alzheimer’s Disease”.

Good paper, good methods and stats. And very useful because “chimpanzees share 100% sequence homology and all six tau isoforms with humans” (p. 4), meaning we now have a model of the disease that is closer to us, so we can study it more, even if primate research has taken significant blows these days due to some highly vocal but thoroughly misguided groups. Anyway, the more we know about AD, the closer we are to getting rid of it, hopefully. And, soon enough, the aforementioned misguided groups will have to face old age too, with all its indignities, and my guess is that in a couple of decades or so there will be fresh money poured into aging-disease research, primates be damned.


REFERENCE: Edler MK, Sherwood CC, Meindl RS, Hopkins WD, Ely JJ, Erwin JM, Mufson EJ, Hof PR, & Raghanti MA. (EPUB July 31, 2017). Aged chimpanzees exhibit pathologic hallmarks of Alzheimer’s disease. Neurobiology of Aging, PII: S0197-4580(17)30239-7, DOI: http://dx.doi.org/10.1016/j.neurobiolaging.2017.07.006. ABSTRACT  | Kent State University press release

By Neuronicus, 23 August 2017


Midichlorians, midichloria, and mitochondria

Nathan Lo is an evolutionary biologist interested in creepy crawlies, i.e. arthropods. Well, he’s Australian, so I guess that comes with the territory (see what I did there?). While postdoc’ing, he and his colleagues published a paper (Sassera et al., 2006) that would seem boring for anybody without an interest in taxonomy, a truly under-appreciated field.

The paper describes a bacterium that is a parasite of the mitochondria of a tick species called Ixodes ricinus, the nasty bugger responsible for Lyme disease. The authors obtained a female tick from Berlin, Germany, and let it feed on a hamster until it laid eggs. By using genetic sequencing (you can use kits these days to extract the DNA and do PCR, gels, and cloning, pretty much everything), electron microscopy (really powerful microscopes), and phylogenetic analysis (using computer software to see how closely related some species are), the authors came to the conclusion that the parasite they were working on is a new species. So they named it. And below is the full account of the naming, from the horse’s mouth, as it were:

“In accordance with the guidelines of the International Committee of Systematic Bacteriology, unculturable bacteria should be classified as Candidatus (Murray & Stackebrandt, 1995). Thus we propose the name ‘Candidatus Midichloria mitochondrii’ for the novel bacterium. The genus name Midichloria (mi.di.chlo′ria. N.L. fem. n.) is derived from the midichlorians, organisms within the fictional Star Wars universe. Midichlorians are microscopic symbionts that reside within the cells of living things and ‘‘communicate with the Force’’. Star Wars creator George Lucas stated that the idea of the midichlorians is based on endosymbiotic theory. The word ‘midichlorian’ appears to be a blend of the words mitochondrion and chloroplast. The specific epithet, mitochondrii (mi.to′chon.drii. N.L. n. mitochondrium -i a mitochondrion; N.L. gen. n. mitochondrii of a mitochondrion), refers to the unique intramitochondrial lifestyle of this bacterium. ‘Candidatus M. mitochondrii’ belongs to the phylum Proteobacteria, to the class Alphaproteobacteria and to the order Rickettsiales. ‘Candidatus M. mitochondrii’ is assigned on the basis of the 16S rRNA (AJ566640) and gyrB gene sequences (AM159536)” (p. 2539).

George Lucas gave his blessing to the Christening (of course he did).


Acknowledgements: Thanks go to Ms. BBD who prevented me from making a fool of myself (this time!) on social media by pointing out to me that midichloria are real and that they are a mitochondrial parasite.

REFERENCE: Sassera D, Beninati T, Bandi C, Bouman EA, Sacchi L, Fabbi M, Lo N. (Nov. 2006). ‘Candidatus Midichloria mitochondrii’, an endosymbiont of the tick Ixodes ricinus with a unique intramitochondrial lifestyle. International Journal of Systematic and Evolutionary Microbiology, 56(Pt 11): 2535-2540. PMID: 17082386, DOI: 10.1099/ijs.0.64386-0. ABSTRACT | FREE FULLTEXT PDF 

By Neuronicus, 29 July 2017

Pic of the day: Skunky beer

[Image: skunky beer]

REFERENCE: Burns CS, Heyerick A, De Keukeleire D, Forbes MD. (5 Nov 2001). Mechanism for formation of the lightstruck flavor in beer revealed by time-resolved electron paramagnetic resonance. Chemistry – The European Journal, 7(21): 4553-4561. PMID: 11757646, DOI: 10.1002/1521-3765(20011105)7:21<4553::AID-CHEM4553>3.0.CO;2-0. ABSTRACT

By Neuronicus, 12 July 2017

The FIRSTS: Increase in CO2 levels in the atmosphere results in global warming (1896)

Few people seem to know that, although global warming and climate change are hotly debated topics right now (at least on the left side of the Atlantic), the effect of CO2 levels on the planet’s surface temperature was investigated and calculated more than a century ago. CO2 is one of the greenhouse gases responsible for the greenhouse effect, which was discovered by Joseph Fourier in 1824 (the effect, that is).

Let’s start with a terminology clarification. Whereas the term ‘global warming’ was coined by Wallace S. Broecker in 1975, the term ‘climate change’ underwent a more fluid transformation in the ’70s, from ‘inadvertent climate modification’ to ‘climatic change’ to a more consistent use of ‘climate change’ by Jule Charney in 1979, according to NASA. The same source tells us:

“Global warming refers to surface temperature increases, while climate change includes global warming and everything else that increasing greenhouse gas amounts will affect”.

But before NASA there was one Svante August Arrhenius (1859–1927). Dr. Arrhenius was a Swedish physical chemist who received the Nobel Prize in 1903 for uncovering the role of ions in how electrical current is conducted in chemical solutions.

S. A. Arrhenius was the first to quantify the variations of our planet’s surface temperature as a direct result of the amount of CO2 (which he calls carbonic acid, long story) present in the atmosphere. To those – admittedly few – nitpickers who say his views on the greenhouse effect were somewhat simplistic and his calculations were incorrect, I’d say cut him a break: he didn’t have the incredible amount of data provided by satellites or computers, nor the work of thousands of scientists over a century to back him up. Which they do. Kind of. Well, the idea, anyway, not the math. Well, some of the math. Let me explain.

First, let me tell you that I haven’t managed to get past page 3 of the 39 pages of creative mathematics, densely packed tables, parameter assignments, and convoluted assumptions of Arrhenius (1896). Luckily, I convinced a spectroscopist to take a crack at the original paper, since there is a lot of spectroscopy in it, and then enlighten me.

The photo was taken in 1887 and shows (standing, from the left): Walther Nernst (Nobel in Chemistry), Heinrich Streintz, Svante Arrhenius, Richard Hiecke; (sitting, from the left): Eduard Aulinger, Albert von Ettingshausen, Ludwig Boltzmann, Ignaz Klemenčič, Victor Hausmanninger. Source: Universität Graz. License: PD via Wikimedia Commons.

Second, despite his many accomplishments, including being credited with laying the foundations of a new field (physical chemistry), Arrhenius was first and foremost a mathematician. So he employed a lot of tedious mathematics (by hand!), together with some hefty guessing and what was known at the time about Earth’s infrared radiation, solar radiation, water vapor and CO2 absorption, the temperature of the Moon, the greenhouse effect, and some uncalibrated spectra taken by his predecessors, to figure out whether “the mean temperature of the ground [was] in any way influenced by the presence of the heat-absorbing gases in the atmosphere” (p. 237). Why was he interested in this? We find out only on page 267, after a lot of the aforesaid dreary mathematics, where he finally shares this with us:

“I [should] certainly not have undertaken these tedious calculations if an extraordinary interest had not been connected with them. In the Physical Society of Stockholm there have been occasionally very lively discussions on the probable causes of the Ice Age”.

So Arrhenius was interested in finding out whether the fluctuations of CO2 levels could have caused the Ice Ages. And yes, he thinks that could have happened. I don’t know enough about climate science to tell you if this particular conclusion of his holds up today. But what he did manage to accomplish was to provide, for the first time, a way to mathematically calculate the rise in temperature due to a rise in CO2 levels. In other words, he found a direct relationship between the variations of CO2 and temperature.

Today, it turns out that his math was incorrect because he left out some other variables that influence the global temperature, variables that were discovered and/or understood later (like the thickness of the atmosphere, the rate of ocean absorption of CO2, and others which I won’t pretend I understand). Nevertheless, Arrhenius was the first to point out the following relationship, which, by and large, is still relevant today:

“Thus if the quantity of carbonic acid increased in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression” (p. 267).
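
In modern notation, that geometric-to-arithmetic statement amounts to a roughly logarithmic relationship between CO2 concentration and temperature change, something like ΔT ≈ S · log2(C/C0), where S is the warming per doubling of CO2. Here is a small sketch of the arithmetic; the value of S is purely illustrative and is neither Arrhenius’s number nor a modern consensus estimate.

```python
# Illustrative only: S (warming per CO2 doubling) is a made-up round number,
# not Arrhenius's estimate nor a modern consensus value.
from math import log2

def delta_T(c_ratio, S=3.0):
    """Temperature change (deg C) for a CO2 concentration ratio C/C0."""
    return S * log2(c_ratio)

# CO2 rising in geometric progression -> temperature rising in arithmetic progression
for ratio in (1, 2, 4, 8):
    print(f"C/C0 = {ratio}:  dT = {delta_T(ratio):.1f} deg C")
```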


P.S. Technically, Joseph Fourier should be credited with the discovery of global warming by means of increasing levels of greenhouse gases in the atmosphere, in 1824, but Arrhenius quantified it, so I credited him. Feel free to debate :).

REFERENCE: Arrhenius, S. (April 1896). XXXI. On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science (Fifth Series), 49 (251): 237-276. General Reference P.P.1433. doi: http://dx.doi.org/10.1080/14786449608620846. FREE FULLTEXT PDF

By Neuronicus, 24 June 2017

The FIRSTS: Magnolia (1703)

It is April and the Northern Hemisphere is enjoying the sight and smell of blooming magnolias. Fittingly, today is the birthday of the man who described and named the genus. Charles Plumier (20 April 1646 – 20 November 1704) was a French botanist known for describing many plant genera and for preceding Linnaeus in botanical taxonomy. His (Plumier’s) taxonomy was later incorporated by Linnaeus and is still in use today.

Plumier traveled a lot as part of his job as Royal Botanist at the court of Louis XIV. Don’t envy him too much, though, because the monastic order to which he belonged, the Minims, forced him to be a vegan, living mostly on lentils.

Among the thousands of plants he described was the magnolia, a genus of gorgeous ornamental flowering trees that put out spectacularly big flowers in the spring, usually before the leaves come out. Plumier found it on the island of Martinique and named it after Pierre Magnol, a contemporary botanist who invented the concept of family as a distinct taxonomic category.

Excerpts from the pages 38, 39 and plate 7 from Nova Plantarum Americanum Genera by Charles Plumier (Paris, 1703) describing the genus Magnolia.

Interestingly enough, Plumier named other plants either after famous botanists, like fuchsia (Leonhart Fuchs) and lobelia (Matthias de l’Obel), or after people who helped his career, as in begonia (Michel Bégon) and suriana (Joseph Donat Surian), but never after himself. I guess he took the humility tenet of his order seriously. Never fear, the botanists Joseph Pitton de Tournefort and the much more renowned Carl Linnaeus named an entire genus after him: Plumeria.

Of interest to me, as a neuroscientist, is that the bark of the magnolia tree contains magnolol which is a natural ligand for the GABAA receptor.


REFERENCE: Plumier, C. (1703). Nova Plantarum Americanum Genera, Paris. http://dx.doi.org/10.5962/bhl.title.59135 FULLTEXT courtesy of the Biodiversity Heritage Library

By Neuronicus, 20 April 2017


The third eye

The pineal gland has held fascination since Descartes’ nefarious claim that it is the seat of the soul. There is no evidence of that; he said it might be where the soul resides because he thought the pineal gland was the only solitary structure in the brain, so it must be special. By ‘solitary’ I mean that all other brain structures come in doublets: 2 amygdalae, 2 hippocampi, 2 thalami, 2 hemispheres, etc. He was wrong about that as well, in that there are some other singletons in the brain besides the pineal, like the anterior and posterior commissures, the cerebellar vermis, some deep brainstem and medullary structures, etc.

Descartes’ dualism was the only escape route the mystics of the time had from the demands for evidence made by the budding natural philosophers later known as scientists. So when some scientists noted that some lizards have a third eye on top of their head connected to the pineal gland, the mystics and, later, the conspiracy theorists went nuts. Here, see, if the seat of the soul is linked with the third eye, then the awakening of this eye in people would surely result in heightened awareness, closeness to the Divinity, oneness with the Universe, and other similar rubbish that can otherwise easily and reliably be achieved by a good dollop of magic mushrooms. Cheaper, too.

Back to the lizards. Yes, you read that right: some lizards and frogs have a third eye. This eye is not exactly like the other two, but it has cells sensitive to light, even if they do not perceive light in the same way the retinal cells of the lateral eyes do. It is located on the top of the skull, so sometimes it is called the parietal organ (because it sits in-between the parietal skull bones, see pic).

Dorsal view of the head of the adult Carolina anole (Anolis carolinensis) clearly showing the parietal eye (small gray/clear oval) at the top of its head. Photo by TheAlphaWolf. License: CC BY-SA 3.0, courtesy of Wikipedia.

It is believed to be a vestigial organ, meaning that primitive vertebrates might have had it as a matter of course but it disappeared in more recently evolved animals. Importantly, birds and mammals don’t have it. Not at all, not a bit, not atrophied, not able to be “awakened” no matter what your favorite “lemme see your chakras” guru says. Go on, touch the top of your skull and see if you have some soft tissue peeking through there. And no, the soft tissue that babies are born with right there on the top of the skull is not a third eye; it’s a fontanelle that allows for the rapid expansion of the brain during the first year of life.

The parietal organ’s anatomical connection to the pineal gland is not surprising at all to scientists, because the pineal’s role in every single animal that has it is the regulation of some circadian rhythms through the production of melatonin. In humans, the eyes tell the pineal whether it is day or night and the pineal adjusts its melatonin production accordingly, i.e. less melatonin produced during the day and more during the night. Likewise, the lizard third eye’s main role is to provide information to the pineal about the ambient light, for thermoregulatory purposes.

After this long introduction, here is the point: almost twenty years ago, Xiong et al. (1998) looked at how this third eye perceives light. In the human eye, light hitting the rods and cones in the retina (reception) launches a biochemical cascade (transduction) that results in seeing (coding of the stimulus in the brain). Briefly, transduction goes thusly: the photon(s) cause(s) a component (11-cis-retinal) of a special light-sensitive protein (e.g. rhodopsin) in the retina’s photoreceptor cells to change its conformation (photoisomerization), eventually splitting the protein into its components (photobleaching); the activated protein turns on a G-protein (transducin), which then activates the enzyme phosphodiesterase (PDE), which then destroys a nucleotide called cyclic guanosine monophosphate (cGMP), which results in the closing of the cell’s sodium ion channels, which leads to less neurotransmitter (glutamate) being released, which causes the nearby cells (bipolar cells) to release the same neurotransmitter, which now has the opposite effect, meaning it increases the firing rate of another set of cells (ganglion cells), and from there to the brain we go. Phew, visual transduction IS difficult. And this is the brief version.

It turns out that the third eye’s retina doesn’t have all the types of cells that the normal eyes have. Specifically, it lacks the bipolar, horizontal, and amacrine cells, having only ganglion and photoreceptor cells. So how does phototransduction go in the third eye’s retina, if at all?

Xiong et al. (1998) isolated photoreceptor cells from the third eyes of the lizard Uta stansburiana. And then they did a bunch of electrophysiological recording on those cells under different illumination and chemical conditions.

They found that the phototransduction in the third eye differs from that in the lateral eyes: where they expected to see hyperpolarization of the cell, they observed depolarization instead. Also, where they expected PDE to break down cGMP, they found that PDE is inhibited, thereby increasing the amount of cGMP. The fact that a G-protein can inhibit PDE was totally unexpected and revealed a novel way of cellular signaling. Moreover, they speculate that their results can make sense only if not one, but two G-proteins with opposite actions work in tandem.
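
To keep the sign flips straight, here is a schematic side-by-side of the two cascades as I read them from the paper’s description; it is a mnemonic sketch of the steps above, not code or data from the study.

```python
# Schematic mnemonic of the two cascades (my summary of the steps described above).
rod_or_cone = [
    "photon -> opsin activated",
    "G-protein (transducin) activated",
    "PDE activated -> cGMP destroyed",
    "sodium channels close",
    "cell HYPERpolarizes, less glutamate released",
]

parietal_eye_photoreceptor = [
    "photon -> opsin activated",
    "G-protein activated (a second, opposing G-protein is also proposed)",
    "PDE inhibited -> cGMP accumulates",
    "channels stay open / open further",
    "cell DEpolarizes",
]

for lateral, parietal in zip(rod_or_cone, parietal_eye_photoreceptor):
    print(f"{lateral:50s} | {parietal}")
```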

A probably dumb technical question, though: human rhodopsin takes about 30 minutes to restore itself from photobleaching, and Xiong et al. (1998) let the cells adapt to dark for only 10 minutes before the recordings. So I wonder if the results would have been slightly different had they allowed the cells more time to adapt. But I’m not an expert in retina science – you’ve seen how difficult it is, right? Maybe the lizard proteins are different, or maybe rhodopsin adaptation time has little or nothing to do with their experiments? After all, later research has shown that the third eye has its own unique opsins, like the green-sensitive parietopsin discovered by Su et al. (2006).


REFERENCE:  Xiong WH, Solessio EC, & Yau KW (Sep 1998). An unusual cGMP pathway underlying depolarizing light response of the vertebrate parietal-eye photoreceptor. Nature Neuroscience, 1(5): 359-365. PMID: 10196524, DOI: 10.1038/1570. ARTICLE

Additional bibliography: Su CY, Luo DG, Terakita A, Shichida Y, Liao HW, Kazmi MA, Sakmar TP, Yau KW (17 Mar 2006). Parietal-eye phototransduction components and their potential evolutionary implications. Science, 311(5767): 1617-1621. PMID: 16543463, DOI: 10.1126/science.1123802. ARTICLE

By Neuronicus, 30 March 2017


Don’t eat snow

Who hasn’t rolled out a tongue to catch a few snowflakes? Probably only those who have never encountered snow.

The bad news is that snow, particularly urban snow, is bad, really bad for you. The good news is that this was not always the case. So there is hope that in the far future it will be pristine again.

Nazarenko et al. (2016) constructed a very clever contraption that reminds me of NASA space-exploration instruments. The authors refer to it by the humble name of ‘environmental chamber’, but it is in fact a complex construction with different modules designed to measure how car exhaust and snow interact (see Fig. 1).

Fig. 1 from Nazarenko et al. (2016, DOI: 10.1039/c5em00616c). Released under CC BY-NC 3.0.

After many experiments, the researchers concluded that snow absorbs pollutants very effectively. Among the many kinds of organic compounds soaked up by snow in just one hour of exposure to exhaust fumes were the infamous BTEX (benzene, toluene, ethylbenzene, and xylenes). The amounts of these chemicals in the snow were not at all negligible; to give you an example, the BTEX concentration increased from virtually 0 to 50 and up to 380 µg kg⁻¹. The authors provide detailed measurements for all of the 40+ compounds they identified.

Needless to say, many of these compounds are known carcinogens. Snow absorbs them, alters their size distributions, and then it melts… Some of them may be released back into the air, as they are volatile; some will go into the ground and rivers as polluted water. After this gloomy reality check, I’ll leave you with the words of the researchers:

“The accumulation and transfer of pollutants from exhaust – to snow – to meltwater need to be considered by regulators and policy makers as an important area of focus for mitigation with the aim to protect public health and the environment” (p. 197).


Reference: Nazarenko Y, Kurien U, Nepotchatykh O, Rangel-Alvarado RB, & Ariya PA. (Feb 2016). Role of snow and cold environment in the fate and effects of nanoparticles and select organic pollutants from gasoline engine exhaust. Environmental Science: Processes & Impacts, 18(2):190-199. doi: 10.1039/c5em00616c. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 26 December 2016


Soccer and brain jiggling

It is neither news nor a surprise that strong hits to the head produce transient or permanent brain damage. But how about mild hits produced by light objects like, say, a volleyball or a soccer ball?

During a game of soccer, a player is allowed to touch the ball with any part of his/her body minus the hands. Therefore, hitting the ball with the head, a.k.a. soccer heading, is a legal move, and goals scored through such a move are thought to be most spectacular by the refined connoisseur.

A year back, in 2015, the United States Soccer Federation forbade the heading of the ball by children 10 years old and younger, after a class-action lawsuit against it. There has been some data showing that soccer players display loss of brain matter that is associated with cognitive impairment, but such studies were correlational in nature.

Now, Di Virgilio et al. (2016) conducted a study designed to explore the consequences of soccer heading in more detail. They recruited 19 young amateur soccer players, mostly male, who were instructed to perform 20 rotational headings as if responding to corner kicks in a game. The ball was delivered by a machine at a speed of approximately 38 kph, and the mean force of impact for the group was 13.1 ± 1.9 g. Immediately after the heading session and at 24 h, 48 h, and 2 weeks post-heading, the authors performed a series of tests, among which were a transcranial magnetic stimulation (TMS) recording, a cognitive function assessment (using the Cambridge Neuropsychological Test Automated Battery), and a postural control test.

Not being a TMS expert myself, I was wondering: how do you record with a stimulator? TMS stimulates, it doesn’t measure anything. Or so I thought. The authors delivered brief (1 ms) stimulating impulses to the brain area that controls the leg (primary motor cortex). Then they placed an electrode over the said muscle (the rectus femoris, part of the quadriceps) and recorded how the muscle responded. Pretty neat. Moreover, the authors believe that they can make inferences about levels of inhibitory chemicals in the brain from the way the muscle responds. Namely, if the muscle is sluggish in responding to stimulation, then the brain has released an inhibitory chemical, like GABA (gamma-aminobutyric acid), hence calling this process corticomotor inhibition. Personally, I find this GABA inference a bit of a leap of faith but, like I said, I am not fully versed in TMS studies, so it may be well documented. Whether or not GABA is responsible for the muscle sluggishness, one thing is well documented: this sluggishness is the most consistent finding in concussions.
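
For the curious, here is a minimal sketch of how a muscle response (a motor evoked potential, MEP) could be pulled out of an EMG trace after a TMS pulse. The signal is synthetic and the window values are arbitrary; this is not the processing pipeline of Di Virgilio et al.

```python
# Synthetic illustration; window values and data are arbitrary, not the study's.
import numpy as np

fs = 2000                                    # sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)                # 500 ms sweep
rng = np.random.default_rng(2)
emg = rng.normal(0, 0.01, t.size)            # baseline EMG noise (mV)

pulse_time = 0.1                             # TMS pulse delivered at 100 ms
mep_latency = 0.025                          # ~25 ms conduction delay to the leg muscle
mep = 0.8 * np.exp(-((t - pulse_time - mep_latency) ** 2) / (2 * 0.004 ** 2))
emg += mep                                   # add a fake MEP to the trace

# Measure peak-to-peak MEP amplitude and latency in a window after the pulse
window = (t > pulse_time + 0.015) & (t < pulse_time + 0.06)
amplitude = emg[window].max() - emg[window].min()
latency_ms = (t[window][np.argmax(emg[window])] - pulse_time) * 1000
print(f"MEP amplitude: {amplitude:.2f} mV, latency: {latency_ms:.1f} ms")
```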

The subjects had impaired short-term and long-term memory function immediately after the ball heading, but not 24 h or more later. Also transient was the corticomotor inhibition. In other words, soccer ball heading results in measurable changes in brain function. Changes for the worse.

Even if these changes are transient, there is no knowing (as of yet) what prolonged ball heading might do. There is ample evidence that successive concussions have devastating effects on the brain. Granted, soccer heading does not produce concussions, at least in this paper’s setting, but I cannot imagine that even sub-concussion-intensity brain disruption can be good for you.

On a lighter note, although the title of the paper features the word “soccer”, the rest of the paper refers to the game as “football”. I’ll let you guess the authors’ nationality, or at least their continent of provenance ;).


Reference: Di Virgilio TG, Hunter A, Wilson L, Stewart W, Goodall S, Howatson G, Donaldson DI, & Ietswaart M. (Nov 2016, Epub 23 Oct 2016). Evidence for Acute Electrophysiological and Cognitive Changes Following Routine Soccer Heading. EBioMedicine, 13:66-71. PMID: 27789273, DOI: 10.1016/j.ebiom.2016.10.029. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 20 December 2016

Apparently, scientists don’t know the risks & benefits of science

If you wanted to find out how bleach works, or what keeps airplanes in the air, or why the rainbow is always the same sequence of colors, or whether it’s dangerous to let your kid play with snails, would you ask a scientist or your local priest?

The answer is very straightforward for most people. It’s just that one portion’s straightforwardness is viewed by the other portion as corkscrewedness. Or rather as just plain dumb.

About 5 years ago, Cacciatore et al. (2016) asked 2806 American adults how much they trust the information provided by religious organizations, university scientists, industry scientists, and science/technology museums. They also asked them about their age, gender, race, socioeconomic status, and income, as well as about Facebook use, religiosity, ideology, and attention to science-y content.

Almost 40% of the sample described themselves as Evangelical Christians, one of the largest religious groups in the USA. These people said they trust their religious organizations more than scientists (regardless of who employs those scientists) to tell the truth about the risks and benefits of technologies and their applications.

The data yielded more information, like the fact that younger, richer, more liberal, and white respondents tended to trust scientists more than their counterparts. Finally, Republicans were more likely to report a religious affiliation than Democrats.

I would have thought that everybody would prefer to take advice about science from a scientist. Wow, what am I saying, I just realized what I typed… Of course people take health advice from homeopaths all the time, from politicians rather than environmental scientists, from alternative-medicine quacks rather than doctors, from the non-college-educated rather than geneticists. From this perspective, then, the results of this study are not surprising, just very, very sad… I just didn’t think that the gullible could also be grouped by political affiliation. I thought the affliction attacked both sides of the ideological aisle in a democratic manner.

Of course, this is a survey study, so a lot more work is needed to properly generalize these results, from expanding the survey sections (beyond the meager 1 or 2 questions per section) to validation and replication. Possibly even addressing different aspects of science, because, for instance, climate change is a much touchier subject than, say, apoptosis. And replace or get rid of the “Scientists know best what is good for the public” item; seriously, I don’t know any scientist, including me, who would answer yes to that question. Nevertheless, the trend is, like I said, sad.


Reference:  Cacciatore MA, Browning N, Scheufele DA, Brossard D, Xenos MA, & Corley EA. (Epub ahead of print 25 Jul 2016). Opposing ends of the spectrum: Exploring trust in scientific and religious authorities. Public Understanding of Science. PMID: 27458117, DOI: 10.1177/0963662516661090. ARTICLE | NPR cover

By Neuronicus, 7 December 2016

Amusia and stroke

Although I am a complete musical anti-talent, that doesn’t prohibit me from fully enjoying the works of the masters of the art. When my family is out of earshot, I even bellow – because it cannot be called music – from the top of my lungs alongside the most famous tenors ever recorded. A couple of days ago I loaded one of my most eclectic playlists. While remembering my younger days as an Iron Maiden concert-goer (I never said I listen only to classical music :D) and screaming the “Fear of the Dark” chorus, I wondered what’s new on the front of music processing in the brain.

And I found an interesting recent paper about amusia. Amusia is, as those of you with ancient Greek proclivities might have surmised, a deficit in the perception of music, mainly of pitch but sometimes of rhythm and other aspects of music. A small percentage of the population is born with it, but a whopping 35 to 69% of stroke survivors exhibit the disorder.

So Sihvonen et al. (2016) decided to take a closer look at this phenomenon with the help of 77 stroke patients. These patients had an MRI scan within the first 3 weeks following stroke and another one 6 months poststroke. They also completed a behavioral test for amusia within the first 3 weeks following stroke and again 3 months later. For reasons undisclosed, and thus raising my eyebrows, the behavioral assessment was not performed at 6 months poststroke, nor was an MRI performed at the 3-month follow-up. It would have been nice to have the behavioral assessment and the brain images at the same time points, because a lot can happen in weeks, let alone months, after a stroke.

Nevertheless, the authors used a novel way to look at the brain pictures, called voxel-based lesion-symptom mapping (VLSM). Well, it’s not really novel, it’s been around for 15 years or so. Basically, to ascertain the function of a brain region, researchers either take people with a specific brain lesion and then look for a behavioral deficit, or take a symptom and then look for a brain lesion. Both approaches have distinct advantages but also disadvantages (see Bates et al., 2003). To overcome the disadvantages of these methods, enter the scene VLSM, a mathematical/statistical gimmick that allows you to explore the relationship between brain and function without forming preconceived ideas, i.e. without forcing dichotomous categories. They also ran voxel-based morphometry (VBM), which is a fancy way of saying they looked to see whether the grey and white matter differed over time in the brains of their subjects.
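
If you want the gist of VLSM in code, here is a minimal sketch of the core statistical idea, using simulated data: at every voxel, patients are split into “lesioned here” versus “spared here” and their behavioral scores are compared. This is not the authors’ actual pipeline (they used dedicated neuroimaging software and proper permutation-based corrections); all numbers, names, and thresholds below are made up for illustration.

```python
# Toy VLSM sketch (illustrative only, not Sihvonen et al.'s pipeline).
# At each voxel: compare behavioral scores of patients lesioned at that
# voxel vs. patients spared at that voxel, yielding a statistical map.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_patients = 77          # same sample size as in the study
n_voxels = 1000          # a real brain volume has millions; kept tiny here

# Binary lesion maps: 1 = voxel lesioned in that patient, 0 = spared.
lesions = rng.integers(0, 2, size=(n_patients, n_voxels))

# One behavioral score per patient (e.g., performance on an amusia test);
# here just simulated numbers standing in for real data.
scores = rng.normal(loc=50, scale=10, size=n_patients)

t_map = np.full(n_voxels, np.nan)
p_map = np.full(n_voxels, np.nan)

for v in range(n_voxels):
    lesioned_scores = scores[lesions[:, v] == 1]
    spared_scores = scores[lesions[:, v] == 0]
    # Skip voxels lesioned (or spared) in too few patients to test sensibly.
    if len(lesioned_scores) < 5 or len(spared_scores) < 5:
        continue
    t, p = stats.ttest_ind(lesioned_scores, spared_scores, equal_var=False)
    t_map[v] = t
    p_map[v] = p

# With thousands of tests, multiple-comparison correction is mandatory.
# A crude Bonferroni threshold is shown; real VLSM uses permutation or FDR.
alpha = 0.05 / np.isfinite(p_map).sum()
print(f"{(p_map < alpha).sum()} voxels survive Bonferroni correction")
```

The resulting t-map is the lesion-symptom map: voxels where lesioned and spared patients differ most in behavior are the ones most likely to matter for that behavior, without the researcher having to pre-define patient groups.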

After much analysis, Sihvonen et al. (2016) conclude that damage to the right hemisphere is more likely to produce amusia, as opposed to aphasia, which is due mainly to damage to the left hemisphere. More specifically,

“damage to the right temporal areas, insula, and putamen forms the crucial neural substrate for acquired amusia after stroke. Persistent amusia is associated with further [grey matter] atrophy in the right superior temporal gyrus (STG) and middle temporal gyrus (MTG), locating more anteriorly for rhythm amusia and more posteriorly for pitch amusia.”

The more we know, the better chances we have to improve treatments for people.

104-copy

unless you’re left-handed, then things are reversed.

References:

1. Sihvonen AJ, Ripollés P, Leo V, Rodríguez-Fornells A, Soinila S, & Särkämö T. (24 Aug 2016). Neural Basis of Acquired Amusia and Its Recovery after Stroke. Journal of Neuroscience, 36(34):8872-8881. PMID: 27559169, DOI: 10.1523/JNEUROSCI.0709-16.2016. ARTICLE  | FULLTEXT PDF

2. Bates E, Wilson SM, Saygin AP, Dick F, Sereno MI, Knight RT, & Dronkers NF (May 2003). Voxel-based lesion-symptom mapping. Nature Neuroscience, 6(5):448-50. PMID: 12704393, DOI: 10.1038/nn1050. ARTICLE

By Neuronicus, 9 November 2016

Pic of the Day: Russell on stupid

russell-copy-2

Reference: Russell, B. (10 May 1933). “The Triumph of Stupidity”. In: H. Ruja (Ed.), Mortals and Others: Bertrand Russell’s American Essays, Volume 2, 1931–1935.

The history of the quote and variations of it by others can be found on the Quote Investigator.

By Neuronicus, 6 November 2016

Pic of the Day: Neil on teaching creationism

104neil-copy
Dr. deGrasse Tyson’s picture is from Wikimedia, released under PD, and the quote is from a “Letter to the Editor” of The New York Times, retrieved from the Hayden Planetarium website on Nov. 2, 2016.

The FIRSTS: The Name of Myelin (1854)

One reason why I don’t post more often is that I have such a hard time deciding what to cover (Hint: send me stuff YOU find awesome). Most of the cool and new stuff is already covered by big platforms with full-time employees and I try to stay away from the media-grabbers. Mostly. Some papers I find so cool that it doesn’t matter that professional science journalists have already covered them; I too jump on the bandwagon with my meager contribution. Anyway, here is a glimpse of how my train of thought goes on inspiration-less days.

Inner monologue: Check the usual journals’ current issues. Nothing catches my eye. Maybe I’ll feature a historical. Open Wikipedia front page and see what happened today throughout history. Aha, apparently Babinski died in 1932. He’s the one who described the Babinski sign. Normally, when the sole of the foot is stroked, the big toe flexes inwards, towards the sole. If it extends upwards, then that’s a sure sign of neurological damage, the Babinski sign. But healthy infants can have that sign too, not because they have neurological damage, but because their corticospinal neurons are not fully myelinated. Myelin, who discovered that? Probably Schwann. Quick search on PubMed. Too many hits. Restrict to ‘history’. I hate the search function on PubMed, it brings either too many or no hits, no matter the parameters. Ah, look, Virchow. Interesting. Aha. Find the original reference. Aha. Springer charges 40 bucks for a paper published in 1854?! The hell with that! I’m not even going to check if I have institutional access. Get the pdf from other sources. It’s in German. Bummer. Go to Highwire. Find recent history of myelin. Mielinization? Myelination? Myelinification? All have hits… Get “Fundamental Neuroscience” off the shelf and check… aha, myelination. Ok. Look at the pretty diagram with the saltatory conduction! Enough! Go back to Virchow. Does it have pictures, maybe I can navigate the legend? Nope. Check if any German-speaking friends are online. Nope, they’re probably asleep, which is what I should be doing. Drat. Refine Highwire search. Evrika! “The history of myelin” by Boullerne, 2016. Got the author manuscript. Hurray. Read. Write.

Myelinated fibers, a.k.a. white matter, have been observed and described by various anatomists as early as the 16th century, Boullerne (2016) informs us. But the name of myelin was given only in 1854 by Rudolf Virchow, a physician with a rich academic and public life. Although Virchow introduced the term to distinguish between bone marrow and the medullary substance, paradoxically, he managed to muddy the waters even more because he did not restrict the usage of the term myelin to … well, myelin. He used it also to refer to substances in blood cells, egg yolk, and spleen and, frankly, from the quotes provided in the paper, I cannot make heads or tails of what Virchow thought myelin was. The word myelin comes from the Greek myelos or muelos, which means marrow.

Boullerne (2016) obviously did a lot of research, as the 53-page account is full of quotes from original references. Since she is such a scholar on the history of myelin, I have no choice but to believe her when she says: “In 1868, the neurologist Jean-Martin Charcot (1825-1893) used myelin (myéline) in what can be considered its first correct attribution.”

So even if Virchow coined the term, he was using it incorrectly! Nevertheless, in 1858 he correctly identified the main role of myelin: electrical insulation of the axon. An ingenious insight for the time.

103-copy

I love historical reviews of sciency stuff. This one is a ‘must-have’ for any biologist or neuroscientist. Chemists and physicists, don’t shy away either; the paper has something for you too, like myelin’s biochemistry or its birefringence properties.

Reference: Boullerne, AI (Sep 2016, Epub 8 Jun 2016). The history of myelin. Experimental Neurology, 283(Pt B): 431-45. doi: 10.1016/j.expneurol.2016.06.005. ARTICLE

Original Reference: Virchow R. (Dec 1854). Ueber das ausgebreitete Vorkommen einer dem Nervenmark analogen Substanz in den thierischen Geweben. Archiv für pathologische Anatomie und Physiologie und für klinische Medicin, 6(4): 562–572. doi:10.1007/BF02116709. ARTICLE

P.S. I don’t think it is right that Springer can retain the copyright for the Virchow paper and charge $39.95 for it. I don’t think they have the copyright for it anyway, despite their claims, because the paper is 162 years old. I am aware of no German or American copyright law that extends for so long. So, if you need it for academic purposes, write to me and thou shalt have it.

By Neuronicus, 29 October 2016
