The superiority illusion

Following up on my promise to cover a few papers about self-deception, the second in the series is about the superiority illusion (the first was about depressive realism).

Yamada et al. (2013) sought to uncover the origins of the ubiquitous belief that oneself is “superior to average people along various dimensions, such as intelligence, cognitive ability, and possession of desirable traits” (p. 4363). The sad statistical truth is that most people are average; that’s the whole definition of ‘average’, really… But most people think they are superior to others.

Twenty-four young males underwent resting-state fMRI and PET scanning. The first scanner is of the magnetic resonance type and tracks where most of the blood is going in the brain at any particular moment. More blood flow to a region is interpreted as that region being active at that moment.

The word ‘functional’ means that the subject is performing a task while in the scanner and the resultant brain image corresponds to what the brain is doing at that particular moment in time. ‘Resting-state’, on the other hand, means that the individual did not do any task in the scanner; s/he just sat nice and still on the warm pads listening to the various clicks, clacks, bangs & beeps the coils make. The subjects were instructed to rest with their eyes open. Good instruction, given that many subjects fall asleep in resting-state MRI studies, even in the terrible racket the machine makes, which can sometimes reach 125 dB. Let me explain: an MRI machine generates a huge static magnetic field (60,000 times stronger than Earth’s!) with its main magnet, and on top of that it shoots rapid pulses of electricity through additional coiled wires called gradient coils. These pulses of electricity or, in other words, the rapid on-off switchings of the electrical current make the gradient coils vibrate very loudly.
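That “60,000 times” figure is easy to sanity-check. A minimal back-of-the-envelope, assuming a typical 3 T clinical scanner and an Earth field of roughly 50 µT (round numbers of mine, not from the paper):

```python
# Back-of-the-envelope check of the "60,000 times stronger" claim.
# Assumed values (not from Yamada et al.): a typical 3 T clinical scanner
# and Earth's magnetic field of ~50 microtesla.
scanner_field_tesla = 3.0
earth_field_tesla = 50e-6

ratio = scanner_field_tesla / earth_field_tesla
print(f"MRI field is ~{ratio:,.0f} times Earth's")  # -> ~60,000 times
```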

A PET scanner functions on a different principle. The subject receives a shot of a radioactive substance (called a tracer) and the machine tracks its movement through the subject’s body. In this experiment’s case, the tracer was raclopride, a D2 dopamine receptor antagonist.

The behavioral data, meaning the questionnaire results, showed that, curiously, the superiority illusion belief was not correlated with anxiety or self-esteem scores but, not so curiously, was negatively correlated with helplessness, a measure of depression. Makes sense, especially from the viewpoint of depressive realism.

The imaging data suggest that dopamine binding to its striatal D2 receptors attenuates the functional connectivity between the left sensorimotor striatum (SMST, a.k.a. the postcommissural putamen) and the dorsal anterior cingulate cortex (dACC). And this state of affairs gives rise to the superiority illusion (see Fig. 1).

Fig. 1. The superiority illusion arises from the suppression of the dorsal anterior cingulate cortex (dACC) – putamen functional connection by the dopamine coming from the substantia nigra/ventral tegmental area complex (SN/VTA) and binding to its D2 striatal receptors. Credits: brain diagram: Wikipedia; other brain structures and connections: Neuronicus; data: Yamada et al. (2013, doi: 10.1073/pnas.1221681110). Overall: Public Domain.
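For the non-imaging folks: “functional connectivity” in resting-state fMRI usually boils down to correlating the BOLD time series of two brain regions. Here is a minimal sketch of that idea (my illustration with made-up numbers, not the authors’ actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy BOLD time series (arbitrary units) for two regions of interest,
# say the left SMST and the dACC, across 200 resting-state volumes.
smst = rng.standard_normal(200)
dacc = 0.5 * smst + rng.standard_normal(200)  # partially coupled on purpose

# Functional connectivity as the Pearson correlation of the two series;
# stronger dopamine D2 binding would, per the paper, lower this number.
fc = np.corrcoef(smst, dacc)[0, 1]
print(f"SMST-dACC functional connectivity: {fc:.2f}")
```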

This was a frustrating paper. I cannot tell if it has methodological issues or is just poorly written. For instance, I have to assume that the dACC they’re talking about is bilateral and not ipsilateral to their SMST, meaning left. As a non-native English speaker myself, I guess I should cut the authors a break for consistently misspelling ‘commissure’ or for other grammatical errors, for fear of being accused of hypocrisy, but here you have it: it bugged me. Besides, mine is a blog and theirs is a published peer-reviewed paper. (Full disclosure: I do get editorial help from native English speakers when I publish for real and, except for a few personal style quirks, I fully incorporate their suggestions.) So a little editorial help would have gone a long way toward making the reading more pleasant. What else? Ah, the results are not clearly explained anywhere; it looks like the authors rely on obviousness, a bad move if you want to be understood by people slightly outside your field. From the first figure it looks like only 22 subjects out of 24 showed the superiority illusion, but the authors included 24 in the imaging analyses, or so it seems. The subjects were 23.5 ± 4.4 years old, meaning that not all of them had fully developed frontal regions: there are clear anatomical and functional differences between a 19-year-old and a 27-year-old brain.

I’m not saying it is a bad paper; I have covered bad papers and this is not one of them. I’m saying it was frustrating to read and it took me a while to figure out some things. Honestly, I shouldn’t even have covered it, but I had spent some precious time going through it and its supplementals, what with me not being an imaging dude, so I said the hell with it, I’ll finish it; so here you have it :).

By Neuronicus, 13 December 2017

REFERENCE: Yamada M, Uddin LQ, Takahashi H, Kimura Y, Takahata K, Kousa R, Ikoma Y, Eguchi Y, Takano H, Ito H, Higuchi M, Suhara T (12 Mar 2013). Superiority illusion arises from resting-state brain networks modulated by dopamine. Proceedings of the National Academy of Sciences of the United States of America, 110(11):4363-4367. doi: 10.1073/pnas.1221681110. ARTICLE | FREE FULLTEXT PDF 


The FIRSTS: The roots of depressive realism (1979)

There is a rumor stating that depressed people see the world more realistically and the rest of us are – to put it bluntly – deluded optimists. A friend of mine asked me if this is true. It took me a while to find the origins of this claim, but after I found it and figured out that the literature has a term for the phenomenon (‘depressive realism’), I realized that there is a whole plethora of studies on the subject. So the next few posts will be centered, more or less, on the idea of self-deception.

It was 1979 when Alloy & Abramson published a paper whose title contained the phrase ‘Sadder but Wiser’, even if it was followed by a question mark. The experiments they conducted are simple, but the theoretical implications are large.

The authors divided several dozen male and female undergraduate students into a depressed group and a non-depressed group based on their Beck Depression Inventory scores (a widely used and validated questionnaire for self-assessing depression). Each subject “made one of two possible responses (pressing a button or not pressing a button) and received one of two possible outcomes (a green light or no green light)” (p. 447). Various conditions gave the subjects various degrees of control over what the button did, from 0 to 100%. Afterwards, the subjects were asked to estimate their control over the green light, how many times the light came on regardless of their behavior, the percentage of trials on which the green light came on when they pressed or didn’t press the button, respectively, and how they felt. In some experiments, the subjects won or lost money when the green light came on.

Verbatim, the findings were that:

“Depressed students’ judgments of contingency were surprisingly accurate in all four experiments. Nondepressed students, on the other hand, overestimated the degree of contingency between their responses and outcomes when noncontingent outcomes were frequent and/or desired and underestimated the degree of contingency when contingent outcomes were undesired” (p. 441).

In plain English, it means that if you are not depressed, when you have some control and bad things are happening, you believe you have no control. And when you have no control but good things are happening, then you believe you have control. If you are depressed, it does not matter, you judge your level of control accurately, regardless of the valence of the outcome.
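For the curious, the objective “degree of contingency” in this literature is usually computed as ΔP, the difference between the probability of the outcome when you respond and when you don’t. A toy calculation (my numbers, not Alloy & Abramson’s data):

```python
# Objective contingency as Delta-P:
#   dP = P(light | press) - P(light | no press)
# Trial counts below are invented for illustration.
light_on_press, press_trials = 30, 40       # light on 30 of 40 press trials
light_on_nopress, nopress_trials = 30, 40   # light on 30 of 40 no-press trials

dP = light_on_press / press_trials - light_on_nopress / nopress_trials
print(f"Delta-P = {dP:.2f}")  # 0.00 -> the button gives no actual control
```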

Such an illusion of control is a defensive mechanism that surely must have adaptive value by, for example, allowing the non-depressed to bypass a sense of guilt when things don’t work out and to increase self-esteem when they do. This is fascinating, particularly since it is corroborated by findings that people receiving gambling wins or life successes like landing a good job, rewards that at least in one case are demonstrably attributable to chance, believe, nonetheless, that they are due to some personal attributes that make them special, that make them deserving of such rewards. (I don’t remember the reference for this one so don’t quote me on it. If I find it, I’ll post it; it’s something about self-entitlement, I think.) That is not to say that life successes are not largely attributable to the individual; they are. But, statistically speaking, there must be some that are due to chance alone, and yet most people feel like they are the direct agents for changes in luck.

Another interesting point is that Alloy & Abramson also tried to figure out how exactly their subjects reasoned when they assessed their level of control, through some clever post-experiment questionnaires. Long story short (the paper is 45 pages long), the illusion of control shown by nondepressed subjects in the no-control condition was the result of incorrect logic, that is, faulty reasoning.

In summary, the distilled-down version of depressive realism, that non-depressed people see the world through rose-colored glasses, is slightly incorrect. The illusion of control applies only in particular conditions: overestimation of control when good things are happening and underestimation of control when bad things are happening.

Of course, it has been almost 40 years since the publication of this paper and of course it has its flaws. Many replications and replications-with-caveats and meta-analyses and reviews and opinions and alternative hypotheses have been confirmed and disconfirmed and then confirmed again with alterations, so there is still a debate out there about the causes/functions/ubiquity/circumstantiality of the depressive realism effect. One thing seems to be constant though: the effect exists.

I will leave you with the ponderings of Alloy & Abramson (1979):

“A crucial question is whether depression itself leads people to be “realistic” or whether realistic people are more vulnerable to depression than other people” (p. 480).


REFERENCE: Alloy LB, & Abramson LY (Dec. 1979). Judgment of contingency in depressed and nondepressed students: sadder but wiser? Journal of Experimental Psychology: General, 108(4): 441-485. PMID: 528910. http://dx.doi.org/10.1037/0096-3445.108.4.441. ARTICLE | FULLTEXT PDF via ResearchGate

By Neuronicus, 30 November 2017

The FIRSTS: Dinosaurs and reputation (1842)

‘Dinosaur’ is a common noun in most languages of the globe and, in its weak sense, it means “extinct, big, reptile-like animal that lived a long time ago”. The word has been in use for so long that it can also be used to describe something “impractically large, out-of-date, or obsolete” (Merriam-Webster dictionary). “Dinosaur” is a composite of two ancient Greek words (“deinos”, “sauros”) and means “terrible lizard”.

But it turns out that the word hasn’t been in use for that long, just a mere 175 years. Sir Richard Owen, a paleontologist who dabbled in many disciplines, coined the term in 1842. Owen introduced the taxon Dinosauria as if it had always been called thus, no fuss: “The present and concluding part of the Report on British Fossil Reptiles contains an account of the remains of the Crocodilian, Dinosaurian, Lacertian, Pterodactylian, Chelonian, Ophidian and Batrachian reptiles.” (p. 60). Only later in the Report does he tell us his paleontological reasons for the baptism, namely some anatomical features that distinguish dinosaurs from crocodiles and other reptiles.

“…The combination of such characters, some, as the sacral ones, altogether peculiar among Reptiles, others borrowed, as it were, from groups now distinct from each other, and all manifested by creatures far surpassing in size the largest of existing reptiles, will, it is presumed, be deemed sufficient ground for establishing a distinct tribe or sub-order of Saurian Reptiles, for which I would propose the name of Dinosauria.” (p.103)

At the time he was presenting this report to the British Association for the Advancement of Science, other giants of biology were running around the same halls, like Charles Darwin and Thomas Henry Huxley. Indisputably, Owen had a keen observational eye and a strong background in comparative anatomy that resulted in hundreds of published works, some of them excellent. That, in addition to establishing the British Museum of Natural History.

Therefore, Owen had reasons to be proud of his accomplishments and secure in his influence and legacy, and yet his contemporaries tell us that he was an absolutely vicious man, spiteful to the point of obsession, vengeful and extremely jealous of other people’s work. Apparently, he would steal the work of the younger people around him, never give credit, lie and cheat at every opportunity, and even write lengthy anonymous letters to various printed media to denigrate his contemporaries. He seemed to love his natal city of Lancaster and his family though (Wessels & Taylor, 2015).

Sir Richard Owen (20 July 1804 – 18 December 1892). PD, courtesy of Wikipedia.

Owen had a particular hate for Darwin. They had been close friends for 20 years and then Darwin published the “Origin of Species”. The book quickly became widely read and talked about and then poof: vitriol and hate. Darwin himself said the only reason he could think of for Owen’s hatred was the popularity of the book.

Various biographies and monographs seem to agree on his unpleasant personality (see his entries in The Telegraph, Encyclopedia.com, Encyclopaedia Britannica, BBC). On a side note, should you be concerned about your legacy and have the means to persuade The Times to write you an obituary, by all means, do so. In all the 8 pages of the obituary printed in 1896 you will not find a single blemish on the portrait of Sir Richard Owen.

This makes me ponder the judgement of history based not on your work, but on your personality. As I said, the man contributed to science in more ways than just naming the dinosaur and having spats with Darwin. And yet it seems that his accomplishments are somewhat diminished by the way he treated others.

This reminded me of Nicolae Constantin Paulescu, a Romanian scientist who discovered insulin in 1916 (published in 1921). Yes, yes, I know all about the controversy with the Canadians who extracted and purified the insulin in 1922 and got the Nobel for it in 1923. Paulescu did the same, even if his “pancreatic extract” from a few years earlier was insufficiently purified; it still successfully lowered blood glucose in dogs. He even obtained a patent for the “fabrication of pancrein” (his name for insulin, because he obtained it from the pancreas) in April 1922 from the Romanian Government (patent no. 6255). The Canadian team was aware of his work, but because it was published in French, they had a poor translation and they misunderstood his findings, so, technically, they didn’t steal anything. Or so they say. Feel free to feed the conspiracy mill. I personally don’t know; I haven’t looked at the original work to form an opinion because it is in French and my French is non-existent.

Annnywaaaay, whether or not Paulescu was the first to discover insulin is debatable, but few doubt that he should at least have shared the Nobel.

Rumor has it that Paulescu did not share the Nobel because he was a devout Nazi. His antisemitic writings are remarkably horrifying, even by the standards of the extreme right. That’s also why you won’t hear about him in medical textbooks or at various diabetes associations and gatherings. Yet millions of people worldwide may be alive today because of his work, at least partly.

How should we remember? Just the discoveries and accomplishments with no reference to the people behind them? Is remembering the same as honoring? “Clara cells” were lung cells discovered by the infamous Nazi anatomist Max Clara by dissecting prisoners without consent. They were renamed by the lung community “club cells” in 2013. We cannot get rid of the discovery, but we can rename the cells, so it doesn’t look like we honor him. I completely understand that. And yet I also don’t want to lose important pieces of history because of the atrocities (in the case of Nazis) or unsavory behavior (in the case of Owen) committed by our predecessors. I understand why the International Federation of Diabetes does not wish to give awards in the name of Paulescu or have a Special Paulescu lecture. Perhaps the Romanians should take down his busts and statues, too. But I don’t understand why (medical) history books should exclude him.

In other words, don’t honor the unsavories of history, but don’t forget them either. You never know what we – or the future generations – may learn by looking back at them and their actions.


By Neuronicus, 19 October 2017

References:

1) Owen, R (1842). “Report on British Fossil Reptiles”. Part II. Report of the Eleventh Meeting of the British Association for the Advancement of Science; Held at Plymouth in July 1841. London: John Murray. p. 60–204. Google Books Fulltext 

2) “Eminent persons: Biographies reprinted from the Times, Vol V, 1891–1892 – Sir Richard Owen (Obituary)” (1896). Macmillan & Co., p. 291–299. Google Books Fulltext

3) Wessels Q & Taylor AM (28 Oct 2015). Anecdotes to the life and times of Sir Richard Owen (1804-1892) in Lancaster. Journal of Medical Biography. pii: 0967772015608053. PMID: 26512064, DOI: 10.1177/0967772015608053. ARTICLE

Play-based or academic-intensive?

The title of today’s post wouldn’t make any sense to anybody who isn’t a preschooler’s parent or teacher in the USA. You see, on the west side of the Atlantic there is a debate on whether a play-based curriculum for preschool is more advantageous than a more academic-based one. Preschool age is 3 to 4 years; kindergarten starts at 5.

So what does academia even look like for someone who hasn’t yet mastered the wiping-their-own-behind skill? I’m glad you asked. Roughly, an academic preschool program is one that emphasizes math concepts and early literacy, whereas a play-based program focuses less or not at all on these activities; instead, the children are allowed to play together in big or small groups or separately. The first kind of program has been linked with stronger cognitive benefits, the latter with nurturing social development. The supporters of each program accuse the other of neglecting one or the other aspect of the child’s development, namely cognitive or social.

The paper that I am covering today says that it “does not speak to the wider debate over learning-through-play or the direct instruction of young children. We do directly test whether greater classroom time spent on academic-oriented activities yield gains in both developmental domains” (Fuller et al., 2017, p. 2). I’ll let you be the judge.

Fuller et al. (2017) assessed the cognitive and social benefits of different programs in an impressive cohort of over 6,000 preschoolers. The authors looked at many variables:

  • children who attended any form of preschool and children who stayed home;
  • children who received more preschool education (high dosage, defined as >20 hours/week) and less (low dosage, defined as <20 hours/week);
  • children who attended academic-oriented preschools (which spent time on each of the following tasks at least 3–4 times a week: letter names, writing, phonics, and counting manipulatives) and non-academic preschools.

The authors employed a battery of tests to assess the children’s preliteracy skills, math skills, and social-emotional status (i.e., the dependent variables). And then they conducted a lot of statistical analyses, in the true spirit of well-trained psychologists.

The main findings were:

1) “Preschool exposure [of any form] has a significant positive effect on children’s math and preliteracy scores” (p. 6).

2) The earlier the child entered preschool, the stronger the cognitive benefits.

3) Children attending high-dose academic-oriented preschools displayed greater cognitive proficiencies than all the other children (for the actual numbers, see Table 7, pg. 9).

4) “Academic-oriented preschool yields benefits that persist into the kindergarten year, and at notably higher magnitudes than previously detected” (p. 10).

5) Children attending academic-oriented preschools displayed no social development disadvantages compared with children who attended low-academic or non-academic preschool programs. Nor did the non-academic-oriented preschools show an improvement in social development (except for Latino children).

Now do you think that Fuller et al. (2017) gave you any more information in the play vs. academic debate, given that their “findings show that greater time spent on academic content – focused on oral language, preliteracy skills, and math concepts – contributes to the early learning of the average child at magnitudes higher than previously estimated” (p. 10)? And remember that they did not find any significant social advantages or disadvantages for any type of preschool.

I realize (or hope, rather) that most pre-k teachers are not the Draconian thou-shall-not-play-do-worksheets type, nor are they the let-kids-play-for-three-hours-while-the-adults-gossip-in-a-corner types. Most are probably combining elements of learning-through-play and directed-instruction in their programs. Nevertheless, there are (still) programs and pre-k teachers that clearly state that they employ play-based or academic-based programs, emphasizing the benefits of one while vilifying the other. But – surprise, surprise! – you can do both. And, it turns out, a little academia goes a long way.


So, next time you choose a preschool for your kid, go with the data, not with what your mommy/daddy gut instinct says, and certainly be very wary of preschool officials who, when you ask them for data to support their curriculum choice, tell you that that’s their ‘philosophy’, they don’t need data. Because, boy oh boy, I know what philosophy means and it ain’t that.

By Neuronicus, 12 October 2017

Reference: Fuller B, Bein E, Bridges M, Kim Y, & Rabe-Hesketh S (Sept. 2017). Do academic preschools yield stronger benefits? Cognitive emphasis, dosage, and early learning. Journal of Applied Developmental Psychology, 52: 1-11. doi: 10.1016/j.appdev.2017.05.001. ARTICLE | New York Times cover | Reading Rockets cover (offers a fulltext pdf) | Good cover and interview with the first author on qz.com

Old chimpanzees get Alzheimer’s pathology

Alzheimer’s Disease (AD) is the most common type of dementia, with a progression that can span decades. Its prevalence is increasing steadily, particularly in Western countries and Australia. So some researchers speculated that this particular disease might be specific to humans, for reasons either genetic, social, or environmental.

A fresh e-pub brings new evidence that Alzheimer’s might plague other primates as well. Edler et al. (2017) studied the brains of 20 old chimpanzees (Pan troglodytes) for a whole slew of Alzheimer’s pathology markers. More specifically, they looked for these markers in brain regions commonly affected by AD, like the prefrontal cortex, the midtemporal gyrus, and the hippocampus.

Alzheimer’s markers, like tau and Aβ lesions, were present in the chimpanzees in an age-dependent manner. In other words, the older the chimp, the more severe the pathology.

Interestingly, all 20 animals displayed some form of Alzheimer’s pathology. This finding points to another speculation in the field: that dementia is just part of normal aging. Meaning we would all get it, eventually, if we lived long enough; some people age younger and some older, as it were. This hypothesis, however, is not favored by most researchers, not least because it is currently unfalsifiable. The longest-living humans do not show signs of dementia, so how long is long enough, exactly? But, as the authors suggest, “Aβ deposition may be part of the normal aging process in chimpanzees” (p. 24).

Unfortunately, “the chimpanzees in this study did not participate in formal behavioral or cognitive testing” (p. 6). So we cannot say whether the animals had AD. They had the pathological markers, yes, but we don’t know if they exhibited the disease, as it is not uncommon to find these markers in humans who did not display any behavioral or cognitive symptoms (Driscoll et al., 2006). In other words, one might have tau deposits but no dementia symptoms. Hence the title of my post: “Old chimpanzees get Alzheimer’s pathology” and not “Old chimpanzees get Alzheimer’s Disease”.

Good paper, good methods and stats. And very useful, because “chimpanzees share 100% sequence homology and all six tau isoforms with humans” (p. 4), meaning we now have a model of the disease closer to us, so we can study it more, even if primate research has taken significant blows these days due to some highly vocal but thoroughly misguided groups. Anyway, the more we know about AD, the closer we are to getting rid of it, hopefully. And, soon enough, the aforementioned misguided groups shall have to face old age too, with all its indignities, and my guess is that in a couple of decades or so there will be fresh money poured into aging-disease research, primates be damned.


REFERENCE: Edler MK, Sherwood CC, Meindl RS, Hopkins WD, Ely JJ, Erwin JM, Mufson EJ, Hof PR, & Raghanti MA. (EPUB July 31, 2017). Aged chimpanzees exhibit pathologic hallmarks of Alzheimer’s disease. Neurobiology of Aging, PII: S0197-4580(17)30239-7, DOI: http://dx.doi.org/10.1016/j.neurobiolaging.2017.07.006. ABSTRACT  | Kent State University press release

By Neuronicus, 23 August 2017


Midichlorians, midichloria, and mitochondria

Nathan Lo is an evolutionary biologist interested in creepy crawlies, i.e. arthropods. Well, he’s Australian, so I guess that comes with the territory (see what I did there?). While postdoc’ing, he and his colleagues published a paper (Sassera et al., 2006) that would seem boring for anybody without an interest in taxonomy, a truly under-appreciated field.

The paper describes a bacterium that is a parasite of the mitochondria of a tick species called Ixodes ricinus, the nasty bugger responsible for transmitting Lyme disease. The authors obtained a female tick from Berlin, Germany, and let it feed on a hamster until it laid eggs. By using genetic sequencing (you can use kits these days to extract the DNA, do PCR, gels and cloning, pretty much everything), electron microscopy (really powerful microscopes) and phylogenetic analysis (using computer software to see how closely related some species are), the authors came to the conclusion that the parasite they were working on was a new species. So they named it. And below is the full account of the naming, from the horse’s mouth, as it were:

“In accordance with the guidelines of the International Committee of Systematic Bacteriology, unculturable bacteria should be classified as Candidatus (Murray & Stackebrandt, 1995). Thus we propose the name ‘Candidatus Midichloria mitochondrii’ for the novel bacterium. The genus name Midichloria (mi.di.chlo′ria. N.L. fem. n.) is derived from the midichlorians, organisms within the fictional Star Wars universe. Midichlorians are microscopic symbionts that reside within the cells of living things and ‘‘communicate with the Force’’. Star Wars creator George Lucas stated that the idea of the midichlorians is based on endosymbiotic theory. The word ‘midichlorian’ appears to be a blend of the words mitochondrion and chloroplast. The specific epithet, mitochondrii (mi.to′chon.drii. N.L. n. mitochondrium -i a mitochondrion; N.L. gen. n. mitochondrii of a mitochondrion), refers to the unique intramitochondrial lifestyle of this bacterium. ‘Candidatus M. mitochondrii’ belongs to the phylum Proteobacteria, to the class Alphaproteobacteria and to the order Rickettsiales. ‘Candidatus M. mitochondrii’ is assigned on the basis of the 16S rRNA (AJ566640) and gyrB gene sequences (AM159536)” (p. 2539).

George Lucas gave his blessing to the Christening (of course he did).


Acknowledgements: Thanks go to Ms. BBD who prevented me from making a fool of myself – this time – on the social media by pointing out to me that midichloria are real and that they are a mitochondrial parasite.

REFERENCE: Sassera D, Beninati T, Bandi C, Bouman EA, Sacchi L, Fabbi M, Lo N. (Nov. 2006). ‘Candidatus Midichloria mitochondrii’, an endosymbiont of the tick Ixodes ricinus with a unique intramitochondrial lifestyle. International Journal of Systematic and Evolutionary Microbiology, 56(Pt 11): 2535-2540. PMID: 17082386, DOI: 10.1099/ijs.0.64386-0. ABSTRACT | FREE FULLTEXT PDF 

By Neuronicus, 29 July 2017

Pic of the day: Skunky beer


REFERENCE: Burns CS, Heyerick A, De Keukeleire D, Forbes MD. (5 Nov 2001). Mechanism for formation of the lightstruck flavor in beer revealed by time-resolved electron paramagnetic resonance. Chemistry – The European Journal, 7(21): 4553-4561. PMID: 11757646, DOI: 10.1002/1521-3765(20011105)7:21<4553::AID-CHEM4553>3.0.CO;2-0. ABSTRACT

By Neuronicus, 12 July 2017

The FIRSTS: Increase in CO2 levels in the atmosphere results in global warming (1896)

Few people seem to know that, although global warming and climate change are hotly debated topics right now (at least on the left side of the Atlantic), the effect of CO2 levels on the planet’s surface temperature was investigated and calculated more than a century ago. CO2 is one of the greenhouse gases responsible for the greenhouse effect, which was discovered by Joseph Fourier in 1824 (the effect, that is).

Let’s start with a terminology clarification. Whereas the term ‘global warming’ was coined by Wallace S. Broecker in 1975, the term ‘climate change’ underwent a more fluid transformation in the ’70s, from ‘inadvertent climate modification’ to ‘climatic change’ to a more consistent use of ‘climate change’ by Jule Charney in 1979, according to NASA. The same source tells us:

“Global warming refers to surface temperature increases, while climate change includes global warming and everything else that increasing greenhouse gas amounts will affect”.

But before NASA there was one Svante August Arrhenius (1859–1927). Dr. Arrhenius was a Swedish physical chemist who received the Nobel Prize in 1903 for uncovering the role of ions in how electrical current is conducted in chemical solutions.

S.A. Arrhenius was the first to quantify the variations of our planet’s surface temperature as a direct result of the amount of CO2 (which he calls carbonic acid, long story) present in the atmosphere. For those – admittedly few – nitpickers that say his views on the greenhouse effect were somewhat simplistic and his calculations were incorrect I’d say cut him a break: he didn’t have the incredible amount of data provided by the satellites or computers, nor the work of thousands of scientists over a century to back him up. Which they do. Kind of. Well, the idea, anyway, not the math. Well, some of the math. Let me explain.

First, let me tell you that I haven’t managed to get past page 3 of the 39 pages of creative mathematics, densely packed tables, parameter assignments, and convoluted assumptions of Arrhenius (1896). Luckily, I convinced a spectroscopist to take a crack at the original paper, since there is a lot of spectroscopy in it, and then enlighten me.

The photo was taken in 1887 and shows (standing, from the left): Walther Nernst (Nobel in Chemistry), Heinrich Streintz, Svante Arrhenius, Richard Hiecke; (sitting, from the left): Eduard Aulinger, Albert von Ettingshausen, Ludwig Boltzmann, Ignaz Klemenčič, Victor Hausmanninger. Source: Universität Graz. License: PD via Wikimedia Commons.

Second, despite his many accomplishments, including being credited with laying the foundations of a new field (physical chemistry), Arrhenius was first and foremost a mathematician. So he employed a lot of tedious mathematics (by hand!), together with some hefty guessing and what was known at the time about Earth’s infrared radiation, solar radiation, water vapor and CO2 absorption, the temperature of the Moon, the greenhouse effect, and some uncalibrated spectra taken by his predecessors, to figure out whether “the mean temperature of the ground [was] in any way influenced by the presence of the heat-absorbing gases in the atmosphere” (p. 237). Why was he interested in this? We find out only on page 267, after a lot of the aforesaid dreary mathematics, where he finally shares this with us:

“I should certainly not have undertaken these tedious calculations if an extraordinary interest had not been connected with them. In the Physical Society of Stockholm there have been occasionally very lively discussions on the probable causes of the Ice Age”.

So Arrhenius was interested in finding out whether fluctuations in CO2 levels could have caused the Ice Ages. And yes, he thought that could have happened. I don’t know enough about climate science to tell you if this particular conclusion of his is correct today. But what he did manage to accomplish was to provide, for the first time, a way to mathematically calculate the rise in temperature due to a rise in CO2 levels. In other words, he found a direct relationship between the variations of CO2 and temperature.

Today, it turns out that his math was incorrect, because he left out some other variables that influence global temperature and that were discovered and/or understood later (like the thickness of the atmosphere, the rate of ocean absorption of CO2, and others which I won’t pretend I understand). Nevertheless, Arrhenius was the first to point out the following relationship, which, by and large, is still relevant today:

“Thus if the quantity of carbonic acid increased in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression” (p. 267).
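In modern shorthand, “geometric progression in CO2, arithmetic progression in temperature” is a logarithmic law: each doubling of CO2 adds roughly the same temperature increment. A minimal sketch of that relationship (the 3 °C-per-doubling sensitivity below is an assumed round figure of mine, not Arrhenius’s estimate or a precise modern value):

```python
import math

# Logarithmic CO2-temperature relationship, modern shorthand:
#   dT = S * log2(C / C0), with S the warming per doubling of CO2.
S = 3.0      # degrees C per CO2 doubling (assumed round figure)
C0 = 280.0   # pre-industrial CO2 concentration, ppm

for C in (280, 560, 1120):  # CO2 in geometric progression (doubling)
    dT = S * math.log2(C / C0)
    print(f"{C:>5} ppm -> +{dT:.1f} degrees C")  # warming rises in equal steps
```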


P.S. Technically, Joseph Fourier should be credited with the discovery of global warming by means of increasing levels of greenhouse gases in the atmosphere (in 1824), but Arrhenius quantified it, so I credited him. Feel free to debate :).

REFERENCE: Arrhenius, S. (April 1896). XXXI. On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science (Fifth Series), 49 (251): 237-276. General Reference P.P.1433. doi: http://dx.doi.org/10.1080/14786449608620846. FREE FULLTEXT PDF

By Neuronicus, 24 June 2017

Arnica and a scientist’s frustrations

When you’re the only scientist in the family you get asked the weirdest things. Actually, I’m not the only one, but the other one is a chemist and he’s mostly asked about astrophysics stuff, so he doesn’t really count, because I am the one who gets asked about rare diseases and medication side effects and food advice. Never mind that I am a neuroscientist and have professed repeatedly and quite loudly my minimal knowledge of everything from the neck down; all eyes turn to me when the new arthritis medication or the unexpected side effects of that heart drug are brought up. But, curiously, if I dare speak about brain stuff I get the looks that the thing the cat just dragged in gets. I guess everybody is an expert on how the brain works on account of having and using one, apparently. Everybody but the actual neuroscience expert, whose input on brain and behavior is to be tolerated and taken with a grain of salt at best, but whose opinion on stomach distress is of the utmost importance and must be listened to reverentially in utter silence [eyes roll].

So this is the background against which the following question was sprung on me: “Is arnica good for eczema?”. As always, caught unawares by the sheer diversity of interests and afflictions my family and friends can have, I mumbled something about not knowing what arnica is and said I would look it up.

This is an account of how I looked it up and what conclusions I arrived at; or, how a scientist tries to figure out something completely outside his or her field. The first thing I did was go on Wikipedia. Hold your horses, it was not for scientific information but as a first clarification step: is it a chemical, a drug, an insect, a plant maybe? I used to encourage my students to also use Wikipedia when they don’t have a clue what a word/concept/thing is. Kind of like a dictionary or a paper encyclopedia, if you will. To have a starting point. As a matter of fact, Wikipedia is an online encyclopedia, right? Anyway, I found out that Arnica is a plant genus, out of which one species, Arnica montana, seems to be popular.

Then I went to the library. Luckily for me, the library can be accessed online, from the comfort of my home and in my favorite pajamas, in the incarnation of PubMed, or Medline as it used to be affectionately called. It is the US National Library of Medicine maintained by the National Institutes of Health, a wonderful repository of scholarly papers (yeah, Google Scholar to PubMed is like the babbling of a two-year-old to Shakespearean sonnets; Google also has an agenda, which you won’t find on PubMed). Useful tip: when you look for a paper that is behind a paywall in Nature or Elsevier journals or elsewhere, check PubMed too, because very few people seem to know that there is an obscure and incredibly helpful law saying that research paid for by US taxpayers should be available to US taxpayers. A very sensible law, passed only a few years ago, that has the delightful effect of giving FREE full-text access to papers a certain number of months after publishing (look for the PMC icon in the upper right corner).

I searched for “arnica” and got almost 400 results. I sorted by “most recent”. The third hit was a review. I skimmed it and it seemed to talk a lot about healing in homeopathy, at which point, naturally, I got a gloomy foreboding. But I persevered, because one data point does not a trend make. Meaning that you need more than a paper – or a handful – to form an informed opinion. This line of thinking was rewarded by hit No. 14 in the search, which had an interesting title in the sense that it was the first to hint at a mechanism through which this plant was having some effects. Mechanisms are important; they allow you to differentiate speculation from findings, so I always prefer papers that try to answer a “How?” question as opposed to the other kinds; whys are almost always speculative as they have a whiff of post factum rationalization, whats are curious observations but, more often than not, a myriad factors can account for them, whens are an interesting hybrid between the whats and the hows – all interesting reads, but for different purposes. Here is a hint: you want to publish in Nature or Science? Design an experiment that answers all the questions. Gone are the days when answering one question was enough to publish…
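(If you prefer scripting your library trips: PubMed’s public E-utilities API exposes the same search. A minimal sketch; the hit count will of course differ from the roughly 400 I got back then.)

```python
import json
from urllib.request import urlopen

# Query PubMed through the NCBI E-utilities "esearch" endpoint.
url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
       "?db=pubmed&term=arnica&retmode=json")
with urlopen(url) as resp:
    data = json.load(resp)

result = data["esearchresult"]
print("total hits:", result["count"])
print("first PMIDs:", result["idlist"][:5])
```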

Digressions aside, the paper I am covering today sounds like a mechanism paper. Marzotto et al. (2016) cultured a particular line of human cells in a Petri dish, destined to test the healing powers of Arnica montana. The experimental design seems simple enough: the control culture gets nothing and the experimental culture gets Arnica montana. Then the authors check to see if there are differences in gene expression between the two groups.

The authors applied different doses of Arnica montana to the cultures to see if the effects are dose-dependent. The doses used were… wait, bear with me, I’m not familiar with the system, it’s not metric. In the Methods, the authors say:

Arnica m. was produced by Boiron Laboratoires (Lyon, France) according to the French Homeopathic pharmacopoeia and provided as a first centesimal dilution (Arnica m. 1c) of the hydroalcoholic extract (Mother Tincture, MT) in 30% ethanol/distilled water”.

Wait, what?! Centesimal… centesimal… wasn’t that the nothing-in-it scale from the pseudoscientific bull called homeopathy? Maybe I’m wrong, maybe there are some other uses for it and it becomes clear later:

Arnica m. 1c was used to prepare the second centesimal dilution (Arnica m. 2c) by adding 50μl of 1c solution to 4.95ml of distilled ultra-pure water. Therefore, 2c corresponds to 10−4 of the MT”.
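(To spell out the scale for those as unfamiliar with it as I was: each “c” step is a 1:100 dilution, so n centesimal steps leave a fraction of 10^(−2n) of the mother tincture. The volumes below are the paper’s; the arithmetic is mine.)

```python
# Centesimal ("c") homeopathic dilutions: each step is 1 part in 100.
# The paper's 2c step: 50 ul of the 1c solution into 4.95 ml of water.
one_step = 0.050 / (0.050 + 4.95)   # = 0.01, i.e., a 1:100 dilution
print(f"one c-step keeps {one_step:.0%} of the previous solution")

# After n centesimal steps, the remaining mother-tincture fraction is 100**-n.
for n in (1, 2):
    print(f"{n}c = {100.0**-n:.0e} of the mother tincture")  # 1e-02, 1e-04
```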

Holy Mother of God, this is worse than gibberish; this is voluntary misdirection, crap wrapped up in glitter, medieval tinkering sold as state-of-the-art 21st-century science. Speaking of state-of-the-art, the authors submit their “doses” to a liquid chromatograph, a thin-layer chromatograph, a double-beam spectrophotometer, and a nanoparticle tracking analysis (?!), for what purposes I cannot fathom. Oh, no, I can: to sound science-y. To give credibility to the incredulous. To make money.

At which point I stopped reading the ridiculous nonsense and took a closer look at the authors and got hit with this:

“Competing Interests: The authors have declared that no competing interests exist. This study was funded by Boiron Laboratoires Lyon with a research agreement in partnership with University of Verona. There are no patents, products in development or marketed products to declare. This does not alter our adherence to all the PLOS ONE policies on sharing data and materials, as detailed online in the guide for authors.”

No competing interests?? The biggest manufacturer of homeopathic crap in the world pays you to see if their product works and you have no competing interest? Maybe no other competing interests. There were some comments and replies to this paper after that, but it is all inconsequential, because once you have faulty methods your results are irrelevant. Besides, the comments are from the same university; could be some internal feuding.

PLoS One, what have you done? You’re a peer-reviewed open access journal! What “peers” reviewed this paper and gave their ok for publication? Since when is homeopathy science?! What am I going to find that you publish next? Astrology? For shame… Give me that editor’s job because I am certain I can do better.

To wrap it up and tell you why I am so mad: the homeopathic scale system, that centesimal gibberish, is just that: gibberish. It is impossible to replicate this experiment without the product marketed by Boiron, because nobody knows how much of the plant is in the dose, which parts of the plant, what kind of extract, or what concentration. So it’s like me handing you my special potion and telling you it makes warts disappear because it has parsley in it. But I don’t tell you my recipe, how much parsley there is, whether there is anything else besides parsley in it, whether I used the roots or only the leaves, or anything. Now that, my friends, is not science, because science is REPLICABLE. Make no mistake: homeopathy is not science. Just like the rest of alternative medicine, homeopathy is a ruthless and dangerous business that is in sore need of lawmakers’ attention, through agencies like the FDA or USDA. And for those who think this is a small paper, totally harmless, no impact, let me tell you that this paper has had over 20,000 views.

I would have oh so much more to rant on. But enough. Rant over.

Oh, not yet. Lastly, I checked a few other papers about arnica, and my answer to the eczema question is: “It’s possible, but no, I don’t think so. I don’t know, really; I couldn’t find any serious study about it and I gave up looking after I found a lot of homeopathic red flags”. The answer I will give my family member? “Not the product you have, no. Go to the doctors, the ones with MDs after their names, and do what they tell you. In addition, I, the one with a PhD after my name, will tell you this for free because you’re family: rub the contents of this bottle on the affected area only once a day – no more! – and you will start seeing improvements in three days. Do not use it elsewhere, it’s quite potent!” Because placebo works, and at least my water vial is poison-free.


Reference: Marzotto M, Bonafini C, Olioso D, Baruzzi A, Bettinetti L, Di Leva F, Galbiati E, & Bellavite P (10 Nov 2016). Arnica montana Stimulates Extracellular Matrix Gene Expression in a Macrophage Cell Line Differentiated to Wound-Healing Phenotype. PLoS One, 11(11):e0166340. PMID: 27832158, PMCID: PMC5104438, DOI: 10.1371/journal.pone.0166340. ABSTRACT | FREE FULLTEXT PDF 

By Neuronicus, 10 June 2017
