The FIRSTS: mRNA from one cell can travel to another cell and be translated there (2006)

I’m interrupting the series on cognitive biases (unskilled-and-unaware, superiority illusion, and depressive realism) to tell you that I admit it, I’m old. -Ish. Well, ok, I’m not that old. But the following paper made me feel that old. Because it invalidates some stuff I thought I knew about molecular cell biology. Mind totally blown.

It all started with a paper freshly published (two days ago) that I’ll cover tomorrow. It’s about what the title says: mRNA can travel between cells packaged nicely in vesicles and, once in a target cell, can be made into protein there. I’ll explain – briefly! – why this is such a mind-blowing thing.

Fig. 1. Illustration of the central dogma of biology: information transfer between DNA, RNA, and protein. Courtesy of Wikipedia, PD

We’ll start with the central dogma of molecular biology (specialists, please bear with me): the DNA is transcribed into RNA and the RNA is translated into protein (see Fig. 1). It is an oversimplification of the complexity of information flow in a biological system, but it’ll do for our purposes.

DNA needs to be transcribed into RNA because RNA is a much more flexible molecule and thus can do many things. So RNA is the traveling mule between DNA and the place where its information becomes protein, i.e. the ribosome. Hence the name mRNA. Just kidding; m stands for messenger RNA (not that I will ever be able to call it that again: muleRNA is stuck in my brain now).

There are many kinds of RNA: some don’t even get out of the nucleus, some are chopped and re-glued (alternative splicing), some decide which bits of DNA (genes) are to be expressed, some are busy housekeepers and so on. Once an RNA has finished its business it is degraded in many inventive ways. It cannot leave the cell because it cannot cross the cell membrane. And that was that. Or so I’ve been taught.

Exceptions from the above were viruses whose ways of going from cell to cell are very clever. A virus is a stretch of nucleic acids (DNA and/or RNA) and some proteins encapsulated in a blob (capsid). Not a cell!

In the ’90s several groups were looking at some blobs (yes, most stuff in biology can be defined by the all-encompassing and enlightening term of ‘blob’) that cells spew out every now and then. These were termed extracellular vesicles (EVs) for obvious reasons. It turned out that many kinds of cells were doing it, and on a much more regular basis than previously thought. The contents of these EVs varied quite a bit based on the type of cells studied. Proteins, mostly, and maybe some cytoplasmic debris. In the ’80s this was thought to be just one way for a cell to get rid of trash. But already in 1982, Stegmayr & Ronquist had shown that prostate cells release EVs that increase sperm cell motility (Raposo & Stoorvogel, 2013), so, clearly, the EVs were more than trash. Soon it became evident that EVs were another way of cell-to-cell communication. (Note to self: the first demonstration of intercellular communication by EVs was in 1982, Stegmayr & Ronquist. Maybe I’ll dig out the paper to cover it sometime).

So. In 2005, Baj-Krzyworzeka et al. (2006; the paper was e-published in late 2005 and printed in 2006) looked at some human cancer cells to see what they spew out and for what purpose. They saw that the cancer cells were transferring some of the tumor proteins, packaged in EVs, to monocytes. For devious purposes, probably. And then they made what looks to me like a serious leap in reasoning: since the EVs contain tumor proteins, why wouldn’t they also contain the mRNA for those proteins? My first answer to that would have been: “because it would be rapidly degraded”. And I would have been wrong. To my credit, if the experiment didn’t take up too many resources I still would have done it, especially if I had some random primers lying around the lab. Luckily for the world, I was not in charge of this particular experiment, and Baj-Krzyworzeka et al. (2006) proceeded with a real-time PCR (polymerase chain reaction), which showed them that the EVs released by the tumor cells also contained mRNA.

Now the million-dollar, stare-in-your-face question was: is this mRNA functional? Meaning, once delivered to the host cell, would it be translated into protein?

Six months later the group answered it. Ratajczak et al. (2006) used embryonic stem cells as the donor cells and hematopoietic progenitor cells as host cells. First, they found out that if you let the donors spit EVs at the hosts, the hosts fare much better (better survival, upregulated good genes, phosphorylated MAPK to induce proliferation, etc.). Next, they looked at the contents of the EVs and found out that they contained proteins and mRNA that promote those good things (Wnt-3 protein, mRNA for transcription factors, etc.). Next, to make sure that the host cells don’t show this enrichment all of a sudden out of the goodness of their little pluripotent hearts, but rather because of the mRNA from the donor cells, the authors looked at the expression of one of the transcription factors (Oct-4) in the hosts. They used as hosts a cell line (SKL) that does not express the pluripotency marker Oct-4. So if the hosts expressed this protein, it must have come only from outside. Lo and behold, they did. This means that the mRNA carried by the EVs is functional (Fig. 2).

Fig. 2. Cell-to-cell mRNA transfer via extracellular vesicles (EVs). DNA is transcribed into RNA. A portion of the RNA is translated into protein and another portion remains untranslated. Both the resultant protein and the mRNA can get packaged into a vesicle: either into a microvesicle (a budding-off of the cell membrane that shuttles cargo back and forth, about 100-300 nm in size) or into a newly formed exosome (<100 nm) inside a multivesicular endosome (the yellow circle). The cell releases these vesicles into the intercellular space. The vesicles dock onto the host cell’s membrane and empty their cargo.

What bugs me is that these papers came out in a period when I was doing some heavy reading. How did I miss this?! Probably because they were published in cancer journals, not my field. But this is big enough that you’d think others would mention it. (If you’re a recurrent reader of my blog, by now you should be familiar with my stream-of-consciousness writing and my admittedly sometimes annoying in-parenthesis meta-cognitions :D). One cannot but wonder what other truly great discoveries are already out there that were missed. Frankly, I should probably be grateful to this blog – and my friend GT who made me do it – because without nosing outside my field in search of material for it I would have probably remained ignorant of this awesome discovery. So, even if this is a decade-old discovery for you, for me it is one day old and I am a bit giddy about it.

This is a big deal because it opens up not a new therapy, or a new therapy direction, or a new drug class, but a new DELIVERY METHOD, the Holy Grail of pharmacopeia. You just put your drug in one of these vesicles and let nature take its course. Of course, there are all sorts of roadblocks to overcome, like specificity, toxicity, etc. It looks like some are already conquered, as there are several clinical trials out there that take advantage of this mechanism, and I bet there will be more.

Stop by tomorrow for a freshly published paper on this mechanism in neurons.


REFERENCES:

1) Baj-Krzyworzeka M, Szatanek R, Weglarczyk K, Baran J, Urbanowicz B, Brański P, Ratajczak MZ, & Zembala M. (Jul. 2006, Epub 9 Nov 2005). Tumour-derived microvesicles carry several surface determinants and mRNA of tumour cells and transfer some of these determinants to monocytes. Cancer Immunology, Immunotherapy, 55(7):808-818. PMID: 16283305, DOI: 10.1007/s00262-005-0075-9. ARTICLE

2) Ratajczak J, Miekus K, Kucia M, Zhang J, Reca R, Dvorak P, & Ratajczak MZ (May 2006). Embryonic stem cell-derived microvesicles reprogram hematopoietic progenitors: evidence for horizontal transfer of mRNA and protein delivery. Leukemia, 20(5):847-856. PMID: 16453000, DOI: 10.1038/sj.leu.2404132. ARTICLE | FULLTEXT PDF 

Bibliography:

Raposo G & Stoorvogel W. (18 Feb. 2013). Extracellular vesicles: exosomes, microvesicles, and friends. The Journal of Cell Biology, 200(4):373-383. PMID: 23420871, PMCID: PMC3575529, DOI: 10.1083/jcb.201211138. ARTICLE | FULLTEXT PDF

By Neuronicus, 13 January 2018


The FIRSTS: the Dunning–Kruger effect (1999) or the unskilled-and-unaware phenomenon

Much talked about these days in the media, the unskilled-and-unaware phenomenon has been mused upon since, as they say, time immemorial, but not actually seriously investigated until the ’80s. The phenomenon refers to the observation that the incompetent overestimate their competence whereas the competent tend to underestimate their skill (see Bertrand Russell’s brilliant summary of it).

[Image: Bertrand Russell’s summary of the phenomenon]

Although the phenomenon has gained popularity under the name of the “Dunning–Kruger effect”, it is my understanding that whereas the phenomenon refers to the above-mentioned observation, the effect refers to the cause of the phenomenon, namely that the skills required to make one proficient in a domain are the very skills that allow one to judge proficiency. In the words of Kruger & Dunning (1999),

“those with limited knowledge in a domain suffer a dual burden: Not only do they reach mistaken conclusions and make regrettable errors, but their incompetence robs them of the ability to realize it” (p. 1132).

Today’s paper on the Dunning–Kruger effect is the third in the cognitive biases series (the first was on depressive realism and the second on the superiority illusion).

Kruger & Dunning (1999) took a look at incompetence with the eyes of well-trained psychologists. As usual, let’s start by defining the terms so we are on the same page. The authors tell us, albeit in a footnote on p. 1122, that:

1) incompetence is a “matter of degree and not one of absolutes. There is no categorical bright line that separates ‘competent’ individuals from ‘incompetent’ ones. Thus, when we speak of ‘incompetent’ individuals we mean people who are less competent than their peers”.

and 2) The study is on domain-specific incompetents. “We make no claim that they would be incompetent in any other domains, although many a colleague has pulled us aside to tell us a tale of a person they know who is ‘domain-general’ incompetent. Those people may exist, but they are not the focus of this research”.

That being clarified, the authors chose 3 domains where they believed “knowledge, wisdom, or savvy was crucial: humor, logical reasoning, and English grammar” (p. 1122). I know that you, just like me, can hardly wait to see how they assessed humor. Hold your horses, we’ll get there.

The subjects were psychology students, the ubiquitous guinea pigs of most psychology studies since the discipline started being taught in universities. Some people in the field even declaim with more or less pathos that most psychological findings do not necessarily apply to the general population; instead, they are restricted to the self-selected group of undergrad psych majors. Just as the biologists know far more about the mouse genome and its maladies than about humans’, so do the psychologists know more about the inner workings of the psychology undergrad’s mind than, say, the average stay-at-home mom’s. But I digress, as usual.

The humor was assessed thusly: students were asked to rate on a scale from 1 to 11 the funniness of 30 jokes. Said jokes had been previously rated by 8 professional comedians, and that provided the reference scale. “Afterward, participants compared their ‘ability to recognize what’s funny’ with that of the average Cornell student by providing a percentile ranking. In this and in all subsequent studies, we explained that percentile rankings could range from 0 (I’m at the very bottom) to 50 (I’m exactly average) to 99 (I’m at the very top)” (p. 1123). Since the social ability to identify humor may be less rigorously amenable to quantification (despite the comedians’ input, which did not achieve high interrater reliability anyway), the authors also chose a task that requires more intellectual muscle: logical reasoning, whose test consisted of 20 logical problems taken from a Law School Admission Test. Afterward the students estimated both their general logical ability compared to their classmates and their test performance. Finally, another batch of students answered 20 grammar questions taken from the National Teacher Examination preparation guide.

In all three tasks,

  • Everybody thought they were above average, showing the superiority illusion.
  • But the people in the bottom quartile (the lowest 25%), dubbed incompetent (or unskilled), overestimated their abilities the most, by approx. 50 percentile points. They were also unaware that, in fact, they had scored the lowest.
  • In contrast, people in the top quartile underestimated their competence, though not to the same degree as the bottom quartile’s overestimation – only by about 10-15 percentile points (see Fig. 1 and the sketch right after this list).
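Here is a minimal sketch, in Python, of the computation behind those bullet points, run on simulated data. Everything in it is invented for illustration (the sample size, the miscalibration model); it is not the paper’s data, just the shape of the analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 320  # illustrative sample size, not the paper's

# Hypothetical data: actual percentile rank vs. self-estimated percentile.
# Self-estimates are compressed toward "above average", per the paper's pattern.
actual = rng.uniform(0, 100, n)
perceived = np.clip(55 + 0.3 * actual + rng.normal(0, 12, n), 0, 99)

# Split subjects into quartiles of actual performance and compare.
quartile = np.digitize(actual, np.percentile(actual, [25, 50, 75]))
for q in range(4):
    m = quartile == q
    print(f"Q{q + 1}: actual {actual[m].mean():5.1f} | perceived {perceived[m].mean():5.1f}"
          f" | miscalibration {(perceived[m] - actual[m]).mean():+6.1f}")
```

Under this made-up model, the bottom quartile comes out overestimating itself by roughly 45-50 percentile points and the top quartile underestimating itself by a few, which is the qualitative pattern of the paper’s figures.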

[Fig. 1. The Dunning–Kruger effect: perceived vs. actual performance, by quartile of actual performance]

I wish the paper showed scatter-plots with a fitted regression line instead of the quartile graphs without error bars, so I could judge the data for myself. I mean, everybody thought they were above average? Not a single one out of more than three hundred students thought s/he was kinda… meh? The authors did not find any gender differences in any of the experiments.

Next, the authors tested the hypothesis about the unskilled that “the same incompetence that leads them to make wrong choices also deprives them of the savvy necessary to recognize competence, be it their own or anyone else’s” (p. 1126). And they did that by having both the competents and the incompetents see the answers that their peers gave on the tests. Indeed, the incompetents not only failed to recognize competence, but they continued to believe they had performed very well in the face of contrary evidence. In contrast, the competents adjusted their ratings after seeing their peers’ performance, so they no longer underestimated themselves. In other words, the competents learned from seeing others’ mistakes, but the incompetents did not.

Based on these data, Kruger & Dunning (1999) argue that the incompetents are so because they lack the skills to recognize competence and error in themselves or others (jargon: lack of metacognitive skills). The competents, on the other hand, underestimate themselves because they assume everybody else does as well as they did; when shown evidence that other people performed poorly, they become accurate in their self-evaluations (jargon: the false-consensus effect, a.k.a. the social-projection error).

So, the obvious implication is: if incompetents learn to recognize competence, does that also translate into them becoming more competent? The last experiment in the paper attempted to answer just that. The authors had 70 students complete a short (10 min) logical-reasoning training session while another 70 students did something unrelated for 10 min. The data showed that the trained students not only improved their self-assessments (still showing the superiority illusion, though), but they also improved their performance. Yeays all around, all is not lost, there is hope left in the world!

This is an extremely easy read. I totally recommend it to non-specialists. Compare Kruger & Dunning (1999) with Pennycook et al. (2017): they talk about the same subject and their authors are redoubtable personages in their fields. But while the former is a pleasant, leisurely read, the latter lacks mundane operationalizations and requires serious familiarity with the literature and its jargon.

Since Kruger & Dunning (1999) is under the paywall of the infamous APA website (infamous because they don’t even let you see the abstract, and even with institutional access it is difficult to extract the papers out of them, as if they own the darn things!), write to me at scientiaportal@gmail.com specifying that you need it for educational purposes and promise not to distribute it for financial gain, and thou shalt have its .pdf. As always. Do not, under any circumstance, use a sci-hub server to obtain this paper illegally! Actually, follow me on Twitter @Neuronicus to find out exactly which servers to avoid.

REFERENCE: Kruger J, & Dunning D. (Dec. 1999). Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6):1121-1134. PMID: 10626367. ARTICLE

P.S. I personally liked this example from the paper for illustrating what lack of metacognitive skills means:

“The skills that enable one to construct a grammatical sentence are the same skills necessary to recognize a grammatical sentence, and thus are the same skills necessary to determine if a grammatical mistake has been made. In short, the same knowledge that underlies the ability to produce correct judgment is also the knowledge that underlies the ability to recognize correct judgment. To lack the former is to be deficient in the latter” (p. 1121-1122).

By Neuronicus, 10 January 2018

The superiority illusion

Following up on my promise to cover a few papers about self-deception, the second in the series is about the superiority illusion, another cognitive bias (the first was about depressive realism).

Yamada et al. (2013) sought to uncover the origins of the ubiquitous belief that oneself is “superior to average people along various dimensions, such as intelligence, cognitive ability, and possession of desirable traits” (p. 4363). The sad statistical truth is that MOST people are average; that’s the whole definition of ‘average’, really… But most people think they are superior to others, a.k.a. the ‘above-average effect’.

Twenty-four young males underwent resting-state fMRI and PET scanning. The first scanner is of the magnetic resonance type and tracks where most of the blood is going in the brain at any particular moment. More blood flow to a region is interpreted as that region being active at that moment.

The word ‘functional’ means that the subject is performing a task while in the scanner, and the resultant brain image corresponds to what the brain is doing at that particular moment in time. On the other hand, ‘resting-state’ means that the individual did not do any task in the scanner; s/he just sat nice and still on the warm pads listening to the various clicks, clacks, bangs & beeps of the scanner. The subjects were instructed to rest with their eyes open. Good instruction, given that many subjects fall asleep in resting-state MRI studies, even in the terrible racket the coils make, which can sometimes reach 125 dB. Let me explain: an MRI machine generates a huge static magnetic field (60,000 times stronger than Earth’s!), and on top of it rapid pulses of electricity are shot through a coiled wire, called the gradient coil. These pulses of electricity or, in other words, the rapid on-off switching of the electrical current, make the gradient coil vibrate very loudly.
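A quick back-of-the-envelope check on that number, assuming Earth’s surface field is roughly 50 μT (my arithmetic, not from the paper):

$$B_{\text{scanner}} \approx 60\,000 \times B_{\text{Earth}} \approx 60\,000 \times 50\ \mu\text{T} = 3\ \text{T},$$

and 3 T is indeed the field strength of a typical modern research scanner.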

A PET scanner functions on a different principle. The subject receives a shot of a radioactive substance (called a tracer) and the machine tracks its movement through the subject’s body. In this experiment’s case, the tracer was raclopride, a D2 dopamine receptor antagonist.

The behavioral data (meaning the answers to the questionnaires) showed that, curiously, the superiority illusion belief was not correlated with anxiety or self-esteem scores but, not so curiously, it was negatively correlated with helplessness, a measure of depression. Makes sense, especially from the viewpoint of depressive realism.

The imaging data suggest that dopamine binding to its striatal D2 receptors attenuates the functional connectivity between the left sensorimotor striatum (SMST, a.k.a. the postcommissural putamen) and the dorsal anterior cingulate cortex (dACC). And this state of affairs gives rise to the superiority illusion (see Fig. 1).
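For the non-imaging folks: ‘functional connectivity’ here is just the correlation between the two regions’ resting-state time courses, and the claim is that this correlation drops as striatal D2 binding goes up. A minimal sketch of that computation on simulated time series (variable names and numbers are mine, not the authors’):

```python
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_timepoints = 24, 200

d2_binding = rng.uniform(1.5, 3.5, n_subjects)  # hypothetical striatal D2 measure
fc = np.empty(n_subjects)

for s in range(n_subjects):
    shared = rng.normal(size=n_timepoints)        # signal common to both regions
    w = 1.0 / d2_binding[s]                       # more D2 binding -> weaker coupling (the claim)
    smst = w * shared + rng.normal(size=n_timepoints)
    dacc = w * shared + rng.normal(size=n_timepoints)
    fc[s] = np.corrcoef(smst, dacc)[0, 1]         # functional connectivity = Pearson r

# Across subjects, connectivity should correlate negatively with D2 binding.
print("r(FC, D2 binding) =", round(np.corrcoef(fc, d2_binding)[0, 1], 2))
```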

Fig. 1. The superiority illusion arises from the suppression of the dorsal anterior cingulate cortex (dACC) – putamen functional connection by the dopamine coming from the substantia nigra/ventral tegmental area complex (SN/VTA) and binding to its D2 striatal receptors. Credits: brain diagram: Wikipedia; other brain structures and connections: Neuronicus; data: Yamada et al. (2013, doi: 10.1073/pnas.1221681110). Overall: Public Domain

This was a frustrating paper. I cannot tell if it has methodological issues or is just poorly written. For instance, I have to assume that the dACC they’re talking about is bilateral and not ipsilateral to their SMST, meaning left. As a non-native English speaker myself, I guess I should cut the authors a break for consistently misspelling ‘commissure’ and for other grammatical errors, for fear of being accused of hypocrisy, but here you have it: it bugged me. Besides, mine is a blog and theirs is a published, peer-reviewed paper. (Full disclosure: I do get editorial help from native English speakers when I publish for real and, except for a few personal style quirks, I fully incorporate their suggestions.) So a little editorial help would have gone a long way toward making the reading more pleasant. What else? Ah, the results are not clearly explained anywhere; it looks like the authors rely on obviousness, a bad move if you want to be understood by people slightly outside your field. From the first figure it looks like only 22 subjects out of 24 showed the superiority illusion, but the authors included 24 in the imaging analyses, or so it seems. The subjects were 23.5 +/- 4.4 years old, meaning that not all subjects had fully developed frontal regions: there are clear anatomical and functional differences between a 19-year-old brain and a 27-year-old one.

I’m not saying it is a bad paper – I have covered genuinely bad papers; I’m saying it was frustrating to read and it took me a while to figure out some things. Honestly, I shouldn’t even have covered it, but I had spent some precious time going through it and its supplementals, what with me not being an imaging dude, so I said the hell with it, I’ll finish it; so here you have it :).

By Neuronicus, 13 December 2017

REFERENCE: Yamada M, Uddin LQ, Takahashi H, Kimura Y, Takahata K, Kousa R, Ikoma Y, Eguchi Y, Takano H, Ito H, Higuchi M, Suhara T (12 Mar 2013). Superiority illusion arises from resting-state brain networks modulated by dopamine. Proceedings of the National Academy of Sciences of the United States of America, 110(11):4363-4367. doi: 10.1073/pnas.1221681110. ARTICLE | FREE FULLTEXT PDF 

The FIRSTS: The roots of depressive realism (1979)

There is a rumor stating that depressed people see the world more realistically and the rest of us are – to put it bluntly – deluded optimists. A friend of mine asked me if this is true. It took me a while to find the origins of this claim, but after I found it and figured out that the literature has a term for the phenomenon (‘depressive realism’), I realized that there is a whole plethora of studies on the subject. So the next few posts will be centered, more or less, on the idea of self-deception.

It was 1979 when Alloy & Abramson published a paper whose title contained the phrase ‘Sadder but Wiser’, even if it was followed by a question mark. The experiments they conducted are simple, but the theoretical implications are large.

The authors divided several dozen male and female undergraduate students into a depressed group and a non-depressed group based on their Beck Depression Inventory scores (a widely used and validated questionnaire for self-assessing depression). Each subject “made one of two possible responses (pressing a button or not pressing a button) and received one of two possible outcomes (a green light or no green light)” (p. 447). Various conditions presented the subjects with various degrees of control over what the button does, from 0 to 100%. After the experiments, the subjects were asked to estimate their control over the green light, how many times the light came on regardless of their behavior, what percentage of trials the green light came on when they pressed or didn’t press the button, respectively, and how they felt. In some experiments, the subjects were winning or losing money when the green light came on.
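In this paradigm, the objective ‘degree of control’ is standardly defined as the difference between two conditional probabilities: how often the outcome follows a response versus a non-response. A toy Python version, on simulated trials (my code and numbers, not the authors’ data):

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 100

pressed = rng.random(n_trials) < 0.5   # subject presses on about half the trials
# A 25%-control condition: the light is likelier after a press, but not guaranteed.
light = np.where(pressed,
                 rng.random(n_trials) < 0.75,
                 rng.random(n_trials) < 0.50)

p_press = light[pressed].mean()        # P(light | press)
p_no_press = light[~pressed].mean()    # P(light | no press)
control = p_press - p_no_press         # "delta P", the objective contingency

print(f"P(light|press) = {p_press:.2f}, P(light|no press) = {p_no_press:.2f}, "
      f"objective control = {control:.2f}")
```

The subjects’ verbal estimates of control are then compared against this objective value; the depressed subjects’ estimates tracked it, the nondepressed subjects’ did not.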

Verbatim, the findings were that:

“Depressed students’ judgments of contingency were surprisingly accurate in all four experiments. Nondepressed students, on the other hand, overestimated the degree of contingency between their responses and outcomes when noncontingent outcomes were frequent and/or desired and underestimated the degree of contingency when contingent outcomes were undesired” (p. 441).

In plain English, it means that if you are not depressed, when you have some control and bad things are happening, you believe you have no control; and when you have no control but good things are happening, you believe you have control. If you are depressed, it does not matter: you judge your level of control accurately, regardless of the valence of the outcome.

Such an illusion of control is a defensive mechanism that surely must have adaptive value by, for example, allowing the non-depressed to bypass a sense of guilt when things don’t work out and to increase self-esteem when they do. This is fascinating, particularly since it is corroborated by findings that people receiving gambling wins or life successes like landing a good job – rewards that at least in one case are demonstrably attributable to chance – believe, nonetheless, that it is due to some personal attributes that make them special, that make them deserving of such rewards. (I don’t remember the reference for this one, so don’t quote me on it. If I find it, I’ll post it; it’s something about self-entitlement, I think.) That is not to say that life successes are not largely attributable to the individual; they are. But, statistically speaking, there must be some that are due to chance alone, and yet most people feel like they are the direct agents for changes in luck.

Another interesting point is that Alloy & Abramson also tried to figure out how exactly their subjects reasoned when they asserted their level of control, through some clever post-experiment questionnaires. Long story short (the paper is 45 pages long), the illusion of control shown by nondepressed subjects in the no-control condition was the result of incorrect logic, that is, faulty reasoning.

In summary, the distilled-down version of depressive realism – that non-depressed people see the world through rose-colored glasses – is slightly incorrect. The illusion of control applies only in particular conditions: overestimation of control when good things are happening and underestimation of control when bad things are happening.

Of course, it has been nearly 40 years since the publication of this paper and of course it has its flaws. Many replications and replications-with-caveats and meta-analyses and reviews and opinions and alternative hypotheses have been confirmed and infirmed and then confirmed again with alterations, so there is still a debate out there about the causes/functions/ubiquity/circumstantiality of the depressive realism effect. One thing seems to be constant though: the effect exists.

I will leave you with the ponderings of Alloy & Abramson (1979):

“A crucial question is whether depression itself leads people to be “realistic” or whether realistic people are more vulnerable to depression than other people” (p. 480).


REFERENCE: Alloy LB, & Abramson LY (Dec. 1979). Judgment of contingency in depressed and nondepressed students: sadder but wiser? Journal of Experimental Psychology: General, 108(4): 441-485. PMID: 528910. http://dx.doi.org/10.1037/0096-3445.108.4.441. ARTICLE | FULLTEXT PDF via ResearchGate

By Neuronicus, 30 November 2017

Play-based or academic-intensive?

The title of today’s post wouldn’t make any sense to anybody who isn’t a preschooler’s parent or teacher in the USA. You see, on the west side of the Atlantic there is a debate on whether a play-based curriculum for preschool is more advantageous than a more academic-based one. Preschool age is 3 to 4 years; kindergarten starts at 5.

So what does academia even look like for someone who hasn’t yet mastered the skill of wiping their own behind? I’m glad you asked. Roughly, an academic preschool program is one that emphasizes math concepts and early literacy, whereas a play-based program focuses less, or not at all, on these activities; instead, the children are allowed to play together in big or small groups or separately. The first kind of program has been linked with stronger cognitive benefits, while the latter with nurturing social development. The supporters of each program accuse the other of neglecting one or the other aspect of the child’s development, namely cognitive or social.

The paper that I am covering today says that it “does not speak to the wider debate over learning-through-play or the direct instruction of young children. We do directly test whether greater classroom time spent on academic-oriented activities yield gains in both developmental domains” (Fuller et al., 2017, p. 2). I’ll let you be the judge.

Fuller et al. (2017) assessed the cognitive and social benefits of different programs in an impressive cohort of over 6,000 preschoolers. The authors looked at many variables (the sketch after this list shows how the cutoffs translate into group labels):

  • children who attended any form of preschool and children who stayed home;
  • children who received more (high dosage, defined as >20 hours/week) and less preschool education (low dosage, defined as <20 hours/week);
  • children who attended academic-oriented preschools (spending time at least 3–4 times a week on each of the following tasks: letter names, writing, phonics and counting manipulatives) and non-academic preschools.
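If it helps to see the grouping as code, here is a sketch of how those cutoffs could translate into group labels. The column names and values are invented; the study’s actual dataset and coding are more involved:

```python
import pandas as pd

# Invented example records; the real study coded thousands of children.
df = pd.DataFrame({
    "hours_per_week": [25, 10, 0, 30],
    "academic_days_per_week": [4, 1, 0, 3],  # days/week on letters, writing, phonics, counting
})

df["dosage"] = pd.cut(df["hours_per_week"], bins=[-1, 0, 20, 168],
                      labels=["none", "low", "high"])
df["academic_oriented"] = df["academic_days_per_week"] >= 3  # the 3-4 times/week criterion
print(df)
```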

The authors employed a battery of tests to assess the children’s preliteracy skills, math skills and socioemotional status (i.e. the dependent variables). And then they conducted a lot of statistical analyses in the true spirit of well-trained psychologists.

The main findings were:

1) “Preschool exposure [of any form] has a significant positive effect on children’s math and preliteracy scores” (p. 6).

2) The earlier the child entered preschool, the stronger the cognitive benefits.

3) Children attending high-dose academic-oriented preschools displayed greater cognitive proficiencies than all the other children (for the actual numbers, see Table 7, pg. 9).

4) “Academic-oriented preschool yields benefits that persist into the kindergarten year, and at notably higher magnitudes than previously detected” (p. 10).

5) Children attending academic-oriented preschools displayed no social-development disadvantages compared with children who attended low- or non-academic preschool programs. Nor did the non-academic-oriented preschools show an improvement in social development (except for Latino children).

Now, do you think that Fuller et al. (2017) gave you any more information in the play vs. academic debate, given that their “findings show that greater time spent on academic content – focused on oral language, preliteracy skills, and math concepts – contributes to the early learning of the average child at magnitudes higher than previously estimated” (p. 10)? And remember that they did not find any significant social advantages or disadvantages for any type of preschool.

I realize (or hope, rather) that most pre-k teachers are not the Draconian thou-shall-not-play-do-worksheets type, nor are they the let-kids-play-for-three-hours-while-the-adults-gossip-in-a-corner types. Most are probably combining elements of learning-through-play and directed-instruction in their programs. Nevertheless, there are (still) programs and pre-k teachers that clearly state that they employ play-based or academic-based programs, emphasizing the benefits of one while vilifying the other. But – surprise, surprise! – you can do both. And, it turns out, a little academia goes a long way.


So, next time you choose a preschool for your kid, go with the data, not with what your mommy/daddy gut instinct says, and certainly be very wary of preschool officials who, when you ask them for data to support their curriculum choice, tell you that that’s their ‘philosophy’ and they don’t need data. Because, boy oh boy, I know what philosophy means and it ain’t that.

By Neuronicus, 12 October 2017

Reference: Fuller B, Bein E, Bridges M, Kim, Y, & Rabe-Hesketh, S. (Sept. 2017). Do academic preschools yield stronger benefits? Cognitive emphasis, dosage, and early learning. Journal of Applied Developmental Psychology, 52: 1-11, doi: 10.1016/j.appdev.2017.05.001. ARTICLE | New York Times cover | Reading Rockets cover (offers a fulltext pdf) | Good cover and interview with the first author on qz.com

Pic of the day: Skunky beer


REFERENCE: Burns CS, Heyerick A, De Keukeleire D, Forbes MD. (5 Nov 2001). Mechanism for formation of the lightstruck flavor in beer revealed by time-resolved electron paramagnetic resonance. Chemistry – The European Journal, 7(21): 4553-4561. PMID: 11757646, DOI: 10.1002/1521-3765(20011105)7:21<4553::AID-CHEM4553>3.0.CO;2-0. ABSTRACT

By Neuronicus, 12 July 2017

The FIRSTS: Increase in CO2 levels in the atmosphere results in global warming (1896)

Few people seem to know that, although global warming and climate change are hotly debated topics right now (at least on the left side of the Atlantic), the effect of CO2 levels on the planet’s surface temperature was investigated and calculated more than a century ago. CO2 is one of the greenhouse gases responsible for the greenhouse effect, which was discovered by Joseph Fourier in 1824 (the effect, that is).

Let’s start with a terminology clarification. Whereas the term ‘global warming’ was coined by Wallace S. Broecker in 1975, the term ‘climate change’ underwent a more fluid transformation in the ’70s, from ‘inadvertent climate modification’ to ‘climatic change’ to a more consistent use of ‘climate change’ by Jule Charney in 1979, according to NASA. The same source tells us:

“Global warming refers to surface temperature increases, while climate change includes global warming and everything else that increasing greenhouse gas amounts will affect”.

But before NASA there was one Svante August Arrhenius (1859–1927). Dr. Arrhenius was a Swedish physical chemist who received the Nobel Prize in 1903 for uncovering the role of ions in how electrical current is conducted in chemical solutions.

S.A. Arrhenius was the first to quantify the variations of our planet’s surface temperature as a direct result of the amount of CO2 (which he calls carbonic acid; long story) present in the atmosphere. For those – admittedly few – nitpickers who say his views on the greenhouse effect were somewhat simplistic and his calculations were incorrect, I’d say cut him a break: he didn’t have the incredible amount of data provided by satellites or computers, nor the work of thousands of scientists over a century to back him up. Which they do. Kind of. Well, the idea, anyway, not the math. Well, some of the math. Let me explain.

First, let me tell you that I haven’t managed to get past page 3 of the 39 pages of creative mathematics, densely packed tables, parameter assignments, and convoluted assumptions of Arrhenius (1896). Luckily, I convinced a spectroscopist to take a crack at the original paper, since there is a lot of spectroscopy in it, and then enlighten me.

The photo was taken in 1887 and shows (standing, from the left): Walther Nernst (Nobel in Chemistry), Heinrich Streintz, Svante Arrhenius, Richard Hiecke; (sitting, from the left): Eduard Aulinger, Albert von Ettingshausen, Ludwig Boltzmann, Ignaz Klemenčič, Victor Hausmanninger. Source: Universität Graz. License: PD via Wikimedia Commons.

Second, despite his many accomplishments, including being credited with laying the foundations of a new field (physical chemistry), Arrhenius was first and foremost a mathematician. So he employed a lot of tedious mathematics (by hand!), together with some hefty guessing and what was known at the time about Earth’s infrared radiation, solar radiation, water vapor and CO2 absorption, the temperature of the Moon, the greenhouse effect, and some uncalibrated spectra taken by his predecessors, to figure out whether “the mean temperature of the ground [was] in any way influenced by the presence of the heat-absorbing gases in the atmosphere” (p. 237). Why was he interested in this? We find out only at page 267, after a lot of the aforesaid dreary mathematics, where he finally shares this with us:

“I [should] certainly not have undertaken these tedious calculations if an extraordinary interest had not been connected with them. In the Physical Society of Stockholm there have been occasionally very lively discussions on the probable causes of the Ice Age”.

So Arrhenius was interested in finding out whether the fluctuations of CO2 levels could have caused the Ice Ages. And yes, he thinks that could have happened. I don’t know enough about climate science to tell you if this particular conclusion of his holds up today. But what he did manage to accomplish was to provide, for the first time, a way to mathematically calculate the rise in temperature due to the rise of CO2 levels. In other words, he found a direct relationship between the variations of CO2 and temperature.

Today, it turns out that his math was incorrect, because he left out some other variables that influence the global temperature and that were discovered and/or understood later (like the thickness of the atmosphere, the rate of ocean absorption of CO2, and others which I won’t pretend I understand). Nevertheless, Arrhenius was the first to point out the following relationship, which, by and large, is still relevant today:

“Thus if the quantity of carbonic acid increased in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression” (p. 267).
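In modern notation (mine, not his), that sentence is a logarithmic law:

$$\Delta T = \alpha \,\ln\!\left(\frac{C}{C_0}\right),$$

where $C_0$ is a reference CO2 concentration and $\alpha$ a sensitivity constant. Geometric growth in $C$ (doubling, quadrupling…) then yields arithmetic growth in temperature, each doubling adding the same increment $\alpha \ln 2$ – which is why climate sensitivity is still quoted ‘per doubling of CO2’ today.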


P.S. Technically, Joseph Fourier should be credited with the discovery of global warming by means of increasing levels of greenhouse gases in the atmosphere in 1824, but Arrhenius quantified it, so I credited him. Feel free to debate :).

REFERENCE: Arrhenius, S. (April 1896). XXXI. On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science (Fifth Series), 49 (251): 237-276. General Reference P.P.1433. doi: http://dx.doi.org/10.1080/14786449608620846. FREE FULLTEXT PDF

By Neuronicus, 24 June 2017

Arnica and a scientist’s frustrations

When you’re the only scientist in the family you get asked the weirdest things. Actually, I’m not the only one, but the other one is a chemist and he’s mostly asked about astrophysics stuff, so he doesn’t really count, because I am the one who gets asked about rare diseases and medication side-effects and food advice. Never mind that I am a neuroscientist and have professed repeatedly and quite loudly my minimal knowledge of everything from the neck down; all eyes turn to me when the new arthritis medication or the unexpected side-effects of that heart drug are brought up. But, curiously, if I dare speak about brain stuff I get the looks that a thing the cat just dragged in gets. I guess everybody is an expert on how the brain works on account of having and using one, apparently. Everybody but the actual neuroscience expert, whose input on brain and behavior is to be tolerated and taken with a grain of salt at best, but whose opinion on stomach distress is of the utmost importance and must be listened to reverentially in utter silence [eyes roll].

So this is the background on which the following question was sprung on me: “Is arnica good for eczema?”. As always, being caught unawares by the sheer diversity of interests and afflictions my family and friends can have, I mumbled something about not knowing what arnica is and said I would look it up.

This is an account of how I looked it up and what conclusions I arrived at; or, how a scientist tries to figure out something completely outside his or her field. The first thing I did was to go on Wikipedia. Hold your horses, it was not for scientific information but as a first clarification step: is it a chemical, a drug, an insect, a plant maybe? I used to encourage my students to also use Wikipedia when they don’t have a clue what a word/concept/thing is. Kind of like a dictionary or a paper encyclopedia, if you will. To have a starting point. As a matter of fact, Wikipedia is an online encyclopedia, right? Anyway, I found out that Arnica is a plant genus out of which one species, Arnica montana, seems to be popular.

Then I went to the library. Luckily for me, the library can be accessed online, from the comfort of my home and in my favorite pajamas, in the incarnation of PubMed, or Medline as it used to be affectionately called. It is the US National Library of Medicine, maintained by the National Institutes of Health, a wonderful repository of scholarly papers (yeah, Google Scholar compared to PubMed is like the babbling of a two-year-old compared to Shakespearean sonnets; Google also has an agenda, which you won’t find on PubMed). Useful tip: when you look for a paper that is behind a paywall at Nature or Elsevier journals or elsewhere, check PubMed too, because very few people seem to know that there is an obscure and incredibly helpful law saying that research paid for by the US taxpayers should be available to the US taxpayer. A very sensible law, passed only a few years ago, that has the delightful effect of providing FREE full-text access to papers after a certain number of months from publishing (look for the PMC icon in the upper right corner).

I searched for “arnica” and got almost 400 results. I sorted by “most recent”. The third hit was a review. I skimmed it and it seemed to talk a lot about healing in homeopathy, at which point, naturally, I got a gloomy foreboding. But I persevered, because one data point does not a trend make. Meaning that you need more than one paper – or a handful – to form an informed opinion. This line of thinking was rewarded by hit No. 14 in the search, which had an interesting title in the sense that it was the first to hint at a mechanism through which this plant was having some effects. Mechanisms are important; they allow you to differentiate speculation from findings, so I always prefer papers that try to answer a “How?” question as opposed to the other kinds: whys are almost always speculative, as they have a whiff of post factum rationalization; whats are curious observations, but, more often than not, a myriad of factors can account for them; whens are an interesting hybrid between the whats and the hows – all interesting reads, but for different purposes. Here is a hint: you want to publish in Nature or Science? Design an experiment that answers all the questions. Gone are the days when answering one question was enough to publish…

Digressions aside, the paper I am covering today sounds like a mechanism paper. Marzotto et al. (2016) cultured a particular line of human cells in a Petri dish to test the healing powers of Arnica montana. The experimental design seems simple enough: the control culture gets nothing and the experimental culture gets Arnica montana. Then the authors check to see if there are differences in gene expression between the two groups.

The authors applied different doses of Arnica montana to the cultures to see if the effects are dose-dependent. The doses used were… wait, bear with me, I’m not familiar with the system, it’s not metric. In the Methods, the authors say

Arnica m. was produced by Boiron Laboratoires (Lyon, France) according to the French Homeopathic pharmacopoeia and provided as a first centesimal dilution (Arnica m. 1c) of the hydroalcoholic extract (Mother Tincture, MT) in 30% ethanol/distilled water”.

Wait, what?! Centesimal… centesimal… wasn’t that the nothing-in-it scale from the pseudoscientific bull called homeopathy? Maybe I’m wrong, maybe there are some other uses for it and it becomes clear later:

Arnica m. 1c was used to prepare the second centesimal dilution (Arnica m. 2c) by adding 50μl of 1c solution to 4.95ml of distilled ultra-pure water. Therefore, 2c corresponds to 10−4 of the MT”.
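Let’s unpack that arithmetic (mine; each centesimal step is a 1:100 dilution):

$$\frac{50\ \mu\text{l}}{50\ \mu\text{l} + 4950\ \mu\text{l}} = 10^{-2}, \qquad \underbrace{10^{-2}}_{\text{this step}} \times \underbrace{10^{-2}}_{\text{the 1c they started from}} = 10^{-4}\ \text{of the mother tincture.}$$

So by the second step they are already down to one part in ten thousand of a ‘mother tincture’ whose own strength is never specified.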

Holy Mother of God, this is worse than gibberish; this is voluntary misdirection, crap wrapped up in glitter, medieval tinkering sold as state-of-the-art 21st-century science. Speaking of state-of-the-art, the authors submit their “doses” to a liquid chromatograph, a thin-layer chromatograph, a double-beam spectrophotometer, and a nanoparticle tracking analysis (?!), for what purposes I cannot fathom. Oh, no, I can: to sound science-y. To give credibility to the incredulous. To make money.

At which point I stopped reading the ridiculous nonsense and took a closer look at the authors and got hit with this:

“Competing Interests: The authors have declared that no competing interests exist. This study was funded by Boiron Laboratoires Lyon with a research agreement in partnership with University of Verona. There are no patents, products in development or marketed products to declare. This does not alter our adherence to all the PLOS ONE policies on sharing data and materials, as detailed online in the guide for authors.”

No competing interests?? The biggest manufacturer of homeopathic crap in the world pays you to see if their product works and you have no competing interest? Maybe no other competing interests. There were some comments and replies to this paper after that, but it is all inconsequential, because once you have faulty methods your results are irrelevant. Besides, the comments are from the same university; it could be some internal feuding.

PLoS One, what have you done? You’re a peer-reviewed open access journal! What “peers” reviewed this paper and gave their ok for publication? Since when is homeopathy science?! What am I going to find that you publish next? Astrology? For shame… Give me that editor’s job because I am certain I can do better.

To wrap it up and tell you why I am so mad: the homeopathic scale system, that centesimal gibberish, is just that: gibberish. It is impossible to replicate this experiment without the product marketed by Boiron, because nobody knows how much of the plant is in the dose, which parts of the plant, what kind of extract, or what concentration. So it’s like me handing you my special potion and telling you it makes warts disappear because it has parsley in it. But I don’t tell you my recipe: how much parsley, whether there is anything else besides parsley in it, whether I used the roots or only the leaves, or anything. Now that, my friends, is not science, because science is REPLICABLE. Make no mistake: homeopathy is not science. Just like the rest of alternative medicine, homeopathy is a ruthless and dangerous business that is in sore need of lawmakers’ attention, like the FDA or USDA. And for those who think this is a small paper, totally harmless, no impact, let me tell you that this paper had over 20,000 views.

I have oh so much more to rant about. But enough. Rant over.

Oh, not yet. Lastly, I checked a few other papers about arnica, and my answer to the eczema question is: “It’s possible, but no, I don’t think so. I don’t really know; I couldn’t find any serious study about it and I gave up looking after I found a lot of homeopathic red flags”. The answer I will give my family member? “Not the product you have, no. Go to the doctors, the ones with MD after their name, and do what they tell you. In addition, I, the one with a PhD after my name, will tell you this for free because you’re family: rub the contents of this bottle only once a day – no more! – on the affected area and you will start seeing improvements in three days. Do not use elsewhere, it’s quite potent!” Because placebo works and at least my water vial is poison-free.


Reference: Marzotto M, Bonafini C, Olioso D, Baruzzi A, Bettinetti L, Di Leva F, Galbiati E, & Bellavite P (10 Nov 2016). Arnica montana Stimulates Extracellular Matrix Gene Expression in a Macrophage Cell Line Differentiated to Wound-Healing Phenotype. PLoS One, 11(11):e0166340. PMID: 27832158, PMCID: PMC5104438, DOI: 10.1371/journal.pone.0166340. ABSTRACT | FREE FULLTEXT PDF 

By Neuronicus, 10 June 2017


Vanity and passion fruit

Ultraviolet radiation exposure from our sun accelerates skin aging, a process called photoaging. It can even cause skin cancers. There has been considerable research on how our beloved sun does that.

For example, one way UV radiation leads to skin damage is by promoting the production of free radicals such as reactive oxygen species (ROS), which do many bad things, like direct DNA damage. Another bad thing done by ROS is the upregulation of the mitogen-activated protein kinase (MAPK) signaling pathway, which activates all sorts of transcription factors, which, in turn, produce proteins that lead to collagen degradation and, voilà, aged skin. I know I lost some of you at the MAPK point; you can think of MAPK as a massive proteinaceous hub, a multi-button console with many inputs and outputs. A very sensitive and incredibly complex hub that controls nearly all important aspects of cell function, with many feedback loops, so if you mess with it, cell Armageddon may happen. Or nothing at all. It’s that complex.

But I digress. What MAPK is doing is less relevant for the paper I am introducing to you today than the fact that we have physiological markers for skin aging due to UV. Bravo et al. (2017) cultured human skin cells in a Petri dish, treated them with various concentrations of an extract of passion fruit (Passiflora tarminiana), and then bombarded them with UV (the B type, 280–315 nm). The authors made the extract themselves; it’s not something you can just buy (yet).

The UV produced the expected damage, translated as increased matrix metalloproteinase-1 (MMP-1), collagenase, and ROS production and decreased procollagen. Pretreatment with the passion fruit extract significantly mitigated these UV effects in a dose-dependent manner. The concentration of their concoction that worked best was 10 μg/mL. Then the authors did some more chemistry to figure out what in their concoction is responsible, or at least probably responsible, for the observed wonderful effects. The authors believe the procyanidins and flavonoids are the culprits because 1) they have been proven to be strong antioxidants before and 2) this plant has them in very high amounts.
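‘Dose-dependent’ just means the size of the protection tracks the concentration. A sketch of how one would check that on this kind of data, with invented numbers merely in the ballpark of the paper’s concentrations (not their measurements):

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical extract concentrations (ug/mL) and post-UVB MMP-1 levels,
# expressed as % of the untreated irradiated control. All numbers invented.
dose = np.array([0.0, 1.0, 2.5, 5.0, 10.0])
mmp1 = np.array([100.0, 92.0, 80.0, 65.0, 48.0])

# A simple test of dose dependence: does MMP-1 decline with (log-)dose?
res = linregress(np.log10(dose + 1), mmp1)
print(f"slope = {res.slope:.1f} per log10(dose+1), "
      f"r^2 = {res.rvalue ** 2:.2f}, p = {res.pvalue:.3f}")
```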

Good news then for the antiaging cosmetics industry. Perhaps even for dermatologists and their patients.


Reference: Bravo K, Duque L, Ferreres F, Moreno DA, & Osorio E. (EPUB ahead of print: 3 Feb 2017). Passiflora tarminiana fruits reduce UVB-induced photoaging in human skin fibroblasts. Journal of Photochemistry and Photobiology, 168: 78-88. PMID: 28189068, DOI: 10.1016/j.jphotobiol.2017.01.023. ARTICLE

By Neuronicus, 13 February 2017


Aging and its 11 hippocampal genes

Aging is being quite extensively studied these days, and here is another advance in the field. Pardo et al. (2017) looked at what happens in the hippocampus of 2-month-old (young) and 28-month-old (old) female rats. The hippocampus is a seahorse-shaped structure, no more than 7 cm in length and 4 g in weight, situated at the level of your temples, deep in the brain, and absolutely necessary for memory.

First, the researchers tested the rats in a classical maze test (the Barnes maze) designed to assess their spatial memory performance. Not surprisingly, the old performed worse than the young.

Then they dissected the hippocampi and looked at neurogenesis, and they saw that the young rats had more newborn neurons than the old. Also, the old rats had more reactive microglia, a sign of inflammation. Microglia are small cells in the brain that are not neurons but serve very important functions.

After that, the researchers looked at the hippocampal transcriptome, meaning they looked at which genes are being expressed there (I know, transcription is not translation, but the general assumption of transcriptome studies is that the amount of protein X corresponds to the amount of RNA X). They found 210 genes that were differentially expressed in the old: 81 were upregulated and 129 were downregulated. Most of these genes are to be found in humans too, 170 to be exact.
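For the curious, ‘differentially expressed’ is the output of a fairly standard computation: compare each gene’s expression between the age groups and keep the genes that pass significance and fold-change cutoffs. A bare-bones illustration on simulated data (real analyses, including this paper’s, use dedicated pipelines and multiple-comparison corrections; this only shows the shape of the computation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_genes, n_young, n_old = 1000, 5, 5

# Simulated log2 expression values; plant an age effect in the first 120 genes.
young = rng.normal(8, 1, size=(n_genes, n_young))
old = rng.normal(8, 1, size=(n_genes, n_old))
old[:50] += 1.5     # upregulated with age
old[50:120] -= 1.5  # downregulated with age

_, p = stats.ttest_ind(old, young, axis=1)
log2_fc = old.mean(axis=1) - young.mean(axis=1)

de = (p < 0.05) & (np.abs(log2_fc) > 1)  # naive cutoffs, illustration only
print(f"{de.sum()} 'differentially expressed' genes: "
      f"{(de & (log2_fc > 0)).sum()} up, {(de & (log2_fc < 0)).sum()} down")
```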

But after comparing male and female data, as well as human and mouse aging data, the authors came up with 11 genes that are deregulated (7 up- and 4 down-) in the aging hippocampus, regardless of species or gender. These genes are involved in the immune response to inflammation. In more detail: the immune system activates microglia, which stay activated, and this “prolonged microglial activation leads to the release of pro-inflammatory cytokines that exacerbate neuroinflammation, contributing to neuronal loss and impairment of cognitive function” (p. 17). Moreover, these 11 genes have been associated with neurodegenerative diseases and brain cancers.


These are the 11 genes: C3 (up), Cd74 (up), Cd4 (up), Gpr183 (up), Clec7a (up), Gpr34 (down), Gapt (down), Itgam (down), Itgb2 (up), Tyrobp (up), Pld4 (down). “Up” and “down” indicate the direction of deregulation: upregulation or downregulation.

I wish the above sentence had been stated as explicitly in the paper as I wrote it, so I didn’t have to comb through their supplemental Excel files to figure it out. Other than that, good paper, good work. It gets us closer to unraveling, and maybe undoing, some of the burdens of aging because, as the actress Bette Davis said, “growing old isn’t for sissies”.

Reference: Pardo J, Abba MC, Lacunza E, Francelle L, Morel GR, Outeiro TF, Goya RG. (13 Jan 2017, Epub ahead of print). Identification of a conserved gene signature associated with an exacerbated inflammatory environment in the hippocampus of aging rats. Hippocampus, doi: 10.1002/hipo.22703. ARTICLE

By Neuronicus, 25 January 2017
