The FIRSTS: mRNA from one cell can travel to another cell and be translated there (2006)

I’m interrupting the series on cognitive biases (unskilled-and-unaware, superiority illusion, and depressive realism) to tell you that I admit it, I’m old. -Ish. Well, ok, I’m not that old. But the following paper made me feel that old. Because it invalidates some stuff I thought I knew about molecular cell biology. Mind totally blown.

It all started with a paper freshly published (two days ago) that I’ll cover tomorrow. It’s about what the title says: mRNA can travel between cells packaged nicely in vesicles and, once in a target cell, can be made into protein there. I’ll explain – briefly! – why this is such a mind-blowing thing.

Fig. 1. Illustration of the central dogma of biology: information transfer between DNA, RNA, and protein. Courtesy of Wikipedia, PD

We’ll start with the central dogma of molecular biology (specialists, please bear with me): the DNA is transcribed into RNA and the RNA is translated into protein (see Fig. 1). It is an oversimplification of the complexity of information flow in a biological system, but it’ll do for our purposes.

DNA needs to be transcribed into RNA because RNA is a much more flexible molecule and thus can do many things. So RNA is the traveling mule between DNA and the place where its information becomes protein, i.e. the ribosome. Hence the name mRNA. Just kidding; the m stands for messenger RNA (not that I will ever be able to call it that again: muleRNA is stuck in my brain now).

There are many kinds of RNA: some don’t even get out of the nucleus, some are chopped and re-glued (alternative splicing), some decide which bits of DNA (genes) get expressed, some are busy housekeepers and so on. Once an RNA has finished its business it is degraded in many inventive ways. It cannot leave the cell because it cannot cross the cell membrane. And that was that. Or so I’ve been taught.

The exceptions to the above were viruses, whose ways of going from cell to cell are very clever. A virus is a stretch of nucleic acids (DNA and/or RNA) and some proteins encapsulated in a blob (capsid). Not a cell!

In the ’90s several groups were looking at some blobs (yes, most stuff in biology can be defined by the all-encompassing and enlightening term of ‘blob’) that cells spew out every now and then. These were termed extracellular vesicles (EVs) for obvious reasons. It turned out that many kinds of cells were doing it, and on a much more regular basis than previously thought. The contents of these EVs varied quite a bit, based on the type of cells studied. Proteins, mostly, and maybe some cytoplasmic debris. In the ’80s it was thought that this was one way for a cell to get rid of trash. But in 1982, Stegmayr & Ronquist showed that prostate cells release EVs that increase sperm cell motility (Raposo & Stoorvogel, 2013), so, clearly, the EVs were more than trash. Soon it became evident that EVs were another way of cell-to-cell communication. (Note to self: the first time intercellular communication by EVs was demonstrated was in 1982, by Stegmayr & Ronquist. Maybe I’ll dig out the paper to cover it sometime.)

So. In 2005, Baj-Krzyworzeka et al. (2006) looked at some human cancer cells to see what they spew out and for what purpose. They saw that the cancer cells were transferring some of the tumor proteins, packaged in EVs, to monocytes. For devious purposes, probably. And then they made what looks to me like a serious leap in reasoning: since the EVs contain tumor proteins, why wouldn’t they also contain the mRNA for those proteins? My first answer to that would have been: “because it would be rapidly degraded”. And I would have been wrong. To my credit, if the experiment didn’t take up too many resources I still would have done it, especially if I had some random primers lying around the lab. Luckily for the world, I was not in charge of this particular experiment, and Baj-Krzyworzeka et al. (2006) proceeded with real-time PCR (polymerase chain reaction), which showed that the EVs released by the tumor cells also contained mRNA.
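For those who haven’t run one: the logic of a real-time PCR readout is to reverse-transcribe the RNA, amplify a target with gene-specific primers, and compare cycle-threshold (Ct) values against a reference gene and a control sample. Below is a minimal, hypothetical sketch of the standard 2^-ddCt arithmetic; the Ct numbers are invented and this is not the authors’ actual pipeline, just the textbook way one turns Ct values into “yes, the transcript is there, and enriched”.

```python
# Hypothetical sketch: relative abundance of a transcript from real-time PCR
# cycle-threshold (Ct) values using the standard 2^-ddCt approximation.
# All numbers below are made up for illustration; they are not from the paper.

def delta_delta_ct(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Fold change of a target transcript, sample (e.g., tumor-derived EVs)
    vs control (e.g., parent cells), each normalized to a reference gene."""
    d_sample = ct_target_sample - ct_ref_sample      # normalize to reference gene
    d_control = ct_target_control - ct_ref_control
    ddct = d_sample - d_control
    return 2 ** (-ddct)                              # fold change

# Toy numbers only:
fold = delta_delta_ct(ct_target_sample=24.1, ct_ref_sample=18.0,
                      ct_target_control=27.5, ct_ref_control=18.2)
print(f"~{fold:.1f}-fold relative abundance of the tumor transcript in EV RNA")
```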

Now the million-dollar, stare-you-in-the-face question was: is this mRNA functional? Meaning, once delivered to the host cell, would it be translated into protein?

Six months later the group answered it. Ratajczak et al. (2006) used embryonic stem cells as the donor cells and hematopoietic progenitor cells as host cells. First, they found out that if you let the donors spit EVs at the hosts, the hosts fare much better (better survival, upregulated good genes, phosphorylated MAPK to induce proliferation etc.). Next, they looked at the contents of the EVs and found out that they contained proteins and mRNA that promote those good things (Wnt-3 protein, mRNA for transcription factors etc.). Next, to make sure that the host cells don’t show this enrichment all of a sudden out of the goodness of their little progenitor hearts, but that it is instead due to the mRNA from the donor cells, the authors looked at the expression of one of those transcription factors (Oct-4) in the hosts. They used as hosts a cell population (SKL) that does not express the pluripotency marker Oct-4. So if the hosts express this protein, it must have come only from outside. Lo and behold, they did. This means that the mRNA carried by the EVs is functional (Fig. 2).

Fig. 2. Cell-to-cell mRNA transfer via extracellular vesicles (EVs). DNA is transcribed into RNA. A portion of the RNA is translated into protein and another portion remains untranslated. Both the resulting protein and the mRNA can get packaged into a vesicle: either into a microvesicle (a budding-off of the cell membrane that shuttles cargo back and forth, about 100–300 nm in size) or into a newly formed exosome (<100 nm) inside a multivesicular endosome (the yellow circle). The cell releases these vesicles into the intercellular space. The vesicles dock onto the host cell’s membrane and empty their cargo.

What bugs me is that these papers came out in a period when I was doing some heavy reading. How did I miss this?! Probably because they were published in cancer journals, not my field. But this is big enough that you’d think others would mention it. (If you’re a recurrent reader of my blog, by now you should be familiar with my stream-of-consciousness writing and my admittedly sometimes annoying in-parenthesis meta-cognitions :D). One cannot help but wonder what other truly great discoveries are already out there that were missed. Frankly, I should probably be grateful to this blog – and my friend GT who made me do it – because without nosing outside my field in search of material for it I would have probably remained ignorant of this awesome discovery. So, even if this is a decade-old discovery for you, for me it is one day old and I am a bit giddy about it.

This is a big deal because it opens up not a new therapy, or a new therapy direction, or a new drug class, but a new DELIVERY METHOD, the Holy Grail of Pharmacopeia. You just put your drug in one of these vesicles and let nature take its course. Of course, there are all sorts of roadblocks to overcome, like specificity, toxicity, etc. It looks like some have already been conquered, as there are several clinical trials out there that take advantage of this mechanism, and I bet there will be more.

Stop by tomorrow for a freshly published paper on this mechanism in neurons.

REFERENCES:

1) Baj-Krzyworzeka M, Szatanek R, Weglarczyk K, Baran J, Urbanowicz B, Brański P, Ratajczak MZ, & Zembala M. (Jul. 2006, Epub 9 Nov 2005). Tumour-derived microvesicles carry several surface determinants and mRNA of tumour cells and transfer some of these determinants to monocytes. Cancer Immunology, Immunotherapy, 55(7):808-818. PMID: 16283305, DOI: 10.1007/s00262-005-0075-9. ARTICLE

2) Ratajczak J, Miekus K, Kucia M, Zhang J, Reca R, Dvorak P, & Ratajczak MZ (May 2006). Embryonic stem cell-derived microvesicles reprogram hematopoietic progenitors: evidence for horizontal transfer of mRNA and protein delivery. Leukemia, 20(5):847-856. PMID: 16453000, DOI: 10.1038/sj.leu.2404132. ARTICLE | FULLTEXT PDF 

Bibliography:

Raposo G & Stoorvogel W. (18 Feb. 2013). Extracellular vesicles: exosomes, microvesicles, and friends. The Journal of Cell Biology, 200(4):373-383. PMID: 23420871, PMCID: PMC3575529, DOI: 10.1083/jcb.201211138. ARTICLE | FULLTEXT PDF

By Neuronicus, 13 January 2018

The FIRSTS: the Dunning–Kruger effect (1999) or the unskilled-and-unaware phenomenon

Much talked about these days in the media, the unskilled-and-unaware phenomenon has been mused upon since, as they say, time immemorial, but not actually seriously investigated until the ’80s. The phenomenon refers to the observation that incompetents overestimate their competence whereas the competent tend to underestimate their skill (see Bertrand Russell’s brilliant summary of it).

Although the phenomenon has gained popularity under the name of the “Dunning–Kruger effect”, it is my understanding that whereas the phenomenon refers to the above-mentioned observation, the effect refers to the cause of the phenomenon, namely that the very skills required to make one proficient in a domain are the same skills that allow one to judge proficiency. In the words of Kruger & Dunning (1999),

“those with limited knowledge in a domain suffer a dual burden: Not only do they reach mistaken conclusions and make regrettable errors, but their incompetence robs them of the ability to realize it” (p. 1132).

Today’s paper on the Dunning–Kruger effect is the third in the cognitive biases series (the first was on depressive realism and the second on the superiority illusion).

Kruger & Dunning (1999) took a look at incompetence with the eyes of well-trained psychologists. As usual, let’s start by defining the terms so we are on the same page. The authors tell us, albeit in a footnote on p. 1122, that:

1) incompetence is a “matter of degree and not one of absolutes. There is no categorical bright line that separates ‘competent’ individuals from ‘incompetent’ ones. Thus, when we speak of ‘incompetent’ individuals we mean people who are less competent than their peers”.

and 2) The study is on domain-specific incompetents. “We make no claim that they would be incompetent in any other domains, although many a colleague has pulled us aside to tell us a tale of a person they know who is ‘domain-general’ incompetent. Those people may exist, but they are not the focus of this research”.

That being clarified, the authors chose 3 domains where they believe “knowledge, wisdom, or savvy was crucial: humor, logical reasoning, and English grammar” (p.1122). I know that you, just like me, can hardly wait to see how they assessed humor. Hold your horses, we’ll get there.

The subjects were psychology students, the ubiquitous guinea pigs of most psychology studies since the discipline started to be taught in universities. Some people in the field even declaim with more or less pathos that most psychological findings do not necessarily apply to the general population; instead, they are restricted to the self-selected group of undergrad psych majors. Just as biologists know far more about the mouse genome and its maladies than about humans’, so do psychologists know more about the inner workings of the psychology undergrad’s mind than, say, the average stay-at-home mom’s. But I digress, as usual.

Humor was assessed thusly: students were asked to rate on a scale from 1 to 11 the funniness of 30 jokes. Said jokes had previously been rated by 8 professional comedians, and that provided the reference scale. “Afterward, participants compared their ‘ability to recognize what’s funny’ with that of the average Cornell student by providing a percentile ranking. In this and in all subsequent studies, we explained that percentile rankings could range from 0 (I’m at the very bottom) to 50 (I’m exactly average) to 99 (I’m at the very top)” (p. 1123). Since the social ability to identify humor may be less rigorously amenable to quantification (despite the comedians’ input, which did not achieve high interrater reliability anyway), the authors also chose tasks that require more intellectual muscle. Like logical reasoning, whose test consisted of 20 logical problems taken from a Law School Admission Test. Afterward, the students estimated their general logical ability compared to their classmates, as well as their test performance. Finally, another batch of students answered 20 grammar questions taken from the National Teacher Examination preparation guide.

In all three tasks,

  • Everybody thought they were above average, showing the superiority illusion.
  • But the people in the bottom quartile (the lowest 25%), dubbed incompetents (or unskilled), overestimated their abilities the most, by approx. 50%. They were also unaware that, in fact, they had scored the lowest.
  • In contrast, people in the top quartile underestimated their competence, but not to the same degree as the bottom quartile: only by about 10%–15% (see Fig. 1).

[Fig. 1: the Dunning–Kruger effect]

I wish the paper showed scatter plots with a fitted regression line instead of the quartile graphs without error bars, so I could judge the data for myself. I mean, everybody thought they were above average? Not a single one out of more than three hundred students thought s/he was kinda… meh? The authors did not find any gender differences in any of the experiments.
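For the curious, here is the kind of plot I mean, sketched in Python on made-up numbers. The data are simulated to roughly mimic the reported pattern (self-estimates hovering well above average no matter the actual score); they are not Kruger & Dunning’s data, and the slope and intercept are whatever the simulation spits out.

```python
# Hypothetical illustration only: simulated data shaped like the reported pattern,
# NOT Kruger & Dunning's (1999) actual numbers.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 65                                          # a study-sized sample
actual = rng.uniform(0, 100, n)                 # actual test percentile (simulated)
# Self-estimates barely track actual skill and cluster well above average:
perceived = np.clip(55 + 0.25 * actual + rng.normal(0, 12, n), 0, 99)

slope, intercept = np.polyfit(actual, perceived, 1)      # fitted regression line
xs = np.linspace(0, 100, 2)
plt.scatter(actual, perceived, alpha=0.6, label="students (simulated)")
plt.plot(xs, slope * xs + intercept, label=f"fit: slope = {slope:.2f}")
plt.plot(xs, xs, linestyle="--", label="perfect calibration")
plt.xlabel("actual percentile")
plt.ylabel("self-estimated percentile")
plt.legend()
plt.show()
```

A flattish fitted line sitting far above the diagonal at the low end and a bit below it at the high end is the whole effect in one picture.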

Next, the authors tested the hypothesis about the unskilled, that “the same incompetence that leads them to make wrong choices also deprives them of the savvy necessary to recognize competence, be it their own or anyone else’s” (p. 1126). And they did that by having both the competents and the incompetents see the answers that their peers gave on the tests. Indeed, the incompetents not only failed to recognize competence, but they continued to believe they had performed very well in the face of contrary evidence. In contrast, the competents adjusted their ratings after seeing their peers’ performance, so they did not underestimate themselves anymore. In other words, the competents learned from seeing others’ mistakes, but the incompetents did not.

Based on these data, Kruger & Dunning (1999) argue that the incompetents are so because they lack the skills to recognize competence and error in themselves or others (jargon: lack of metacognitive skills). The competents, on the other hand, underestimate themselves because they assume everybody else does as well as they did; but when shown evidence that other people performed poorly, they become accurate in their self-evaluations (jargon: the false-consensus effect, a.k.a. the social-projection error).

So, the obvious implication is: if incompetents learn to recognize competence, does that also translate into them becoming more competent? The last experiment in the paper attempted to answer just that. The authors had 70 students complete a short (10-minute) logical-reasoning training session while another 70 students did something unrelated for 10 minutes. The data showed that the trained students not only improved their self-assessments (still showing the superiority illusion, though), but they also improved their performance. Yeays all around, all is not lost, there is hope left in the world!

This is an extremely easy read. I totally recommend it to non-specialists. Compare Kruger & Dunning (1999) with Pennycook et al. (2017): they both talk about the same subject and they both are redoubtable personages in their fields. But while the former is a pleasant leisurely read, the latter lacks mundane operationalizations and requires serious familiarization with the literature and its jargon.

Since Kruger & Dunning (1999) is under the paywall of the infamous APA website (infamous because they don’t even let you see the abstract, and even with institutional access it is difficult to extract the papers out of them, as if they own the darn things!), write to me at scientiaportal@gmail.com specifying that you need it for educational purposes and promise not to distribute it for financial gain, and thou shalt have its .pdf. As always. Do not, under any circumstance, use a sci-hub server to obtain this paper illegally! Actually, follow me on Twitter @Neuronicus to find out exactly which servers to avoid.

REFERENCE: Kruger J, & Dunning D. (Dec. 1999). Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6):1121-1134. PMID: 10626367. ARTICLE

P.S. I personally liked this example from the paper for illustrating what lack of metacognitive skills means:

“The skills that enable one to construct a grammatical sentence are the same skills necessary to recognize a grammatical sentence, and thus are the same skills necessary to determine if a grammatical mistake has been made. In short, the same knowledge that underlies the ability to produce correct judgment is also the knowledge that underlies the ability to recognize correct judgment. To lack the former is to be deficient in the latter” (p. 1121-1122).

By Neuronicus, 10 January 2018

The FIRSTS: The roots of depressive realism (1979)

There is a rumor stating that depressed people see the world more realistically and the rest of us are – to put it bluntly – deluded optimists. A friend of mine asked me if this is true. It took me a while to find the origins of this claim, but after I found it and figured out that the literature has a term for the phenomenon (‘depressive realism’), I realized that there is a whole plethora of studies on the subject. So the next few posts will be centered, more or less, on the idea of self-deception.

It was 1979 when Alloy & Abramson published a paper whose title contained the phrase ‘Sadder but Wiser’, even if it was followed by a question mark. The experiments they conducted are simple, but the theoretical implications are large.

The authors divided several dozen male and female undergraduate students into a depressed group and a non-depressed group based on their Beck Depression Inventory scores (a widely used and validated questionnaire for self-assessing depression). Each subject “made one of two possible responses (pressing a button or not pressing a button) and received one of two possible outcomes (a green light or no green light)” (p. 447). Various conditions gave the subjects various degrees of control over what the button did, from 0 to 100%. After the experiments, the subjects were asked to estimate their control over the green light, how many times the light came on regardless of their behavior, what percentage of trials the green light came on when they pressed or didn’t press the button, respectively, and how they felt. In some experiments, the subjects were winning or losing money when the green light came on.
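For context, the objective ‘degree of control’ in this kind of task is usually quantified as the difference between the probability of the outcome when you respond and when you don’t (the so-called delta-P); Alloy & Abramson defined their contingencies along these lines. Here is a minimal sketch of that arithmetic on invented trial counts (the numbers are not from the paper):

```python
# Minimal sketch of the objective contingency (delta-P) in a press/no-press task.
# Trial counts are invented for illustration; they are not Alloy & Abramson's data.

def delta_p(light_given_press, press_trials, light_given_no_press, no_press_trials):
    """P(green light | press) - P(green light | no press), in percent."""
    p_press = light_given_press / press_trials
    p_no_press = light_given_no_press / no_press_trials
    return 100 * (p_press - p_no_press)

# A 'no control' condition: the light comes on ~75% of the time either way.
print(delta_p(light_given_press=30, press_trials=40,
              light_given_no_press=30, no_press_trials=40))   # 0% control
# A partial-control condition: pressing makes the light much more likely.
print(delta_p(light_given_press=30, press_trials=40,
              light_given_no_press=10, no_press_trials=40))   # 50% control
```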

Verbatim, the findings were that:

“Depressed students’ judgments of contingency were surprisingly accurate in all four experiments. Nondepressed students, on the other hand, overestimated the degree of contingency between their responses and outcomes when noncontingent outcomes were frequent and/or desired and underestimated the degree of contingency when contingent outcomes were undesired” (p. 441).

In plain English, it means that if you are not depressed, when you have some control and bad things are happening, you believe you have no control. And when you have no control but good things are happening, then you believe you have control. If you are depressed, it does not matter, you judge your level of control accurately, regardless of the valence of the outcome.

Such an illusion of control is a defensive mechanism that surely must have adaptive value by, for example, allowing the non-depressed to bypass a sense of guilt when things don’t work out and to increase self-esteem when they do. This is fascinating, particularly since it is corroborated by findings that people receiving gambling wins or life successes like landing a good job, rewards that at least in one case are demonstrably attributable to chance, believe, nonetheless, that they are due to some personal attributes that make them special, that make them deserving of such rewards. (I don’t remember the reference for this one, so don’t quote me on it. If I find it, I’ll post it; it’s something about self-entitlement, I think.) That is not to say that life successes are not largely attributable to the individual; they are. But, statistically speaking, there must be some that are due to chance alone, and yet most people feel like they are the direct agents for changes in luck.

Another interesting point is that Alloy & Abramson also tried to figure out how exactly their subjects reasoned when they asserted their level of control, through some clever post-experiment questionnaires. Long story short (the paper is 45 pages long), the illusion of control shown by nondepressed subjects in the no-control condition was the result of incorrect logic, that is, faulty reasoning.

In summary, the distilled-down version of depressive realism, that non-depressed people see the world through rose-colored glasses, is slightly incorrect. The illusion of control applies only in particular conditions: overestimation of control when good things are happening and underestimation of control when bad things are happening.

Of course, it has been almost 40 years since the publication of this paper and of course it has its flaws. Many replications and replications-with-caveats and meta-analyses and reviews and opinions and alternative hypotheses have been confirmed and disconfirmed and then confirmed again with alterations, so there is still a debate out there about the causes/ functions/ ubiquity/ circumstantiality of the depressive realism effect. One thing seems to be constant though: the effect exists.

I will leave you with the ponderings of Alloy & Abramson (1979):

“A crucial question is whether depression itself leads people to be “realistic” or whether realistic people are more vulnerable to depression than other people” (p. 480).

REFERENCE: Alloy LB, & Abramson LY (Dec. 1979). Judgment of contingency in depressed and nondepressed students: sadder but wiser? Journal of Experimental Psychology: General, 108(4): 441-485. PMID: 528910. http://dx.doi.org/10.1037/0096-3445.108.4.441. ARTICLE | FULLTEXT PDF via ResearchGate

By Neuronicus, 30 November 2017

The FIRSTS: Dinosaurs and reputation (1842)

‘Dinosaur’ is a common noun in most languages of the Globe and, in its weak sense, it means “extinct big-sized reptile-like animal that lived a long time ago”. The word has been in use for so long that it can also be used to describe something “impractically large, out-of-date, or obsolete” (Merriam-Webster dictionary). “Dinosaur” is a composite of two ancient Greek words (“deinos”, “sauros”) and it means “terrible lizard”.

But it turns out that the word hasn’t been in use for that long, just a mere 175 years. Sir Richard Owen, a paleontologist who dabbled in many disciplines, coined the term in 1842. Owen introduced the taxon Dinosauria as if it had always been called thus, no fuss: “The present and concluding part of the Report on British Fossil Reptiles contains an account of the remains of the Crocodilian, Dinosaurian, Lacertian, Pterodactylian, Chelonian, Ophidian and Batrachian reptiles.” (p. 60). Only later in the Report does he tell us his paleontological reasons for the baptism, namely some anatomical features that distinguish dinosaurs from crocodiles and other reptiles.

“…The combination of such characters, some, as the sacral ones, altogether peculiar among Reptiles, others borrowed, as it were, from groups now distinct from each other, and all manifested by creatures far surpassing in size the largest of existing reptiles, will, it is presumed, be deemed sufficient ground for establishing a distinct tribe or sub-order of Saurian Reptiles, for which I would propose the name of Dinosauria.” (p.103)

At the time he was presenting this report to the British Association for the Advancement of Science, other giants of biology were running around the same halls, like Charles Darwin and Thomas Henry Huxley. Indisputably, Owen had a keen observational eye and a strong background in comparative anatomy that resulted in hundreds of published works, some of them excellent. That, in addition to establishing the British Museum of Natural History.

Therefore, Owen had reasons to be proud of his accomplishments and secure in his influence and legacy, and yet his contemporaries tell us that he was an absolutely vicious man, spiteful to the point of obsession, vengeful and extremely jealous of other people’s work. Apparently, he would steal the work of the younger people around him, never give credit, lie and cheat at every opportunity, and even write lengthy anonymous letters to various printed media to denigrate his contemporaries. He seemed to love his natal city of Lancaster and his family though (Wessels & Taylor, 2015).

Sir Richard Owen (20 July 1804 – 18 December 1892). PD, courtesy of Wikipedia.

Owen had a particular hate for Darwin. They had been close friends for 20 years and then Darwin published the “Origin of Species”. The book quickly became widely read and talked about and then poof: vitriol and hate. Darwin himself said the only reason he could think of for Owen’s hatred was the popularity of the book.

Various biographies and monographs seem to agree on his unpleasant personality (see his entries in The Telegraph, Encyclopedia.com, Encyclopaedia Britannica, BBC). On a side note, should you be concerned about your legacy and have the means to persuade The Times to write you an obituary, by all means, do so. In all the 8 pages of the obituary reprinted in 1896 you will not find a single blemish on the portrait of Sir Richard Owen.

This makes me ponder on the judgement of history based not on your work, but on your personality. As I said, the man contributed to science in more ways than just naming the dinosaur and having spats with Darwin. And yet it seems that his accomplishments are somewhat diminished by the way he treated others.

This reminded me of Nicolae Constantin Paulescu, a Romanian scientist who discovered insulin in 1916 (published in 1921). Yes, yes, I know all about the controversy with the Canadians who extracted and purified insulin in 1922 and got the Nobel for it in 1923. Paulescu did the same; even if his “pancreatic extract” from a few years earlier was insufficiently purified, it still successfully lowered blood glucose in dogs. He even obtained a patent for the “fabrication of pancrein” (his name for insulin, because he obtained it from the pancreas) in April 1922 from the Romanian Government (patent no. 6255). The Canadian team was aware of his work, but because it was published in French, they had a poor translation and they misunderstood his findings, so, technically, they didn’t steal anything. Or so they say. Feel free to feed the conspiracy mill. I personally don’t know; I haven’t looked at the original work to form an opinion because it is in French and my French is non-existent.

Annnywaaaay, whether or not Paulescu was the first to discover insulin is debatable, but few doubt that he should have at least shared the Nobel.

Rumor has it that Paulescu did not share the Nobel because he was a devout Nazi. His antisemitic writings are remarkably horrifying, even by the standards of the extreme right. That’s also why you won’t hear about him in medical textbooks or at various diabetes associations and gatherings. Yet millions of people worldwide may be alive today because of his work, at least partly.

How should we remember? Just the discoveries and accomplishments, with no reference to the people behind them? Is remembering the same as honoring? “Clara cells” were lung cells discovered by the infamous Nazi anatomist Max Clara by dissecting prisoners without consent. They were renamed “club cells” by the lung community in 2013. We cannot get rid of the discovery, but we can rename the cells so it doesn’t look like we honor him. I completely understand that. And yet I also don’t want to lose important pieces of history because of the atrocities (in the case of the Nazis) or unsavory behavior (in the case of Owen) committed by our predecessors. I understand why the International Diabetes Federation does not wish to give awards in the name of Paulescu or have a Special Paulescu lecture. Perhaps the Romanians should take down his busts and statues, too. But I don’t understand why (medical) history books should exclude him.

In other words, don’t honor the unsavories of history, but don’t forget them either. You never know what we – or the future generations – may learn by looking back at them and their actions.

By Neuronicus, 19 October 2017

References:

1) Owen, R (1842). “Report on British Fossil Reptiles”. Part II. Report of the Eleventh Meeting of the British Association for the Advancement of Science; Held at Plymouth in July 1841. London: John Murray. p. 60–204. Google Books Fulltext 

2) “Eminent persons: Biographies reprinted from the Times, Vol V, 1891–1892 – Sir Richard Owen (Obituary)” (1896). Macmillan & Co., p. 291–299. Google Books Fulltext

3) Wessels Q & Taylor AM (28 Oct 2015). Anecdotes to the life and times of Sir Richard Owen (1804-1892) in Lancaster. Journal of Medical Biography. pii: 0967772015608053. PMID: 26512064, DOI: 10.1177/0967772015608053. ARTICLE

Play-based or academic-intensive?

The title of today’s post wouldn’t make any sense for anybody who isn’t a preschooler’s parent or teacher in the USA. You see, on the west side of the Atlantic there is a debate on whether a play-based curriculum for a preschool is more advantageous than a more academic-based one. Preschool age is 3 to 4 years; kindergarten starts at 5.

So what does academia even look like for someone who hasn’t yet mastered the skill of wiping their own behind? I’m glad you asked. Roughly, an academic preschool program is one that emphasizes math concepts and early literacy, whereas a play-based program focuses less or not at all on these activities; instead, the children are allowed to play together in big or small groups or separately. The former kind of program has been linked with stronger cognitive benefits, the latter with nurturing social development. The supporters of each program accuse the other of neglecting one aspect or the other of the child’s development, namely cognitive or social.

The paper that I am covering today says that it “does not speak to the wider debate over learning-through-play or the direct instruction of young children. We do directly test whether greater classroom time spent on academic-oriented activities yield gains in both developmental domains” (Fuller et al., 2017, p. 2). I’ll let you be the judge.

Fuller et al. (2017) assessed the cognitive and social benefits of different programs in an impressive cohort of over 6,000 preschoolers. The authors looked at many variables:

  • children who attended any form of preschool and children who stayed home;
  • children who received more preschool education (high dosage, defined as >20 hours/week) and less (low dosage, defined as <20 hours/week);
  • children who attended academic-oriented preschools (which spent time at least 3–4 times a week on each of the following tasks: letter names, writing, phonics, and counting manipulatives) and children who attended non-academic preschools.

The authors employed a battery of tests to assess the children’s preliteracy skills, math skills and social-emotional status (i.e. the dependent variables). And then they conducted a lot of statistical analyses in the true spirit of well-trained psychologists.
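If you are curious what ‘a lot of statistical analyses’ looks like in this kind of study, the backbone is usually a regression of each child outcome on the preschool-exposure variables plus family covariates. The sketch below is generic and runs on simulated data; the variable names, effect sizes, and specification are mine, not Fuller et al.’s actual model.

```python
# Generic sketch of the kind of model behind findings like these: regress a child
# outcome on preschool-exposure variables plus family covariates. The data are
# simulated and the specification is illustrative, NOT Fuller et al.'s actual model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "academic_oriented": rng.integers(0, 2, n),   # 1 = academic-oriented preschool
    "high_dose": rng.integers(0, 2, n),           # 1 = >20 hours/week
    "mother_education": rng.integers(10, 20, n),  # years of schooling (simulated)
})
# Simulated outcome with a modest "academic preschool" effect baked in:
df["math_score"] = (0.3 * df.academic_oriented + 0.2 * df.high_dose
                    + 0.05 * df.mother_education + rng.normal(0, 1, n))

model = smf.ols("math_score ~ academic_oriented + high_dose + mother_education",
                data=df).fit()
print(model.summary())   # coefficients on the exposure terms are the 'benefits'
```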

The main findings were:

1) “Preschool exposure [of any form] has a significant positive effect on children’s math and preliteracy scores” (p. 6).

2) The earlier the child entered preschool, the stronger the cognitive benefits.

3) Children attending high-dose academic-oriented preschools displayed greater cognitive proficiencies than all the other children (for the actual numbers, see Table 7, pg. 9).

4) “Academic-oriented preschool yields benefits that persist into the kindergarten year, and at notably higher magnitudes than previously detected” (p. 10).

5) Children attending academic-oriented preschools displayed no social-development disadvantages compared with children who attended low- or non-academic preschool programs. Nor did the non-academic-oriented preschools show an improvement in social development (except for Latino children).

Now do you think that Fuller et al. (2017) gave you any more information in the play vs. academic debate, given that their “findings show that greater time spent on academic content – focused on oral language, preliteracy skills, and math concepts – contributes to the early learning of the average child at magnitudes higher than previously estimated” (p. 10)? And remember that they did not find any significant social advantages or disadvantages for any type of preschool.

I realize (or hope, rather) that most pre-k teachers are not the Draconian thou-shall-not-play-do-worksheets type, nor are they the let-kids-play-for-three-hours-while-the-adults-gossip-in-a-corner types. Most are probably combining elements of learning-through-play and directed-instruction in their programs. Nevertheless, there are (still) programs and pre-k teachers that clearly state that they employ play-based or academic-based programs, emphasizing the benefits of one while vilifying the other. But – surprise, surprise! – you can do both. And, it turns out, a little academia goes a long way.

So, next time you choose a preschool for your kid, go with the data, not what your mommy/daddy gut instinct says, and certainly be very wary of preschool officials who, when you ask them for data to support their curriculum choice, tell you that that’s their ‘philosophy’, they don’t need data. Because, boy oh boy, I know what philosophy means and it ain’t that.

By Neuronicus, 12 October 2017

Reference: Fuller B, Bein E, Bridges M, Kim, Y, & Rabe-Hesketh, S. (Sept. 2017). Do academic preschools yield stronger benefits? Cognitive emphasis, dosage, and early learning. Journal of Applied Developmental Psychology, 52: 1-11, doi: 10.1016/j.appdev.2017.05.001. ARTICLE | New York Times cover | Reading Rockets cover (offers a fulltext pdf) | Good cover and interview with the first author on qz.com

Old chimpanzees get Alzheimer’s pathology

Alzheimer’s Disease (AD) is the most common type of dementia with a progression that can span decades. Its prevalence is increasing steadily, particularly in the western countries and Australia. So some researchers speculated that this particular disease might be specific to humans. For various reasons, either genetic, social, or environmental.

A fresh e-pub brings new evidence that Alzheimer’s might plague other primates as well. Edler et al. (2017) studied the brains of 20 old chimpanzees (Pan troglodytes) for a whole slew of Alzheimer’s pathology markers. More specifically, they looked for these markers in brain regions commonly affected by AD, like the prefrontal cortex, the midtemporal gyrus, and the hippocampus.

Alzheimer’s markers, like Tau and Aβ lesions, were present in the chimpanzees in an age-dependent manner. In other words, the older the chimp, the more severe the pathology.

Interestingly, all 20 animals displayed some form of Alzheimer’s pathology. This finding points to another speculation in the field, which is: dementia is just part of normal aging. Meaning we would all get it, eventually, if we lived long enough; some people age younger and some age older, as it were. This hypothesis, however, is not favored by most researchers, not least because it is currently unfalsifiable. The longest-living humans do not show signs of dementia, so how long is long enough, exactly? But, as the authors suggest, “Aβ deposition may be part of the normal aging process in chimpanzees” (p. 24).

Unfortunately, “the chimpanzees in this study did not participate in formal behavioral or cognitive testing” (p. 6). So we cannot say whether the animals had AD. They had the pathological markers, yes, but we don’t know if they exhibited the disease, as it is not uncommon to find these markers in humans who did not display any behavioral or cognitive symptoms (Driscoll et al., 2006). In other words, one might have tau deposits but no dementia symptoms. Hence the title of my post: “Old chimpanzees get Alzheimer’s pathology” and not “Old chimpanzees get Alzheimer’s Disease”.

Good paper, good methods and stats. And very useful, because “chimpanzees share 100% sequence homology and all six tau isoforms with humans” (p. 4), meaning we now have a model of the disease that is closer to us, so we can study it more, even if primate research has taken significant blows these days due to some highly vocal but thoroughly misguided groups. Anyway, the more we know about AD the closer we are to getting rid of it, hopefully. And, soon enough, the aforementioned misguided groups shall have to face old age too, with all its indignities, and my guess is that in a couple of decades or so there will be fresh money poured into aging-diseases research, primates be damned.

REFERENCE: Edler MK, Sherwood CC, Meindl RS, Hopkins WD, Ely JJ, Erwin JM, Mufson EJ, Hof PR, & Raghanti MA. (EPUB July 31, 2017). Aged chimpanzees exhibit pathologic hallmarks of Alzheimer’s disease. Neurobiology of Aging, PII: S0197-4580(17)30239-7, DOI: http://dx.doi.org/10.1016/j.neurobiolaging.2017.07.006. ABSTRACT  | Kent State University press release

By Neuronicus, 23 August 2017

Midichlorians, midichloria, and mitochondria

Nathan Lo is an evolutionary biologist interested in creepy crawlies, i.e. arthropods. Well, he’s Australian, so I guess that comes with the territory (see what I did there?). While postdoc’ing, he and his colleagues published a paper (Sassera et al., 2006) that would seem boring for anybody without an interest in taxonomy, a truly under-appreciated field.

The paper describes a bacterium that is a parasite of the mitochondria of a tick species called Ixodes ricinus, the nasty bugger that transmits Lyme disease. The authors obtained a female tick from Berlin, Germany and let it feed on a hamster until it laid eggs. By using genetic sequencing (you can use kits these days to extract the DNA, do PCR, gels and cloning, pretty much everything), electron microscopy (really powerful microscopes) and phylogenetic analysis (using computer software to see how closely related some species are; see the sketch after the naming excerpt below), the authors came to the conclusion that this parasite they were working on was a new species. So they named it. And below is the full account of the naming, from the horse’s mouth, as it were:

“In accordance with the guidelines of the International Committee of Systematic Bacteriology, unculturable bacteria should be classified as Candidatus (Murray & Stackebrandt, 1995). Thus we propose the name ‘Candidatus Midichloria mitochondrii’ for the novel bacterium. The genus name Midichloria (mi.di.chlo′ria. N.L. fem. n.) is derived from the midichlorians, organisms within the fictional Star Wars universe. Midichlorians are microscopic symbionts that reside within the cells of living things and ‘‘communicate with the Force’’. Star Wars creator George Lucas stated that the idea of the midichlorians is based on endosymbiotic theory. The word ‘midichlorian’ appears to be a blend of the words mitochondrion and chloroplast. The specific epithet, mitochondrii (mi.to′chon.drii. N.L. n. mitochondrium -i a mitochondrion; N.L. gen. n. mitochondrii of a mitochondrion), refers to the unique intramitochondrial lifestyle of this bacterium. ‘Candidatus M. mitochondrii’ belongs to the phylum Proteobacteria, to the class Alphaproteobacteria and to the order Rickettsiales. ‘Candidatus M. mitochondrii’ is assigned on the basis of the 16S rRNA (AJ566640) and gyrB gene sequences (AM159536)” (p. 2539).

George Lucas gave his blessing to the Christening (of course he did).
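About that phylogenetic-analysis step mentioned above: in practice it means aligning the new 16S rRNA (and gyrB) sequences against known relatives, computing how different they are, and building a tree to see where the newcomer falls. Here is a minimal sketch with Biopython; the input file is hypothetical and the simple identity-distance, neighbor-joining recipe is my stand-in, not necessarily the method Sassera et al. used.

```python
# Minimal sketch of a distance-based phylogeny from an existing 16S rRNA alignment.
# The input file is hypothetical; Sassera et al.'s actual tree-building methods
# may well differ (e.g., model-based inference on 16S rRNA plus gyrB).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("16S_alignment.fasta", "fasta")    # pre-aligned sequences
distances = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distances)              # neighbor-joining tree
Phylo.draw_ascii(tree)   # shows who clusters with whom, e.g., within Rickettsiales
```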

Acknowledgements: Thanks go to Ms. BBD who prevented me from making a fool of myself (this time!) on the social media by pointing out to me that midichloria are real and that they are a mitochondrial parasite.

REFERENCE: Sassera D, Beninati T, Bandi C, Bouman EA, Sacchi L, Fabbi M, Lo N. (Nov. 2006). ‘Candidatus Midichloria mitochondrii’, an endosymbiont of the tick Ixodes ricinus with a unique intramitochondrial lifestyle. International Journal of Systematic and Evolutionary Microbiology, 56(Pt 11): 2535-2540. PMID: 17082386, DOI: 10.1099/ijs.0.64386-0. ABSTRACT | FREE FULLTEXT PDF 

By Neuronicus, 29 July 2017

Pic of the day: Skunky beer

REFERENCE: Burns CS, Heyerick A, De Keukeleire D, Forbes MD. (5 Nov 2001). Mechanism for formation of the lightstruck flavor in beer revealed by time-resolved electron paramagnetic resonance. Chemistry – The European Journal, 7(21): 4553-4561. PMID: 11757646, DOI: 10.1002/1521-3765(20011105)7:21<4553::AID-CHEM4553>3.0.CO;2-0. ABSTRACT

By Neuronicus, 12 July 2017

The FIRSTS: Increase in CO2 levels in the atmosphere results in global warming (1896)

Few people seem to know that although global warming and climate change are hotly debated topics right now (at least on the left side of the Atlantic) the effect of CO2 levels on the planet’s surface temperature was investigated and calculated more than a century ago. CO2 is one of the greenhouse gases responsible for the greenhouse effect, which was discovered by Joseph Fourier in 1824 (the effect, that is).

Let’s start with a terminology clarification. Whereas the term ‘global warming’ was coined by Wallace S. Broecker in 1975, the term ‘climate change’ underwent a more fluid transformation in the ’70s, from ‘inadvertent climate modification’ to ‘climatic change’ to a more consistent use of ‘climate change’ by Jule Charney in 1979, according to NASA. The same source tells us:

“Global warming refers to surface temperature increases, while climate change includes global warming and everything else that increasing greenhouse gas amounts will affect”.

But before NASA there was one Svante August Arrhenius (1859–1927). Dr. Arrhenius was a Swedish physical chemist who received the Nobel Prize in 1903 for uncovering the role of ions in how electrical current is conducted in chemical solutions.

S.A. Arrhenius was the first to quantify the variations of our planet’s surface temperature as a direct result of the amount of CO2 (which he calls carbonic acid, long story) present in the atmosphere. For those – admittedly few – nitpickers who say his views on the greenhouse effect were somewhat simplistic and his calculations were incorrect, I’d say give him a break: he didn’t have the incredible amount of data provided by satellites or computers, nor the work of thousands of scientists over a century to back him up. Which they do. Kind of. Well, the idea, anyway, not the math. Well, some of the math. Let me explain.

First, let me tell you that I haven’t managed to get past page 3 of the 39 pages of creative mathematics, densely packed tables, parameter assignments, and convoluted assumptions of Arrhenius (1896). Luckily, I convinced a spectroscopist to take a crack at the original paper, since there is a lot of spectroscopy in it, and then enlighten me.

The photo was taken in 1887 and shows (standing, from the left): Walther Nernst (Nobel in Chemistry), Heinrich Streintz, Svante Arrhenius, Richard Hiecke; (sitting, from the left): Eduard Aulinger, Albert von Ettingshausen, Ludwig Boltzmann, Ignaz Klemenčič, Victor Hausmanninger. Source: Universität Graz. License: PD via Wikimedia Commons.

Second, despite his many accomplishments, including being credited with laying the foundations of a new field (physical chemistry), Arrhenius was first and foremost a mathematician. So he employed a lot of tedious mathematics (by hand!), together with some hefty guessing and what was known at the time about Earth’s infrared radiation, solar radiation, water vapor and CO2 absorption, the temperature of the Moon, the greenhouse effect, and some uncalibrated spectra taken by his predecessors, to figure out whether “the mean temperature of the ground [was] in any way influenced by the presence of the heat-absorbing gases in the atmosphere” (p. 237). Why was he interested in this? We find out only on page 267, after a lot of the aforesaid dreary mathematics, where he finally shares this with us:

“I [should] certainly not have undertaken these tedious calculations if an extraordinary interest had not been connected with them. In the Physical Society of Stockholm there have been occasionally very lively discussions on the probable causes of the Ice Age”.

So Arrhenius was interested in finding out whether the fluctuations of CO2 levels could have caused the Ice Ages. And yes, he thinks that could have happened. I don’t know enough about climate science to tell you if this particular conclusion of his is correct today. But what he did manage to accomplish was to provide, for the first time, a way to mathematically calculate the rise in temperature due to a rise in CO2 levels. In other words, he found a direct relationship between the variations of CO2 and temperature.

Today, it turns out that his math was incorrect because he left out some other variables that influence the global temperature and that were discovered and/or understood later (like the thickness of the atmosphere, the rate of ocean absorption of CO2, and others which I won’t pretend I understand). Nevertheless, Arrhenius was the first to point out the following relationship, which, by and large, is still relevant today:

“Thus if the quantity of carbonic acid increased in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression” (p. 267).
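In modern notation, ‘carbonic acid in geometric progression, temperature in arithmetic progression’ is a logarithmic relationship: every doubling of CO2 adds roughly the same number of degrees. The tiny sketch below only illustrates that arithmetic; the sensitivity value S is an assumed round number in the ballpark of modern central estimates, not Arrhenius’s own figure (his estimate per doubling was higher), and the formula is the present-day shorthand rather than his 1896 math.

```python
# Illustration of "CO2 in geometric progression -> temperature in arithmetic
# progression" using the modern logarithmic shorthand, NOT Arrhenius's own formula.
import math

S = 3.0          # assumed warming in degrees C per doubling of CO2 (a parameter)
C0 = 280.0       # reference (pre-industrial) CO2 concentration, ppm

for C in (280, 560, 1120, 2240):                 # geometric progression (doublings)
    dT = S * math.log2(C / C0)                   # arithmetic progression: 0, 3, 6, 9
    print(f"{C:>5} ppm -> +{dT:.1f} C")
```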

P.S. Technically, Joseph Fourier should be credited with the discovery of global warming by means of increasing the levels of greenhouse gases in the atmosphere in 1824, but Arrhenius quantified it, so I credited him. Feel free to debate :).

REFERENCE: Arrhenius, S. (April 1896). XXXI. On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science (Fifth Series), 49 (251): 237-276. General Reference P.P.1433. doi: http://dx.doi.org/10.1080/14786449608620846. FREE FULLTEXT PDF

By Neuronicus, 24 June 2017