The FIRSTS: Lack of happy events in depression (2003)

My last post focused on depression, and it reminded me of something I keep telling my students, to which they all react with disbelief. To be sure, I tell them a lot of things to which they react with disbelief, but I keep thinking this one should not. The thing is: depressed people perceive the same number of negative events happening to them as healthy people do, but far fewer positive ones. This seems counter-intuitive to non-professionals, who believe depressed people are simply sadder than normal and only see the half-empty side of the glass of life.

So I dug out the original paper that found this… finding. It’s not as old as you might think. Peeters et al. (2003) paid $30 apiece to 86 people, 46 of whom were diagnosed with Major Depressive Disorder and seeking treatment in a community mental health center or outpatient clinic (this was in the Netherlands). None were taking antidepressants or any other drugs, except low-dose anxiolytics. Each participant was given a wristwatch that beeped 10 times a day at semi-random intervals of approximately 90 min. When the watch beeped, the subjects had to complete a form within 25 min at most, answering questions about their mood, current events, and their appraisal of those events. The experiment took 6 days, including the weekend.
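If you’re curious what such an experience-sampling schedule looks like, here is a quick Python sketch of one day of beeps. The 10 beeps and the ~90-min spacing are from the paper; the start time and the size of the random jitter are my guesses, since the paper’s exact randomization scheme isn’t described here:

```python
import random

def beep_schedule(n_beeps=10, mean_gap_min=90, jitter_min=30, start_hour=7.5):
    """Generate semi-random beep times (minutes since midnight).

    Each gap is ~90 min plus or minus a uniform jitter -- my guess at
    what 'semi-random intervals of approximately 90 min' means.
    """
    t = start_hour * 60
    times = []
    for _ in range(n_beeps):
        times.append(t)
        t += mean_gap_min + random.uniform(-jitter_min, jitter_min)
    return times

day = beep_schedule()
print([f"{int(m // 60):02d}:{int(m % 60):02d}" for m in day])
```

With these numbers, consecutive beeps always land 60–120 minutes apart, so the participants can never fully predict the next one.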

The results? Contrary to popular belief, people with depression “did not report more frequent negative events, although they did report fewer positive events and appraised both types of events as more stressful” (p. 208). In other words, depressed people are not seeing half-empty glasses all the time; instead, they don’t see the half-full glasses. Note that they regarded both negative and positive events as stressful. We circle back to the ‘stress is the root of all evil’ thing.

I would have liked to see whether the decrease in positive affect and perceived happy events correlates with increased sadness. The authors say that “negative events were appraised as more unpleasant, more important, and more stressful by the depressed than by the healthy participants” (p. 206), but, curiously, mood was assessed with ratings of feeling anxious, irritated, restless, tense, guilty, irritable, easily distracted, and agitated, and not a single item on depression-iconic feelings: sad, empty, hopeless, worthless.

Nevertheless, it’s a good psychological study with in-depth statistical analyses. I also found this paragraph thought-provoking: “The literature on mood changes in daily life is dominated by studies of daily hassles. The current results indicate that daily uplifts are also important determinants of mood, in both depressed and healthy people” (p. 209).


REFERENCE: Peeters F, Nicolson NA, Berkhof J, Delespaul P, & deVries M. (May 2003). Effects of daily events on mood states in major depressive disorder. Journal of Abnormal Psychology, 112(2):203-11. PMID: 12784829, DOI: 10.1037/0021-843X.112.2.203. ARTICLE

By Neuronicus, 4 May 2019

Milk-producing spider

In biology, organizing living things into categories is called taxonomy. Such categories are established based on shared characteristics of the members. These characteristics were usually visual attributes. For example, a red-footed booby (it’s a bird, silly!) is obviously different from a blue-footed booby, so we put them in different categories, which Aristotle called, in Greek, something like species.

Biological taxonomy is very useful, not only for providing countless hours of fighting (both verbal and physical!) for biologists, but also for informing us of all sorts of unexpected relationships between living things. These relationships, in turn, can give us insights into our own evolution, but also into the evolution of things inimical to us, like diseases, and, perhaps, their cure. Also extremely important, it allows scientists from all over the world to have a common language, thus maximizing information sharing and minimizing misunderstandings.


All well and good. And it was all well and good since Carl Linnaeus introduced his famous taxonomy system in the 18th century, the one we still use today with species, genus, family, order, and kingdom. Then we figured out how to map the DNA of the things around us, and this information threw a lot of Linnaean classifications out the window. Because it turns out that some things that look similar are not genetically similar; likewise, some living things we thought were very different from one another turned out, genetically speaking, to be not so different.

You will say, then, alright, out with visual taxonomy, in with phylogenetic taxonomy. This would be absolutely peachy for a minority of the planet’s organisms, like animals and plants, but a nightmare for the more promiscuous organisms that have no problem swapping bits of DNA back and forth, like some bacteria, so you don’t know anymore who’s who. And don’t even get me started on viruses, which we are still trying to figure out whether they are alive in the first place.

When I grew up there were 5 regna or kingdoms in our tree of life – Monera, Protista, Fungi, Plantae, Animalia – each with very distinctive characteristics. Likewise, the class Mammalia from the Animal Kingdom was characterized by the females feeding their offspring with milk from mammary glands. Period. No confusion. But now I have no idea (nor do many other biologists, rest assured) how many domains or kingdoms or empires we have, nor even what the definition of a species is anymore.

As if that’s not enough, even those Linnaean characteristics we thought were set in stone are amenable to change. Which is good; it shows the progress of science. But I didn’t think that something like the definition of a mammal would change. Mammals are organisms whose females feed their offspring with milk from mammary glands, as I vouchsafed above. Pretty straightforward. And not spiders. Let me be clear on this: spiders did not feature in my – or anyone’s! – definition of mammals.

Until Chen et al. (2018) published their weird article a couple of weeks ago. The abstract is free for all to see and states that the females of a jumping spider species feed their young with milk secreted by their body until the age of subadulthood. Mothers continue to offer parental care past the maturity threshold. The milk is necessary for the spiderlings because without it they die. That’s all.

I read the whole paper, since it was only 4 pages, and here are some more details about their discovery. The species they looked at is Toxeus magnus, a jumping spider that looks like an ant. The mother produces milk from her epigastric furrow and deposits it on the nest floor and walls, from where the spiderlings ingest it (days 0–7). After this first week, the spiderlings suck the milk directly from the mother’s body and continue to do so for the next two weeks (days 7–20), when they start leaving the nest to forage for themselves. But they return, and for the next period (days 20–40) they get their food both from the mother’s milk and from independent foraging. Spiderlings are weaned by day 40, but they still come home to sleep at night. At day 52 they are officially considered adults. Interestingly, “although the mother apparently treated all juveniles the same, only daughters were allowed to return to the breeding nest after sexual maturity. Adult sons were attacked if they tried to return. This may reduce inbreeding depression, which is considered to be a major selective agent for the evolution of mating systems” (p. 1053).
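For my fellow lovers of schematics, the timeline above condenses nicely into a little table. The day ranges are from the paper; the one-line summaries are my paraphrases:

```python
# Developmental timeline of T. magnus spiderlings (day ranges from Chen et al., 2018;
# the one-line descriptions are my paraphrases of the paper).
stages = [
    (0, 7, "milk deposited on nest floor and walls, ingested by spiderlings"),
    (7, 20, "spiderlings suck milk directly from the mother's body"),
    (20, 40, "mixed feeding: mother's milk plus independent foraging"),
    (40, 52, "weaned, but still return to the nest to sleep at night"),
]

for start, end, what in stages:
    print(f"day {start:2d}-{end:2d}: {what}")
print("day 52+   : officially adults")
```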

During all this time, including during the offspring’s emergence into adulthood, the mother also supplied house maintenance, carrying out her children’s exuviae (shed exoskeletons) and repairing the nest.

The authors then did a series of experiments to see what role the nursing and other maternal care at different stages play in the fitness and survival of the offspring. Blocking the mother’s milk production with correction fluid immediately after hatching killed all the spiderlings, showing that they are completely dependent on the mother’s milk. Removing the mother after the spiderlings start foraging (day 20) drastically reduced survivorship and body size, showing that the mother’s care is essential for her offspring’s success. Moreover, the mother taking care of the nest and keeping it clean reduced the occurrence of parasite infections in the juveniles.

The authors analyzed the milk and it’s highly nutritious: “spider milk total sugar content was 2.0 mg/ml, total fat 5.3 mg/ml, and total protein 123.9 mg/ml, with the protein content around four times that of cow’s milk (p. 1053)”.
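A quick back-of-the-envelope check of that “around four times” claim. The spider figures are from the paper; the cow’s milk protein content of ~32 mg/ml is a typical textbook value, not from the paper:

```python
# Nutrient content of T. magnus milk (Chen et al., 2018), in mg/ml.
spider_milk = {"sugar": 2.0, "fat": 5.3, "protein": 123.9}

# Typical cow's milk protein, ~3.2 g per 100 ml (textbook value, not from the paper).
cow_protein = 32.0  # mg/ml

ratio = spider_milk["protein"] / cow_protein
print(f"Spider/cow protein ratio: {ratio:.1f}x")  # roughly 3.9x, i.e. "around four times"
```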

Speechless I am. Good for the spider, I guess. Spider milk will have exorbitant costs. (Apparently, slight finger pressure on the milk-secreting region makes the mother spider secrete the milk, not at all unlike the human mother.) Spiderlings die without the mother’s milk. Responsible farming? Spider milker qualifications? I’m gonna lie down, I got a headache.


REFERENCE: Chen Z, Corlett RT, Jiao X, Liu SJ, Charles-Dominique T, Zhang S, Li H, Lai R, Long C, & Quan RC (30 Nov. 2018). Prolonged milk provisioning in a jumping spider. Science, 362(6418):1052-1055. PMID: 30498127, DOI: 10.1126/science.aat3692. ARTICLE | Supplemental info (check out the videos)

By Neuronicus, 13 December 2018

The FIRSTS: The cause(s) of dinosaur extinction

A few days ago, a follower of mine gave me an interesting read from The Atlantic regarding the dinosaur extinction. Like many of my generation, I was taught in school that dinosaurs died because an asteroid hit the Earth. That led to a nuclear winter (or a few years of ‘nuclear winters’) which killed the photosynthetic organisms, and then the herbivores didn’t have anything to eat so they died and then the carnivores didn’t have anything to eat and so they died. Or, as my 4-year-old puts it, “[in a solemn voice] after the asteroid hit, big dusty clouds blocked the sun; [in an ominous voice] each day was colder than the previous one and so, without sunlight to keep them alive [sad face, head cocked sideways], the poor dinosaurs could no longer survive [hands spread sideways, hung head] “. Yes, I am a proud parent. Now I have to do a sit-down with the child and explain that… What, exactly?

Well, The Atlantic article showcases the struggles of a scientist – paleontologist and geologist Gerta Keller – who doesn’t believe the mainstream asteroid hypothesis; rather, she thinks there is enough evidence to argue that extreme volcanic eruptions – really extreme, thousands of times more powerful than anything in recorded history – put so much poison (soot, dust, hydrofluoric acid, sulfur, carbon dioxide, mercury, lead, and so on) into the atmosphere that, combined with the consequent dramatic climate change, it killed the dinosaurs. The volcanoes are located in India and they erupted for hundreds of thousands of years, but the most violent eruptions, Keller thinks, came in the last 40,000 years before the extinction. This hypothesis is called Deccan volcanism, after the region in India where these nasty volcanoes are located, and it was first proposed by Vogt (1972) and Courtillot et al. (1986).


So which is true? Or, rather, because this is science we’re talking about, which hypothesis is more supported by the facts: the volcanism or the impact?

The impact hypothesis was put forward in 1980, when Walter Alvarez, a geologist, noticed a thin layer of clay in rocks that were about 65 million years old, which coincided with the time when the dinosaurs disappeared. This layer sits at the KT boundary (sometimes called K-T, K-Pg, or KPB; looks like biologists are not the only ones with acronym problems) and marks the border between the Cretaceous and Paleogene geological periods (K is from the German Kreide, for Cretaceous; T is for the now-retired Tertiary, yeah, I know). Walter asked his father, the famous Nobel Prize physicist Luis Alvarez, to take a look at it and see what it is. Alvarez Sr. analyzed it and found that the clay contains a lot of iridium, dozens of times more than expected. After gathering more samples from Europe and New Zealand, they published a paper (Alvarez et al., 1980) in which the scientists reasoned that, because Earth’s iridium is deeply buried in its bowels and not in its crust, the iridium at the K-Pg boundary is of extraterrestrial origin, which could have been brought here only by an asteroid/comet. This is also the paper that first put forth the conjecture that the asteroid impact killed the dinosaurs, based on the uncanny coincidence of timing.


The discovery of the Chicxulub crater in Mexico followed a more sinuous path, because the geophysicists who first discovered it in the ’70s were working for an oil company, looking for places to drill. Once the dinosaur-died-due-to-asteroid-impact hypothesis gained popularity outside academia, the geologists and the physicists put two and two together, acquired more data, and published a paper (Hildebrand et al., 1991) in which the Chicxulub crater was for the first time linked with the dinosaur extinction. Although the crater had not yet been radiometrically dated, they had enough geophysical, stratigraphic, and petrologic evidence to believe it was as old as the iridium layer and the dinosaur die-out.


But the devil is in the details, as they say. Keller published a paper in 2007 saying the Chicxulub event predates the extinction by some 300,000 years (Keller et al., 2007). She looked at geological samples from Texas and found the glass granule layer (an indicator of the Chicxulub impact) way below the K-Pg boundary. So what’s up with the iridium then? Keller (2014) believes it is not of extraterrestrial origin: it might well have been spewed up by a particularly nasty eruption, or the sediments got shifted. Schulte et al. (2010), on the other hand, found high levels of iridium in 85 samples from all over the world at the K-Pg boundary. Keller says that some other 260 samples show no iridium anomalies. As a response, Esmeray-Senlet et al. (2017) used some fancy mass spectrometry to show that the iridium profiles could have come only from Chicxulub, at least in North America. They argue that the variability in iridium profiles around the world is due to regional geochemical processes. And so on, and so on; the controversy continues.

Actual radioisotope dating was done a bit later, in 2013: the date of the K-Pg boundary is 66.043 ± 0.043 Ma (millions of years ago), and the date of the Chicxulub crater is 66.038 ± 0.025/0.049 Ma. This means the researchers “established synchrony between the Cretaceous-Paleogene boundary and associated mass extinctions with the Chicxulub bolide impact to within 32,000 years” (Renne et al., 2013), which is a blink of an eye in geological time.
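For perspective, the nominal gap between those two dates is a mere 5,000 years, comfortably inside the 32,000-year window (which, mind you, comes from the paper’s own, far more sophisticated, uncertainty analysis, not from this toy subtraction):

```python
# Ages from Renne et al. (2013), in Ma (millions of years ago).
kpg_boundary = 66.043   # K-Pg boundary
chicxulub = 66.038      # Chicxulub impact

gap_years = abs(kpg_boundary - chicxulub) * 1_000_000
print(f"Nominal gap between the two dates: {gap_years:,.0f} years")  # 5,000 years
```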


Now I want you to understand that often in science, though by far not always, matters are not as simple as she is wrong, he is right. In geology, what matters most is the sample. If the sample is corrupted, so are your conclusions. Maybe Keller’s or Renne’s samples were affected by a myriad of possible variables, some as simple as dirt being shifted from here to there by who knows what event. After all, it’s been 66 million years. Also, the methods used are just as important, and dating something that happened so long ago is extremely difficult due to intrinsic physical limitations of the methods. Keller (2014), for example, claims that Renne couldn’t possibly have gotten such an exact estimate because he used argon isotopes, when only U-Pb isotope dilution–thermal ionization mass spectrometry (ID-TIMS) zircon geochronology could be so accurate. But then again, it looks like he used both, so… I dunno. As the over-used, always-trite, but nevertheless extremely important saying goes: more data is needed.

Even if the dating puts Chicxulub at the KPB, the volcanologists say that the asteroid, by itself, couldn’t have produced a mass extinction, because other impacts of its size did not have such dire effects; they were barely noticeable at the scale of the biota. Besides, most of the planet’s other mass extinctions have already been associated with extreme volcanism (Archibald et al., 2010). On the other hand, the circumstances of this particular asteroid could have made it deadly: it landed in the hydrocarbon-rich areas that occupied only 13% of the Earth’s surface at the time, which resulted in a lot of “stratospheric soot and sulfate aerosols and causing extreme global cooling and drought” (Kaiho & Oshima, 2017). Food for thought: this means that the chances of us humans being here today were only 13%!…
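That 13% is easy to play with. Here’s a toy Monte Carlo, assuming (as Kaiho & Oshima’s argument implies) that the asteroid could have landed anywhere on the surface with equal probability; the uniform-location assumption is mine:

```python
import random

random.seed(42)

# Share of Earth's surface that was hydrocarbon-rich at the time (Kaiho & Oshima, 2017).
HYDROCARBON_FRACTION = 0.13

# Simulate many random impact sites; count how often one lands in a "deadly" area.
trials = 100_000
deadly = sum(random.random() < HYDROCARBON_FRACTION for _ in range(trials))
print(f"Fraction of simulated impacts hitting a deadly site: {deadly / trials:.3f}")
```

Unsurprisingly, the simulated fraction hovers around 0.13: most random impacts would have missed the soot-producing regions entirely.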

I hope that you do notice that these are very recent papers, so the issue is hotly debated as we speak.

It is possible, nay, probable, that the Deccan volcanism, which was going on long before and after the extinction, was exacerbated by the impact. This is exactly what Renne’s team postulated in 2015 after dating the lava plains in the Deccan Traps: the eruptions intensified about 50,000 years before the KT boundary, going from “high-frequency, low-volume eruptions to low-frequency, high-volume eruptions”, which is about when the asteroid hit. Also, the Deccan eruptions continued for about half a million years after the KPB, “which is comparable with the time lag between the KPB and the initial stage of ecological recovery in marine ecosystems” (Renne et al., 2015, p. 78).

Since we cannot get much more accurate dating than we already have, perhaps the fossils can tell us whether the dinosaurs died abruptly or slowly. Because if they went extinct in a few years instead of over 50,000 years, that would point to a cataclysmic event. Yes, but which one, big asteroid or violent volcano? Aaaand, we’re back to square one.

Actually, the latest papers on the matter point to two extinctions: the Deccan extinction and the Chicxulub extinction. Petersen et al. (2016) went all the way to Antarctica to find pristine samples. They noticed a sharp increase in global temperatures, of about 7.8 ºC, at the onset of Deccan volcanism. Such a climate change would surely lead to some extinctions, and this is exactly what they found: out of 24 species of marine animals investigated, 10 died out at the onset of Deccan volcanism and the remaining 14 died out when Chicxulub hit.

In conclusion, because this post is already verrrry long and is becoming a proper college review, to me, a not-a-geologist/paleontologist/physicist-but-still-a-scientist, things happened thusly: first the Deccan Traps erupted, and that led to dramatic global warming coupled with poison spewed into the atmosphere. Which resulted in a massive die-out (about 200,000 years before the bolide impact, says a corroborating paper, Tobin, 2017). The surviving species (maybe half or more of the biota?) carried on as best they could for the next few hundred thousand years in the hostile environment. Then the Chicxulub meteorite hit, and the resulting megatsunami, the cloud of super-heated dust and soot, colossal wildfires and earthquakes, acid rain and climate cooling, not to mention the intensification of the Deccan Traps eruptions, finished off the surviving species. It took Earth 300,000 to 500,000 years to recover its ecosystem. “This sequence of events may have combined into a ‘one-two punch’ that produced one of the largest mass extinctions in Earth history” (Petersen et al., 2016, p. 6).


By Neuronicus, 25 August 2018

P. S. You, high school and college students who will use this for some class assignment or other, give credit thusly: Neuronicus (Aug. 26, 2018). The FIRSTS: The cause(s) of dinosaur extinction. Retrieved from on [date]. AND READ THE ORIGINAL PAPERS. Ask me for .pdfs if you don’t have access, although with sci-hub and all… not that I endorse any illegal and fraudulent use of the above mentioned server for the purpose of self-education and enlightenment in the quest for knowledge that all academics and scientists praise everywhere around the Globe!

EDIT, March 29, 2019. An astounding, one-of-a-kind discovery is being brought to print soon. It’s about a site in North Dakota that, reportedly, has preserved the day of the Chicxulub impact in amazing detail, with tons of fossils of all kinds (flora, mammals, dinosaurs, fish), which seems to place the entire extinction of the dinosaurs in one day, thus favoring the asteroid impact hypothesis. The data is not out yet. Can’t wait till it is! Actually, I’ll have to wait some more after it’s out for the experts to examine it, and then I’ll find out. Until then, check out the story of the discovery here and here.


1. Alvarez LW, Alvarez W, Asaro F, & Michel HV (6 Jun 1980). Extraterrestrial cause for the Cretaceous-Tertiary extinction. Science, 208(4448):1095-1108. PMID: 17783054, DOI: 10.1126/science.208.4448.1095. ABSTRACT | FULLTEXT PDF

2. Archibald JD, Clemens WA, Padian K, Rowe T, Macleod N, Barrett PM, Gale A, Holroyd P, Sues HD, Arens NC, Horner JR, Wilson GP, Goodwin MB, Brochu CA, Lofgren DL, Hurlbert SH, Hartman JH, Eberth DA, Wignall PB, Currie PJ, Weil A, Prasad GV, Dingus L, Courtillot V, Milner A, Milner A, Bajpai S, Ward DJ, & Sahni A. (21 May 2010). Cretaceous extinctions: multiple causes. Science, 328(5981):973; author reply 975-6. PMID: 20489004, DOI: 10.1126/science.328.5981.973-a. FULL REPLY

3. Courtillot V, Besse J, Vandamme D, Montigny R, Jaeger J-J, & Cappetta H (1986). Deccan flood basalts at the Cretaceous/Tertiary boundary? Earth and Planetary Science Letters, 80(3-4), 361–374. doi: 10.1016/0012-821x(86)90118-4. ABSTRACT

4. Esmeray-Senlet, S., Miller, K. G., Sherrell, R. M., Senlet, T., Vellekoop, J., & Brinkhuis, H. (2017). Iridium profiles and delivery across the Cretaceous/Paleogene boundary. Earth and Planetary Science Letters, 457, 117–126. doi:10.1016/j.epsl.2016.10.010. ABSTRACT

5. Hildebrand AR, Penfield GT, Kring DA, Pilkington M, Camargo AZ, Jacobsen SB, & Boynton WV (1 Sept. 1991). Chicxulub Crater: A possible Cretaceous/Tertiary boundary impact crater on the Yucatán Peninsula, Mexico. Geology, 19(9):867-871. DOI: 10.1130/0091-7613(1991)019<0867:CCAPCT>2.3.CO;2. ABSTRACT

6. Kaiho K & Oshima N (9 Nov 2017). Site of asteroid impact changed the history of life on Earth: the low probability of mass extinction. Scientific Reports, 7(1):14855. PMID: 29123110, PMCID: PMC5680197, DOI: 10.1038/s41598-017-14199-x. ARTICLE | FREE FULLTEXT PDF

7. Keller G, Adatte T, Berner Z, Harting M, Baum G, Prauss M, Tantawy A, Stueben D (30 Mar 2007). Chicxulub impact predates K–T boundary: New evidence from Brazos, Texas, Earth and Planetary Science Letters, 255(3–4): 339-356. DOI: 10.1016/j.epsl.2006.12.026. ABSTRACT

8. Keller, G. (2014). Deccan volcanism, the Chicxulub impact, and the end-Cretaceous mass extinction: Coincidence? Cause and effect? Geological Society of America Special Papers, 505:57–89. doi:10.1130/2014.2505(03) ABSTRACT

9. Petersen SV, Dutton A, & Lohmann KC. (5 Jul 2016). End-Cretaceous extinction in Antarctica linked to both Deccan volcanism and meteorite impact via climate change. Nature Communications, 7:12079. PMID: 27377632, PMCID: PMC4935969, DOI: 10.1038/ncomms12079. ARTICLE | FREE FULLTEXT PDF

10. Renne PR, Deino AL, Hilgen FJ, Kuiper KF, Mark DF, Mitchell WS 3rd, Morgan LE, Mundil R, & Smit J (8 Feb 2013). Time scales of critical events around the Cretaceous-Paleogene boundary. Science, 339(6120):684-687. PMID: 23393261, DOI: 10.1126/science.1230492. ABSTRACT

11. Renne PR, Sprain CJ, Richards MA, Self S, Vanderkluysen L, Pande K. (2 Oct 2015). State shift in Deccan volcanism at the Cretaceous-Paleogene boundary, possibly induced by impact. Science, 350(6256):76-8. PMID: 26430116. DOI: 10.1126/science.aac7549 ABSTRACT

12. Schoene B, Samperton KM, Eddy MP, Keller G, Adatte T, Bowring SA, Khadri SFR, & Gertsch B (2014). U-Pb geochronology of the Deccan Traps and relation to the end-Cretaceous mass extinction. Science, 347(6218), 182–184. doi:10.1126/science.aaa0118. ARTICLE

13. Schulte P, Alegret L, Arenillas I, Arz JA, Barton PJ, Bown PR, Bralower TJ, Christeson GL, Claeys P, Cockell CS, Collins GS, Deutsch A, Goldin TJ, Goto K, Grajales-Nishimura JM, Grieve RA, Gulick SP, Johnson KR, Kiessling W, Koeberl C, Kring DA, MacLeod KG, Matsui T, Melosh J, Montanari A, Morgan JV, Neal CR, Nichols DJ, Norris RD, Pierazzo E, Ravizza G, Rebolledo-Vieyra M, Reimold WU, Robin E, Salge T, Speijer RP, Sweet AR, Urrutia-Fucugauchi J, Vajda V, Whalen MT, & Willumsen PS. (5 Mar 2010). The Chicxulub asteroid impact and mass extinction at the Cretaceous-Paleogene boundary. Science, 327(5970):1214-8. PMID: 20203042, DOI: 10.1126/science.1177265. ABSTRACT

14. Tobin TS (24 Nov 2017). Recognition of a likely two phased extinction at the K-Pg boundary in Antarctica. Scientific Reports, 7(1):16317. PMID: 29176556, PMCID: PMC5701184, DOI: 10.1038/s41598-017-16515-x. ARTICLE | FREE FULLTEXT PDF 

15. Vogt, PR (8 Dec 1972). Evidence for Global Synchronism in Mantle Plume Convection and Possible Significance for Geology. Nature, 240(5380), 338–342. doi:10.1038/240338a0 ABSTRACT

The FIRSTS: mRNA from one cell can travel to another cell and be translated there (2006)

I’m interrupting the series on cognitive biases (unskilled-and-unaware, superiority illusion, and depressive realism) to tell you that I admit it, I’m old. -Ish. Well, ok, I’m not that old. But this following paper made me feel that old. Because it invalidates some stuff I thought I knew about molecular cell biology. Mind totally blown.

It all started with a paper freshly published two days ago, which I’ll cover tomorrow. It’s about what the title says: mRNA can travel between cells, packaged nicely in vesicles, and once in a target cell it can be made into protein there. I’ll explain – briefly! – why this is such a mind-blowing thing.

Fig. 1. Illustration of the central dogma of biology: information transfer between DNA, RNA, and protein. Courtesy of Wikipedia, PD

We’ll start with the central dogma of molecular biology (specialists, please bear with me): the DNA is transcribed into RNA and the RNA is translated into protein (see Fig. 1). It is an oversimplification of the complexity of information flow in a biological system, but it’ll do for our purposes.

DNA needs to be transcribed into RNA because RNA is a much more flexible molecule and thus can do many things. So RNA is the traveling mule between the DNA and the place where its information becomes protein, i.e. the ribosome. Hence the name mRNA. Just kidding; the m stands for messenger RNA (not that I will ever be able to call it anything else again: muleRNA is stuck in my brain now).
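If you like your dogmas executable, here is a toy version of the DNA → mRNA → protein flow. The codon table is truncated to the handful of codons my made-up gene uses; real translation involves ~64 codons, ribosomes, tRNAs, and a lot more fuss:

```python
# Toy illustration of the central dogma: DNA -> mRNA -> protein.
# The codon table is truncated to the few codons this made-up gene uses.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(dna: str) -> str:
    """DNA coding strand -> mRNA (T becomes U)."""
    return dna.replace("T", "U")

def translate(mrna: str) -> list:
    """Read the mRNA three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

mrna = transcribe("ATGTTTGGCTAA")
print(mrna)             # AUGUUUGGCUAA
print(translate(mrna))  # ['Met', 'Phe', 'Gly']
```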

There are many kinds of RNA: some don’t even get out of the nucleus, some are chopped and re-glued (alternative splicing), some decide which bits of DNA (genes) are to be expressed, some are busy housekeepers and so on. Once an RNA has finished its business it is degraded in many inventive ways. It cannot leave the cell because it cannot cross the cell membrane. And that was that. Or so I’ve been taught.

Exceptions to the above were viruses, whose ways of going from cell to cell are very clever. A virus is a stretch of nucleic acids (DNA and/or RNA) plus some proteins encapsulated in a blob (capsid). Not a cell!

In the ’90s, several groups were looking at some blobs (yes, most stuff in biology can be defined by the all-encompassing and enlightening term ‘blob’) that cells spew out every now and then. These were termed extracellular vesicles (EVs), for obvious reasons. It turned out that many kinds of cells were doing it, and on a much more regular basis than previously thought. The contents of these EVs varied quite a bit with the type of cells studied: proteins, mostly, and maybe some cytoplasmic debris. In the ’80s it was thought that this was one way for a cell to get rid of trash. But already in 1982, Stegmayr & Ronquist had shown that prostate cells release EVs that increase sperm cell motility (Raposo & Stoorvogel, 2013), so, clearly, the EVs were more than trash. Soon it became evident that EVs were another way of cell-to-cell communication. (Note to self: the first demonstration of intercellular communication by EVs was in 1982, Stegmayr & Ronquist. Maybe I’ll dig out the paper to cover it sometime.)

So. Baj-Krzyworzeka et al. (2006; the paper went online in late 2005) looked at some human cancer cells to see what they spew out and for what purpose. They saw that the cancer cells were transferring some of the tumor proteins, packaged in EVs, to monocytes. For devious purposes, probably. And then they made what looks to me like a serious leap in reasoning: since the EVs contain tumor proteins, why wouldn’t they also contain the mRNA for those proteins? My first answer to that would have been: “because it would be rapidly degraded”. And I would have been wrong. To my credit, if the experiment didn’t take up too many resources I still would have done it, especially if I had some random primers lying around the lab. Luckily for the world, I was not in charge of this particular experiment, and Baj-Krzyworzeka et al. proceeded with real-time PCR (polymerase chain reaction), which showed them that the EVs released by the tumor cells also contained mRNA.

Now the million-dollar, stare-you-in-the-face question was: is this mRNA functional? Meaning, once delivered to the host cell, would it be translated into protein?

Six months later the group answered it. Ratajczak et al. (2006) used embryonic stem cells as the donor cells and hematopoietic progenitor cells as host cells. First, they found that if you let the donors spit EVs at the hosts, the hosts fare much better (better survival, upregulated good genes, phosphorylated MAPK to induce proliferation, etc.). Next, they looked at the contents of the EVs and found that they contained proteins and mRNA that promote those good things (Wnt-3 protein, mRNA for transcription factors, etc.). Next, to make sure that the host cells don’t show this enrichment all of a sudden out of the goodness of their little pluripotent hearts, but rather due to the mRNA from the donor cells, the authors looked at the expression of one of the transcription factors (Oct-4) in the hosts. They used as hosts a cell line (SKL) that does not express the pluripotency marker Oct-4. So if the hosts express this protein, it must have come from outside. Lo and behold, they did. This means that the mRNA carried by the EVs is functional (Fig. 2).

Fig. 2. Cell-to-cell mRNA transfer via extracellular vesicles (EVs). DNA is transcribed into RNA. A portion of the RNA is translated into protein and another portion remains untranslated. Both the resulting protein and the mRNA can get packaged into a vesicle: either into a microvesicle (a budding-off of the cell membrane that shuttles cargo back and forth, about 100-300 nm in size) or into a newly formed exosome (<100 nm) inside a multivesicular endosome (the yellow circle). The cell releases these vesicles into the intercellular space. The vesicles dock onto the host cell’s membrane and empty their cargo.

What bugs me is that these papers came out in a period when I was doing some heavy reading. How did I miss this?! Probably because they were published in cancer journals, not my field. But this is big enough that you’d think others would mention it. (If you’re a recurrent reader of my blog, by now you should be familiar with my stream-of-consciousness writing and my admittedly sometimes annoying in-parenthesis meta-cognitions :D). So how did I miss this? How many more great discoveries have I missed? Am I the only one to discover such fundamental gaps in my knowledge? And thus the imposter syndrome takes root.

Just kidding, I don’t have the imposter syndrome. If anything, I got a superiority illusion complex. And I am absolutely sure that many, many scientists read things they consider fundamental to their way of thinking about the world all the time and wonder what other truly great discoveries are out there already that they missed.

Frankly, I should probably be grateful to this blog – and my friend GT who made me do it – because without nosing outside my field in search of material for it I would probably have remained ignorant of this awesome discovery. So, even if this is a decade-old discovery for you, for me it is one day old and I am a bit giddy about it.

This is a big deal because of the theoretical implications: a cell’s transcriptome (all the mRNA expressed in a cell) varies not only with its own needs, activity, and experiences, but also with its neighbors’! A cell is, more or less, its transcriptome. Soooo… if we can change that at will, does that mean we can change the type or function of the cell too? There are so many questions that such a discovery raises! And possibilities.

This is also a big deal because it opens up not a new therapy, or a new therapy direction, or a new drug class, but a new DELIVERY METHOD, the Holy Grail of Pharmacopeia. You just put your drug in one of these vesicles and let nature take its course. Of course, there are all sorts of roadblocks to overcome, like specificity, toxicity, etc. It looks like some have already been conquered, as there are several clinical trials out there that take advantage of this mechanism, and I bet there will be more.

Stop by tomorrow for a freshly published paper on this mechanism in neurons.

127 - Copy


1) Baj-Krzyworzeka M, Szatanek R, Weglarczyk K, Baran J, Urbanowicz B, Brański P, Ratajczak MZ, & Zembala M. (Jul. 2006, Epub 9 Nov 2005). Tumour-derived microvesicles carry several surface determinants and mRNA of tumour cells and transfer some of these determinants to monocytes. Cancer Immunology, Immunotherapy, 55(7):808-818. PMID: 16283305, DOI: 10.1007/s00262-005-0075-9. ARTICLE

2) Ratajczak J, Miekus K, Kucia M, Zhang J, Reca R, Dvorak P, & Ratajczak MZ (May 2006). Embryonic stem cell-derived microvesicles reprogram hematopoietic progenitors: evidence for horizontal transfer of mRNA and protein delivery. Leukemia, 20(5):847-856. PMID: 16453000, DOI: 10.1038/sj.leu.2404132. ARTICLE | FREE FULLTEXT PDF 


Raposo G & Stoorvogel W. (18 Feb. 2013). Extracellular vesicles: exosomes, microvesicles, and friends. The Journal of Cell Biology, 200(4):373-383. PMID: 23420871, PMCID: PMC3575529, DOI: 10.1083/jcb.201211138. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 13 January 2018

The FIRSTS: the Dunning–Kruger effect (1999) or the unskilled-and-unaware phenomenon

Much talked about these days in the media, the unskilled-and-unaware phenomenon has been mused upon since, as they say, time immemorial, but was not actually seriously investigated until the ’80s. The phenomenon refers to the observation that the incompetent overestimate their competence, whereas the competent tend to underestimate their skill (see Bertrand Russell’s brilliant summary of it).


Although the phenomenon has gained popularity under the name of the “Dunning–Kruger effect”, it is my understanding that whereas the phenomenon refers to the above-mentioned observation, the effect refers to its cause: the very skills required to make one proficient in a domain are the same skills needed to judge proficiency in it. In the words of Kruger & Dunning (1999),

“those with limited knowledge in a domain suffer a dual burden: Not only do they reach mistaken conclusions and make regrettable errors, but their incompetence robs them of the ability to realize it” (p. 1132).

Today’s paper on the Dunning–Kruger effect is the third in the cognitive biases series (the first was on depressive realism and the second on the superiority illusion).

Kruger & Dunning (1999) took a look at incompetence with the eyes of well-trained psychologists. As usual, let’s start by defining the terms so we are on the same page. The authors tell us, albeit in a footnote on p. 1122, that:

1) incompetence is a “matter of degree and not one of absolutes. There is no categorical bright line that separates ‘competent’ individuals from ‘incompetent’ ones. Thus, when we speak of ‘incompetent’ individuals we mean people who are less competent than their peers”.

and 2) The study is on domain-specific incompetents. “We make no claim that they would be incompetent in any other domains, although many a colleague has pulled us aside to tell us a tale of a person they know who is ‘domain-general’ incompetent. Those people may exist, but they are not the focus of this research”.

That being clarified, the authors chose 3 domains where they believe “knowledge, wisdom, or savvy was crucial: humor, logical reasoning, and English grammar” (p.1122). I know that you, just like me, can hardly wait to see how they assessed humor. Hold your horses, we’ll get there.

The subjects were psychology students, the ubiquitous guinea pigs of most psychology studies since the discipline started to be taught in the universities. Some people in the field even declaim with more or less pathos that most psychological findings do not necessarily apply to the general population; instead, they are restricted to the self-selected group of undergrad psych majors. Just as the biologists know far more about the mouse genome and its maladies than about humans’, so do the psychologists know more about the inner workings of the psychology undergrad’s mind than, say, the average stay-at-home mom. But I digress, as usual.

The humor was assessed thusly: students were asked to rate on a scale from 1 to 11 the funniness of 30 jokes. Said jokes had been previously rated by 8 professional comedians, and that provided the reference scale. “Afterward, participants compared their ‘ability to recognize what’s funny’ with that of the average Cornell student by providing a percentile ranking. In this and in all subsequent studies, we explained that percentile rankings could range from 0 (I’m at the very bottom) to 50 (I’m exactly average) to 99 (I’m at the very top)” (p. 1123). Since the social ability to identify humor may be less rigorously amenable to quantification (despite the comedians’ input, which did not achieve high interrater reliability anyway), the authors also chose a task that requires more intellectual muscle: logical reasoning, whose test consisted of 20 logical problems taken from a Law School Admission Test. Afterward, the students estimated both their general logical ability compared to their classmates and their test performance. Finally, another batch of students answered 20 grammar questions taken from the National Teacher Examination preparation guide.

In all three tasks,

  • Everybody thought they were above average, showing the superiority illusion.
  • But the people in the bottom quartile (the lowest 25%), dubbed the incompetent (or unskilled), overestimated their abilities the most, by approx. 50 percentile points. They were also unaware that, in fact, they had scored the lowest.
  • In contrast, people in the top quartile underestimated their competence, though by a much smaller margin, about 10-15 percentile points (see Fig. 1).
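For intuition, here is a toy simulation of the pattern in the bullets above (not the authors' data: the anchor of 45, the slope of 0.3, and the noise level are invented assumptions) showing how a self-assessment that only weakly tracks one's true rank produces this kind of quartile asymmetry:

```python
import random

random.seed(42)

N = 300  # roughly the size of the student samples

# Actual percentile ranks, uniformly spread from 0 to ~99.
actual = [100.0 * i / (N - 1) for i in range(N)]

# Assumed self-assessment model: everyone anchors near "above average"
# and tracks their true rank only weakly (slope 0.3), plus noise.
perceived = [
    min(99.0, max(0.0, 45.0 + 0.3 * a + random.gauss(0.0, 8.0)))
    for a in actual
]

def quartile_means(values, ranks):
    """Mean of `values` within each quartile of `ranks` (lowest first)."""
    paired = sorted(zip(ranks, values))
    q = len(paired) // 4
    chunks = [paired[i * q:(i + 1) * q] for i in range(4)]
    return [sum(v for _, v in c) / len(c) for c in chunks]

actual_q = quartile_means(actual, actual)
perceived_q = quartile_means(perceived, actual)

for i in range(4):
    print(f"Quartile {i + 1}: actual {actual_q[i]:5.1f}, perceived {perceived_q[i]:5.1f}")
```

With these made-up parameters, the bottom quartile sits around the 12th percentile but guesses near the 49th, while the top quartile sits around the 88th but guesses near the 71st. Kruger & Dunning argue that a metacognitive deficit, not merely noisy self-assessment, drives the bottom-quartile miscalibration; this sketch only shows that the quartile plot by itself cannot distinguish the two, which is one reason scatter-plots would have been nice.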


I wish the paper showed scatter-plots with a fitted regression line instead of the quartile graphs without error bars, so I could judge the data for myself. I mean, everybody thought they were above average? Not a single one out of more than three hundred students thought they were kinda… meh? The authors did not find any gender differences in any of the experiments.

Next, the authors tested the hypothesis about the unskilled that “the same incompetence that leads them to make wrong choices also deprives them of the savvy necessary to recognize competence, be it their own or anyone else’s” (p. 1126). And they did that by having both the competent and the incompetent see the answers their peers gave on the tests. Indeed, the incompetent not only failed to recognize competence, but continued to believe they had performed very well in the face of contrary evidence. In contrast, the competent adjusted their ratings after seeing their peers’ performance, so they no longer underestimated themselves. In other words, the competent learned from seeing others’ mistakes, but the incompetent did not.

Based on these data, Kruger & Dunning (1999) argue that the incompetent are so because they lack the skills to recognize competence and error in themselves or in others (jargon: lack of metacognitive skills). The competent, on the other hand, underestimate themselves because they assume everybody else does as well as they did; when shown evidence that other people performed poorly, they become accurate in their self-evaluations (jargon: the false-consensus effect, a.k.a. the social-projection error).

So the obvious implication is: if the incompetent learn to recognize competence, does that also translate into becoming more competent? The last experiment in the paper attempted to answer just that. The authors had 70 students complete a short (10 min) training session in logical reasoning while another 70 students did something unrelated for 10 min. The data showed that the trained students not only improved their self-assessments (though still showing the superiority illusion), but also improved their performance. Yeays all around; all is not lost, there is hope left in the world!

This is an extremely easy read. I totally recommend it to non-specialists. Compare Kruger & Dunning (1999) with Pennycook et al. (2017): both talk about the same subject and both sets of authors are redoubtable personages in their fields. But while the former is a pleasant, leisurely read, the latter lacks mundane operationalizations and requires serious familiarity with the literature and its jargon.

Since Kruger & Dunning (1999) is under the paywall of the infamous APA website (infamous because they don’t even let you see the abstract, and even with institutional access it is difficult to extract the papers out of them, as if they own the darn things!), write to me specifying that you need it for educational purposes and promise not to distribute it for financial gain, and thou shalt have its .pdf. As always. Do not, under any circumstance, use a sci-hub server to obtain this paper illegally! Actually, follow me on Twitter @Neuronicus to find out exactly which servers to avoid.


1) Kruger J, & Dunning D. (Dec. 1999). Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6):1121-1134. PMID: 10626367. ARTICLE

2) Russell, B. (1931-1935). “The Triumph of Stupidity” (10 May 1933), p. 28, in Mortals and Others: American Essays, vol. 2, published in 1998 by Routledge, London and New York, ISBN 0415178665. FREE FULLTEXT By GoogleBooks | FREE FULLTEXT of “The Triumph of Stupidity”

P.S. I personally liked this example from the paper for illustrating what lack of metacognitive skills means:

“The skills that enable one to construct a grammatical sentence are the same skills necessary to recognize a grammatical sentence, and thus are the same skills necessary to determine if a grammatical mistake has been made. In short, the same knowledge that underlies the ability to produce correct judgment is also the knowledge that underlies the ability to recognize correct judgment. To lack the former is to be deficient in the latter” (p. 1121-1122).

By Neuronicus, 10 January 2018

The FIRSTS: The roots of depressive realism (1979)

There is a rumor that depressed people see the world more realistically and the rest of us are, to put it bluntly, deluded optimists. A friend of mine asked me if this is true. It took me a while to find the origins of this claim, but after I found it and figured out that the literature has a term for the phenomenon (‘depressive realism’), I realized that there is a whole plethora of studies on the subject. So the next few posts will be centered, more or less, on the idea of self-deception.

It was 1979 when Alloy & Abramson published a paper whose title contained the phrase ‘Sadder but Wiser’, albeit followed by a question mark. The experiments they conducted are simple, but the theoretical implications are large.

The authors divided several dozen male and female undergraduate students into a depressed group and a non-depressed group based on their Beck Depression Inventory scores (a widely used and validated questionnaire for self-assessing depression). Each subject “made one of two possible responses (pressing a button or not pressing a button) and received one of two possible outcomes (a green light or no green light)” (p. 447). Various conditions gave the subjects various degrees of control over what the button did, from 0 to 100%. After the experiments, the subjects were asked to estimate their control over the green light, how many times the light came on regardless of their behavior, the percentage of trials on which the green light came on when they pressed or didn’t press the button, respectively, and how they felt. In some experiments, the subjects won or lost money when the green light came on.

Verbatim, the findings were that:

“Depressed students’ judgments of contingency were surprisingly accurate in all four experiments. Nondepressed students, on the other hand, overestimated the degree of contingency between their responses and outcomes when noncontingent outcomes were frequent and/or desired and underestimated the degree of contingency when contingent outcomes were undesired” (p. 441).

In plain English, it means that if you are not depressed, when you have some control and bad things are happening, you believe you have no control. And when you have no control but good things are happening, then you believe you have control. If you are depressed, it does not matter, you judge your level of control accurately, regardless of the valence of the outcome.
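The ‘degree of control’ in these experiments can be made precise as a contingency, ΔP: the probability of the light given a press minus the probability of the light given no press. (The ΔP formalization is standard in the contingency-judgment literature; the trial lists below are invented for illustration.)

```python
def delta_p(trials):
    """Objective degree of control: P(light | press) - P(light | no press).

    `trials` is a list of (pressed, light_came_on) boolean pairs.
    """
    with_press = [light for pressed, light in trials if pressed]
    without_press = [light for pressed, light in trials if not pressed]
    return (sum(with_press) / len(with_press)
            - sum(without_press) / len(without_press))

# 100% control: the light comes on exactly when the button is pressed.
full_control = [(True, True), (True, True), (False, False), (False, False)]

# 0% control: the light comes on 75% of the time regardless of pressing.
no_control = [(True, True), (True, True), (True, True), (True, False),
              (False, True), (False, True), (False, True), (False, False)]

print(delta_p(full_control))  # 1.0
print(delta_p(no_control))    # 0.0
```

A nondepressed subject in the no-control condition with frequent wins reports substantial control even though ΔP is zero; the depressed subjects’ estimates tracked ΔP regardless of how often the light came on.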

Such an illusion of control is a defensive mechanism that surely must have adaptive value by, for example, allowing the non-depressed to bypass a sense of guilt when things don’t work out and to boost their self-esteem when things do. This is fascinating, particularly since it is corroborated by findings that people receiving gambling wins or life successes like landing a good job (rewards that, at least in one case, are demonstrably attributable to chance) believe, nonetheless, that they are due to some personal attributes that make them special, that make them deserving of such rewards. (I don’t remember the reference for this one, so don’t quote me on it. If I find it, I’ll post it; it’s something about self-entitlement, I think.) That is not to say that life successes are not largely attributable to the individual; they are. But, statistically speaking, some must be due to chance alone, and yet most people feel like they are the direct agents of changes in luck.

Another interesting point is that Alloy & Abramson also tried to figure out how exactly their subjects reasoned when they assessed their level of control, through some clever post-experiment questionnaires. Long story short (the paper is 45 pages long), the illusion of control shown by nondepressed subjects in the no-control condition was the result of incorrect logic, that is, faulty reasoning.

In summary, the distilled-down version of depressive realism, that non-depressed people see the world through rose-colored glasses, is correct only in certain circumstances, because the illusion of control applies only in particular conditions: overestimation of control when good things are happening and underestimation of control when bad things are happening. But, by and large, it does seem that depression clears the fog a bit.

Of course, it has been almost 40 years since the publication of this paper and of course it has its flaws. Many replications, replications with caveats, meta-analyses, reviews, opinions, and alternative hypotheses have been confirmed and refuted and then confirmed again with alterations, so there is still a debate out there about the causes/functions/ubiquity/circumstantiality of the depressive realism effect. One thing seems to be constant, though: the effect exists.

I will leave you with the ponderings of Alloy & Abramson (1979):

“A crucial question is whether depression itself leads people to be “realistic” or whether realistic people are more vulnerable to depression than other people” (p. 480).


REFERENCE: Alloy LB, & Abramson LY (Dec. 1979). Judgment of contingency in depressed and nondepressed students: sadder but wiser? Journal of Experimental Psychology: General, 108(4): 441-485. PMID: 528910. ARTICLE | FULLTEXT PDF via ResearchGate

By Neuronicus, 30 November 2017

The FIRSTS: Dinosaurs and reputation (1842)

‘Dinosaur’ is a common noun in most languages of the globe and, in its weak sense, it means “extinct, huge, reptile-like animal that lived a long time ago”. The word has been in use for so long that it can also describe something “impractically large, out-of-date, or obsolete” (Merriam-Webster dictionary). “Dinosaur” is a composite of two ancient Greek words (“deinos”, “sauros”) and means “terrible lizard”.

But it turns out that the word hasn’t been in use for that long, just a mere 175 years. Sir Richard Owen, a paleontologist who dabbled in many disciplines, coined the term in 1842. Owen introduced the taxon Dinosauria as if it had always been called thus, no fuss: “The present and concluding part of the Report on British Fossil Reptiles contains an account of the remains of the Crocodilian, Dinosaurian, Lacertian, Pterodactylian, Chelonian, Ophidian and Batrachian reptiles.” (p. 60). Only later in the Report does he give us his paleontological reasons for the baptism, namely some anatomical features that distinguish dinosaurs from crocodiles and other reptiles.

“…The combination of such characters, some, as the sacral ones, altogether peculiar among Reptiles, others borrowed, as it were, from groups now distinct from each other, and all manifested by creatures far surpassing in size the largest of existing reptiles, will, it is presumed, be deemed sufficient ground for establishing a distinct tribe or sub-order of Saurian Reptiles, for which I would propose the name of Dinosauria.” (p.103)

At the time he was presenting this report to the British Association for the Advancement of Science, other giants of biology were running around the same halls, like Charles Darwin and Thomas Henry Huxley. Indisputably, Owen had a keen observational eye and a strong background in comparative anatomy that resulted in hundreds of published works, some of them excellent. That, in addition to establishing the British Museum of Natural History.

Therefore, Owen had reasons to be proud of his accomplishments and secure in his influence and legacy, and yet his contemporaries tell us that he was an absolutely vicious man, spiteful to the point of obsession, vengeful and extremely jealous of other people’s work. Apparently, he would steal the work of the younger people around him, never give credit, lie and cheat at every opportunity, and even write lengthy anonymous letters to various printed media to denigrate his contemporaries. He seemed to love his natal city of Lancaster and his family though (Wessels & Taylor, 2015).

Sir Richard Owen (20 July 1804 – 18 December 1892). PD, courtesy of Wikipedia.

Owen had a particular hate for Darwin. They had been close friends for 20 years and then Darwin published the “Origin of Species”. The book quickly became widely read and talked about and then poof: vitriol and hate. Darwin himself said the only reason he could think of for Owen’s hatred was the popularity of the book.

Various biographies and monographs seem to agree on his unpleasant personality (see his entries in The Telegraph, Encyclopaedia Britannica, BBC). On a side note, should you be concerned about your legacy and have the means to persuade The Times to write you an obituary, by all means, do so. In all 8 pages of the obituary written in 1896 you will not find a single blemish on the portrait of Sir Richard Owen.

This makes me ponder the judgment of history based not on one’s work, but on one’s personality. As I said, the man contributed to science in more ways than just naming the dinosaur and having spats with Darwin. And yet it seems that his accomplishments are somewhat diminished by the way he treated others.

This reminded me of Nicolae Constantin Paulescu, a Romanian scientist who discovered insulin in 1916 (published in 1921). Yes, yes, I know all about the controversy with the Canadians who extracted and purified insulin in 1922 and got the Nobel for it in 1923. Paulescu did the same; even if his “pancreatic extract” from a few years earlier was insufficiently purified, it still successfully lowered blood glucose in dogs. He even obtained a patent for the “fabrication of pancrein” (his name for insulin, because he obtained it from the pancreas) in April 1922 from the Romanian Government (patent no. 6255). The Canadian team was aware of his work, but because it was published in French, they had a poor translation and misunderstood his findings, so, technically, they didn’t steal anything. Or so they say. Feel free to feed the conspiracy mill. I personally don’t know; I haven’t looked at the original work to form an opinion because it is in French and my French is non-existent.

Annnywaaaay, whether or not Paulescu was the first to discover insulin is debatable, but few doubt that he should at least have shared the Nobel.

Rumor has it that Paulescu did not share the Nobel because he was a devout Nazi. His antisemitic writings are remarkably horrifying, even by the standards of the extreme right. That’s also why you won’t hear about him in medical textbooks or at various diabetes associations and gatherings. Yet millions of people worldwide may be alive today because of his work, at least partly.

How should we remember? Just the discoveries and accomplishments with no reference to the people behind them? Is remembering the same as honoring? “Clara cells” were lung cells discovered by the infamous Nazi anatomist Max Clara by dissecting prisoners without consent. They were renamed by the lung community “club cells” in 2013. We cannot get rid of the discovery, but we can rename the cells, so it doesn’t look like we honor him. I completely understand that. And yet I also don’t want to lose important pieces of history because of the atrocities (in the case of Nazis) or unsavory behavior (in the case of Owen) committed by our predecessors. I understand why the International Federation of Diabetes does not wish to give awards in the name of Paulescu or have a Special Paulescu lecture. Perhaps the Romanians should take down his busts and statues, too. But I don’t understand why (medical) history books should exclude him.

In other words, don’t honor the unsavories of history, but don’t forget them either. You never know what we – or the future generations – may learn by looking back at them and their actions.


By Neuronicus, 19 October 2017


1) Owen, R (1842). “Report on British Fossil Reptiles”. Part II. Report of the Eleventh Meeting of the British Association for the Advancement of Science; Held at Plymouth in July 1841. London: John Murray. p. 60–204. Google Books Fulltext 

2) “Eminent persons: Biographies reprinted from the Times, Vol V, 1891–1892 – Sir Richard Owen (Obituary)” (1896). Macmillan & Co., p. 291–299. Google Books Fulltext

3) Wessels Q & Taylor AM (28 Oct 2015). Anecdotes to the life and times of Sir Richard Owen (1804-1892) in Lancaster. Journal of Medical Biography. pii: 0967772015608053. PMID: 26512064, DOI: 10.1177/0967772015608053. ARTICLE

Midichlorians, midichloria, and mitochondria

Nathan Lo is an evolutionary biologist interested in creepy crawlies, i.e. arthropods. Well, he’s Australian, so I guess that comes with the territory (see what I did there?). While postdoc’ing, he and his colleagues published a paper (Sassera et al., 2006) that would seem boring to anybody without an interest in taxonomy, a truly under-appreciated field.

The paper describes a bacterium that is a parasite of the mitochondria of a tick species called Ixodes ricinus, the nasty bugger responsible for transmitting Lyme disease. The authors obtained a female tick from Berlin, Germany, and let it feed on a hamster until it laid eggs. By using genetic sequencing (you can use kits these days to extract the DNA and do PCR, gels, and cloning, pretty much everything), electron microscopy (really powerful microscopes), and phylogenetic analysis (using computer software to see how closely related some species are), the authors came to the conclusion that the parasite they were working on was a new species. So they named it. And below is the full account of the naming, from the horse’s mouth, as it were:

“In accordance with the guidelines of the International Committee of Systematic Bacteriology, unculturable bacteria should be classified as Candidatus (Murray & Stackebrandt, 1995). Thus we propose the name ‘Candidatus Midichloria mitochondrii’ for the novel bacterium. The genus name Midichloria (mi.di.chlo′ria. N.L. fem. n.) is derived from the midichlorians, organisms within the fictional Star Wars universe. Midichlorians are microscopic symbionts that reside within the cells of living things and ‘‘communicate with the Force’’. Star Wars creator George Lucas stated that the idea of the midichlorians is based on endosymbiotic theory. The word ‘midichlorian’ appears to be a blend of the words mitochondrion and chloroplast. The specific epithet, mitochondrii (mi.to.chon′dri.i. N.L. n. mitochondrium -i a mitochondrion; N.L. gen. n. mitochondrii of a mitochondrion), refers to the unique intramitochondrial lifestyle of this bacterium. ‘Candidatus M. mitochondrii’ belongs to the phylum Proteobacteria, to the class Alphaproteobacteria and to the order Rickettsiales. ‘Candidatus M. mitochondrii’ is assigned on the basis of the 16S rRNA (AJ566640) and gyrB gene sequences (AM159536)” (p. 2539).

George Lucas gave his blessing to the Christening (of course he did).


Acknowledgements: Thanks go to Ms. BBD who prevented me from making a fool of myself (this time!) on the social media by pointing out to me that midichloria are real and that they are a mitochondrial parasite.

REFERENCE: Sassera D, Beninati T, Bandi C, Bouman EA, Sacchi L, Fabbi M, Lo N. (Nov. 2006). ‘Candidatus Midichloria mitochondrii’, an endosymbiont of the tick Ixodes ricinus with a unique intramitochondrial lifestyle. International Journal of Systematic and Evolutionary Microbiology, 56(Pt 11): 2535-2540. PMID: 17082386, DOI: 10.1099/ijs.0.64386-0. ABSTRACT | FREE FULLTEXT PDF 

By Neuronicus, 29 July 2017

The FIRSTS: Increase in CO2 levels in the atmosphere results in global warming (1896)

Few people seem to know that although global warming and climate change are hotly debated topics right now (at least on the left side of the Atlantic) the effect of CO2 levels on the planet’s surface temperature was investigated and calculated more than a century ago. CO2 is one of the greenhouse gases responsible for the greenhouse effect, which was discovered by Joseph Fourier in 1824 (the effect, that is).

Let’s start with a terminology clarification. Whereas the term ‘global warming’ was coined by Wallace S. Broecker in 1975, the term ‘climate change’ underwent a more fluid transformation in the ’70s, from ‘inadvertent climate modification’ to ‘climatic change’ to a more consistent use of ‘climate change’ by Jule Charney in 1979, according to NASA. The same source tells us:

“Global warming refers to surface temperature increases, while climate change includes global warming and everything else that increasing greenhouse gas amounts will affect”.

But before NASA there was one Svante August Arrhenius (1859–1927). Dr. Arrhenius was a Swedish physical chemist who received the Nobel Prize in 1903 for uncovering the role of ions in how electrical current is conducted in chemical solutions.

S.A. Arrhenius was the first to quantify the variations of our planet’s surface temperature as a direct result of the amount of CO2 (which he calls carbonic acid, long story) present in the atmosphere. For those – admittedly few – nitpickers that say his views on the greenhouse effect were somewhat simplistic and his calculations were incorrect I’d say cut him a break: he didn’t have the incredible amount of data provided by the satellites or computers, nor the work of thousands of scientists over a century to back him up. Which they do. Kind of. Well, the idea, anyway, not the math. Well, some of the math. Let me explain.

First, let me tell you that I haven’t managed to get past page 3 of the 39 pages of creative mathematics, densely packed tables, parameter assignments, and convoluted assumptions of Arrhenius (1896). Luckily, I convinced a spectroscopist to take a crack at the original paper, since there is a lot of spectroscopy in it, and then enlighten me.

The photo was taken in 1887 and shows (standing, from the left): Walther Nernst (Nobel in Chemistry), Heinrich Streintz, Svante Arrhenius, Richard Hiecke; (sitting, from the left): Eduard Aulinger, Albert von Ettingshausen, Ludwig Boltzmann, Ignaz Klemenčič, Victor Hausmanninger. Source: Universität Graz. License: PD via Wikimedia Commons.

Second, despite his many accomplishments, including being credited with laying the foundations of a new field (physical chemistry), Arrhenius was first and foremost a mathematician. So he employed a lot of tedious mathematics (by hand!), together with some hefty guessing, along with what was known at the time about Earth’s infrared radiation, solar radiation, water vapor and CO2 absorption, the temperature of the Moon, the greenhouse effect, and some uncalibrated spectra taken by his predecessors, to figure out whether “the mean temperature of the ground [was] in any way influenced by the presence of the heat-absorbing gases in the atmosphere” (p. 237). Why was he interested in this? We find out only at page 267, after a lot of the aforesaid dreary mathematics, where he finally shares this with us:

“I should certainly not have undertaken these tedious calculations if an extraordinary interest had not been connected with them. In the Physical Society of Stockholm there have been occasionally very lively discussions on the probable causes of the Ice Age”.

So Arrhenius wanted to find out whether fluctuations in CO2 levels could have caused the Ice Ages. And yes, he thought that could have happened. I don’t know enough about climate science to tell you if this particular conclusion of his holds up today. But what he did manage to accomplish was to provide, for the first time, a way to mathematically calculate the rise in temperature due to a rise in CO2 levels. In other words, he found a direct relationship between variations in CO2 and temperature.

Today, it turns out that his math was incorrect, because he left out some other variables that influence global temperature that were discovered and/or understood later (like the thickness of the atmosphere, the rate of ocean absorption of CO2, and others which I won’t pretend I understand). Nevertheless, Arrhenius was the first to point out the following relationship, which, by and large, is still relevant today:

“Thus if the quantity of carbonic acid increased in geometric progression, the augmentation of the temperature will increase nearly in arithmetic progression” (p. 267).
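In modern notation this is a logarithmic law: equal multiplications of CO2 give equal additions of temperature. Here is a minimal sketch of that relationship (the 5.35 W/m² forcing coefficient and the sensitivity parameter are today's simplified textbook values, not Arrhenius's own numbers, and the real climate response is far more complicated):

```python
import math

F_COEFF = 5.35   # radiative forcing coefficient, W/m^2 (modern simplified fit)
LAMBDA = 0.8     # illustrative climate sensitivity, K per (W/m^2)
C0 = 280.0       # reference (pre-industrial) CO2 concentration, ppm

def warming(c_ppm):
    """Equilibrium temperature rise in K: dT = LAMBDA * F_COEFF * ln(C / C0)."""
    return LAMBDA * F_COEFF * math.log(c_ppm / C0)

# Geometric progression of CO2 -> arithmetic progression of temperature:
# each doubling adds the same increment (~3 K with these parameters).
for c in (280.0, 560.0, 1120.0, 2240.0):
    print(f"{c:6.0f} ppm -> +{warming(c):.2f} K")
```

The loop makes Arrhenius’s sentence concrete: CO2 quadrupling produces exactly twice the warming of a doubling, because the logarithm turns ratios into sums.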


P.S. Technically, Joseph Fourier should be credited with the discovery, in 1824, of global warming by means of increased levels of greenhouse gases in the atmosphere, but Arrhenius quantified it, so I credited him. Feel free to debate :).

REFERENCE: Arrhenius, S. (April 1896). XXXI. On the Influence of Carbonic Acid in the Air upon the Temperature of the Ground. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science (Fifth Series), 41(251): 237-276. doi: 10.1080/14786449608620846. FREE FULLTEXT PDF

By Neuronicus, 24 June 2017

The FIRSTS: Magnolia (1703)

It is April and the Northern Hemisphere is enjoying the sight and smell of blooming magnolias. Fittingly, today is the birthday of the man who described and named the genus. Charles Plumier (20 April 1646 – 20 November 1704) was a French botanist known for describing many plant genera and for preceding Linnaeus in botanical taxonomy. Plumier’s taxonomy was later incorporated by Linnaeus and is still in use today.

Plumier traveled a lot as part of his job as Royal Botanist at the court of Louis XIV. Don’t envy him too much, though, because the monastic order to which he belonged, the Minims, required him to be vegan, living mostly on lentils.

Among the thousands of plants he described was the magnolia, a genus of gorgeous ornamental flowering trees that put out spectacularly big flowers in spring, usually before the leaves come out. Plumier found it on the island of Martinique and named it after Pierre Magnol, a contemporary botanist who invented the concept of the family as a distinct taxonomic category.

Excerpts from pages 38 and 39 and plate 7 of Nova Plantarum Americanarum Genera by Charles Plumier (Paris, 1703), describing the genus Magnolia.

Interestingly enough, Plumier named other plants either after famous botanists, as with fuchsia (Leonhard Fuchs) and lobelia (Matthias de l’Obel), or after people who helped his career, as with begonia (Michel Bégon) and suriana (Joseph Donat Surian), but never after himself. I guess he took the humility tenet of his order seriously. Never fear: the botanists Joseph Pitton de Tournefort and the much more renowned Carl Linnaeus named an entire genus after him, Plumeria.

Of interest to me, as a neuroscientist, is that the bark of the magnolia tree contains magnolol, a natural ligand of the GABA-A receptor.


REFERENCE: Plumier, C. (1703). Nova Plantarum Americanarum Genera. Paris. FULLTEXT courtesy of the Biodiversity Heritage Library

By Neuronicus, 20 April 2017