Chloroquine-induced psychosis

In the past few days, a new hot topic has gripped the attention of various media and concerned medical doctors, as if they didn't have enough to deal with: chloroquine. That is because the President of the U.S.A., Donald Trump, endorsed chloroquine as a treatment for COVID-19, a "game changer", even though his very own director of the National Institute of Allergy and Infectious Diseases (NIAID), Dr. Anthony Fauci, emphatically and vehemently insisted that the promise of (hydroxy)chloroquine is nothing more than anecdotal (see the White House briefing transcript here).


Many medical doctors spoke out urging caution about the drug, particularly about the combination the President endorses: hydroxychloroquine + azithromycin. As I understand it, this combo can be lethal, as it can lead to fatal arrhythmia.

As for (hydroxy)chloroquine's potential to help treat COVID-19, the jury is still out. Far out. Meaning that there have been a few interesting observations of the drugs working in a Petri dish (Liu et al. 2020, Wang et al., 2020), but, as any pharma company knows, there is a long and perilous way from Petri dishes to pharmacies. To be precise, only 1 in 5000 drugs makes it from pre-clinical trials to approval, and this process takes about 12 years (Kraljevic et al., 2004). The time is so long not because of red tape, as some would deplore, but because it takes time to see what a drug does in humans (Phase 0), what doses are safe and don't kill you (Phase 1), whether it works at all for the intended disease (Phase 2), how it compares with other drugs and what its long-term side effects are (Phase 3) and, finally, what the risks and benefits of the drug are (Phase 4). While we could probably get rid of Phases 0 and 4 during such a pandemic, there is no way I would subject my family to anything that hasn't passed Phases 1, 2, and 3. And those take years. With all the money that a nation-state has, it would still take 18 months to do it semi-properly.

Luckily for all of us, chloroquine is a very old and well-established anti-malarial medicine, and as such we can safely dispense with Phases 0, 1, and 4. So we can start Phase 2 with (hydroxy)chloroquine directly. And that is exactly what the WHO and several others are doing right now. But we don't have enough data yet. We haven't done it yet. So one can hope as much as one wants, but that doesn't make it go faster.

Unfortunately – and here we get to the crux of the post – following the President's endorsement, many started to hoard chloroquine. Particularly the rich, who can afford to "convince" an MD to write them a script for it. In countries where chloroquine is sold without prescription, like Nigeria, where it is used for arthritis, people rushed to clear the pharmacies, and some didn't just stockpile it, but took it without reason and without knowing the dosage. And they died. [EDIT, 23 March 2020: If you think that would never happen in the land of the brave, think again, as the first death from irresponsibly taking chloroquine just happened in the USA]. In addition, the chloroquine hoarding in the US by those who can afford it (it is about $200 for 50 pills) led to a lack of supply for those who really need it, like lupus or rheumatoid arthritis patients.

For those who blindly hoard or take chloroquine without prescription, I have a little morsel of knowledge to impart. Remember I am not an MD; I hold a PhD in neuroscience. So I’ll tell you what my field knows about chloroquine.

Both chloroquine and hydroxychloroquine can cause severe psychosis.

That's right. More than 7.1% of people who took chloroquine as prophylaxis or for treatment of malaria developed "mental and neurological manifestations" (Bitta et al., 2017). "Hydroxychloroquine was associated with the highest prevalence of mental neurological manifestations" (p. 12). The phenomenon is well documented, to the point of having its own syndrome name: "chloroquine-induced psychosis". It was observed more than 50 years ago, in 1962 (Mustakallio et al., 1962). The mechanisms are unclear, with several hypotheses having been put forward, such as the drugs disrupting NMDA transmission, calcium homeostasis, vacuole exocytosis, or some other mysterious immune or transport-related mechanism. Because the symptoms are so acute, so persistent, and so diverse, more than one brain neurotransmitter system must be affected.

Chloroquine-induced psychosis has a sudden onset, within 1-2 days of ingestion. The syndrome presents with paranoid ideation, persecutory delusions, hallucinations, fear, confusion, delirium, altered mood, personality changes, irritability, insomnia, suicidal ideation, and violence (Biswas et al., 2014, Mascolo et al., 2018). All this at moderately low or therapeutically recommended doses (Good & Shader, 1982). One or two pills can be lethal in toddlers (Smith & Klein-Schwartz, 2005). The symptoms can persist long after drug ingestion has stopped (Maxwell et al., 2015).

Still want to take it “just in case”?


P.S. A clarification: the chemical difference between hydroxychloroquine and chloroquine is only one hydroxyl group (OH). Both are antimalarials and both have been tested in vitro against COVID-19. There are slight differences between them in terms of toxicity, safety, and even mechanisms, but for the purposes of this post I have treated them as one drug, since both produce psychosis.

REFERENCES:

1) Biswas PS, Sen D, & Majumdar R (2014, Epub 28 Nov 2013). Psychosis following chloroquine ingestion: a 10-year comparative study from a malaria-hyperendemic district of India. General Hospital Psychiatry, 36(2): 181–186. doi: 10.1016/j.genhosppsych.2013.07.012, PMID: 24290896. ARTICLE

2) Bitta MA, Kariuki SM, Mwita C, Gwer S, Mwai L, & Newton CRJC (2 Jun 2017). Antimalarial drugs and the prevalence of mental and neurological manifestations: A systematic review and meta-analysis. Version 2. Wellcome Open Research, 2(13): 1-20. doi: 10.12688/wellcomeopenres.10658.2, PMID: 28630942, PMCID: PMC5473418. ARTICLE | FREE FULLTEXT PDF

3) Good MI & Shader RI (1982). Lethality and behavioral side effects of chloroquine. Journal of Clinical Psychopharmacology, 2(1): 40–47. doi: 10.1097/00004714-198202000-00005, PMID: 7040501. ARTICLE

4) Kraljevic S, Stambrook PJ, & Pavelic K (Sep 2004). Accelerating drug discovery. EMBO Reports, 5(9): 837–842. doi: 10.1038/sj.embor.7400236, PMID: 15470377, PMCID: PMC1299137. ARTICLE | FREE FULLTEXT PDF

5) Mascolo A, Berrino PM, Gareri P, Castagna A, Capuano A, Manzo C, & Berrino L (Oct 2018, Epub 9 Jun 2018). Neuropsychiatric clinical manifestations in elderly patients treated with hydroxychloroquine: a review article. Inflammopharmacology, 26(5): 1141-1149. doi: 10.1007/s10787-018-0498-5, PMID: 29948492. ARTICLE

6) Maxwell NM, Nevin RL, Stahl S, Block J, Shugarts S, Wu AH, Dominy S, Solano-Blanco MA, Kappelman-Culver S, Lee-Messer C, Maldonado J, & Maxwell AJ (Jun 2015, Epub 9 Apr 2015). Prolonged neuropsychiatric effects following management of chloroquine intoxication with psychotropic polypharmacy. Clinical Case Reports, 3(6): 379-87. doi: 10.1002/ccr3.238, PMID: 26185633. ARTICLE | FREE FULLTEXT PDF

7) Mustakallio KK, Putkonen T, & Pihkanen TA (29 Dec 1962). Chloroquine psychosis? Lancet, 2(7270): 1387-1388. doi: 10.1016/s0140-6736(62)91067-x, PMID: 13936884. ARTICLE

8) Smith ER & Klein-Schwartz WJ (May 2005). Are 1-2 dangerous? Chloroquine and hydroxychloroquine exposure in toddlers. The Journal of Emergency Medicine, 28(4): 437-443. doi: 10.1016/j.jemermed.2004.12.011, PMID: 15837026. ARTICLE

Studies about chloroquine and hydroxychloroquine on SARS-CoV-2 in vitro:

  • Gautret P, Lagier J-C, Parola P, Hoang VT, Meddeb L, Mailhe M, Doudier B, Courjon J, Giordanengo V, Esteves Vieira V, Tissot Dupont H, Colson SEP, Chabriere E, La Scola B, Rolain J-M, Brouqui P, & Raoult D (20 March 2020). Hydroxychloroquine and azithromycin as a treatment of COVID-19: results of an open-label non-randomized clinical trial. International Journal of Antimicrobial Agents, PII: S0924-8579(20)30099-6, doi: 10.1016/j.ijantimicag.2020.105949. ARTICLE | FREE FULLTEXT PDF

These studies are also not peer-reviewed, or at the very least not properly peer-reviewed. I say that so you take them with a grain of salt, not to criticize them in the slightest, because I do commend the speed with which these were done and published given the pandemic. Bravo to all the authors involved (except maybe the last one, if it proves to be fraudulent). And also a thumbs up to the journals which made the data freely available in record time. Unfortunately, from these papers to a treatment we still have a long way to go.

By Neuronicus, 22 March 2020

Education raises intelligence

Intelligence is a dubious concept in psychology and biology because it is difficult to define. In any science, something has a workable definition when it is described by unique testable operations or observations. But "intelligence" has eluded that workable definition, having gone through multiple transformations in the past hundred years or so, perhaps more than any other psychological construct (except "mind"). Despite Binet's claim more than a century ago that there is such a thing as IQ and that he had a way to test for it, many psychologists and, to a lesser extent, neuroscientists are still trying to figure out what it is. Neuroscientists to a lesser extent because, once the field as a whole could not agree upon a good definition, it moved on to things it could agree upon, i.e. executive functions.

Of course, I generalize trends to entire disciplines and I shouldn't; not all psychology has a problem with operationalizations and replicability, just as not all neuroscientists are paragons of clarity and good science. In fact, intelligence research seems to be rather vibrant, judging by the number of publications. Who knows, maybe the psychologists have reached a consensus about what the thing is. I haven't truly kept up with the IQ research, partly because I think the tests used for assessing it are flawed (therefore you don't know what exactly you are measuring) and tailored for a small segment of the population (Western society, culturally embedded, English-language conceptualizations, etc.) and partly because of the circularity of the definitions (e.g., How do I know you are highly intelligent? You scored well on IQ tests. What is IQ? Whatever the IQ tests measure).

But the final nail in the coffin of intelligence research for me was a very popular definition by Legg & Hutter in 2007: intelligence is "the ability to achieve goals". So the poor, sick, and unlucky are just dumb? I find this definition incredibly insulting to the sheer diversity within the human species. It is also blatantly discriminatory, particularly towards the poor, whose lack of options, access to good education, or even a plain healthy meal puts a serious brake on goal achievement. Conversely, there are people who want for nothing, having been born into opulence and fame, but whose intellectual prowess seems to be lacking, to put it mildly, and who owe their "goal achievement" to an accident of birth or circumstance. The fact that this definition is so widely accepted in human research soured me on the entire field. But I'm hopeful that researchers will abandon this definition, which is more suited to computer programs than to human beings; after all, paradigm shifts happen all the time.

In contrast, executive functions are more clearly defined. The one I like the most is that given by Banich (2009): “the set of abilities required to effortfully guide behavior toward a goal”. Not to achieve a goal, but to work toward a goal. With effort. Big difference.

So what are those abilities? As I said in the previous post, there are three core executive functions: inhibition/control (both behavioral and cognitive), working memory (the ability to temporarily hold information active), and cognitive flexibility (the ability to think about and switch between two different concepts simultaneously). From these three core executive functions, higher-order executive functions are built, such as reasoning (critical thinking), problem solving (decision-making) and planning.

Now I might have left you with the impression that intelligence = executive functioning, and that wouldn't be true. There is a clear correspondence between executive functioning and intelligence, but it is not a perfect correspondence, and many a paper (and a book or two) has been written to parse out which is which. For me, the most compelling argument that executive functions and whatever it is that the IQ tests measure are at least partly distinct is that brain lesions that affect one may not affect the other. It is beyond the scope of this blogpost to analyze the differences and similarities between intelligence and executive functions. But to clear up just a bit of the confusion I will make this broad statement: executive functions are the foundation of intelligence.

There is another qualm I have with the psychological research into intelligence: a large number of psychologists believe intelligence is a fixed value. In other words, you are born with a certain amount of it and that's it. It may vary a bit, depending on your life experiences, either increasing or decreasing the IQ, but by and large you stay in the same ballpark. In contrast, most neuroscientists believe all executive functions can be drastically improved with training. All of them.

After this much semi-coherent rambling, here is the actual crux of the post: intelligence can be trained too. Or I should say the IQ can be raised with training. Ritchie & Tucker-Drob (2018) performed a meta-analysis looking at over 600,000 healthy participants’ IQ and their education. They confirmed a previously known observation that people who score higher at IQ tests complete more years of education. But why? Is it because highly intelligent people like to learn or because longer education increases IQ? After carefully and statistically analyzing 42 studies on the subject, the authors conclude that the more educated you are, the more intelligent you become. How much more? About 1 to 5 IQ points per 1 additional year of education, to be precise. Moreover, this effect persists for a lifetime; the gain in intelligence does not diminish with the passage of time or after exiting school.

This is a good paper; its conclusions are statistically robust and consistent. Anybody can check it out, as this article is an open-access paper, meaning that not only the text but also the entire raw data, the methods, everything about it is freely available to everybody.


For me, the conclusion is inescapable: if you think that we, as a society, or you, as an individual, would benefit from having more intelligent people around you, then you should support free access to good education. Not exactly where you thought I was going with this, eh ;)?

REFERENCE: Ritchie SJ & Tucker-Drob EM. (Aug, 2018, Epub 18 Jun 2018). How Much Does Education Improve Intelligence? A Meta-Analysis. Psychological Science, 29(8):1358-1369. PMID: 29911926, PMCID: PMC6088505, DOI: 10.1177/0956797618774253. ARTICLE | FREE FULLTEXT PDF | SUPPLEMENTAL DATA  | Data, codebooks, scripts (Mplus and R), outputs

Nota bene: I'd been asked what that "1 additional year" of education means. Is it that with every year of education you gain up to 5 IQ points? No, not quite. Assuming I started with a normal IQ, then I'd be… 26 years of education (not counting the postdoc) multiplied by, let's say, 3 IQ points makes me 178. Not bad, not bad at all. :))). No, what the authors mean is that they had access to, among other datasets, a huge cohort dataset from Norway from the moment when the country increased compulsory education by 2 years. So the researchers could look at the IQ tests of people before and after the policy change, tests which were administered to all males at the same age, when they entered compulsory military service. There they saw an increase of 1 to 5 IQ points per each extra year of education.
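Just to show where my tongue-in-cheek number comes from, here is the naive (and, as explained above, wrong) cumulative reading of the finding as a two-line sketch; the 100-point baseline and the 3-points-per-year figure are assumptions picked for the joke, not values from the paper:

```python
# Naive (wrong) reading: every extra year of education stacks IQ points forever.
baseline_iq = 100        # assumed population-average starting point
years_of_education = 26  # from the paragraph above
points_per_year = 3      # a middle value of the reported 1-5 range

naive_iq = baseline_iq + years_of_education * points_per_year
print(naive_iq)  # 178, the joke number above
```

The correct reading, again, is a one-time gain tied to each extra year of schooling actually completed, not an open-ended accumulator.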

By Neuronicus, 14 July 2019

High fructose corn syrup IS bad for you

Because I cannot leave controversial things well enough alone – at least not when I know there shouldn't be any controversy – my ears caught up with my tongue yesterday when the latter sputtered: "There is strong evidence for eliminating sugar from commonly used food products like bread, cereal, cans, drinks, and so on, particularly against that awful high fructose corn syrup". "Yeah? You "researched" that up, haven't you? Google is your bosom friend, ain't it?" was the swift reply. Well, if you get rid of the ultra-emphatic air-quotes flanking the word 'researched' and replace 'Google' with 'Pubmed', then, yes, I did research it, and yes, Pubmed is my bosom friend.

Initially, I wanted to just give you all a list of peer-reviewed papers that found causal and/or correlational links between high fructose corn syrup (HFCS) and weight gain, obesity, type 2 diabetes, cardiovascular disease, fatty liver disease, metabolic and endocrine anomalies, and so on. But there are way too many of them; there are over 500 papers on the subject in Pubmed alone. And most of them did find that HFCS does nasty stuff to you; look for yourselves here. Then I thought to feature a paper showing that HFCS is metabolized differently from the fructose in fruits, because I keep hearing that lie perpetuated by the sugar and corn industries that "sugar is sugar" (no, it's not! Demonstrably so!), but I doubt my yesterday's interlocutor would care about the liver's enzymatic activity and other chemical processes with lots of acronyms. So, finally, I decided to feature a straightforward, no-nonsense paper, published recently, done at a top-tier university, with human subjects, so I won't hear any squabbles.

Price et al. (2018) studied 49 healthy subjects aged 18–40 years, of normal and stable body weight, free from confounding medications or drugs, and whose physical activity and energy-balanced meals were closely monitored. During the study, the subjects' food and drink intake, as well as their timing, were rigorously controlled. The researchers varied only the beverages between groups: one group received, with every controlled meal, a drink sweetened with HFCS-55 (55% fructose, 45% glucose, as used in commercially available drinks), adjusted to each subject's energy requirements so that it provided 25% of them, whereas the other group received a drink identical in size but sweetened with aspartame. The study lasted two weeks. No other beverage was allowed, including fruit juice. Urine samples were collected daily and blood samples 4 times per day.

There was a body weight increase of 810 grams (1.8 lb) in the subjects consuming HFCS-sweetened beverages for 2 weeks when compared with the aspartame controls. The researchers also found differences in the levels of a whole host of acronyms (ppTG, ApoCIII, ApoE, OEA, DHEA, DHG, if you must know) involved in a variety of nasty things, like obesity, fatty liver disease, atherosclerosis, cardiovascular disease, stroke, diabetes, even Alzheimer's.
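For the unit-suspicious among you, the grams-to-pounds figure quoted above checks out; a quick sketch (453.592 g per pound is the standard conversion factor):

```python
# Sanity-check the unit conversion reported in the paper: 810 g to pounds.
GRAMS_PER_POUND = 453.592

weight_gain_g = 810
weight_gain_lb = weight_gain_g / GRAMS_PER_POUND
print(round(weight_gain_lb, 1))  # 1.8, matching the figure quoted above
```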

This study is the third part of a larger NIH-funded study which investigates the metabolic effects of consuming sugar-sweetened beverages in about 200 participants over 5 years, registered at clinicaltrials.gov as NCT01103921. The first part (Stanhope et al., 2009) reported that "consuming fructose-sweetened, not glucose-sweetened, beverages increases visceral adiposity and lipids and decreases insulin sensitivity in overweight/obese humans" (title), and the second part (Stanhope et al., 2015) found that "consuming beverages containing 10%, 17.5%, or 25% of energy requirements from HFCS produced dose-dependent increases in circulating lipid/lipoprotein risk factors for cardiovascular disease and uric acid within 2 weeks" (Abstract). They also found a dose-dependent increase in body weight, but in those subjects the result was not statistically significant (p = 0.09) after correcting for multiple comparisons. But I'll bet that if/when the authors publish all the data in one paper at the end of the clinical trial, they will have more statistical power and the trend in weight gain will be as obvious as in the present paper. Besides, it looks like there may be more than three parts to this study anyway.

The adverse effects of a high-sugar diet, particularly one high in HFCS, are known to so many researchers in the field that they have actually been compiled under a name: the "American Lifestyle-Induced Obesity Syndrome model, which included consumption of a high-fructose corn syrup in amounts relevant to that consumed by some Americans" (Basaranoglu et al., 2013). It doesn't refer only to increases in body weight, but also to type 2 diabetes, cardiovascular disease, hypertriglyceridemia, fatty liver disease, atherosclerosis, gout, etc.

The truly sad part is that avoiding added sugars in diets in the USA is impossible unless you do all – and I mean all – your cooking at home, including canning, jam-making, bread-making, condiment-making and so on. Not just "Oh, I'll cook some chicken or ham tonight", because in that case you end up using canned tomato sauce (which has added sugar), bread crumbs (which have added sugar), ham (which has added sugar), salad dressing (which has added sugar) and so on. Go on, check your kitchen and see how many ingredients have sugar in them, including any meat products short of raw meat. If you never read the backs of the bottles, cans, or packages, oh my, are you in for a big surprise if you live in the USA…

There are a lot more studies out there on the subject, as I said, of various levels of reading difficulty. This paper is not easy to read for someone outside the field, that's for sure. But the main gist of it is in the abstract, for all to see.


P.S. 1. Please don't get me wrong: I am not against sugar in desserts, let that be clear. Nobody makes a meaner sweetalicious chocolate cake or carbolicious blueberry muffin than me, as I have been reassured many times. But I am against sugar in everything. You know I haven't found, in any store, including high-end and really high-end stores, a single box of cereal of any kind without sugar? Just for fun, I'd like to be a daredevil and try it once. But there ain't any. Not in the USA, anyway. I did find them in the EU though. But I cannot keep flying across the Atlantic, in already-crammed and at-a-premium luggage space, unsweetened corn flakes from Europe which are probably made locally, incidentally and ironically, with good old American corn.

P.S. 2. I am not so naive, blind, or zealous as to overlook the studies that did not find any deleterious effects of HFCS consumption, or the few papers that say all added sugar is bad but HFCS doesn't stand out from the other sugars when it comes to disease or weight gain. Actually, I was on the fence about HFCS until about 10 years ago, when the majority of papers (now the overwhelming majority) started showing that HFCS consumption not only increases weight gain, but can also lead to more serious problems like the ones mentioned above. But, like with most scientific things, the majority has its way and I bow to it democratically until the next paradigm shift. Besides, the exposés of Kearns et al. (2016a, b, 2017) showing in detail and with serious documentation how the sugar industry paid prominent researchers over the past 50 years to hide the deleterious effects of added sugar (including cancer!) further cemented my opinion about added sugar in foods, particularly HFCS.

References:

  1. Price CA, Argueta DA, Medici V, Bremer AA, Lee V, Nunez MV, Chen GX, Keim NL, Havel PJ, Stanhope KL, & DiPatrizio NV (1 Aug 2018, Epub 10 Apr 2018). Plasma fatty acid ethanolamides are associated with postprandial triglycerides, ApoCIII, and ApoE in humans consuming a high-fructose corn syrup-sweetened beverage. American Journal of Physiology. Endocrinology and Metabolism, 315(2): E141-E149. PMID: 29634315, PMCID: PMC6335011, DOI: 10.1152/ajpendo.00406.2017. ARTICLE | FREE FULLTEXT PDF
  2. Stanhope KL, Medici V, Bremer AA, Lee V, Lam HD, Nunez MV, Chen GX, Keim NL, & Havel PJ (Jun 2015, Epub 22 Apr 2015). A dose-response study of consuming high-fructose corn syrup-sweetened beverages on lipid/lipoprotein risk factors for cardiovascular disease in young adults. The American Journal of Clinical Nutrition, 101(6): 1144-54. PMID: 25904601, PMCID: PMC4441807, DOI: 10.3945/ajcn.114.100461. ARTICLE | FREE FULLTEXT PDF
  3. Stanhope KL, Schwarz JM, Keim NL, Griffen SC, Bremer AA, Graham JL, Hatcher B, Cox CL, Dyachenko A, Zhang W, McGahan JP, Seibert A, Krauss RM, Chiu S, Schaefer EJ, Ai M, Otokozawa S, Nakajima K, Nakano T, Beysen C, Hellerstein MK, Berglund L, & Havel PJ (May 2009, Epub 20 Apr 2009). Consuming fructose-sweetened, not glucose-sweetened, beverages increases visceral adiposity and lipids and decreases insulin sensitivity in overweight/obese humans. The Journal of Clinical Investigation, 119(5): 1322-34. PMID: 19381015, PMCID: PMC2673878, DOI: 10.1172/JCI37385. ARTICLE | FREE FULLTEXT PDF

(Very) Selected Bibliography:

Bocarsly ME, Powell ES, Avena NM, Hoebel BG. (Nov 2010, Epub 26 Feb 2010). High-fructose corn syrup causes characteristics of obesity in rats: increased body weight, body fat and triglyceride levels. Pharmacology, Biochemistry, and Behavior, 97(1):101-6. PMID: 20219526, PMCID: PMC3522469, DOI: 10.1016/j.pbb.2010.02.012. ARTICLE | FREE FULLTEXT PDF

Kearns CE, Apollonio D, Glantz SA (21 Nov 2017). Sugar industry sponsorship of germ-free rodent studies linking sucrose to hyperlipidemia and cancer: An historical analysis of internal documents. PLoS Biology, 15(11):e2003460. PMID: 29161267, PMCID: PMC5697802, DOI: 10.1371/journal.pbio.2003460. ARTICLE | FREE FULTEXT PDF

Kearns CE, Schmidt LA, Glantz SA (1 Nov 2016). Sugar Industry and Coronary Heart Disease Research: A Historical Analysis of Internal Industry Documents. JAMA Internal Medicine, 176(11):1680-1685. PMID: 27617709, PMCID: PMC5099084, DOI: 10.1001/jamainternmed.2016.5394. ARTICLE | FREE FULTEXT PDF

Mandrioli D, Kearns CE, Bero LA (8 Sep 2016). Relationship between Research Outcomes and Risk of Bias, Study Sponsorship, and Author Financial Conflicts of Interest in Reviews of the Effects of Artificially Sweetened Beverages on Weight Outcomes: A Systematic Review of Reviews. PLoS One, 11(9): e0162198. PMID: 27606602, PMCID: PMC5015869, DOI: 10.1371/journal.pone.0162198. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 22 March 2019

Milk-producing spider

In biology, organizing living things into categories is called taxonomy. Such categories are established based on shared characteristics of their members, which historically were usually visual attributes. For example, a red-footed booby (it's a bird, silly!) is obviously different from a blue-footed booby, so we put them in different categories, which Aristotle called in Greek something like species.

Biological taxonomy is very useful, not only for providing countless hours of fighting (both verbal and physical!) for biologists, but for informing us of all sorts of unexpected relationships between living things. These relationships, in turn, can give us insights into our own evolution, but also into the evolution of things inimical to us, like diseases, and, perhaps, their cures. Also extremely important, it allows scientists from all over the world to have a common language, thus maximizing information sharing and minimizing misunderstandings.


All well and good. And it was all well and good since Carl Linnaeus introduced his famous taxonomy system in the 18th century, the one we still use today, with species, genus, family, order, and kingdom. Then we figured out how to map the DNA of the things around us, and this information threw a lot of Linnaean classifications out the window. Because it turns out that some things that look similar are not genetically similar; likewise, some living things that we thought were very different from one another turned out, genetically speaking, to be not so different.

You will say, then, alright, out with visual taxonomy, in with phylogenetic taxonomy. This would be absolutely peachy for a minority of the organisms on the planet, like animals and plants, but a nightmare for the more promiscuous organisms that have no problem swapping bits of DNA back and forth, like some bacteria, so you don't know who's who anymore. And don't even get me started on viruses, which we are still trying to figure out whether or not they are alive in the first place.

When I grew up there were 5 regna or kingdoms in our tree of life – Monera, Protista, Fungi, Plantae, Animalia – each with very distinctive characteristics. Likewise, the class Mammalia from the Animal Kingdom was characterized by the females feeding their offspring with milk from mammary glands. Period. No confusion. But now I have no idea – nor do many other biologists, rest assured – how many domains or kingdoms or empires we have, nor even what the definition of a species is anymore!

As if that weren't enough, even those Linnaean characteristics that we thought were set in stone turn out to be amenable to change. Which is good; it shows the progress of science. But I didn't think that something like the definition of a mammal would change. Mammals are organisms whose females feed their offspring with milk from mammary glands, as I vouchsafed above. Pretty straightforward. And not spiders. Let me be clear on this: spiders did not feature in my – or anyone's! – definition of mammals.

Until Chen et al. (2018) published their weird article a couple of weeks ago. The abstract is free for all to see and states that the females of a jumping spider species feed their young with milk secreted by their bodies until the age of subadulthood, and that mothers continue to offer parental care past the maturity threshold. The milk is necessary for the spiderlings, because without it they die. That's all.

I read the whole paper, since it was only 4 pages, and here are some more details about the discovery. The species of spider they looked at is Toxeus magnus, a jumping spider that looks like an ant. The mother produces milk from her epigastric furrow and deposits it on the nest floor and walls, from where the spiderlings ingest it (days 0-7). After this first week, the spiderlings suck the milk directly from the mother's body and continue to do so for the next two weeks (days 7-20), when they start leaving the nest to forage for themselves. But they return, and for the next period (days 20-40) they get their food both from the mother's milk and from independent foraging. The spiderlings are weaned by day 40, but they still come home to sleep at night. At day 52 they are officially considered adults. Interestingly, "although the mother apparently treated all juveniles the same, only daughters were allowed to return to the breeding nest after sexual maturity. Adult sons were attacked if they tried to return. This may reduce inbreeding depression, which is considered to be a major selective agent for the evolution of mating systems" (p. 1053).

During all this time, including during the offspring's emergence into adulthood, the mother also provided house maintenance, carrying out her children's exuviae (shed exoskeletons) and repairing the nest.

The authors then did a series of experiments to see what role the nursing and other maternal care at different stages play in the fitness and survival of the offspring. Blocking the mother's milk production with correction fluid immediately after hatching killed all the spiderlings, showing that they are completely dependent on the mother's milk. Removing the mother after the spiderlings start foraging (day 20) drastically reduced survivorship and body size, showing that the mother's care is essential for her offspring's success. Moreover, the mother taking care of the nest and keeping it clean reduced the occurrence of parasite infections in the juveniles.

The authors analyzed the milk and it’s highly nutritious: “spider milk total sugar content was 2.0 mg/ml, total fat 5.3 mg/ml, and total protein 123.9 mg/ml, with the protein content around four times that of cow’s milk (p. 1053)”.
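Out of curiosity, the "around four times that of cow's milk" claim checks out with back-of-envelope math. A minimal sketch, assuming a typical protein content for whole cow's milk of ~33 mg/ml (my assumption, not a number from the paper):

```python
# Sanity check of the ~4x protein claim in Chen et al. (2018).
spider_protein = 123.9  # mg/ml, total protein of spider milk (from the paper)
cow_protein = 33.0      # mg/ml, typical whole cow's milk (assumed value)

ratio = spider_protein / cow_protein
print(f"Spider milk has ~{ratio:.1f}x the protein of cow's milk")  # ~3.8x
```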

Speechless I am. Good for the spider, I guess. Spider milk will have exorbitant costs (apparently, a slight finger pressure on the milk-secreting region makes the mother spider secrete the milk, not at all unlike the human mother). Spiderlings die without the mother's milk. Responsible farming? Spider milker qualifications? I'm gonna go lie down, I got a headache.


REFERENCE: Chen Z, Corlett RT, Jiao X, Liu SJ, Charles-Dominique T, Zhang S, Li H, Lai R, Long C, & Quan RC (30 Nov. 2018). Prolonged milk provisioning in a jumping spider. Science, 362(6418):1052-1055. PMID: 30498127, DOI: 10.1126/science.aat3692. ARTICLE | Supplemental info (check out the videos)

By Neuronicus, 13 December 2018

Pic of the day: Dopamine from a non-dopamine place


Reference: Beas BS, Wright BJ, Skirzewski M, Leng Y, Hyun JH, Koita O, Ringelberg N, Kwon HB, Buonanno A, & Penzo MA (Jul 2018, Epub 18 Jun 2018). The locus coeruleus drives disinhibition in the midline thalamus via a dopaminergic mechanism. Nature Neuroscience, 21(7):963-973. PMID: 29915192, PMCID: PMC6035776 [Available on 2018-12-18], DOI: 10.1038/s41593-018-0167-4. ARTICLE

Pooping Legos

Yeah, alright… uhm… how exactly should I approach this paper? I’d better just dive into it (oh boy! I shouldn’t have said that).

The authors of this paper were adult health-care professionals in the pediatric field. These three males and three females were also the participants in the study. They kept a poop diary noting the frequency and volume of bowel movements (Did they poop directly on a scale or did they have to scoop it out in a bag?). The researchers/subjects developed a Stool Hardness and Transit (SHAT) metric to… um… "standardize bowel habit between participants" (p. 1). In other words, to put the participants' bowel movements on the same level (please, no need to visualize, I am still stuck at the poop-on-a-scale phase), the authors looked – quite literally – at the consistency of the poop and gave it a rating. I wonder if they checked for inter-rater reliability… meaning did they check each other's poops?…

Then the researchers/subjects ingested a Lego figurine head, on purpose, somewhere between 7 and 9 a.m., and timed how long it took to exit. The average FART score (Found and Retrieved Time) was 1.71 days. "There was some evidence that females may be more accomplished at searching through their stools than males, but this could not be statistically validated" due to the small sample size, if not the poops'. It took 1 to 3 stools for the object to be found, although poor subject B had to search through his 13 stools over a period of 2 weeks to no avail. I suppose that's what you get if you miss the target, even if you have a PhD.

The pre-SHAT and SHAT score of the participants did not differ, suggesting that the Lego head did not alter the poop consistency (I got nothin’ here; the authors’ acronyms are sufficient scatological allusion). From a statistical standpoint, the one who couldn’t find his head in his poop (!) should not have been included in the pre-SHAT score group. Serves him right.

I wonder how they searched through the poop… A knife? A sieve? A squashing spatula? Gloved hands? Were they floaters or did the poop sink to the base of the toilet? Then how was it retrieved? Did the researchers have to poop in a bucket so no loss of data would occur? Upon direct experimentation 1 minute ago, I vouchsafe that a Lego head is completely buoyant. Would that affect the floatability of the stool in question? That's what I'd like to know. Although, to be fair, no, that's not what I want to know; what I desire the most is a far larger sample size so some serious stats can be conducted. With different Lego parts. So they can poop bricks. Or, as suggested by the authors, "one study arm including swallowing a Lego figurine holding a coin" (p. 3) so one can draw parallels between Lego ingestion and coin ingestion research, the latter being, apparently, far more prevalent. So many questions that still need to be answered! More research is needed, if only grants were as… regular as the raw data.

The paper, albeit short and to the point, fills a gap in our scatological knowledge database (Oh dear Lord, stop me!). The aim of the paper was to show that objects ingested by children tend to pass without a problem. Also of value, the paper asks pediatricians to counsel parents not to search for the object in the faeces to prove its retrieval because "if an experienced clinician with a PhD is unable to adequately find objects in their own stool, it seems clear that we should not be expecting parents to do so" (p. 3). Seems fair.


REFERENCE: Tagg, A., Roland, D., Leo, G. S., Knight, K., Goldstein, H., Davis, T. and Don’t Forget The Bubbles (22 November 2018). Everything is awesome: Don’t forget the Lego. Journal of Paediatrics and Child Health, doi: 10.1111/jpc.14309. ARTICLE

By Neuronicus, 27 November 2018

Apathy

Le Heron et al. (2018) define apathy as a marked reduction in goal-directed behavior. But in order to move, one must be motivated to do so. Therefore, a generalized form of impaired motivation also hallmarks apathy.

The authors compiled for us a nice mini-review combing through the literature on motivation in order to identify, if possible, the neurobiological mechanism(s) of apathy. First, they go very succinctly through the neuroscience of motivated behavior. Very succinctly, because there are literally hundreds of thousands of worthwhile pages out there on this subject. Although several other models have been proposed out there, the authors' new model of motivation includes the usual suspects (dopamine, striatum, prefrontal cortex, anterior cingulate cortex) and you can see it in Fig. 1.

Fig. 1 from Le Heron et al. (2018). The red underlining is mine because I really liked how well and succinctly the authors put a universal truth about the brain: “A single brain region likely contributes to more than one process, but with specialisation”. © Author(s) (or their employer(s)) 2018.

After this intro, the authors go on to showcase findings from the effort-based decision-making field, which suggest that the dopamine-producing neurons from the ventral tegmental area (VTA) are fundamental in choosing an action that requires high effort for high reward versus low effort for low reward. Contrary to what Wikipedia tells you, a reduction, not an increase, in mesolimbic dopamine is associated with apathy, i.e. preferring a low-effort for low-reward activity.

Next, the authors focus on why the apathetic are… apathetic. Basically, they asked the question: "For the apathetic, is the reward too little or is the effort too high?" By looking at some cleverly designed experiments destined to parse out sensitivity to reward versus sensitivity to effort costs, the authors conclude that the apathetics' problem is indeed with the reward: they don't find the rewards good enough for them to move. Therefore, the answer is that the reward is too little.

In a nutshell, apathetic people think “It’s not worth it, so I’m not willing to put in the effort to get it”. But if somehow they are made to judge the reward as good enough, to think “it’s worth it”, they are willing to work their darndest to get it, like everybody else.

The application of this is that in order to get people off the couch and do stuff you have to present them a reward that they consider worth moving for, in other words to motivate them. To which any practicing psychologist or counselor would say: “Duh! We’ve been saying that for ages. Glad that neuroscience finally caught up”.  Because it’s easy to say people need to get motivated, but much much harder to figure out how.

This was a difficult write for me and even I recognize the quality of this blogpost as crappy. That’s because, more or less, this paper is within my narrow specialization field. There are points where I disagree with the authors (some definitions of terms), there are points where things are way more nuanced than presented (dopamine findings in reward), and finally there are personal preferences (the interpretation of data from Parkinson’s disease studies). Plus, Salamone (the second-to-last author) is a big name in dopamine research, meaning I’m familiar with his past 20 years or so worth of publications, so I can infer certain salient implications (one dopamine hypothesis is about saliency, get it?).

It’s an interesting paper, but it’s definitely written for the specialist. Hurray (or boo, whatever would be your preference) for another model of dopamine function(s).

REFERENCE: Le Heron C, Holroyd CB, Salamone J, & Husain M (26 Oct 2018, Epub ahead of print). Brain mechanisms underlying apathy. Journal of Neurology, Neurosurgery & Psychiatry. pii: jnnp-2018-318265. doi: 10.1136/jnnp-2018-318265. PMID: 30366958 ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 24 November 2018

No licorice for you

I never liked licorice. And that turns out to be a good thing. Given that Halloween happened just yesterday and licorice candy is still sold in the USA, I remembered the FDA's warning against consumption of licorice from a year ago.

So I dug out the data supporting this recommendation. It’s a review paper published 6 years ago by Omar et al. (2012) meant to raise awareness of the risks of licorice consumption and to urge FDA to take regulatory steps.

The active ingredient in licorice is glycyrrhizic acid. This is hydrolyzed to glycyrrhetic acid by intestinal bacteria possessing a specialized ß-glucuronidase. Glycyrrhetic acid, in turn, inhibits 11-ß-hydroxysteroid dehydrogenase (11-ß-HSD), which results in increased cortisol activity; cortisol then binds to the mineralocorticoid receptors in the kidneys, leading to low potassium levels (a condition called hypokalemia). Additionally, licorice components can also bind directly to the mineralocorticoid receptors.

Eating 2 ounces of black licorice a day for at least two weeks (which is roughly equivalent to 2 mg/kg/day of pure glycyrrhizic acid) is enough to produce disturbances in the following systems:

  • cardiovascular (hypertension, arrhythmias, heart failure, edemas)
  • neurological (stroke, myoclonia, ocular deficits, carpal tunnel syndrome, muscle weakness)
  • renal (low potassium, myoglobinuria, alkalosis)
  • and others
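For the curious, the 2-ounces-to-2-mg/kg/day equivalence above can be turned around to estimate the actual dose. A rough sketch, where the 70 kg body weight and the derived glycyrrhizin content of the candy are my assumptions, not figures from Omar et al. (2012):

```python
# Back-of-envelope: what the 2 oz/day ~ 2 mg/kg/day equivalence implies.
licorice_g = 2 * 28.35     # 2 ounces of black licorice, in grams
dose_mg_per_kg = 2.0       # mg/kg/day of glycyrrhizic acid (from the paper)
body_weight_kg = 70.0      # assumed reference adult

daily_dose_mg = dose_mg_per_kg * body_weight_kg         # 140 mg/day
implied_fraction = daily_dose_mg / (licorice_g * 1000)  # grams -> milligrams
print(f"{daily_dose_mg:.0f} mg/day, ~{implied_fraction:.2%} glycyrrhizic acid in the candy")
```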


Although everybody is affected by licorice consumption, the most vulnerable populations are those over 40 years old, those who don't poop every day, and those who are hypertensive, anorexic, or of the female persuasion.

Unfortunately, even if one doesn’t enjoy licorice candy, they still can consume it as it is used as a sweetener or flavoring agent in many foods, like sodas and snacks. It is also used in naturopathic crap, herbal remedies, and other dangerous scams of that ilk. So beware of licorice and read the label, assuming the makers label it.

Licorice products (Images: PD, Collage: Neuronicus)

REFERENCE: Omar HR, Komarova I, El-Ghonemi M, Fathy A, Rashad R, Abdelmalak HD, Yerramadha MR, Ali Y, Helal E, & Camporesi EM. (Aug 2012). Licorice abuse: time to send a warning message. Therapeutic Advances in Endocrinology and Metabolism, 3(4):125-38. PMID: 23185686, PMCID: PMC3498851, DOI: 10.1177/2042018812454322. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 1 November 2018

Raising a child costs 13 million calories

That’s right. Somebody actually did the math on that. Kaplan in 1994, to be exact.

The anthropologist and his colleague, Kate Kopischke, looked at how three semi-isolated populations from South America live. Between September 1988 and May 1989, the researchers analyzed several variables meant to shed light mainly on fertility rate and wealth flow. They measured the amount of time spent taking care of children. They estimated the best time to have a second child. They weighed the food of these communities. And then they estimated the caloric intake and expenditure per day per individual.

Human children are unable to provision for themselves until about the age of 18. So most of their caloric intake requirements are provided by their parents. Long story (39 pages) short, Kaplan (1994) concluded that a child relies on 13 million calories provided by the adults. Granted, these are mostly hunter-gatherer communities, so the number may be a bit off from your average American child. The question is: which way? Do American kids “cost” more or less?


P.S. I was reading a paper, Kohl (2018), in the last week's issue of Science that quoted this number, 13 million. When I went to the cited source, Hrdy (2016), that one was citing yet another one, the above-mentioned Kaplan (1994) paper. Luckily for Kohl, Hrdy cited Kaplan correctly. But I must tell you from my own experience, half of the time when people cite other people citing other people citing original research, they are wrong. Meaning that somewhere in the chain somebody got it wrong or twisted the original research finding for their purposes. Half of the time, I tell you. People don't go for the original material because it can be a hassle to dig it out, or it's hard to read, or because citing a more recent paper looks better in the review process. But that comes with the risk of being flat wrong. The moral: always, always go for the source material.

P.P.S. To be clear, I'm not accusing Kohl of not reading Kaplan because accusing an academic of citing without reading or being unfamiliar with seminal research in their field (that is, seminal in somebody else's opinion) is a tremendous insult not to be wielded lightly by bystanders but to be viciously used only for in-house fights on a regular basis. No. I'm saying that Kohl got that number second-hand and that's frowned upon. The moral: always, always go for the source material. I can't emphasize this enough.

P.P.P.S. Ah, forget it. P.S. 3. Upon reading my blog, my significant other's first question was: "Well, how much is that in potatoes?" I had to do the math on a Post-It and the answer is: 50,288 large skinless potatoes, boiled without salt. That's 15,116 kg of potatoes, more than 15 metric tonnes. Here you go. Happy now? Why are we talking about potatoes?! No, I don't know how many potatoes would fit into a house. Jeez!
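For what it's worth, the Post-It math reproduces with round USDA-style numbers. A sketch under my own assumptions (boiled skinless potato at ~86 kcal per 100 g, a "large" potato at ~300 g), which lands within rounding distance of the figures above:

```python
# Rough reproduction of the 13-million-calorie-to-potatoes conversion.
TOTAL_KCAL = 13_000_000
KCAL_PER_100G = 86   # boiled potato, no skin, no salt (assumed)
POTATO_G = 300       # one large potato (assumed)

kcal_per_potato = KCAL_PER_100G * POTATO_G / 100   # 258 kcal each
n_potatoes = TOTAL_KCAL / kcal_per_potato
total_kg = n_potatoes * POTATO_G / 1000
print(f"{n_potatoes:,.0f} potatoes, ~{total_kg:,.0f} kg")  # ~50,388 potatoes, ~15,116 kg
```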

REFERENCE: Kaplan, H. (Dec. 1994). Evolutionary and Wealth Flows Theories of Fertility: Empirical Tests and New Models. Population and Development Review, Vol. 20, No. 4, pp. 753-791. DOI: 10.2307/2137661. ARTICLE

By Neuronicus, 22 October 2018

Locus Coeruleus in mania

Of all the mental disorders, bipolar disorder, a.k.a. manic-depressive disorder, carries the highest risk of suicide attempt and completion. If the thought of suicide crosses your mind, stop reading this, it's not that important; what's important is for you to call the toll-free National Suicide Prevention Lifeline at 1-800-273-TALK (8255).

Bipolar disorder is defined by alternating manic episodes of elevated mood, activity, excitation, and energy with episodes of depression characterized by feelings of deep sadness, hopelessness, worthlessness, low energy, and decreased activity. It is also a more common disease than people usually expect, affecting about 1% or more of the world population. That means almost 80 million people! Therefore, it's imperative to find out what's causing it so we can treat it.
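The 80 million figure is easy to verify. A one-liner sanity check, assuming a world population of roughly 7.6 billion (the 2018 figure, my assumption):

```python
# "About 1% or more of the world population" -> "almost 80 million people".
world_pop = 7.6e9   # assumed 2018 world population
prevalence = 0.01   # ~1%, from the post
affected = world_pop * prevalence
print(f"~{affected / 1e6:.0f} million people")  # ~76 million people
```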

Unfortunately, the disease is very complex, with many brain parts, brain chemicals, and genes involved in its pathology. We don't even fully comprehend how lithium, the best medication we have to lower the risk of suicide, works. The good news is the neuroscientists haven't given up; they are grinding away at it, and with every study we get closer to subduing this monster.

One such study, freshly published last month, Cao et al. (2018), looked at a semi-obscure membrane protein, ErbB4. The protein is a tyrosine kinase receptor, which is a bit unfortunate because this means it is involved in ubiquitous cellular signaling, making it harder to find its exact role in a specific disorder. Indeed, ErbB4 has been found to play a role in neural development, schizophrenia, epilepsy, even ALS (Lou Gehrig's disease).

Given that ErbB4 is found in some neurons that are involved in bipolar disorder and that mutations in its gene are also found in some people with bipolar, Cao et al. (2018) sought to find out more about it.

First, they produced mice that lacked the gene coding for ErbB4 in neurons from the locus coeruleus, the part of the brain that produces norepinephrine out of dopamine, better known to the European audience as noradrenaline. The mutant mice had a lot more norepinephrine and dopamine in their brains, which correlated with mania-like behaviors. You might have noticed that the term used was 'mania-like' and not 'manic' because we don't know for sure how the mice feel; instead, we can see how they behave and from that infer how they feel. So the researchers put the mice through a battery of behavioral tests and observed that the mutant mice were hyperactive, showed fewer anxious and depressed behaviors, and liked their sugary drink more than their normal counterparts, which, taken together, are indices of mania.

Next, through a series of electrophysiological experiments, the scientists found that the mechanism through which the absence of ErbB4 leads to mania is making another receptor, called NMDA, in that brain region more active. When this receptor is hyperactive, it causes neurons to fire, releasing their norepinephrine. But if given lithium, the mutant mice behaved like normal mice. Correspondingly, they also had a normal-behaving NMDA receptor, which led to normal firing of the noradrenergic neurons.

So the mechanism looks like this (Jargon alert!):

No ErbB4 –> ↑ NR2B NMDAR subunit –> hyperactive NMDAR –> ↑ neuron firing –> ↑ catecholamines –> mania.

In conclusion, another piece of the bipolar puzzle has been uncovered. The next obvious step will be for the researchers to figure out a medicine that targets ErbB4 and see if it could treat bipolar disorder. Good paper!


P.S. If you’re not familiar with the journal eLife, go and check it out. The journal offers for every study a half-page summary of the findings destined for the lay audience, called eLife digest. I’ve seen this practice in other journals, but this one is generally very well written and truly for the lay audience and the non-specialist. Something of what I try to do here, minus the personal remarks and in parenthesis metacognitions that you’ll find in most of my posts. In short, the eLife digest is masterly done. As my continuous struggles on this blog show, it is tremendously difficult for a scientist to write concisely, precisely, and jargonless at the same time. But eLife is doing it. Check it out. Plus, if you care to take a look on how science is done and published, eLife publishes all the editor’s rejection notes, all the reviewers’ comments, and all the author responses for a particular paper. Reading those is truly a teaching moment.

REFERENCE: Cao SX, Zhang Y, Hu XY, Hong B, Sun P, He HY, Geng HY, Bao AM, Duan SM, Yang JM, Gao TM, Lian H, Li XM (4 Sept 2018). ErbB4 deletion in noradrenergic neurons in the locus coeruleus induces mania-like behavior via elevated catecholamines. Elife, 7. pii: e39907. doi: 10.7554/eLife.39907. PMID: 30179154 ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 14 October 2018

The Global Warming IPCC 2018 Report

The Special Report on Global Warming of 1.5ºC (SR15) was published two days ago, on October 8th, 2018. The Report was written by The Intergovernmental Panel on Climate Change (IPCC), “which is the UN body for assessing the science related to climate change. It was established by the United Nations Environment Programme (UN Environment) and the World Meteorological Organization (WMO) in 1988 to provide policymakers with regular scientific assessments concerning climate change, its implications and potential future risks, as well as to put forward adaptation and mitigation strategies.” (IPCC Special Report on Global Warming of 1.5ºC, Press Release).

The Report’s findings are very bad. Its Summary for Policymakers starts with:

“Human activities are estimated to have caused approximately 1.0°C of global warming above pre-industrial levels, with a likely range of 0.8°C to 1.2°C. Global warming is likely to reach 1.5°C between 2030 and 2052 if it continues to increase at the current rate.”

That’s 12 years from now.

Extract from the IPCC (2018), Global Warming of 1.5 ºC, Summary for Policymakers. "Observed monthly global mean surface temperature (GMST) change (grey line up to 2017, from the HadCRUT4, GISTEMP, Cowtan–Way, and NOAA datasets) and estimated anthropogenic global warming (solid orange line up to 2017, with orange shading indicating assessed likely range). Orange dashed arrow and horizontal orange error bar show respectively the central estimate and likely range of the time at which 1.5°C is reached if the current rate of warming continues. The grey plume on the right […] shows the likely range of warming responses, computed with a simple climate model, to a stylized pathway (hypothetical future) in which net CO2 emissions decline in a straight line from 2020 to reach net zero in 2055 and net non-CO2 radiative forcing increases to 2030 and then declines."

Which means that we warmed up the world by 1.0°C (1.8°F) since 1850-1900. Continuing the way we have been doing, we will add another 0.5°C (0.9°F) to the world temperature sometime between 2030 and 2052, making the total human-made global warming to 1.5°C (2.7°F).

That’s 12 years from now.

Half a degree Celsius doesn't sound so bad until you look at the highly confident model predictions saying that gaining that extra 0.5°C (0.9°F) will result in terrible, never-before-seen superstorms and precipitation in some regions, while others will suffer prolonged droughts, along with extreme heat waves and sea level rise due to the melting of Antarctica. From a biota point of view, if we reach the 1.5°C (2.7°F) threshold, most of the coral reefs will become extinct, as will thousands of other species (6% of insects, 8% of plants, and 4% of vertebrates).

That’s 12 years from now.

All these will end up increasing famine, homelessness, disease, inequality, poverty, and refugee numbers to unprecedented levels. Huge spending of money on infrastructure, rebuilding, help efforts, irrigation, water supplies, and so on, for those inclined to be more concerned by finances. To put it bluntly, a 1.5°C (2.7°F) increase in global warming costs us about $54 trillion.

That’s 12 years from now.

These effects will persist for centuries to millennia. To stay at the 1.5°C (2.7°F) limit we need to reduce carbon emissions by 50% by 2030 and achieve net zero emissions by 2050.

That’s 12 years from now.

The Report emphasizes that a 1.5°C (2.7°F) increase is not as bad as a 2°C (3.6°F) one, where we will lose double the biota, the storms will be worse, the droughts longer, and altogether a more catastrophic scenario will unfold.

Technically, we ARE ABLE to limit the warming to 1.5°C (2.7°F). If, by 2050, we rely on renewable energy, like solar and wind, to supply 70-85% of our energy, we will be able to stay at 1.5°C (2.7°F). Coal use as an energy source would have to drop to single-digit percentages. Expanding forests and implementing large CO2 capture programs would help tremendously. Carbon emissions could be drastically reduced by, for example, hitting polluters with crippling fines. But all this requires rapid implementation of heavy laws and regulations, which will come only from a concentrated effort of our leaders.

Therefore, politically, we ARE UNABLE to limit the warming to 1.5°C (2.7°F). Instead, it's very likely that we will warm the planet by 2°C (3.6°F) in the next decades. If we do nothing, by the end of the century the world will be even hotter, warmed up by 3°C (5.4°F), and there are no happy scenarios then, as the climate change will be beyond our control. That is, our children's control.


There are conspiracy theorists out there claiming that there are nefarious or hidden reasons behind this report, or that its conclusions are not credible, or that it's not legit, or it's bad science, or that it represents the view of a fringe group of scientists and does not reflect a scientific consensus. I would argue that people who claim such absurdities are either the ones with a hidden agenda or are plain idiots. Not ignorant, because ignorance is curable and whoever seeks to learn new things is to be admired. Not honestly questioning either, because that is as necessary to science as water to fish. Willful ignorance, on the other hand, I call idiocy, and it is remarkably resistant to the presentation of facts. FYI, the Report was conducted by a Panel commissioned by an organization comprising 195 countries; it is authored by 91 scientists, with an additional 133 contributing authors, all spanning 40 countries and analyzing over 6000 scientific studies. They can't all be "in it". Oh, and the Panel received the 2007 Nobel Peace Prize. I daresay it looks legit. The next full climate assessment will be released in 2021.


REFERENCES:

  1. The Intergovernmental Panel on Climate Change (IPCC) (2018). Global Warming of 1.5 ºC, an IPCC Special report on the impacts of global warming of 1.5 ºC above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty. Retrieved 10 October 2018. Website | Covers: New York Times | Nature | The Washington Post | The Guardian | The Economist | ABC News | Deutsche Welle | CNN | HuffPost Canada | Los Angeles Times | BBC | Time.
  2. The IPCC Summary for Policymakers PDF
  3. The IPCC Press Release PDF
  4. The 2007 Nobel Peace Prize.

By Neuronicus, 10 October 2018

The Mom Brain

Recently, I read an opinion titled When I Became A Mother, Feminism Let Me Down. The gist of it was that some (not all!) feminists, while empowering women and girls to be anything they want to be and to do anything a man or a boy does, fail in uplifting the motherhood aspect of a woman's life, should she choose to become a mother. In other words, even (or especially, in some cases) feminists look down on the women who choose to switch from a paid job and professional career to an unpaid stay-at-home mom career, as if being a mother is somehow beneath what a woman can be and can achieve. As if raising the next generation of humans to be rational, informed, well-behaved social actors instead of ignorant brutal egomaniacs is a trifling matter, not to be compared with the responsibilities and struggles of a CEO position.

Patriarchy notwithstanding, a woman can do anything a man can. And more. The ‘more’ refers to, naturally, motherhood. Evidently, fatherhood is also a thing. But the changes that happen in a mother’s brain and body during pregnancy, breastfeeding, and postpartum periods are significantly more profound than whatever happens to the most loving and caring and involved father.

Kim (2016) bundled some of these changes in a nice review, showing how these drastic and dramatic alterations actually have an adaptive function, preparing the mother for parenting. Equally important, some of the brain plasticity is permanent. The body might spring back into shape if the mother is young or puts into it a devilishly large amount of effort, but some brain changes are there to stay. Not all, though.

One of the most pervasive findings in motherhood studies is that hormones whose production is increased during pregnancy and postpartum, like oxytocin and dopamine, sensitize the fear circuit in the brain. During the second trimester of pregnancy and particularly during the third, expectant mothers start to be hypervigilant and hypersensitive to threats and to angry faces. A higher anxiety state is characterized, among other things, by preferentially scanning for threats and other bad stuff. Threats mean anything from the improbable tiger to the 1 in a million chance of the baby being dropped by grandma to the slightly warmer forehead or the weirdly colored poopy diaper. The sensitization of the fear circuit, of which the amygdala is an essential part, is adaptive because it makes the mother less likely to miss or ignore her baby's cry, thus attending to his or her needs. Also, attention to potential threats is conducive to a better protection of the helpless infant from real dangers. This hypersensitivity usually lasts 6 to 12 months after childbirth, but it can last a lifetime in females already predisposed to anxiety or exposed to more stressful events than average.

Many new mothers worry whether they will be able to love their child, as they don't feel, before or during pregnancy, this all-consuming love other women rave about. Rest assured, ladies, nature has your back. And your baby's. Because as soon as you give birth, dopamine and oxytocin flood the body and the brain, and in so doing they modify the reward motivational circuit, making new mothers literally obsessed with their newborn. The method of giving birth is inconsequential, as no differences in attachment have been noted (this is from a different study). Do not mess with mother's love! It's hardwired.

Another change happens to the brain structures underlying social information processing, like the insula or fusiform gyrus, making mothers more adept at self-monitoring, reflection, and empathy. This is a rapid transformation, without which a mother may be less accurate in understanding the needs, mental state, and social cues of the very undeveloped ball of snot and barf that is the human infant (I said that affectionately, I promise).

In order to deal with all these internal changes and the external pressures of being a new mom, the brain has to put up some coping mechanisms. (Did you know, non-parents, that for the first months of their newborns' lives, the mothers who breastfeed must do so at least every 4 hours? Can you imagine how berserk with sleep deprivation you would be after 4 months without a single night of full sleep, only catnaps?). Some would be surprised to find out – not mothers, though, I'm sure – that "new mothers exhibit enhanced neural activation in the emotion regulation circuit including the anterior cingulate cortex, and the medial and lateral prefrontal cortex" (p. 50). Which means that new moms are actually better at controlling their emotions, particularly at regulating negative emotional reactions. Shocking, eh?


Finally, it appears that very few parts of the brain are spared from this overhaul, as the entire brain of the mother is first reduced in size and then grows back, reorganized. Yeah, isn't that weird? During pregnancy the brain shrinks, being at its smallest around childbirth, and then starts to grow again, reaching its pre-pregnancy size 6 months after childbirth! And when it's back, it's different. The brain parts heavily involved in parenting, like the amygdala, involved in anxiety, the insula and superior temporal gyrus, involved in social information processing, and the anterior cingulate gyrus, involved in emotional regulation, all show increased gray matter volume. And so do many other brain structures that I didn't list. One brain structure is rarely involved in only one thing, so the question is (well, one of them): what else has changed about the mothers, in addition to their increased ability to parent?

I need to add a note here: the changes that Kim (2016) talks about are averaged. That means some women get changed more, some less. There is variability in plasticity, which should be a pleonasm. There is also variability in the human population, as any mother attending a school parents’ night-out can attest. Some mothers are paranoid with fear and overprotective, others are more laissez faire when it comes to eating from the floor.

But SOME changes do occur in all mothers’ brains and bodies. For example, all new mothers exhibit a heightened attention to threats and subsequently raised levels of anxiety. But when does heightened attention to threats become debilitating anxiety? Thanks to more understanding of and tolerance for these changes, more and more women feel comfortable reporting negative feelings after childbirth, so that now we know that the postpartum blues, which happen to 60 – 80% of mothers, can deepen into postpartum depression, which is a serious matter. A serious matter that needs serious attention from both professionals and the immediate social circle of the mother, for her sake as well as her infant’s. Don’t get me wrong, we – both males and females – still have a long way ahead of us to scientifically understand and to socially accept the mother brain, but these studies are a great start. They acknowledge what all mothers know: that they are different after childbirth than they were before. Now we have to figure out how they are different and what we can do to make everyone’s lives better.

Kim (2016) is an OK review, a real easy read; I recommend it to non-specialists wholeheartedly. You just have to skip the names of the brain parts and the rest is pretty clear. It is also a very short review, which will help with reader fatigue. The caveat is that it doesn’t include a whole lotta studies, nor does it go into detail on the implications of what the handful cited have found, but you’ll get the gist of it. There is a vastly more thorough literature if one were to include animal studies, which the author, curiously, did not. I know that a mouse is not a chimp is not a human, but all three of us are mammals, and social mammals at that. Surely, there is enough biological overlap that extrapolations are warranted, even if only partially. Nevertheless, it’s a good start for those who want to know a bit about the changes motherhood does to the brain, behavior, thoughts, and feelings.

Corroborated by what I already know about the neuroscience of maternity, my favourite takeaway is this: new moms are not crazy. They can’t help most of these changes. It’s biology, you see. So go easy on new moms. Moms, also go easy on yourselves and know that, whether they want to share or not, the other moms probably go through the same stuff. The other moms aren’t doing better than you are, either. You’re not alone. And if that overactive threat circuit gives you problems, i.e. you feel overwhelmed, it’s OK to ask for help. And if you don’t get it, ask for it again and again until you do. That takes courage; that’s empowerment.

P. S. The paper doesn’t look like it’s peer-reviewed. Yes, I know the peer-reviewing publication system is flawed, I’ve been on the receiving end of it myself, but it’s been drilled into my skull that it’s important, flawed as it is, so I thought to mention it.

REFERENCE: Kim, P. (Sept. 2016). Human Maternal Brain Plasticity: Adaptation to Parenting, New Directions for Child and Adolescent Development, (153): 47–58. PMCID: PMC5667351, doi: 10.1002/cad.20168. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 28 September 2018

Pic of the day: Total amount of DNA on Earth

139 DNA amount, better font - Copy

Approximately… give or take…

REFERENCE: Landenmark HKE, Forgan DH, & Cockell CS (11 Jun 2015). An Estimate of the Total DNA in the Biosphere. PLoS Biology, 13(6): e1002168. PMCID: PMC4466264, PMID: 26066900, DOI: 10.1371/journal.pbio.1002168. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 1 September 2018

The FIRSTS: The cause(s) of dinosaur extinction

A few days ago, a follower of mine gave me an interesting read from The Atlantic regarding the dinosaur extinction. Like many of my generation, I was taught in school that dinosaurs died because an asteroid hit the Earth. That led to a nuclear winter (or a few years of ‘nuclear winters’) which killed the photosynthetic organisms, and then the herbivores didn’t have anything to eat so they died and then the carnivores didn’t have anything to eat and so they died. Or, as my 4-year-old puts it, “[in a solemn voice] after the asteroid hit, big dusty clouds blocked the sun; [in an ominous voice] each day was colder than the previous one and so, without sunlight to keep them alive [sad face, head cocked sideways], the poor dinosaurs could no longer survive [hands spread sideways, hung head] “. Yes, I am a proud parent. Now I have to do a sit-down with the child and explain that… What, exactly?

Well, The Atlantic article showcases the struggles of a scientist – paleontologist and geologist Gerta Keller – who doesn’t believe the mainstream asteroid hypothesis; rather, she thinks there is enough evidence to point out that extreme volcanic eruptions, like really extreme, thousands of times more powerful than anything we know in recorded history, put so much poison (soot, dust, hydrofluoric acid, sulfur, carbon dioxide, mercury, lead, and so on) into the atmosphere that, combined with the consequent dramatic climate change, they killed the dinosaurs. The volcanoes were located in India and they erupted for hundreds of thousands of years, but the most violent eruptions, Keller thinks, were in the last 40,000 years before the extinction. This hypothesis is called Deccan volcanism, after the region in India where these nasty volcanoes are located, and was first proposed by Vogt (1972) and Courtillot et al. (1986).

138- Vogt - Copy.jpg

So which is true? Or, rather, because this is science we’re talking about, which hypothesis is more supported by the facts: the volcanism or the impact?

The impact hypothesis was put forward in 1980 when Walter Alvarez, a geologist, noticed a thin layer of clay in rocks that were about 65 million years old, which coincided with the time when the dinosaurs disappeared. This layer is on the KT boundary (sometimes called K-T, K-Pg, or KPB; looks like the biologists are not the only ones with acronym problems) and marks the boundary between the Cretaceous and Paleogene geological periods (the K is from the German Kreide, meaning Cretaceous, and the T is for Tertiary, the old name for the era that followed, yeah, I know). Walter asked his father, the famous Nobel Prize physicist Luis Alvarez, to take a look at it and see what it is. Alvarez Sr. analyzed it and found that the clay contains a lot of iridium, dozens of times more than expected. After gathering more samples from Europe and New Zealand, they published a paper (Alvarez et al., 1980) in which the scientists reasoned that because Earth’s iridium is deeply buried in its bowels and not in its crust, this iridium at the K-Pg boundary is of extraterrestrial origin, which could be brought here only by an asteroid/comet. This is also the paper in which the conjecture that the asteroid impact killed the dinosaurs was put forth for the first time, based on the uncanny coincidence of timing.

138-alvarez - Copy

The discovery of the Chicxulub crater in Mexico followed a more sinuous path because the geophysicists who first discovered it in the ’70s were working for an oil company, looking for places to drill. Once the dinosaurs-died-due-to-asteroid-impact hypothesis gained popularity outside academia, the geologists and the physicists put two and two together, acquired more data, and published a paper (Hildebrand et al., 1991) where the Chicxulub crater was for the first time linked with the dinosaur extinction. Although the crater was not radiometrically dated yet, they had enough geophysical, stratigraphic, and petrologic evidence to believe it was as old as the iridium layer and the dinosaur die-out.

138-chicxulub - Copy

But the devil is in the details, as they say. Keller published a paper in 2007 saying the Chicxulub event predates the extinction by some 300,000 years (Keller et al., 2007). She looked at geological samples from Texas and found the glass granule layer (an indicator of the Chicxulub impact) way below the K-Pg boundary. So what’s up with the iridium then? Keller (2014) believes it is not of extraterrestrial origin: it might well have been spewed up by a particularly nasty eruption, or the sediments got shifted. Schulte et al. (2010), on the other hand, found high levels of iridium in 85 samples from all over the world in the K-Pg layer. Keller says that some other 260 samples don’t have iridium anomalies. As a response, Esmeray-Senlet et al. (2017) used some fancy mass spectrometry to show that the iridium profiles could have come only from Chicxulub, at least in North America. They argue that the variability in iridium profiles around the world is due to regional geochemical processes. And so on, and so on; the controversy continues.

Actual radioisotope dating was done a bit later, in 2013: the date of the K-Pg boundary is 66.043 ± 0.043 Ma (millions of years ago), and the date of the Chicxulub crater is 66.038 ± 0.025/0.049 Ma. Which means that the researchers “established synchrony between the Cretaceous-Paleogene boundary and associated mass extinctions with the Chicxulub bolide impact to within 32,000 years” (Renne et al., 2013), which is a blink of an eye in geological times.
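For the curious, the claimed synchrony is just interval arithmetic: two dates agree when the gap between them is smaller than their combined uncertainties. A minimal sketch in Python, using the numbers quoted above; treating each ± value as a simple symmetric error bar (and taking the larger of Chicxulub’s two quoted uncertainties) is my simplification, not something from the paper:

```python
# Do the K-Pg boundary and Chicxulub crater dates agree within error?
# Ages in millions of years ago (Ma), from Renne et al. (2013).
def dates_agree(age_a, err_a, age_b, err_b):
    """True if the two error intervals intersect."""
    return abs(age_a - age_b) <= err_a + err_b

kpg = (66.043, 0.043)        # K-Pg boundary age and uncertainty
chicxulub = (66.038, 0.049)  # Chicxulub crater age and larger uncertainty

print(dates_agree(*kpg, *chicxulub))            # True
print(round(abs(kpg[0] - chicxulub[0]) * 1e6))  # 5000 (years apart)
```

So the central dates differ by a mere 5,000 years, well inside the combined error bars, which is the sense in which the two events are indistinguishable.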

138-66 chixhulub - Copy

Now I want you to understand that often in science, though by far not always, matters are not so simple as she is wrong, he is right. In geology, what matters most is the sample. If the sample is corrupted, so are your conclusions. Maybe Keller’s or Renne’s samples were affected by a myriad of possible variables, some as simple as the dirt being shifted from here to there by who knows what event. After all, it’s been 66 million years since. Also, the methods used are just as important, and dating something that happened so long ago is extremely difficult due to intrinsic physical methodological limitations. Keller (2014), for example, claims that Renne couldn’t have possibly gotten such an exact estimate because he used argon isotopes, when only U-Pb isotope dilution–thermal ionization mass spectrometry (ID-TIMS) zircon geochronology could be so accurate. But yet again, it looks like he did use both, so… I dunno. As the over-used, always-trite, but nevertheless extremely important saying goes: more data is needed.

Even if the dating puts Chicxulub at the KPB, the volcanologists say that the asteroid, by itself, couldn’t have produced a mass extinction because there have been other impacts of its size and they did not have such dire effects, being barely noticeable at the biota scale. Besides, most of the other mass extinctions on the planet have already been associated with extreme volcanism (Archibald et al., 2010). On the other hand, the circumstances of this particular asteroid could have made it deadly: it landed in the hydrocarbon-rich areas that occupied only 13% of the Earth’s surface at the time, which resulted in a lot of “stratospheric soot and sulfate aerosols and causing extreme global cooling and drought” (Kaiho & Oshima, 2017). Food for thought: this means that the chances of us, humans, being here today were 13%!…

I hope that you do notice that these are very recent papers, so the issue is hotly debated as we speak.

It is possible, nay probable, that the Deccan volcanism, which was going on long before and after the extinction, was exacerbated by the impact. This is exactly what Renne’s team postulated in 2015 after dating the lava plains in the Deccan Traps: the eruptions intensified about 50,000 years before the KT boundary, from “high-frequency, low-volume eruptions to low-frequency, high-volume eruptions”, which is about when the asteroid hit. Also, the Deccan eruptions continued for about half a million years after the KPB, “which is comparable with the time lag between the KPB and the initial stage of ecological recovery in marine ecosystems” (Renne et al., 2015, p. 78).

Since we cannot get much more accurate dating than we already have, perhaps the fossils can tell us whether the dinosaurs died abruptly or slowly. Because if they went extinct in a few years instead of over 50,000 years, that would point to a cataclysmic event. Yes, but which one, big asteroid or violent volcano? Aaaand, we’re back to square one.

Actually, the latest papers on the matter point to two extinctions: the Deccan extinction and the Chicxulub extinction. Petersen et al. (2016) went all the way to Antarctica to find pristine samples. They noticed a sharp increase in global temperatures of about 7.8 °C at the onset of Deccan volcanism. This climate change would surely lead to some extinctions, and this is exactly what they found: out of 24 species of marine animals investigated, 10 died out at the onset of Deccan volcanism and the remaining 14 died out when Chicxulub hit.

In conclusion, because this post is already verrrry long and is becoming a proper college review, to me, a not-a-geologist/paleontologist/physicist-but-still-a-scientist, things happened thusly: first the Deccan traps erupted, and that led to a dramatic global warming coupled with spewing poison into the atmosphere. Which resulted in a massive die-out (about 200,000 years before the bolide impact, says a corroborating paper, Tobin, 2017). The surviving species (maybe half or more of the biota?) continued as best they could for the next few hundred thousand years in the hostile environment. Then the Chicxulub meteorite hit, and the resulting megatsunami, the cloud of super-heated dust and soot, colossal wildfires and earthquakes, acid rain and climate cooling, not to mention the intensification of the Deccan traps eruptions, finished off the surviving species. It took Earth 300,000 to 500,000 years to recover its ecosystem. “This sequence of events may have combined into a ‘one-two punch’ that produced one of the largest mass extinctions in Earth history” (Petersen et al., 2016, p. 6).

138-timeline dinosaur - Copy

By Neuronicus, 25 August 2018

P. S. You, high school and college students who will use this for some class assignment or other, give credit thusly: Neuronicus (Aug. 26, 2018). The FIRSTS: The cause(s) of dinosaur extinction. Retrieved from https://scientiaportal.wordpress.com/2018/08/26/the-firsts-the-causes-of-dinosaur-extinction/ on [date]. AND READ THE ORIGINAL PAPERS. Ask me for .pdfs if you don’t have access, although with sci-hub and all… not that I endorse any illegal and fraudulent use of the above mentioned server for the purpose of self-education and enlightenment in the quest for knowledge that all academics and scientists praise everywhere around the Globe!

EDIT March 29, 2019. An astounding one-of-a-kind discovery is being brought to print soon. It’s about a site in North Dakota that, reportedly, has preserved the day of the Chicxulub impact in amazing detail, with tons of fossils of all kinds (flora, mammals, dinosaurs, fish), which seems to put the entire extinction of dinosaurs in one day, thus favoring the asteroid impact hypothesis. The data is not out yet. Can’t wait till it is! Actually, I’ll have to wait some more after it’s out for the experts to examine it and then I’ll find out. Until then, check the story of the discovery here and here.

REFERENCES:

1. Alvarez LW, Alvarez W, Asaro F, & Michel HV (6 Jun 1980). Extraterrestrial cause for the Cretaceous-Tertiary extinction. Science, 208(4448):1095-1108. PMID: 17783054, DOI: 10.1126/science.208.4448.1095. ABSTRACT | FULLTEXT PDF

2. Archibald JD, Clemens WA, Padian K, Rowe T, Macleod N, Barrett PM, Gale A, Holroyd P, Sues HD, Arens NC, Horner JR, Wilson GP, Goodwin MB, Brochu CA, Lofgren DL, Hurlbert SH, Hartman JH, Eberth DA, Wignall PB, Currie PJ, Weil A, Prasad GV, Dingus L, Courtillot V, Milner A, Milner A, Bajpai S, Ward DJ, Sahni A. (21 May 2010). Cretaceous extinctions: multiple causes. Science, 328(5981):973; author reply 975-6. PMID: 20489004, DOI: 10.1126/science.328.5981.973-a. FULL REPLY

3. Courtillot V, Besse J, Vandamme D, Montigny R, Jaeger J-J, & Cappetta H (1986). Deccan flood basalts at the Cretaceous/Tertiary boundary? Earth and Planetary Science Letters, 80(3-4), 361–374. doi: 10.1016/0012-821x(86)90118-4. ABSTRACT

4. Esmeray-Senlet, S., Miller, K. G., Sherrell, R. M., Senlet, T., Vellekoop, J., & Brinkhuis, H. (2017). Iridium profiles and delivery across the Cretaceous/Paleogene boundary. Earth and Planetary Science Letters, 457, 117–126. doi:10.1016/j.epsl.2016.10.010. ABSTRACT

5. Hildebrand AR, Penfield GT, Kring DA, Pilkington M, Camargo AZ, Jacobsen SB, & Boynton WV (1 Sept. 1991). Chicxulub Crater: A possible Cretaceous/Tertiary boundary impact crater on the Yucatán Peninsula, Mexico. Geology, 19 (9): 867-871. DOI: https://doi.org/10.1130/0091-7613(1991)019<0867:CCAPCT>2.3.CO;2. ABSTRACT

6. Kaiho K & Oshima N (9 Nov 2017). Site of asteroid impact changed the history of life on Earth: the low probability of mass extinction. Scientific Reports, 7(1):14855. PMID: 29123110, PMCID: PMC5680197, DOI: 10.1038/s41598-017-14199-x. ARTICLE | FREE FULLTEXT PDF

7. Keller G, Adatte T, Berner Z, Harting M, Baum G, Prauss M, Tantawy A, Stueben D (30 Mar 2007). Chicxulub impact predates K–T boundary: New evidence from Brazos, Texas, Earth and Planetary Science Letters, 255(3–4): 339-356. DOI: 10.1016/j.epsl.2006.12.026. ABSTRACT

8. Keller, G. (2014). Deccan volcanism, the Chicxulub impact, and the end-Cretaceous mass extinction: Coincidence? Cause and effect? Geological Society of America Special Papers, 505:57–89. doi:10.1130/2014.2505(03) ABSTRACT

9. Petersen SV, Dutton A, & Lohmann KC. (5 Jul 2016). End-Cretaceous extinction in Antarctica linked to both Deccan volcanism and meteorite impact via climate change. Nature Communications, 7:12079. PMID: 27377632, PMCID: PMC4935969, DOI: 10.1038/ncomms12079. ARTICLE | FREE FULLTEXT PDF

10. Renne PR, Deino AL, Hilgen FJ, Kuiper KF, Mark DF, Mitchell WS 3rd, Morgan LE, Mundil R, & Smit J (8 Feb 2013). Time scales of critical events around the Cretaceous-Paleogene boundary. Science, 339(6120):684-687. PMID: 23393261, DOI: 10.1126/science.1230492. ABSTRACT

11. Renne PR, Sprain CJ, Richards MA, Self S, Vanderkluysen L, Pande K. (2 Oct 2015). State shift in Deccan volcanism at the Cretaceous-Paleogene boundary, possibly induced by impact. Science, 350(6256):76-8. PMID: 26430116. DOI: 10.1126/science.aac7549 ABSTRACT

12. Schoene B, Samperton KM, Eddy MP, Keller G, Adatte T, Bowring SA, Khadri SFR, & Gertsch B (2014). U-Pb geochronology of the Deccan Traps and relation to the end-Cretaceous mass extinction. Science, 347(6218), 182–184. doi:10.1126/science.aaa0118. ARTICLE

13. Schulte P, Alegret L, Arenillas I, Arz JA, Barton PJ, Bown PR, Bralower TJ, Christeson GL, Claeys P, Cockell CS, Collins GS, Deutsch A, Goldin TJ, Goto K, Grajales-Nishimura JM, Grieve RA, Gulick SP, Johnson KR, Kiessling W, Koeberl C, Kring DA, MacLeod KG, Matsui T, Melosh J, Montanari A, Morgan JV, Neal CR, Nichols DJ, Norris RD, Pierazzo E,Ravizza G, Rebolledo-Vieyra M, Reimold WU, Robin E, Salge T, Speijer RP, Sweet AR, Urrutia-Fucugauchi J, Vajda V, Whalen MT, Willumsen PS.(5 Mar 2010). The Chicxulub asteroid impact and mass extinction at the Cretaceous-Paleogene boundary. Science, 327(5970):1214-8. PMID: 20203042, DOI: 10.1126/science.1177265. ABSTRACT

14. Tobin TS (24 Nov 2017). Recognition of a likely two phased extinction at the K-Pg boundary in Antarctica. Scientific Reports, 7(1):16317. PMID: 29176556, PMCID: PMC5701184, DOI: 10.1038/s41598-017-16515-x. ARTICLE | FREE FULLTEXT PDF 

15. Vogt, PR (8 Dec 1972). Evidence for Global Synchronism in Mantle Plume Convection and Possible Significance for Geology. Nature, 240(5380), 338–342. doi:10.1038/240338a0 ABSTRACT

The Benefits of Vacation

My prolonged Internet absence from the last month or so was due to a prolonged vacation. In Europe. Which I loved. Both the vacation and the Europe. Y’all, people, young and old, listen to me: do not neglect vacations for they strengthen the body, nourish the soul, and embolden the spirit.

More pragmatically, vacations lower the stress level. Yes, even the stressful vacations lower the stress level, because the acute stress effects of “My room is not ready yet” / “Jimmy puked in the car” / “Airline lost my luggage” are temporary and physiologically different from the chronic stress effects of “I’ll lose my job if I don’t meet these deadlines” / “I hate my job but I can’t quit because I need health insurance” / “I’m worried for my child’s safety” / “My kids will suffer if I get a divorce” / “I can’t make the rent this month”.

Chronic stress results in a whole slew of real nasties, like cognitive, learning, and memory impairments, behavioral changes, issues with impulse control, immune system problems, weight gain, cardiovascular disease, and so on and so on and so on. Even death. As I told my students countless times, chronic stress to the body is as real and physical as a punch in the stomach but far more dangerous. So take a vacation as often as you can. Even a few days of total disconnect help tremendously.

There are literally thousands of peer-reviewed papers out there that describe the ways in which stress produces all those bad things, but not so many papers about the effects of vacations. I suspect this is due to the inherent difficulty in accounting for the countless environmental variables that can influence one’s vacation and its outcomes, whereas identifying and characterizing stressors is much easier. In other words, lack of experimental control leads to paucity of good data. Nevertheless, from this paucity, Chen & Petrick (2013) carefully selected 98 papers from both academic and nonacademic publications about the benefits of travel vacations.

These are my take-home bullet-points:

  • vacation effects last no more than a month
  • vacations reduce both the subjective perception of stress and the objective measurement of it (salivary cortisol)
  • people feel happier after taking a vacation
  • there are some people who do not relax on a vacation, presumably because they cannot ‘detach’ themselves from the stressors in their everyday life (long story here why some people can’t let go of problems)
  • vacations lower the occurrence of cardiovascular disease
  • vacations decrease work-related stress, work absenteeism, & work burnout
  • vacations increase job performance
  • the more you do on a vacation the better you feel, particularly if you’re older
  • you benefit more if you do new things or go to new places instead of just staying home
  • vacations increase overall life satisfaction

Happy vacationing!

137 - Copy

REFERENCE: Chen, C-C & Petrick, JF (Nov. 2013, Epub 17 Jul. 2013). Health and Wellness Benefits of Travel Experiences: A Literature Review, Journal of Travel Research, 52(6):709-719. doi: 10.1177/0047287513496477. ARTICLE | FULLTEXT PDF via ResearchGate.

By Neuronicus, 20 July 2018

Is piracy the same as stealing?

Exactly 317 years ago, Captain William Kidd was tried and executed for piracy. Whether or not he was a pirate is debatable, but what is not under dispute is that people do like to pirate. Throughout human history, whenever there was opportunity, there was also theft. Wait…, is theft the same as piracy?

If we talk about Captain “Arrr… me mateys” sailing the high seas under the “Jolly Roger” flag, there is no legal or ethical dispute that piracy is equivalent to theft. But what about today’s digital piracy? Despite what the aggrieved parties may vociferously advocate, digital piracy is not theft, because what is being stolen is a copy of the goodie, not the goodie itself; therefore it is an infringement and not an actual theft. That’s from a legal standpoint. Ethically though…

For Eres et al. (2016), theft is theft, whether the object of thievery is tangible or not. So why are people who have no problem pirating information from the internet squeamish when it comes to shoplifting the same item?

First, is it true that people are more likely to steal intangible things than physical objects? A questionnaire involving 127 young adults revealed that yes, people of both genders are more likely to steal intangible items, regardless of whether they (the items) are cheap or expensive or whether the company that owned the item is big or small. Older people were less likely to pirate, and those who had already pirated were more likely to do so in the future.

136 piracy - Copy

In a different experiment, Eres et al. (2016) stuck 35 people in the fMRI and asked them to imagine the tangibility (e.g., CD, Book) or intangibility (e.g., .pdf, .avi) of some items (e.g., book, music, movie, software). Then they asked the participants how they would feel after they would steal or purchase these items.

People were inclined to feel more guilty if the item was illegally obtained, particularly if the object was tangible, proving that, at least from an emotional point of view, stealing and infringement are two different things. An increase in the activation of the left lateral orbitofrontal cortex (OFC) was seen when the illegally obtained item was tangible. The lateral OFC is a brain area known for its involvement in evaluating the nature of punishment and displeasurable information. The more sensitive to punishment a person is, the more likely they are to be morally sensitive as well.

Or, as the authors put it, it is more difficult to imagine intangible things vs. physical objects and that “difficulty in representing intangible items leads to less moral sensitivity when stealing these items” (p. 374). Physical items are, well…, more physical, hence, possibly, demanding a more immediate attention, at least evolutionarily speaking.

(Divergent thought. Some studies found that religious people are less socially moral than non-religious. Could that be because for the religious the punishment for a social transgression is non-existent if they repent enough whereas for the non-religious the punishment is immediate and factual?)

136 ofc - Copy

Like most social neuroscience imaging studies, this one lacks ecological validity (i.e., people imagined stealing, they did not actually steal), a lacuna that the authors are gracious enough to admit. Another drawback of imaging studies is the small sample size, which is to blame, the authors believe, for failing to see a correlation between the guilt score and brain activation, which other studies apparently have shown.

A simple, interesting paper providing food for thought not only for psychologists, but for lawmakers and philosophers as well. I do not believe that stealing and infringement are the same. Legally they are not, and now we know that emotionally they are not either, so shouldn’t they also be separated morally?

And if so, should we punish people more or less for stealing intangible things? Intuitively, because I too have a left OFC that’s less active when talking about transgressing social norms involving intangible things, I think that punishment for copyright infringement should be less than that for stealing physical objects of equivalent value.

But value…, well, that’s where it gets complicated, isn’t it? Because just as intangible as an .mp3 is the dignity of a fellow human, par exemple. What price should we put on that? What punishment should we deliver to those robbing human dignity with impunity?

Ah, intangibility… it gets you coming and going.

I got on this thieving intangibles dilemma because I’m re-re-re-re-re-reading Feet of Clay, a Discworld novel by Terry Pratchett and this quote from it stuck in my mind:

“Vimes reached behind the desk and picked up a faded copy of Twurp’s Peerage or, as he personally thought of it, the guide to the criminal classes. You wouldn’t find slum dwellers in these pages, but you would find their landlords. And, while it was regarded as pretty good evidence of criminality to be living in a slum, for some reason owning a whole street of them merely got you invited to the very best social occasions.”

REFERENCE: Eres R, Louis WR, & Molenberghs P (Epub 8 May 2016, Pub Aug 2017). Why do people pirate? A neuroimaging investigation. Social Neuroscience, 12(4):366-378. PMID: 27156807, DOI: 10.1080/17470919.2016.1179671. ARTICLE 

By Neuronicus, 23 May 2018

How to wash SOME pesticides off produce

While the EU is moving on with legislation to curtail harmful chemicals from our food, water, and air, the USA is taking a few steps backwards. The most recent de-regulation concerns chlorpyrifos (CPF), a horrible pesticide banned in the EU in 2008 and in most of the world (China also prohibited its use on produce in 2016). CPF is associated with serious neurodevelopmental defects in humans and is highly toxic to wildlife, particularly bees.

The paper that I’m covering today wanted to see if there is anything the consumer can do about pesticides in their produce. Unfortunately, they did not look at CPF. And why would they? At the time this study was conducted they probably thought, like the rest of us, that CPF was over and done with [breathe, slowly, inhale, exhale, repeat, focus].

Yang et al. (2017) bought organic Gala apples and then exposed them to two common pesticides, thiabendazole and phosmet (an organophosphate), at doses commonly used by farmers (125 ng/cm²). Then they washed the apples in three solutions: sodium bicarbonate (baking soda, NaHCO3, at a concentration of 10 mg/mL), Clorox (germicidal bleach at a concentration of 25 mg/L available chlorine), and tap water.

Before and after the washes the researchers used surface-enhanced Raman spectroscopy (which is, basically, a special way of doing microscopy) to take a closer look at the apples.

They found out that:

1) “Surface pesticide residues were most effectively removed by sodium bicarbonate (baking soda, NaHCO3) solution when compared to either tap water or Clorox bleach” (abstract).

2) The more you wash, the more pesticide you remove. If you immerse apples in baking soda solution for 12 minutes (for thiabendazole) or 15 minutes (for phosmet) and then rinse with water, there will be no detectable residue of these pesticides on the surface.

3) “20% of applied thiabendazole and 4.4% of applied phosmet penetrated into apples” (p. 9751), which cannot be removed by washing. Thiabendazole penetrates into the apple up to 80 μm, which is four times deeper than phosmet (which goes up to 20 μm).

4) “the standard postharvest washing method with Clorox bleach solution for 2 min did not effectively remove surface thiabendazole” (p. 9748).

5) Phosmet is completely degraded by baking soda, whereas thiabendazole appears to be only partially so.

True to my nitpicking nature, I wish the authors had washed the apples in tap water for 8 minutes, not 2, as they did for Clorox and baking soda in the internal pesticide residue removal experiment. Nevertheless, the results stand, as they are robust and the detection method is ultrasensitive, being able to detect thiabendazole as low as 2 μg/L and phosmet as low as 10 μg/L.

Thiabendazole is a pesticide that works by interfering with a basic enzymatic reaction in anaerobic respiration. I’m an aerobe, so I shouldn’t worry about this pesticide too much unless I get a huge dose of it, and then it is poisonous and carcinogenic, like most things in high doses. Phosmet, on the other hand, is an acetylcholinesterase (AChE) inhibitor (AChEI), meaning its effects in humans are akin to cholinergic poisoning. Normally, acetylcholine (ACh) binds to its muscarinic and nicotinic receptors in your muscles and brain for the proper functioning of same. AChE breaks down ACh when it is not needed anymore by said muscles and brain. Therefore, an AChEI stops AChE from breaking down ACh, resulting in overall more ACh than is good for you. Meaning it can kill you. Phosmet’s effects, in addition to, well…, death from acute poisoning, include trouble breathing, muscle weakness or tension, convulsions, anxiety, paralysis, and quite possibly memory, attention, and thinking impairments. Needless to say, it’s not so great for child development either. Think nerve gas, which is also an AChEI, and you’ll get a pretty good picture. Oh, it’s also a hormone mimicker.

I guess I’m back to buying organic again. Long ago I was duped for a short while into buying organic produce for my family believing, like many others, that it is pesticide-free. And, like many others, I was wrong. Just a bit of PubMed search told me that some of the “organic” pesticides are quite unpleasant. But I’ll take copper sulfate over chlorpyrifos any day. The choice is not from healthy to unhealthy but from bad to worse. I know, I know, the paper is not about CPF. I have a lot of pet peeves, alright?

Meanwhile, I gotta go make a huge batch of baking soda solution. Thanks, Yang et al. (2017)!


REFERENCE: Yang T, Doherty J, Zhao B, Kinchla AJ, Clark JM, & He L (8 Nov 2017, Epub 25 Oct 2017). Effectiveness of Commercial and Homemade Washing Agents in Removing Pesticide Residues on and in Apples. Journal of Agricultural and Food Chemistry, 65(44):9744-9752. PMID: 29067814, doi: 10.1021/acs.jafc.7b03118. ARTICLE

By Neuronicus, 19 May 2018

Treatment for lupus

Science has trends, like everything else. Some are longer or shorter lived, depending on how many astonishing discoveries are linked to a given subject. The 2000s were unquestionably the years of the DNA. Many a grant has been written (and granted) for whole-genome surveys of this and that. Alternative splicing followed. The ’10s saw the rise of the various -omics: transcriptomics, metabolomics, proteomics, etc. Then everybody and his mamma got on the epigenetics bandwagon. With a side of immune stuff. Now, move aside, epigenetics, here comes the microbiome. And CRISPR.

That is not to say that the not-so-hip subjects of bygone years are thoroughly squeezed of knowledge and we throw them aside like some dry dead end, never to touch them again. Not at all, not a bit. The trends only mark the momentary beliefs of the purse holders about which direction the next panacea universalis will jump from.

Here comes a groundbreaking paper on the gut microbiome. It’s groundbreaking because it comes with a cure for systemic lupus erythematosus (SLE). Possibly for autoimmune hepatitis and other autoimmune diseases as well.

An autoimmune disease is a terrible malady that is often incurable and sometimes deadly. It happens when the immune system starts attacking the body. One hypothesis as to why that happens posits that after a particular infection, maybe a particularly nasty one, the immune system doesn’t stop attacking; now, in the absence of an enemy, it turns on its own body in genetically susceptible individuals.

Vieira et al. (2018) worked with genetically susceptible mice. And the bombshell comes right there on the first page: after treatment with an oral antibiotic (vancomycin or ampicillin, but not neomycin), mice genetically designed to develop lupus had lower “mortality, lupus-related autoantibodies, and autoimmune manifestations” (p. 1156). Then the researchers took a closer look at the bodies of these mice and observed that 82% of them had spleens and livers infected with Enterococcus gallinarum, a gut bacterium that should stay in the gut. But this bacterium is capable of weakening the gut barrier by loosening the tight junctions between gut cells and then migrating to the liver, spleen, and lymph nodes. Its high abundance in these places triggers a systemic immune response. Then the authors force-fed some germ-free mice with E. gallinarum and saw that the mice developed systemic autoimmune pathology.

As if that’s not enough of a news story, the researchers developed a vaccine against this bacterium. The vaccine is very specific (being made of heat-killed E. gallinarum) and results in reduced levels of serum autoantibodies and prolonged survival rate in the lupus-prone mice.

So that people don’t quibble, and rightly so, that those are rodents and humans are not (well, most of them, anyway), the authors looked at liver biopsies from three humans with SLE and five with autoimmune hepatitis (AIH). These were positive for E. gallinarum, but the controls, i.e. healthy humans, were not. Also, when healthy human liver cells were stimulated with E. gallinarum they displayed autoimmune responses, just like the murine cells. Finally, you don’t have to undergo a liver biopsy to see if you’re infected with E. gallinarum; a specific blood test will show whether you have increased antibody titers against this bug (or its RNA), as most SLE and AIH patients did.

Needless to say, I am extremely happy with this paper. Who wouldn’t be?! It’s a cure paper! I know, I know, they don’t say that, but what does this sound like to you?:

“Administration of oral vancomycin or an intramuscular vaccine against E. gallinarum prevent translocation, Th17/Tfh cell induction, autoantibody production and autoimmune-related mortality (Supplemental, p. 62).”

Call it a very promising cure or a highly effective treatment if you like, but it stares you in the face for what it is, as it did for the researchers, who have already patented their work and are currently conducting clinical trials.

Most of the paper is in the Supplemental Material, not in the four pages and a bit in Science. So even if the paper is behind the paywall, the Supplementals are not. Be ready for 71 pages and 167 MB of data, though.


REFERENCE: Manfredo Vieira S, Hiltensperger M, Kumar V, Zegarra-Ruiz D, Dehner C, Khan N, Costa FRC, Tiniakou E, Greiling T, Ruff W, Barbieri A, Kriegel C, Mehta SS, Knight JR, Jain D, Goodman AL, Kriegel MA (9 Mar 2018). Translocation of a gut pathobiont drives autoimmunity in mice and humans. Science, 359(6380):1156-1161. PMID: 29590047, doi: 10.1126/science.aar7201. ARTICLE | Supplemental Material | Yale press release

By Neuronicus, 8 April 2018

NASA, not media, is to blame for the Twin Study 7% DNA change misunderstanding

In case the title threw you for a loop, let me pull you back in. In 2015, NASA sent Scott Kelly to the International Space Station while his twin brother, Mark, stayed on the ground. When Scott came back, NASA ran a bunch of tests on the two of them to see how space affects the human body. Some of the findings were published a few weeks ago. Among the findings, one caught the eye of the media, who ran stories like: Astronaut Scott Kelly now has different DNA to his identical twin brother after spending just a year in space (Daily Mail), Astronaut’s DNA no longer matches identical twin’s after time in space, NASA finds (Channel 3), Astronaut Scott Kelly’s genes show long-term changes after a year in space (NBC), Astronaut Scott Kelly is no longer an identical twin: How a year in space altered his DNA (Fox News), Scott Kelly Spent a Year in Space and Now His DNA Is Different From His Identical Twin’s (Time), Nasa astronaut twins Scott and Mark Kelly no longer genetically identical after space trip (Telegraph), Astronaut’s DNA changes after spending year in space when compared to identical twin bother (The Independent), Astronaut Scott Kelly’s DNA No Longer Matches Identical Twin’s After a Year in Space (People), NASA study: Astronaut’s DNA no longer identical to his identical twin’s after year in space (The Hill), NASA astronaut who spent a year in space now has different DNA from his twin (Yahoo News), Scott Kelly: NASA Twins Study Confirms Astronaut’s DNA Actually Changed in Space (Newsweek), If you go into space for a long time, you come back a genetically different person (Quartz), Space can change your DNA, we just learned (Salon), NASA Confirms Scott Kelly’s Genes Have Been Altered By Space Travel (Tech Times), even ScienceAlert 😦 ran Scott Kelly’s DNA Is No Longer Identical to His Twin’s After a Year in Space. And dozens and dozens more…

Even the astronauts themselves said their DNA is different and they are no longer twins:

[Screenshots of tweets by Mark and Scott Kelly]

Alas, dear Scott & Mark Kelly, rest assured that despite these titles and their attendant stories, you two share the same DNA, still & forever. You are still identical twins, and will be until one of you changes species. Because that is what a 7% alteration in human DNA would mean: you’re not human anymore.
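To put that 7% in perspective with some toy arithmetic (my own sketch; the sequences are made up and percent difference of aligned sequences is a crude measure): humans and chimpanzees differ by only a percent or two of aligned DNA, so a 7% difference would put you far outside any within-species variation.

```python
# Toy calculation (my own, with hypothetical sequences): percent difference
# between two aligned DNA sequences of equal length.
def percent_difference(seq_a, seq_b):
    """Percentage of aligned positions where the two sequences disagree."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned and equal length"
    mismatches = sum(a != b for a, b in zip(seq_a, seq_b))
    return 100.0 * mismatches / len(seq_a)

reference = "ATGCATGCATGCATGCATGC"
variant   = "ATGCATGCATGCATGCATGA"  # a single point mutation in 20 bases
print(percent_difference(reference, variant))  # 5.0
```

Even this single mutation in a 20-base snippet already yields a 5% difference; scaled to a 3-billion-base genome, a genuine 7% change would mean over 200 million altered bases. That is not a stressed astronaut, that is a different organism.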

So what gives?

Here is the root of all this misunderstanding:

“Another interesting finding concerned what some call the “space gene”, which was alluded to in 2017. Researchers now know that 93% of Scott’s genes returned to normal after landing. However, the remaining 7% point to possible longer term changes in genes related to his immune system, DNA repair, bone formation networks, hypoxia, and hypercapnia” (excerpt from NASA’s press release on the Twin Study on Jan 31, 2018, see reference).

If I didn’t know any better, I too would think that yes, the genes themselves had changed, such is NASA’s verbiage. As a matter of actual fact, it is the gene expression which changed. Remember that DNA makes RNA and RNA makes protein? That’s the central dogma of molecular biology. A sequence of DNA that codes for a protein is called a gene. Those sequences do not change. But when to make a protein, how much of it, in what way, where to make it, which subtly different kinds of it to make (alternative splicing), when not to make it, and so on: all of this is called the expression of that gene. And any of these aspects of gene expression are controlled or influenced by a whole variety of factors, some of them environmental and as drastic as going to space or as insignificant as going to bed.
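The distinction can be sketched in a few lines of code (my own illustration, with made-up gene names and arbitrary expression numbers, not NASA’s data): the sequence is what you are, the expression is what you are doing right now.

```python
# Toy illustration (mine, not NASA's data): identical gene SEQUENCES,
# different gene EXPRESSION levels.
scott_genome = {"dna_repair_gene": "ATGGCATTAG", "bone_gene": "ATGCCGTTAA"}
mark_genome  = {"dna_repair_gene": "ATGGCATTAG", "bone_gene": "ATGCCGTTAA"}

# Hypothetical expression levels (arbitrary units): how much of each gene's
# protein is being made, which the environment (space, stress, sleep) can shift.
scott_expression = {"dna_repair_gene": 3.2, "bone_gene": 0.4}
mark_expression  = {"dna_repair_gene": 1.0, "bone_gene": 1.0}

identical_dna = scott_genome == mark_genome               # True: still twins
identical_expression = scott_expression == mark_expression  # False: space showed
print(identical_dna, identical_expression)
```

The DNA dictionaries compare equal; the expression dictionaries do not. That, in miniature, is the whole Twin Study misunderstanding.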

Some more scientifically inclined writers understood that the word “expression” was conspicuously missing from the above-mentioned paragraph and either ran clarification titles like After A Year In Space, NASA Astronaut’s Gene Expression Has Changed. Possibly Forever. (Huffington Post) or up-front rebukes like No, space did not permanently alter 7 percent of Scott Kelly’s DNA (The Verge) or No, Scott Kelly’s Year in Space Didn’t Mutate His DNA (National Geographic).

Now, I’d love, LOVE, I tell you, to jump at the throat of the media on this one so I can smugly show how superior my meager blog is when it comes to accuracy. But, I have to admit, this time it is NASA’s fault. Although it is not NASA’s job to teach the central dogma of molecular biology to the media, they are, nonetheless, responsible for their own press releases. In this case, Monica Edwards and Laurie Abadie from NASA Human Research Strategic Communications did a booboo, in the words of the sitcom character Sheldon Cooper. Luckily for these two employees, the editor Timothy Gushanas published this little treat yesterday, right at the top of the press release:

“Editor’s note: NASA issued the following statement updating this article on March 15, 2018:

Mark and Scott Kelly are still identical twins; Scott’s DNA did not fundamentally change. What researchers did observe are changes in gene expression, which is how your body reacts to your environment. This likely is within the range for humans under stress, such as mountain climbing or SCUBA diving.

The change related to only 7 percent of the gene expression that changed during spaceflight that had not returned to preflight after six months on Earth. This change of gene expression is very minimal.  We are at the beginning of our understanding of how spaceflight affects the molecular level of the human body. NASA and the other researchers collaborating on these studies expect to announce more comprehensive results on the twins studies this summer.”

Good for you for rectifying your mistake, NASA! And good for you too, the few media outlets that corrected their story, like CNN, which changed its title from Astronaut’s DNA no longer same as his identical twin, NASA finds to Astronaut’s gene expression no longer same as his identical twin, NASA finds.

But, seriously, NASA, why do you guys keep screwing up molecular biology stuff?! Remember the arsenic-loving bacteria debacle? That paper is still not retracted and that press release is still up on your website! Ntz, ntz, for shame… NASA, you need a better understanding of basic science and/or better #SciComm in your press releases. Hiring? I’m offering!


REFERENCE: NASA. Edwards, M. & Abadie, L. (Jan. 31, 2018). NASA Twins Study Confirms Preliminary Findings, Ed. Timothy Gushanas, retrieved on March 14, 15, & 16, 2018. Address: https://www.nasa.gov/feature/nasa-twins-study-confirms-preliminary-findings

By Neuronicus, 16 March 2018

P.S. Sometimes it is a pain to be obsessed with accuracy (cue the smallest violins). For example, I cannot stop myself from adding something just to be scrupulously correct. From the day they are conceived, identical twins’ DNAs start to diverge. There are all sorts of things that do change the actual sequence of DNA. DNA can be damaged by radiation (which you can get a lot of in space) or by exposure to some chemicals. Other changes are simply due to random mutations. So no twins are exactly identical, but the changes are so minuscule, nowhere near 1%, let alone 7%, that it is safe to say their DNA is identical.

P.P.S. With all this hullabaloo about the 7% DNA change, everybody glossed over, and even I forgot to mention, the one finding that is truly weird: the elongation of telomeres for Scott, the one who was in space. Telomeres are interesting things: they are repetitive sequences of DNA (TTAGGG/AATCCC) at the ends of the chromosomes, repeated thousands of times. The telomere’s job is to protect the ends of the chromosomes. You see, every time a cell divides, the DNA copying machinery cannot copy the last bits of the chromosome (blame it on physics or chemistry, one of them things), and so some of it is lost. So evolution came up with a solution: telomeres, which are bits of unusable DNA that can be safely ignored and left behind. Or so we think at the moment. The length of telomeres has been implicated in some curious things, like cancer and life-span (immortality thoughts, anyone?). The most common finding is the shortening of telomeres associated with stress, but Scott’s were elongated, so that’s the first weird thing. I didn’t even know telomeres could get elongated in living, healthy adult humans. But wait, there is more: NASA said that “the majority of those telomeres shortened within two days of Scott’s return to Earth”. Now that is the second oddest thing! If I were NASA, that’s where I would put my money, not on the gene expression patterns. And I would really, really like to see which types of cells show those longer telomeres, because I have a hunch it is some type of dermal cell, which may be ontogenetically related to a neuronal cell.
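For the curious, the end-replication problem above can be sketched in code (my own toy example with a hypothetical, tiny sequence; real telomeres run to thousands of repeats): each division shaves bases off the chromosome end, and as long as the loss lands in the telomere, no genes are harmed.

```python
# Toy sketch (mine, hypothetical sequence): counting TTAGGG telomere repeats
# and simulating the end-replication loss at each cell division.
def telomere_repeats(chromosome_end, unit="TTAGGG"):
    """Count consecutive repeat units at the 3' end of the sequence."""
    count = 0
    while chromosome_end.endswith(unit * (count + 1)):
        count += 1
    return count

def divide(chromosome_end, loss=6):
    """One cell division: the copying machinery misses a few terminal bases."""
    return chromosome_end[:-loss]

chrom = "ATGCCCGTA" + "TTAGGG" * 5   # gene body plus 5 telomere repeats
print(telomere_repeats(chrom))        # 5 repeats before division
print(telomere_repeats(divide(chrom)))  # 4 repeats after one division
```

The coding sequence ("ATGCCCGTA" here) stays intact while the disposable repeats erode, which is exactly the buffer role described above, and why elongation, rather than the expected shortening, in Scott’s cells is such an oddity.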

In Memoriam: Stephen Hawking

Yesterday, March 14, 2018, we lost a great mind and a decent human being. Thank you, Dr. Stephen Hawking, for showing us the Universe, the small and the big.


I added his seminal doctoral thesis on the Free Resources page.

By Neuronicus, 15 March 2018