High fructose corn syrup IS bad for you

Because I cannot leave controversial things well enough alone – at least not when I know there shouldn't be any controversy – my ears caught up with my tongue yesterday when the latter sputtered: "There is strong evidence for eliminating sugar from commonly used food products like bread, cereal, canned goods, drinks, and so on, particularly against that awful high fructose corn syrup". "Yeah? You "researched" that up, haven't you? Google is your bosom friend, ain't it?" was the swift reply. Well, if you get rid of the ultra-emphatic air-quotes flanking the word 'researched' and replace 'Google' with 'PubMed', then yes, I did research it, and yes, PubMed is my bosom friend.

Initially, I wanted to just give you all a list of peer-reviewed papers that found causal and/or correlational links between high fructose corn syrup (HFCS) and weight gain, obesity, type 2 diabetes, cardiovascular disease, fatty liver disease, metabolic and endocrine anomalies, and so on. But there are way too many of them; there are over 500 papers on the subject on PubMed alone. And most of them did find that HFCS does nasty stuff to you; look for yourselves here. Then I thought to feature a paper showing that HFCS is metabolized differently than the fructose from fruits, because I keep hearing that lie perpetuated by the sugar and corn industries that "sugar is sugar" (no, it's not! Demonstrably so!), but I doubt yesterday's interlocutor would care about the liver's enzymatic activity and other chemical processes with lots of acronyms. So, finally, I decided to feature a straightforward, no-nonsense paper, published recently, done at a top-tier university, with human subjects, so I won't hear any squabbles.

Price et al. (2018) studied 49 healthy subjects aged 18–40 yr, of normal and stable body weight, and free from confounding medications or drugs, whose physical activity and energy-balanced meals were closely monitored. During the study, the subjects' food and drink intake, as well as their timing, were rigorously controlled. The researchers varied only the beverages between groups: one group received a drink sweetened with HFCS-55 (55% fructose, 45% glucose, like the one used in commercially available drinks) with every controlled meal, whereas the other group received a drink identical in size (adjusted to each subject's energy requirements so that it provided 25% of them), but sweetened with aspartame. The study lasted two weeks. No other beverage was allowed, including fruit juice. Urine samples were collected daily and blood samples 4 times per day.
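To get a feel for what "25% of energy requirements" from a sweetened beverage means, here is a back-of-envelope sketch. This is my own illustration, not code or numbers from the study: the 2000 kcal/day subject is hypothetical, and the 4 kcal/g figure is the standard caloric value of sugar.

```python
# Back-of-envelope illustration (my numbers, not the study's):
# daily kcal and approximate grams of HFCS if the sweetened
# beverages supply a given fraction of energy requirements.

def hfcs_dose(energy_req_kcal, fraction=0.25, kcal_per_g_sugar=4):
    """Return (kcal/day from the beverage, grams of HFCS/day)."""
    beverage_kcal = energy_req_kcal * fraction
    grams_hfcs = beverage_kcal / kcal_per_g_sugar
    return beverage_kcal, grams_hfcs

# A hypothetical subject needing 2000 kcal/day:
kcal, grams = hfcs_dose(2000)
print(kcal, grams)  # 500.0 kcal/day, i.e. 125.0 g of HFCS per day
```

That is roughly the sugar content of three to four cans of soda per day, which is why the 25% dose is often described as the high end of realistic consumption.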

There was a body weight increase of 810 grams (1.8 lb) in subjects consuming HFCS-sweetened beverages for 2 weeks when compared with aspartame controls. The researchers also found differences in the levels of a whole host of acronyms (ppTG, ApoCIII, ApoE, OEA, DHEA, DHG, if you must know) involved in a variety of nasty things, like obesity, fatty liver disease, atherosclerosis, cardiovascular disease, stroke, diabetes, even Alzheimer's.

This study is the third part of a larger NIH-funded study which investigates the metabolic effects of consuming sugar-sweetened beverages in about 200 participants over 5 years, registered at clinicaltrials.gov as NCT01103921. The first part (Stanhope et al., 2009) reported that "consuming fructose-sweetened, not glucose-sweetened, beverages increases visceral adiposity and lipids and decreases insulin sensitivity in overweight/obese humans" (title), and the second part (Stanhope et al., 2015) found that "consuming beverages containing 10%, 17.5%, or 25% of energy requirements from HFCS produced dose-dependent increases in circulating lipid/lipoprotein risk factors for cardiovascular disease and uric acid within 2 weeks" (Abstract). They also found a dose-dependent increase in body weight, but in those subjects the results were not statistically significant (p = 0.09) after correcting for multiple comparisons. But I'll bet that if/when the authors publish all the data in one paper at the end of the clinical trial, they will have more statistical power and the trend in weight gain will be more obvious, as in the present paper. Besides, it looks like there may be more than three parts to this study anyway.

The adverse effects of a high-sugar diet, particularly one high in HFCS, are known to so many researchers in the field that they have actually been compiled under a name: the "American Lifestyle-Induced Obesity Syndrome model, which included consumption of a high-fructose corn syrup in amounts relevant to that consumed by some Americans" (Basaranoglu et al., 2013). It doesn't refer only to increases in body weight, but also to type 2 diabetes, cardiovascular disease, hypertriglyceridemia, fatty liver disease, atherosclerosis, gout, etc.

The truly sad part is that avoiding added sugars in the USA is impossible unless you do all – and I mean all – your cooking at home, including canning, jam-making, bread-making, condiment-making, and so on. It's not enough to say "Oh, I'll cook some chicken or ham tonight," because in that case you end up using canned tomato sauce (which has added sugar), bread crumbs (which have added sugar), ham (which has added sugar), salad dressing (which has sugar), and so on. Go on, check your kitchen and see how many ingredients have sugar in them, including any meat products short of raw meat. If you never read the backs of the bottles, cans, or packages, oh my, are you in for a big surprise if you live in the USA…

There are a lot more studies out there on the subject, as I said, of various levels of reading difficulty. This paper is not easy to read for someone outside the field, that's for sure. But the main gist of it is in the abstract, for all to see.


P.S. 1. Please don't get me wrong: I am not against sugar in desserts, let that be clear. Nobody makes a meaner sweetalicious chocolate cake or carbolicious blueberry muffin than me (I coined those), as I have been reassured many times. But I am against sugar in everything. You know I haven't found, in any store, including high-end and really high-end stores, a single box of cereal of any kind without sugar? Just for fun, I'd like to be a daredevil and try it once. But there ain't. Not in the USA, anyway. I did find them in the EU, though. But I cannot keep flying unsweetened corn flakes over the Atlantic in already-crammed-at-a-premium luggage space – corn flakes from Europe which are probably made, incidentally and ironically, with good old American corn.

P.S. 2. I am not so naive, blind, or zealous as to overlook the studies that did not find any deleterious effects of HFCS consumption, or the few papers that say all added sugar is bad but that HFCS doesn't stand out from the other sugars when it comes to disease or weight gain. Actually, I was on the fence about HFCS until about 10 years ago, when the majority of papers (now an overwhelming majority) was showing that HFCS consumption not only increases weight gain, but can also lead to more serious problems like the ones mentioned above. But, like with most scientific things, the majority has its way and I bow to it democratically until the next paradigm shift. Besides, the exposés of Kearns et al. (2016a, b, 2017), showing in detail and with serious documentation how the sugar industry paid prominent researchers for the past 50 years to hide the deleterious effects of added sugar (including cancer!), further cemented my opinion about added sugar in foods, particularly HFCS.

References:

  1. Price CA, Argueta DA, Medici V, Bremer AA, Lee V, Nunez MV, Chen GX, Keim NL, Havel PJ, Stanhope KL, & DiPatrizio NV (1 Aug 2018, Epub 10 Apr 2018). Plasma fatty acid ethanolamides are associated with postprandial triglycerides, ApoCIII, and ApoE in humans consuming a high-fructose corn syrup-sweetened beverage. American Journal of Physiology. Endocrinology and Metabolism, 315(2): E141-E149. PMID: 29634315, PMCID: PMC6335011, DOI: 10.1152/ajpendo.00406.2017. ARTICLE | FREE FULLTEXT PDF
  2. Stanhope KL, Medici V, Bremer AA, Lee V, Lam HD, Nunez MV, Chen GX, Keim NL, Havel PJ (Jun 2015, Epub 22 Apr 2015). A dose-response study of consuming high-fructose corn syrup-sweetened beverages on lipid/lipoprotein risk factors for cardiovascular disease in young adults. The American Journal of Clinical Nutrition, 101(6):1144-54. PMID: 25904601, PMCID: PMC4441807, DOI: 10.3945/ajcn.114.100461. ARTICLE | FREE FULLTEXT PDF
  3. Stanhope KL, Schwarz JM, Keim NL, Griffen SC, Bremer AA, Graham JL, Hatcher B, Cox CL, Dyachenko A, Zhang W, McGahan JP, Seibert A, Krauss RM, Chiu S, Schaefer EJ, Ai M, Otokozawa S, Nakajima K, Nakano T, Beysen C, Hellerstein MK, Berglund L, Havel PJ (May 2009, Epub 20 Apr 2009). Consuming fructose-sweetened, not glucose-sweetened, beverages increases visceral adiposity and lipids and decreases insulin sensitivity in overweight/obese humans. The Journal of Clinical Investigation, 119(5):1322-34. PMID: 19381015, PMCID: PMC2673878, DOI: 10.1172/JCI37385. ARTICLE | FREE FULLTEXT PDF

(Very) Selected Bibliography:

Bocarsly ME, Powell ES, Avena NM, Hoebel BG. (Nov 2010, Epub 26 Feb 2010). High-fructose corn syrup causes characteristics of obesity in rats: increased body weight, body fat and triglyceride levels. Pharmacology, Biochemistry, and Behavior, 97(1):101-6. PMID: 20219526, PMCID: PMC3522469, DOI: 10.1016/j.pbb.2010.02.012. ARTICLE | FREE FULLTEXT PDF

Kearns CE, Apollonio D, Glantz SA (21 Nov 2017). Sugar industry sponsorship of germ-free rodent studies linking sucrose to hyperlipidemia and cancer: An historical analysis of internal documents. PLoS Biology, 15(11):e2003460. PMID: 29161267, PMCID: PMC5697802, DOI: 10.1371/journal.pbio.2003460. ARTICLE | FREE FULLTEXT PDF

Kearns CE, Schmidt LA, Glantz SA (1 Nov 2016). Sugar Industry and Coronary Heart Disease Research: A Historical Analysis of Internal Industry Documents. JAMA Internal Medicine, 176(11):1680-1685. PMID: 27617709, PMCID: PMC5099084, DOI: 10.1001/jamainternmed.2016.5394. ARTICLE | FREE FULLTEXT PDF

Mandrioli D, Kearns CE, Bero LA (8 Sep 2016). Relationship between Research Outcomes and Risk of Bias, Study Sponsorship, and Author Financial Conflicts of Interest in Reviews of the Effects of Artificially Sweetened Beverages on Weight Outcomes: A Systematic Review of Reviews. PLoS One, 11(9):e0162198. PMID: 27606602, PMCID: PMC5015869, DOI: 10.1371/journal.pone.0162198. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 22 March 2019

The Mom Brain

Recently, I read an opinion titled When I Became A Mother, Feminism Let Me Down. The gist of it was that some feminists, while empowering women and girls to be anything they want to be and to do anything a man or a boy does, fail to uplift the motherhood aspect of a woman's life, should she choose to become a mother. In other words, even (or especially, in some cases) feminists look down on the women who choose to switch from a paid job and professional career to an unpaid stay-at-home-mom career, as if being a mother is somehow beneath what a woman can be and can achieve. As if raising the next generation of humans to be rational, informed, well-behaved social actors instead of ignorant brutal egomaniacs is a trifling matter, not to be compared with the responsibilities and struggles of a CEO position.

Patriarchy notwithstanding, a woman can do anything a man can. And more. The ‘more’ refers to, naturally, motherhood. Evidently, fatherhood is also a thing. But the changes that happen in a mother’s brain and body during pregnancy, breastfeeding, and postpartum periods are significantly more profound than whatever happens to the most loving and caring and involved father.

Kim (2016) bundled some of these changes in a nice review, showing how these drastic and dramatic alterations actually have an adaptive function, preparing the mother for parenting. Equally important, some of the brain plasticity is permanent. The body might spring back into shape if the mother is young or puts into it a devilishly large amount of effort, but some brain changes are there to stay. Not all, though.

One of the most pervasive findings in motherhood studies is that hormones whose production is increased during pregnancy and postpartum, like oxytocin and dopamine, sensitize the fear circuit in the brain. During the second trimester of pregnancy, and particularly during the third, expectant mothers start to be hypervigilant and hypersensitive to threats and to angry faces. A higher anxiety state is characterized, among other things, by preferentially scanning for threats and other bad stuff. Threats mean anything from the improbable tiger to the one-in-a-million chance of the baby being dropped by grandma to the slightly warmer forehead or the weirdly colored poopy diaper. The sensitization of the fear circuit, of which the amygdala is an essential part, is adaptive because it makes the mother less likely to miss or ignore her baby's cry, thus attending to his or her needs. Also, attention to potential threats is conducive to better protection of the helpless infant from real dangers. This hypersensitivity usually lasts 6 to 12 months after childbirth, but it can last a lifetime in females already predisposed to anxiety or exposed to more stressful events than average.

Many new mothers worry whether they will be able to love their child, as they don't feel, before or during pregnancy, that all-consuming love other women rave about. Rest assured, ladies, nature has your back. And your baby's. Because as soon as you give birth, dopamine and oxytocin flood the body and the brain, and in so doing they modify the reward motivational circuit, making new mothers literally obsessed with their newborn. The method of giving birth is inconsequential, as no differences in attachment have been noted (this is from a different study). Do not mess with mother's love! It's hardwired.

Another change happens to the brain structures underlying social information processing, like the insula or the fusiform gyrus, making mothers more adept at self-monitoring, reflection, and empathy. This is a rapid transformation, without which a mother would be less accurate in understanding the needs, mental state, and social cues of the very undeveloped ball of snot and barf that is the human infant (I said that affectionately, I promise).

In order to deal with all these internal changes and the external pressures of being a new mom, the brain has to put up some coping mechanisms. (Did you know, non-parents, that for the first months of their newborns' lives, mothers who breastfeed must do so at least every 4 hours? Can you imagine how berserk with sleep deprivation you would be after 4 months without a single night of full sleep, only catnaps?) Some would be surprised to find out – not mothers, though, I'm sure – that "new mothers exhibit enhanced neural activation in the emotion regulation circuit including the anterior cingulate cortex, and the medial and lateral prefrontal cortex" (p. 50). Which means that new moms are actually better at controlling their emotions, particularly at regulating negative emotional reactions. Shocking, eh?


Finally, it appears that very few parts of the brain are spared from this overhaul, as the entire brain of the mother is first reduced in size and then grows back, reorganized. Yeah, isn't that weird? During pregnancy the brain shrinks, being at its smallest around childbirth, and then starts to grow again, reaching its pre-pregnancy size 6 months after childbirth! And when it's back, it's different. The brain parts heavily involved in parenting – the amygdala, involved in anxiety, the insula and superior temporal gyrus, involved in social information processing, and the anterior cingulate gyrus, involved in emotional regulation – all show increased gray matter volume, as do many other brain structures I didn't list. One brain structure is rarely involved in only one thing, so the question is (well, one of them): what else is changed about the mothers, in addition to their increased ability to parent?

I need to add a note here: the changes that Kim (2016) talks about are averaged. That means some women get changed more, some less. There is variability in plasticity, which should be a pleonasm. There is also variability in the human population, as any mother attending a school parents’ night-out can attest. Some mothers are paranoid with fear and overprotective, others are more laissez faire when it comes to eating from the floor.

But SOME changes do occur in all mothers' brains and bodies. For example, all new mothers exhibit a heightened attention to threats and subsequently raised levels of anxiety. But when does heightened attention to threats become debilitating anxiety? Thanks to more understanding of and tolerance for these changes, more and more women feel comfortable reporting negative feelings after childbirth, so that now we know that the postpartum blues, which happen to 60–80% of mothers, and their less common but more severe form, postpartum depression, are serious matters. Serious matters that need serious attention from both professionals and the immediate social circle of the mother, for her sake as well as her infant's. Don't get me wrong, we – both males and females – still have a long way ahead of us to scientifically understand and to socially accept the mom brain, but these studies are a great start. They acknowledge what all mothers know: that they are different after childbirth than they were before. Now we have to figure out how they are different and what we can do to make everyone's lives better.

Kim (2016) is an OK review and a real easy read; I recommend it to non-specialists wholeheartedly – you just have to skip the names of the brain parts and the rest is pretty clear. It is also a very short review, which will help with reader fatigue. The caveat is that it doesn't include a whole lotta studies, nor does it go into detail on the implications of what the handful cited have found, but you'll get the gist of it. There is a vastly more thorough literature if one includes the animal studies that the author, curiously, did not. I know that a mouse is not a chimp is not a human, but all three of us are mammals, and social mammals at that. Surely there is enough biological overlap that extrapolations are warranted, even if only partially. Nevertheless, it's a good start for those who want to know a bit about the changes motherhood makes to the brain, behavior, thoughts, and feelings.

Corroborating what I already know about the neuroscience of maternity, my favourite takeaway is this: new moms are not crazy. They can't help most of these changes. It's biology, you see. So go easy on new moms. Moms, go easy on yourselves too, and know that, whether they want to share or not, the other moms probably go through the same stuff. You're not alone. And if that overactive threat circuit gives you problems, i.e. you feel overwhelmed, it's OK to ask for help. And if you don't get it, ask for it again and again until you do. That takes courage; that's empowerment.

P. S. The paper doesn’t look like it’s peer-reviewed. Yes, I know the peer-reviewing publication system is flawed, I’ve been on the receiving end of it myself, but it’s been drilled into my skull that it’s important, flawed as it is, so I thought to mention it.

REFERENCE: Kim, P. (Sept. 2016). Human Maternal Brain Plasticity: Adaptation to Parenting, New Directions for Child and Adolescent Development, (153): 47–58. PMCID: PMC5667351, doi: 10.1002/cad.20168. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 28 September 2018

The FIRSTS: The cause(s) of dinosaur extinction

A few days ago, a follower of mine gave me an interesting read from The Atlantic regarding the dinosaur extinction. Like many of my generation, I was taught in school that the dinosaurs died because an asteroid hit the Earth. That led to a nuclear winter (or a few years of 'nuclear winters') which killed the photosynthetic organisms, and then the herbivores didn't have anything to eat so they died, and then the carnivores didn't have anything to eat and so they died. Or, as my 4-year-old puts it: "[in a solemn voice] after the asteroid hit, big dusty clouds blocked the sun; [in an ominous voice] each day was colder than the previous one and so, without sunlight to keep them alive [sad face, head cocked sideways], the poor dinosaurs could no longer survive [hands spread sideways, hung head]." Yes, I am a proud parent. Now I have to do a sit-down with the child and explain that… What, exactly?

Well, The Atlantic article showcases the struggles of a scientist – paleontologist and geologist Gerta Keller – who doesn't believe the mainstream asteroid hypothesis; rather, she thinks there is enough evidence to show that extreme volcanic eruptions – like really extreme, thousands of times more powerful than anything we know of in recorded history – put out so much poison (soot, dust, hydrofluoric acid, sulfur, carbon dioxide, mercury, lead, and so on) into the atmosphere that, combined with the consequent dramatic climate change, they killed the dinosaurs. The volcanoes, located in India, erupted for hundreds of thousands of years, but the most violent eruptions, Keller thinks, came in the last 40,000 years before the extinction. This hypothesis is called Deccan volcanism, after the region in India where these nasty volcanoes are located, and it was first proposed by Vogt (1972) and Courtillot et al. (1986).


So which is true? Or, rather, because this is science we’re talking about, which hypothesis is more supported by the facts: the volcanism or the impact?

The impact hypothesis was put forward in 1980, when Walter Alvarez, a geologist, noticed a thin layer of clay in rocks that were about 65 million years old, which coincided with the time when the dinosaurs disappeared. This layer sits at the KT boundary (sometimes called K-T, K-Pg, or KPB – looks like the biologists are not the only ones with acronym problems) and marks the boundary between the Cretaceous and Paleogene geological periods (T is for Tertiary, the period's older name; K is from the German Kreide, for Cretaceous). Walter asked his father, the famous Nobel Prize physicist Luis Alvarez, to take a look at it and see what it is. Alvarez Sr. analyzed it and found that the clay contains a lot of iridium, dozens of times more than expected. After gathering more samples from Europe and New Zealand, they published a paper (Alvarez et al., 1980) in which the scientists reasoned that because Earth's iridium is deeply buried in its bowels and not in its crust, the iridium at the K-Pg boundary is of extraterrestrial origin, which could have been brought here only by an asteroid/comet. This is also the paper in which the conjecture that the asteroid impact killed the dinosaurs was first put forth, based on the uncanny coincidence of timing.


The discovery of the Chicxulub crater in Mexico followed a more sinuous path, because the geophysicists who first discovered it in the '70s were working for an oil company, looking for places to drill. Once the dinosaurs-died-due-to-asteroid-impact hypothesis gained popularity outside academia, the geologists and the physicists put two and two together, acquired more data, and published a paper (Hildebrand et al., 1991) where the Chicxulub crater was for the first time linked with the dinosaur extinction. Although the crater had not yet been radiometrically dated, they had enough geophysical, stratigraphic, and petrologic evidence to believe it was as old as the iridium layer and the dinosaur die-out.


But the devil is in the details, as they say. Keller published a paper in 2007 saying the Chicxulub event predates the extinction by some 300,000 years (Keller et al., 2007). She looked at geological samples from Texas and found the glass granule layer (the indicator of the Chicxulub impact) well below the K-Pg boundary. So what's up with the iridium then? Keller (2014) believes it is not of extraterrestrial origin and might well have been spewed up by a particularly nasty eruption, or the sediments got shifted. Schulte et al. (2010), on the other hand, found high levels of iridium in 85 samples from all over the world at the K-Pg boundary. Keller says that some other 260 samples don't have iridium anomalies. As a response, Esmeray-Senlet et al. (2017) used some fancy mass spectrometry to show that the iridium profiles could have come only from Chicxulub, at least in North America. They argue that the variability in iridium profiles around the world is due to regional geochemical processes. And so on, and so on; the controversy continues.

Actual radioisotope dating was done a bit later, in 2013: the K-Pg boundary dates to 66.043 ± 0.043 Ma (millions of years ago), and the Chicxulub crater to 66.038 ± 0.025/0.049 Ma. Which means that the researchers "established synchrony between the Cretaceous-Paleogene boundary and associated mass extinctions with the Chicxulub bolide impact to within 32,000 years" (Renne et al., 2013), which is a blink of an eye in geological time.
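As a sanity check on those numbers, here is my own back-of-envelope arithmetic – not the paper's formal analysis, which handles analytical versus systematic uncertainties much more carefully (hence their tighter 32,000-year figure):

```python
import math

# Dates from Renne et al. (2013), in millions of years ago (Ma).
# I use the larger (full) quoted uncertainty for the crater date.
kpg_age, kpg_err = 66.043, 0.043        # K-Pg boundary
crater_age, crater_err = 66.038, 0.049  # Chicxulub crater

# The two dates differ by only ~5,000 years...
diff_kyr = abs(kpg_age - crater_age) * 1000

# ...well within the combined uncertainty (added in quadrature),
# so the dates are statistically indistinguishable: synchrony.
combined_err_kyr = math.sqrt(kpg_err**2 + crater_err**2) * 1000

print(round(diff_kyr), round(combined_err_kyr))  # 5 and 65 (thousand years)
```

The point of the quadrature sum is simply that two measurements agreeing to 5 kyr, when either alone is uncertain by ~40–50 kyr, cannot be told apart in time.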


Now, I want you to understand that often in science, though by far not always, matters are not so simple as "she is wrong, he is right". In geology, what matters most is the sample. If the sample is corrupted, so are your conclusions. Maybe Keller's or Renne's samples were affected by any of a myriad of possible variables, some as simple as dirt being shifted from here to there by who knows what event. After all, it's been 66 million years. Also, the methods used are just as important, and dating something that happened so long ago is extremely difficult due to intrinsic physical methodological limitations. Keller (2014), for example, claims that Renne couldn't possibly have gotten such an exact estimate because he used argon isotopes, when only U-Pb isotope dilution–thermal ionization mass spectrometry (ID-TIMS) zircon geochronology could be so accurate. But then again, it looks like he did use both, so… I dunno. As the over-used, always-trite, but nevertheless extremely important saying goes: more data is needed.

Even if the dating puts Chicxulub at the KPB, the volcanologists say that the asteroid by itself couldn't have produced a mass extinction, because there have been other impacts of its size and they did not have such dire effects – they were barely noticeable at the biota scale. Besides, most of the other mass extinctions on the planet have already been associated with extreme volcanism (Archibald et al., 2010). On the other hand, the circumstances of this particular asteroid could have made it deadly: it landed in the hydrocarbon-rich areas that occupied only 13% of the Earth's surface at the time, which resulted in a lot of "stratospheric soot and sulfate aerosols and causing extreme global cooling and drought" (Kaiho & Oshima, 2017). Food for thought: this means that the chances of us humans being here today were 13%!…

I hope that you do notice that these are very recent papers, so the issue is hotly debated as we speak.

It is possible, nay, probable, that the Deccan volcanism, which was going on long before and after the extinction, was exacerbated by the impact. This is exactly what Renne's team postulated in 2015 after dating the lava plains in the Deccan Traps: the eruptions intensified about 50,000 years before the KT boundary, shifting from "high-frequency, low-volume eruptions to low-frequency, high-volume eruptions", which is about when the asteroid hit. Also, the Deccan eruptions continued for about half a million years after the KPB, "which is comparable with the time lag between the KPB and the initial stage of ecological recovery in marine ecosystems" (Renne et al., 2015, p. 78).

Since we cannot get much more accurate dating than we already have, perhaps the fossils can tell us whether the dinosaurs died abruptly or slowly. Because if they went extinct in a few years instead of over 50,000 years, that would point to a cataclysmic event. Yes, but which one, big asteroid or violent volcano? Aaaand, we're back to square one.

Actually, the latest papers on the matter point to two extinctions: the Deccan extinction and the Chicxulub extinction. Petersen et al. (2016) went all the way to Antarctica to find pristine samples. They noticed a sharp increase in global temperatures, by about 7.8 ºC, at the onset of Deccan volcanism. This climate change would surely lead to some extinctions, and this is exactly what they found: of the 24 species of marine animals investigated, 10 died out at the onset of Deccan volcanism and the remaining 14 died out when Chicxulub hit.

In conclusion – because this post is already verrrry long and is becoming a proper college review – to me, a not-a-geologist/paleontologist/physicist-but-still-a-scientist, things happened thusly: first, the Deccan Traps erupted, and that led to dramatic global warming coupled with poison spewed into the atmosphere. This resulted in a massive die-out (about 200,000 years before the bolide impact, says a corroborating paper, Tobin, 2017). The surviving species (maybe half or more of the biota?) carried on as best they could for the next few hundred thousand years in the hostile environment. Then the Chicxulub meteorite hit, and the resulting megatsunami, the cloud of super-heated dust and soot, colossal wildfires and earthquakes, acid rain and climate cooling, not to mention the intensification of the Deccan Traps eruptions, finished off the surviving species. It took Earth 300,000 to 500,000 years to recover its ecosystem. "This sequence of events may have combined into a 'one-two punch' that produced one of the largest mass extinctions in Earth history" (Petersen et al., 2016, p. 6).


By Neuronicus, 25 August 2018

P. S. You, high school and college students who will use this for some class assignment or other, give credit thusly: Neuronicus (Aug. 26, 2018). The FIRSTS: The cause(s) of dinosaur extinction. Retrieved from https://scientiaportal.wordpress.com/2018/08/26/the-firsts-the-causes-of-dinosaur-extinction/ on [date]. AND READ THE ORIGINAL PAPERS. Ask me for .pdfs if you don't have access, although with sci-hub and all… not that I endorse any illegal and fraudulent use of the above-mentioned server for the purpose of self-education and enlightenment in the quest for knowledge that all academics and scientists praise everywhere around the Globe!

EDIT March 29, 2019. An astounding, one-of-a-kind discovery is being brought to print soon. It's about a site in North Dakota that, reportedly, has preserved the day of the Chicxulub impact in amazing detail, with tons of fossils of all kinds (flora, mammals, dinosaurs, fish), which seems to place the entire extinction of the dinosaurs in one day, thus favoring the asteroid impact hypothesis. The data is not out yet. Can't wait till it is! Actually, I'll have to wait some more after it's out, for the experts to examine it, and then I'll find out. Until then, check the story of the discovery here and here.

REFERENCES:

1. Alvarez LW, Alvarez W, Asaro F, & Michel HV (6 Jun 1980). Extraterrestrial cause for the Cretaceous-Tertiary extinction. Science, 208(4448):1095-1108. PMID: 17783054, DOI: 10.1126/science.208.4448.1095. ABSTRACT | FULLTEXT PDF

2. Archibald JD, Clemens WA, Padian K, Rowe T, Macleod N, Barrett PM, Gale A, Holroyd P, Sues HD, Arens NC, Horner JR, Wilson GP, Goodwin MB, Brochu CA, Lofgren DL, Hurlbert SH, Hartman JH, Eberth DA, Wignall PB, Currie PJ, Weil A, Prasad GV, Dingus L, Courtillot V, Milner A, Milner A, Bajpai S, Ward DJ, Sahni A. (21 May 2010). Cretaceous extinctions: multiple causes. Science, 328(5981):973; author reply 975-6. PMID: 20489004, DOI: 10.1126/science.328.5981.973-a. FULL REPLY

3. Courtillot V, Besse J, Vandamme D, Montigny R, Jaeger J-J, & Cappetta H (1986). Deccan flood basalts at the Cretaceous/Tertiary boundary? Earth and Planetary Science Letters, 80(3-4), 361–374. doi: 10.1016/0012-821x(86)90118-4. ABSTRACT

4. Esmeray-Senlet, S., Miller, K. G., Sherrell, R. M., Senlet, T., Vellekoop, J., & Brinkhuis, H. (2017). Iridium profiles and delivery across the Cretaceous/Paleogene boundary. Earth and Planetary Science Letters, 457, 117–126. doi:10.1016/j.epsl.2016.10.010. ABSTRACT

5. Hildebrand AR, Penfield GT, Kring DA, Pilkington M, Camargo AZ, Jacobsen SB, & Boynton WV (1 Sept. 1991). Chicxulub Crater: A possible Cretaceous/Tertiary boundary impact crater on the Yucatán Peninsula, Mexico. Geology, 19 (9): 867-871. DOI: https://doi.org/10.1130/0091-7613(1991)019<0867:CCAPCT>2.3.CO;2. ABSTRACT

6. Kaiho K & Oshima N (9 Nov 2017). Site of asteroid impact changed the history of life on Earth: the low probability of mass extinction. Scientific Reports, 7(1):14855. PMID: 29123110, PMCID: PMC5680197, DOI: 10.1038/s41598-017-14199-x. ARTICLE | FREE FULLTEXT PDF

7. Keller G, Adatte T, Berner Z, Harting M, Baum G, Prauss M, Tantawy A, Stueben D (30 Mar 2007). Chicxulub impact predates K–T boundary: New evidence from Brazos, Texas, Earth and Planetary Science Letters, 255(3–4): 339-356. DOI: 10.1016/j.epsl.2006.12.026. ABSTRACT

8. Keller, G. (2014). Deccan volcanism, the Chicxulub impact, and the end-Cretaceous mass extinction: Coincidence? Cause and effect? Geological Society of America Special Papers, 505:57–89. doi:10.1130/2014.2505(03) ABSTRACT

9. Petersen SV, Dutton A, & Lohmann KC (5 Jul 2016). End-Cretaceous extinction in Antarctica linked to both Deccan volcanism and meteorite impact via climate change. Nature Communications, 7:12079. PMID: 27377632, PMCID: PMC4935969, DOI: 10.1038/ncomms12079. ARTICLE | FREE FULLTEXT PDF

10. Renne PR, Deino AL, Hilgen FJ, Kuiper KF, Mark DF, Mitchell WS 3rd, Morgan LE, Mundil R, & Smit J (8 Feb 2013). Time scales of critical events around the Cretaceous-Paleogene boundary. Science, 339(6120):684-687. PMID: 23393261, DOI: 10.1126/science.1230492. ABSTRACT

11. Renne PR, Sprain CJ, Richards MA, Self S, Vanderkluysen L, Pande K. (2 Oct 2015). State shift in Deccan volcanism at the Cretaceous-Paleogene boundary, possibly induced by impact. Science, 350(6256):76-8. PMID: 26430116. DOI: 10.1126/science.aac7549 ABSTRACT

12. Schoene B, Samperton KM, Eddy MP, Keller G, Adatte T, Bowring SA, Khadri SFR, & Gertsch B (2014). U-Pb geochronology of the Deccan Traps and relation to the end-Cretaceous mass extinction. Science, 347(6218), 182–184. doi:10.1126/science.aaa0118. ARTICLE

13. Schulte P, Alegret L, Arenillas I, Arz JA, Barton PJ, Bown PR, Bralower TJ, Christeson GL, Claeys P, Cockell CS, Collins GS, Deutsch A, Goldin TJ, Goto K, Grajales-Nishimura JM, Grieve RA, Gulick SP, Johnson KR, Kiessling W, Koeberl C, Kring DA, MacLeod KG, Matsui T, Melosh J, Montanari A, Morgan JV, Neal CR, Nichols DJ, Norris RD, Pierazzo E,Ravizza G, Rebolledo-Vieyra M, Reimold WU, Robin E, Salge T, Speijer RP, Sweet AR, Urrutia-Fucugauchi J, Vajda V, Whalen MT, Willumsen PS.(5 Mar 2010). The Chicxulub asteroid impact and mass extinction at the Cretaceous-Paleogene boundary. Science, 327(5970):1214-8. PMID: 20203042, DOI: 10.1126/science.1177265. ABSTRACT

14. Tobin TS (24 Nov 2017). Recognition of a likely two phased extinction at the K-Pg boundary in Antarctica. Scientific Reports, 7(1):16317. PMID: 29176556, PMCID: PMC5701184, DOI: 10.1038/s41598-017-16515-x. ARTICLE | FREE FULLTEXT PDF 

15. Vogt, PR (8 Dec 1972). Evidence for Global Synchronism in Mantle Plume Convection and Possible Significance for Geology. Nature, 240(5380), 338–342. doi:10.1038/240338a0 ABSTRACT

Is piracy the same as stealing?

Exactly 317 years ago, Captain William Kidd was tried and executed for piracy. Whether or not he was a pirate is debatable, but what is not under dispute is that people do like to pirate. Throughout human history, whenever there was opportunity, there was also theft. Wait…, is theft the same as piracy?

If we talk about Captain “Arrr… me mateys” sailing the high seas under the “Jolly Roger” flag, there is no legal or ethical dispute that piracy is equivalent to theft. But what about today’s digital piracy? Despite what the aggrieved parties may vociferously advocate, digital piracy is not theft: what is being stolen is a copy of the goodie, not the goodie itself; therefore it is infringement and not actual theft. That’s from a legal standpoint. Ethically though…

For Eres et al. (2016), theft is theft, whether the object of thievery is tangible or not. So why are people who have no problem pirating information from the internet squeamish when it comes to shoplifting the same item?

First, is it true that people are more likely to steal intangible things than physical objects? A questionnaire given to 127 young adults revealed that yes, people of both genders are more likely to steal intangible items, regardless of whether the items are cheap or expensive or whether the company that owns them is big or small. Older people were less likely to pirate, and those who had already pirated were more likely to do so again in the future.

In a different experiment, Eres et al. (2016) stuck 35 people in the fMRI and asked them to imagine some items (e.g., a book, music, a movie, software) in either tangible (e.g., CD, paperback) or intangible (e.g., .pdf, .avi) form. Then they asked the participants how they would feel after stealing or purchasing these items.

People were inclined to feel guiltier if the item was illegally obtained, particularly if the object was tangible, suggesting that, at least from an emotional point of view, stealing and infringement are two different things. An increase in the activation of the left lateral orbitofrontal cortex (OFC) was seen when the illegally obtained item was tangible. The lateral OFC is a brain area known for its involvement in evaluating the nature of punishment and displeasurable information. The more sensitive to punishment a person is, the more likely they are to be morally sensitive as well.

Or, as the authors put it, it is more difficult to imagine intangible things vs. physical objects and that “difficulty in representing intangible items leads to less moral sensitivity when stealing these items” (p. 374). Physical items are, well…, more physical, hence, possibly, demanding a more immediate attention, at least evolutionarily speaking.

(Divergent thought. Some studies found that religious people are less socially moral than non-religious people. Could that be because for the religious the punishment for a social transgression is non-existent if they repent enough, whereas for the non-religious the punishment is immediate and factual?)

Like most social neuroscience imaging studies, this one lacks ecological validity (i.e., people imagined stealing, they did not actually steal), a lacuna that the authors are gracious enough to admit. Another drawback of imaging studies is the small sample size, which is to blame, the authors believe, for failing to see a correlation between the guilt score and brain activation, which other studies apparently have shown.

A simple, interesting paper providing food for thought not only for psychologists, but for lawmakers and philosophers as well. I do not believe that stealing and infringement are the same. Legally they are not, and now we know that emotionally they are not either, so shouldn’t they also be separated morally?

And if so, should we punish people more or less for stealing intangible things? Intuitively, because I too have a left OFC that’s less active when talking about transgressing social norms involving intangible things, I think that punishment for copyright infringement should be less than that for stealing physical objects of equivalent value.

But value…, well, that’s where it gets complicated, isn’t it? Because just as intangible as an .mp3 is the dignity of a fellow human, for example. What price should we put on that? What punishment should we deliver to those robbing human dignity with impunity?

Ah, intangibility… it gets you coming and going.

I got on this thieving intangibles dilemma because I’m re-re-re-re-re-reading Feet of Clay, a Discworld novel by Terry Pratchett and this quote from it stuck in my mind:

“Vimes reached behind the desk and picked up a faded copy of Twurp’s Peerage or, as he personally thought of it, the guide to the criminal classes. You wouldn’t find slum dwellers in these pages, but you would find their landlords. And, while it was regarded as pretty good evidence of criminality to be living in a slum, for some reason owning a whole street of them merely got you invited to the very best social occasions.”

REFERENCE: Eres R, Louis WR, & Molenberghs P (Epub 8 May 2016, Pub Aug 2017). Why do people pirate? A neuroimaging investigation. Social Neuroscience, 12(4):366-378. PMID: 27156807, DOI: 10.1080/17470919.2016.1179671. ARTICLE 

By Neuronicus, 23 May 2018

Arnica and a scientist’s frustrations

When you’re the only scientist in the family you get asked the weirdest things. Actually, I’m not the only one, but the other one is a chemist and he’s mostly asked about astrophysics stuff, so he doesn’t really count, because I am the one who gets asked about rare diseases and medication side-effects and food advice. Never mind that I am a neuroscientist and have professed repeatedly and quite loudly my minimal knowledge of everything from the neck down; all eyes turn to me when the new arthritis medication or the unexpected side-effects of that heart drug are brought up. But, curiously, if I dare speak about brain stuff I get the looks reserved for whatever the cat just dragged in. I guess everybody is an expert on how the brain works on account of having and using one, apparently. Everybody but the actual neuroscience expert, whose input on brain and behavior is to be tolerated and taken with a grain of salt at best, but whose opinion on stomach distress is of the utmost importance and must be listened to reverentially in utter silence [eyes roll].

So this is the background against which the following question was sprung on me: “Is arnica good for eczema?”. As always, caught unawares by the sheer diversity of interests and afflictions my family and friends can have, I mumbled something about not knowing what arnica is and said I would look it up.

This is an account of how I looked it up and what conclusions I arrived at, or: how a scientist tries to figure out something completely out of his or her field. First thing I did was go on Wikipedia. Hold your horses, it was not for scientific information but for a first clarification step: is it a chemical, a drug, an insect, a plant maybe? I used to encourage my students to also use Wikipedia when they don’t have a clue what a word/concept/thing is. Kind of like a dictionary or a paper encyclopedia, if you will. To have a starting point. As a matter of fact, Wikipedia is an online encyclopedia, right? Anyway, I found out that Arnica is a plant genus out of which one species, Arnica montana, seems to be popular.

Then I went to the library. Luckily for me, the library can be accessed online from the comfort of my home and in my favorite pajamas in the incarnation of PubMed, or Medline as it used to be affectionately called. It is the US National Library of Medicine maintained by the National Institutes of Health, a wonderful repository of scholarly papers (yeah, Google Scholar to PubMed is like the babbling of a two-year-old to the Shakespearean sonnets; Google also has an agenda, which you won’t find on PubMed). Useful tip: when you look for a paper that is behind a paywall in Nature or Elsevier journals or elsewhere, check PubMed too, because very few people seem to know that there is an obscure and incredibly helpful law saying that research paid for by the US taxpayer should be available to the US taxpayer. A very sensible law, passed only a few years ago, that has the delightful effect of providing FREE full-text access to papers a certain number of months after publication (look for the PMC icon in the upper right corner).

I searched for “arnica” and got almost 400 results. I sorted by “most recent”. The third hit was a review. I skimmed it and it seemed to talk a lot about healing in homeopathy, at which point, naturally, I got a gloomy foreboding. But I persevered, because one data point does not a trend make. Meaning that you need more than a paper – or a handful – to form an informed opinion. This line of thinking was rewarded by hit No. 14 in the search, which had an interesting title in the sense that it was the first to hint at a mechanism through which this plant might have some effects. Mechanisms are important; they allow you to differentiate speculation from findings, so I always prefer papers that try to answer a “How?” question as opposed to the other kinds: whys are almost always speculative as they have a whiff of post factum rationalization, whats are curious observations but, more often than not, a myriad factors can account for them, and whens are an interesting hybrid between the whats and the hows – all interesting reads, but for different purposes. Here is a hint: you want to publish in Nature or Science? Design an experiment that answers all the questions. Gone are the days when answering one question was enough to publish…

Digressions aside, the paper I am covering today sounds like a mechanism paper. Marzotto et al. (2016) cultured a particular line of human cells in a Petri dish destined to test the healing powers of Arnica montana. The experimental design seems simple enough: the control culture gets nothing and the experimental culture gets Arnica montana. Then, the authors check to see if there are differences in gene expressions between the two groups.

The authors applied different doses of Arnica montana to the cultures to see if the effects are dose-dependent. The doses used were… wait, bear with me, I’m not familiar with the system, it’s not metric. In the Methods, the authors say

“Arnica m. was produced by Boiron Laboratoires (Lyon, France) according to the French Homeopathic pharmacopoeia and provided as a first centesimal dilution (Arnica m. 1c) of the hydroalcoholic extract (Mother Tincture, MT) in 30% ethanol/distilled water”.

Wait, what?! Centesimal… centesimal… wasn’t that the nothing-in-it scale from the pseudoscientific bull called homeopathy? Maybe I’m wrong, maybe there are some other uses for it and it becomes clear later:

“Arnica m. 1c was used to prepare the second centesimal dilution (Arnica m. 2c) by adding 50μl of 1c solution to 4.95ml of distilled ultra-pure water. Therefore, 2c corresponds to 10−4 of the MT”.

Holy Mother of God, this is worse than gibberish; this is voluntary misdirection, crap wrapped up in glitter, medieval tinkering sold as state-of-the-art 21st century science. Speaking of state-of-the-art, the authors submitted their “doses” to a liquid chromatograph, a thin-layer chromatograph, a double-beam spectrophotometer, and a nanoparticle tracking analysis (?!), for what purposes I cannot fathom. Oh, no, I can: to sound science-y. To give credibility to the incredulous. To make money.
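
For readers who, like me, had to look up the scale: each centesimal (“c”) step is simply a 1:100 dilution, so the arithmetic behind the paper’s “2c corresponds to 10−4 of the MT” is one line. A minimal sketch (the function names are mine, not the paper’s):

```python
AVOGADRO = 6.022e23  # molecules per mole

def centesimal_fraction(c):
    """Fraction of the mother tincture remaining after c successive
    1:100 ('centesimal') dilutions."""
    return 100.0 ** -c

def molecules_left(moles_of_active, c):
    """Expected number of molecules of the active compound remaining
    after c centesimal dilutions of the given amount."""
    return moles_of_active * AVOGADRO * centesimal_fraction(c)

# 2c is indeed 10^-4 of the mother tincture, as the quote above says...
print(centesimal_fraction(2))   # 0.0001
# ...and by 12c even a full mole of active compound is down to less
# than one expected molecule in the vial.
print(molecules_left(1, 12))    # ≈ 0.6
```

Which is exactly why the high homeopathic dilutions are water: the arithmetic runs out of molecules long before the dilution scale runs out of numbers.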

At which point I stopped reading the ridiculous nonsense and took a closer look at the authors and got hit with this:

“Competing Interests: The authors have declared that no competing interests exist. This study was funded by Boiron Laboratoires Lyon with a research agreement in partnership with University of Verona. There are no patents, products in development or marketed products to declare. This does not alter our adherence to all the PLOS ONE policies on sharing data and materials, as detailed online in the guide for authors.”

No competing interests?? The biggest manufacturer of homeopathic crap in the world pays you to see if their product works and you have no competing interest? Maybe no other competing interests. There were some comments and replies to this paper after that, but it is all inconsequential, because once you have faulty methods your results are irrelevant. Besides, the comments are from the same university; it could be some internal feuding.

PLoS One, what have you done? You’re a peer-reviewed open access journal! What “peers” reviewed this paper and gave their ok for publication? Since when is homeopathy science?! What am I going to find that you publish next? Astrology? For shame… Give me that editor’s job because I am certain I can do better.

To wrap it up and tell you why I am so mad: the homeopathic scale system, that centesimal gibberish, is just that: gibberish. It is impossible to replicate this experiment without the product marketed by Boiron, because nobody knows how much of the plant is in the dose, which parts of the plant, what kind of extract, or what concentration. So it’s like me handing you my special potion and telling you it makes warts disappear because it has parsley in it. But I don’t tell you my recipe: how much parsley, whether there is anything else besides parsley in it, whether I used the roots or only the leaves, or anything. Now that, my friends, is not science, because science is REPLICABLE. Make no mistake: homeopathy is not science. Just like the rest of alternative medicine, homeopathy is a ruthless and dangerous business that is in sore need of attention from lawmakers and regulators, like the FDA or USDA. And for those who think this is a small paper, totally harmless, no impact, let me tell you that this paper had over 20,000 views (real science papers get hundreds, maybe thousands).

I would have oh so much more to rant on. But enough. Rant over.

Oh, not yet. Lastly, I checked a few other papers about arnica, and my answer to the eczema question is: “It’s possible, but no, I don’t think so. I don’t know really; I couldn’t find any serious study about it and I gave up looking after I found a lot of homeopathic red flags”. The answer I will give my family member? “Not the product you have, no. Go to the doctors, the ones with MDs after their names, and do what they tell you. In addition, I, the one with a PhD after my name, will tell you this for free because you’re family: rub the contents of this bottle only once a day – no more! – on the affected area and you will start seeing improvements in three days. Do not use elsewhere, it’s quite potent!” Because placebo works, and at least my water vial is poison-free.

Reference: Marzotto M, Bonafini C, Olioso D, Baruzzi A, Bettinetti L, Di Leva F, Galbiati E, & Bellavite P (10 Nov 2016). Arnica montana Stimulates Extracellular Matrix Gene Expression in a Macrophage Cell Line Differentiated to Wound-Healing Phenotype. PLoS One, 11(11):e0166340. PMID: 27832158, PMCID: PMC5104438, DOI: 10.1371/journal.pone.0166340. ABSTRACT | FREE FULLTEXT PDF 

By Neuronicus, 10 June 2017

100% Effective Vaccine

A few days ago I was reading random stuff on the internet, as is one’s procrastination proclivity, catching up after the holiday, and I exclaimed out loud: “They discovered a 100% effective Ebola vaccine!”. I expected some ‘yeay’-s or at least some grunts along the lines of ‘that’s nice’ or ‘cool’. Naturally, I turned around from my computer to check the source of the unaccustomed silence that met the announcement of such good news or, at least, to make sure that everybody was still breathing and present in the room. What met my worried glare was a gloomy face and a shaking head. That’s because headlines like that are misleading because, duh, it finally dawned on me, there is no such thing as a ‘100% effective vaccine’.

And yet…, and yet this is exactly what Henao-Restrepo et al. (2016) say they found! The study is huge, employing more than 10,000 people. Such a tremendous endeavor has been financed by the WHO (World Health Organization) and various departments from several countries (UK, USA, Switzerland, South Africa, Belgium, Germany, France, Guinea, and Norway) and, I’m assuming, a lot of paid and unpaid volunteers. I cannot even imagine the amount of work and the number of people that made this happen. And the coordination required for such speedy results!

The vaccine, already successful in rodents and non-human primates, called the recombinant, replication-competent, vesicular stomatitis virus-based vaccine expressing the glycoprotein of a Zaire Ebolavirus (rVSV-ZEBOV), was taken to the Republic of Guinea and rapidly administered to volunteers who had been in contact with somebody who had Ebola symptoms. And their contacts. I mean the contacts and the contacts of contacts of the Ebola patient. Who were contacted by the researchers within 2 days of a new Ebola case, based on the patient’s list of contacts. And of contacts of contacts. It’s not that complicated, honest.

After vaccinations, the “vaccinees were observed for 30 min post-vaccination and at home visits on days 3, 14, 21, 42, 63, and 84” (p.4). Some volunteers received the vaccine immediately, others after 3 weeks. No one who received the vaccine immediately developed Ebola, which led the researchers to claim that the vaccine is 100% effective. Only 9 people from the delayed-vaccination group developed Ebola within 10 days of vaccination, but the researchers figured that these people probably contracted Ebola prior to the vaccination, since the disease typically requires about 10 days to show its ugly horns.
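
For the curious, the “100% effective” headline is just the standard attack-rate arithmetic: efficacy is one minus the ratio of attack rates, so zero cases among the immediately vaccinated pins the point estimate at exactly 100%. A minimal sketch (the counts below are illustrative placeholders, not the trial’s exact numbers):

```python
def vaccine_efficacy(cases_vaccinated, n_vaccinated, cases_control, n_control):
    """Classic vaccine efficacy: 1 minus the ratio of attack rates
    (vaccinated vs. control). Zero cases among vaccinees yields 1.0,
    i.e. the '100% effective' headline."""
    attack_vaccinated = cases_vaccinated / n_vaccinated
    attack_control = cases_control / n_control
    return 1.0 - attack_vaccinated / attack_control

# Illustrative counts only: zero cases in the immediate-vaccination arm
# makes the point estimate exactly 100%, whatever the control attack rate.
print(vaccine_efficacy(0, 2000, 16, 2000))   # 1.0, i.e. 100%
```

Of course, a point estimate of 100% says nothing about the confidence interval around it, which is where the real statistical argument of the paper lives.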

So this is great news. Absolutely great. Even if, as always, I could nitpick through the paper, squabble over the “typically” 10-day incubation period, and cock an eyebrow at the new-fangled ring-vaccination design as opposed to the old-fashioned placebo approach. Even after these minor criticisms this is – I repeat – GREAT NEWS!

P.S. Don’t ever say that the UN didn’t do anything for you.

Reference: Henao-Restrepo AM, Camacho A, Longini IM, Watson CH, Edmunds WJ, Egger M, Carroll MW, Dean NE, Diatta I, Doumbia M, Draguez B, Duraffour S, Enwere G, Grais R, Gunther S, Gsell PS, Hossmann S, Watle SV, Kondé MK, Kéïta S, Kone S, Kuisma E, Levine MM, Mandal S, Mauget T, Norheim G, Riveros X, Soumah A, Trelle S, Vicari AS, Røttingen JA, Kieny MP. (22 Dec 2016). Efficacy and effectiveness of an rVSV-vectored vaccine in preventing Ebola virus disease: final results from the Guinea ring vaccination, open-label, cluster-randomised trial (Ebola Ça Suffit!). Lancet. pii: S0140-6736(16)32621-6. doi: 10.1016/S0140-6736(16)32621-6. PMID: 28017403 [Epub ahead of print] ARTICLE | FREE FULLTEXT PDF | Good Nitpicking in The Conversation

By Neuronicus, 18 January 2017

Don’t eat snow

Who hasn’t rolled out a tongue to catch a few snowflakes? Probably only those who never encountered snow.

The bad news is that snow, particularly urban snow, is bad, really bad for you. The good news is that this was not always the case. So there is hope that in the far future it will be pristine again.

Nazarenko et al. (2016) constructed a very clever contraption that reminds me of NASA space-exploration instruments. The authors refer to it by the humble name of ‘environmental chamber’, but it is in fact a complex construction with different modules designed to measure how car exhaust and snow interact (see Fig. 1).

Fig. 1 from Nazarenko et al. (2016, DOI: 10.1039/c5em00616c). Released under CC BY-NC 3.0.

After many experiments, the researchers concluded that snow absorbs pollutants very effectively. Among the many kinds of organic compounds soaked up by snow in just one hour of exposure to exhaust fumes were the infamous BTEX (benzene, toluene, ethylbenzene, and xylenes). The amounts of these chemicals in the snow were not at all negligible; to give you an example, the BTEX concentration increased from virtually 0 to between 50 and 380 μg kg⁻¹. The authors provide detailed measurements for all of the 40+ compounds they identified.

Needless to say, many of these compounds are known carcinogens. Snow absorbs them, alters their size distributions, and then it melts… Some of them may be released back into the air, as they are volatile; some will go into the ground and rivers as polluted water. After this gloomy reality check, I’ll leave you with the words of the researchers:

“The accumulation and transfer of pollutants from exhaust – to snow – to meltwater need to be considered by regulators and policy makers as an important area of focus for mitigation with the aim to protect public health and the environment” (p. 197).

Reference: Nazarenko Y, Kurien U, Nepotchatykh O, Rangel-Alvarado RB, & Ariya PA. (Feb 2016). Role of snow and cold environment in the fate and effects of nanoparticles and select organic pollutants from gasoline engine exhaust. Environmental Science: Processes & Impacts, 18(2):190-199. doi: 10.1039/c5em00616c. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 26 December 2016

Soccer and brain jiggling

There is no news or surprise in the fact that strong hits to the head produce transient or permanent brain damage. But how about mild hits produced by light objects like, say, a volleyball or a soccer ball?

During a game of soccer, a player is allowed to touch the ball with any part of his/her body minus the hands. Therefore, hitting the ball with the head, a.k.a. soccer heading, is a legal move, and goals scored with such a move are thought to be most spectacular by the refined connoisseur.

A year back, in 2015, the United States Soccer Federation forbade the heading of the ball by children 10 years old and younger, after a class-action lawsuit against them. There have been some data showing that soccer players display loss of brain matter that is associated with cognitive impairment, but such studies were correlational in nature.

Now, Di Virgilio et al. (2016) conducted a study designed to explore the consequences of soccer heading in more detail. They recruited 19 young amateur soccer players, mostly male, who were instructed to perform 20 rotational headings as if responding to corner kicks in a game. The ball was delivered by a machine at a speed of approximately 38 kph, and the mean force of impact for the group was 13.1 ± 1.9 g. Immediately after the heading session and at 24 h, 48 h, and 2 weeks post-heading, the authors performed a series of tests, among which were a transcranial magnetic stimulation (TMS) recording, a cognitive function assessment (using the Cambridge Neuropsychological Test Automated Battery), and a postural control test.

Not being a TMS expert myself, I was wondering how you record with a stimulator. TMS stimulates, it doesn’t measure anything. Or so I thought. The authors delivered brief (1 ms) stimulating impulses to the brain area that controls the leg (primary motor cortex). Then they placed an electrode over the corresponding muscle (the rectus femoris, part of the quadriceps) and recorded how the muscle responded. Pretty neat. Moreover, the authors believe that they can make inferences about levels of inhibitory chemicals in the brain from the way the muscle responds. Namely, if the muscle is sluggish in responding to stimulation, then the brain released an inhibitory chemical, like GABA (gamma-aminobutyric acid), hence calling this process corticomotor inhibition. Personally, I find this GABA inference a bit of a leap of faith but, like I said, I am not fully versed in TMS studies, so it may be well documented. Whether or not GABA is responsible for the muscle sluggishness, one thing is well documented: this sluggishness is the most consistent finding in concussions.

The subjects had impaired short-term and long-term memory function immediately after the ball heading, but not 24 h or more later. Also transient was the corticomotor inhibition. In other words, soccer ball heading results in measurable changes in brain function. Changes for the worse.

Even if these changes are transient, there is no knowing (as of yet) what prolonged ball heading might do. There is ample evidence that successive concussions have devastating effects on the brain. Granted, soccer heading does not produce concussions, at least in this paper’s setting, but I cannot believe that even sub-concussion-intensity brain disruption can be good for you.

On a lighter note, although the title of the paper features the word “soccer”, the rest of the paper refers to the game as “football”. I’ll let you guess the authors’ nationality or at least the continent of provenance ;).

Reference: Di Virgilio TG, Hunter A, Wilson L, Stewart W, Goodall S, Howatson G, Donaldson DI, & Ietswaart M. (Nov 2016, Epub 23 Oct 2016). Evidence for Acute Electrophysiological and Cognitive Changes Following Routine Soccer Heading. EBioMedicine, 13:66-71. PMID: 27789273, DOI: 10.1016/j.ebiom.2016.10.029. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 20 December 2016

Apparently, scientists don’t know the risks & benefits of science

If you wanted to find out how bleach works, or what keeps airplanes in the air, or why the rainbow is always the same sequence of colors, or whether it’s dangerous to let your kid play with snails, would you ask a scientist or your local priest?

The answer is very straightforward for most of the people. Just that for a portion of the people the straightforwardness is viewed by the other portion as corkscrewedness. Or rather just plain dumb.

Cacciatore et al. (2016) asked 2806 American adults, about 5 years ago, how much they trust the information provided by religious organizations, university scientists, industry scientists, and science/technology museums. They also asked them about their age, gender, race, socioeconomic status, and income, as well as about Facebook use, religiosity, ideology, and attention to science-y content.

Almost 40% of the sample described themselves as Evangelical Christians, one of the largest religious groups in the USA. These people said they trust their religious organizations more than scientists (regardless of who employs these scientists) to tell the truth about the risks and benefits of technologies and their applications.

The data yielded more information, like the fact that younger, richer, liberal, and white people tended to trust scientists more than their counterparts did. Finally, Republicans were more likely to report a religious affiliation than Democrats.

I would have thought that everybody would prefer to take advice about science from a scientist. Wow, what am I saying, I just realized what I typed… Of course people take health advice from homeopaths all the time, from politicians rather than environmental scientists, from alternative-medicine quacks rather than doctors, from the non-college-educated rather than geneticists. From this perspective, then, the results of this study are not surprising, just very, very sad… I just didn’t think that the gullible could also be grouped by political affiliation. I thought the affliction attacked both sides of the ideological aisle in a democratic manner.

Of course, this is a survey study, therefore a lot more work is needed to properly generalize these results, from expanding the survey sections (beyond the meager 1 or 2 questions per section) to validation and replication. Possibly even addressing different aspects of science, because, for instance, climate change is a much more touchy subject than, say, apoptosis. And replace or get rid of the “Scientists know best what is good for the public” item; seriously, I don’t know any scientist, including me, who would answer yes to that question. Nevertheless, the trend is, like I said, sad.

Reference:  Cacciatore MA, Browning N, Scheufele DA, Brossard D, Xenos MA, & Corley EA. (Epub ahead of print 25 Jul 2016). Opposing ends of the spectrum: Exploring trust in scientific and religious authorities. Public Understanding of Science. PMID: 27458117, DOI: 10.1177/0963662516661090. ARTICLE | NPR cover

By Neuronicus, 7 December 2016

Amusia and stroke

Although a complete musical anti-talent myself, that doesn’t prohibit me from fully enjoying the works of the masters in the art. When my family is out of earshot, I even bellow – because it cannot be called music – from the top of my lungs alongside the most famous tenors ever recorded. A couple of days ago I loaded one of my most eclectic playlists. While remembering my younger days as an Iron Maiden concert goer (I never said I listen only to classical music :D) and screaming the “Fear of the Dark” chorus, I wondered what’s new on the front of music processing in the brain.

And I found an interesting recent paper about amusia. Amusia is, as those of you with ancient Greek proclivities might have surmised, a deficit in the perception of music, mainly of pitch but sometimes of rhythm and other aspects of music. A small percentage of the population is born with it, but a whopping 35 to 69% of stroke survivors exhibit the disorder.

So Sihvonen et al. (2016) decided to take a closer look at this phenomenon with the help of 77 stroke patients. These patients had an MRI scan within the first 3 weeks following stroke and another one 6 months poststroke. They also completed a behavioral test for amusia within the first 3 weeks following stroke and again 3 months later. For reasons undisclosed, and thus raising my eyebrows, the behavioral assessment was not performed at 6 months poststroke, nor was an MRI performed at the 3-month follow-up. It would have been nice to have behavioral assessments and brain images at the same time points, because a lot can happen in weeks, let alone months, after a stroke.

Nevertheless, the authors used a novel way to look at the brain pictures, called voxel-based lesion-symptom mapping (VLSM). Well, it’s not really novel; it’s been around for 15 years or so. Basically, to ascertain the function of a brain region, researchers either find people with a specific brain lesion and then look for a behavioral deficit, or start from a symptom and then look for a brain lesion. Both approaches have distinct advantages but also disadvantages (see Bates et al., 2003). To overcome the disadvantages of these methods, enter the scene VLSM, a mathematical/statistical gimmick that lets you explore the relationship between brain and function without forming preconceived ideas, i.e. without forcing dichotomous categories. They also looked at voxel-based morphometry (VBM), which is a fancy way of saying they checked whether the grey and white matter in the brains of their subjects differ over time.
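For the statistically curious, the mass-univariate core of VLSM can be sketched in a few lines: at every voxel, split the patients into those with and without a lesion there and compare their behavioral scores. This is a toy illustration with simulated data – the dimensions, lesion probability, and threshold are mine, not from the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: 77 patients, 1000 voxels.
# lesions[i, v] is True if patient i has a lesion at voxel v.
n_patients, n_voxels = 77, 1000
lesions = rng.random((n_patients, n_voxels)) < 0.2
amusia_score = rng.normal(50, 10, n_patients)   # one behavioral score per patient

# Mass-univariate VLSM: at every voxel, compare the behavioral scores of
# patients WITH a lesion there against patients WITHOUT one (two-sample t-test).
t_map = np.full(n_voxels, np.nan)
p_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    lesioned = amusia_score[lesions[:, v]]
    spared = amusia_score[~lesions[:, v]]
    if len(lesioned) >= 5 and len(spared) >= 5:   # skip rarely lesioned voxels
        t_map[v], p_map[v] = stats.ttest_ind(lesioned, spared)

# Voxels surviving a (naive, uncorrected) threshold implicate that region in
# the behavior; real VLSM corrects for the thousands of tests performed.
significant = np.flatnonzero(p_map < 0.001)
print(len(significant), "voxels flagged (should be near chance on random data)")
```

The point of doing it voxel-by-voxel is that neither the lesion sites nor the behavioral cutoffs are decided in advance – the dichotomy emerges from the data rather than being forced on it.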

After much analysis, Sihvonen et al. (2016) conclude that damage to the right hemisphere is more likely to produce amusia, as opposed to aphasia, which is due mainly to damage to the left hemisphere. More specifically,

“damage to the right temporal areas, insula, and putamen forms the crucial neural substrate for acquired amusia after stroke. Persistent amusia is associated with further [grey matter] atrophy in the right superior temporal gyrus (STG) and middle temporal gyrus (MTG), locating more anteriorly for rhythm amusia and more posteriorly for pitch amusia.”

The more we know, the better chances we have to improve treatments for people.

unless you’re left-handed, then things are reversed.

References:

1. Sihvonen AJ, Ripollés P, Leo V, Rodríguez-Fornells A, Soinila S, & Särkämö T. (24 Aug 2016). Neural Basis of Acquired Amusia and Its Recovery after Stroke. Journal of Neuroscience, 36(34):8872-8881. PMID: 27559169, DOI: 10.1523/JNEUROSCI.0709-16.2016. ARTICLE  | FULLTEXT PDF

2. Bates E, Wilson SM, Saygin AP, Dick F, Sereno MI, Knight RT, & Dronkers NF (May 2003). Voxel-based lesion-symptom mapping. Nature Neuroscience, 6(5):448-50. PMID: 12704393, DOI: 10.1038/nn1050. ARTICLE

By Neuronicus, 9 November 2016

The FIRSTS: The Name of Myelin (1854)

One reason why I don’t post more often is that I have such a hard time deciding what to cover (hint: send me stuff YOU find awesome). Most of the cool and new stuff is already covered by big platforms with full-time employees and I try to stay away from the media-grabbers. Mostly. Some papers I find so cool that it doesn’t matter that professional science journalists have already covered them; I too jump on the wagon with my meager contribution. Anyway, here is a glimpse of how my train of thought goes on inspiration-less days.

Inner monologue: Check the usual journals’ current issues. Nothing catches my eye. Maybe I’ll feature a historical. Open the Wikipedia front page and see what happened today throughout history. Aha, apparently Babinski died in 1932. He’s the one who described the Babinski sign. Normally, when the sole of the foot is stroked, the big toe flexes inwards, towards the sole. If it extends upwards, then that’s a sure sign of neurological damage, the Babinski sign. But healthy infants can have that sign too, not because they have neurological damage, but because their corticospinal neurons are not fully myelinated. Myelin, who discovered that? Probably Schwann. Quick search on PubMed. Too many hits. Restrict to ‘history’. I hate the search function on PubMed, it brings either too many or no hits, no matter the parameters. Ah, look, Virchow. Interesting. Aha. Find the original reference. Aha. Springer charges 40 bucks for a paper published in 1854?! The hell with that! I’m not even going to check if I have institutional access. Get the pdf from other sources. It’s in German. Bummer. Go to Highwire. Find recent history of myelin. Mielinization? Myelination? Myelinification? All have hits… Get “Fundamental Neuroscience” off the shelf and check… aha, myelination. Ok. Look at the pretty diagram with the saltatory conduction! Enough! Go back to Virchow. Does it have pictures, maybe I can navigate the legend? Nope. Check if any German-speaking friends are online. Nope, they’re probably asleep, which is what I should be doing. Drat. Refine the Highwire search. Evrika! “History of Myelin” by Boullerne, 2016. Got the author manuscript. Hurray. Read. Write.

Myelinated fibers, a.k.a. white matter, have been observed and described by various anatomists as early as the 16th century, Boullerne (2016) informs us. But the name myelin was given only in 1854 by Rudolf Virchow, a physician with a rich academic and public life. Although Virchow introduced the term to distinguish between bone marrow and the medullary substance, paradoxically, he managed to muddy the waters even more because he did not restrict the usage of the term myelin to… well, myelin. He used it also to refer to substances in blood cells, egg yolk, and the spleen and, frankly, from the quotes provided in the paper, I cannot make heads or tails of what Virchow thought myelin was. The word myelin comes from the Greek myelos or muelos, which means marrow.

Boullerne (2016) obviously did a lot of research, as the 53-page account is full of quotes from original references. She is such a scholar on the history of myelin that I have no choice but to believe her when she says: “In 1868, the neurologist Jean-Martin Charcot (1825-1893) used myelin (myéline) in what can be considered its first correct attribution.”

So even though Virchow coined the term, he was using it incorrectly! Nevertheless, in 1858 he correctly identified the main role of myelin: electrical insulation of the axon. A brilliant insight for the time.

I love historical reviews of sciency stuff. This one is a ‘must-have’ for any biologist or neuroscientist. Chemists and physicists, too, don’t shy away; the paper has something for you too, like myelin’s biochemistry or its birefringence properties.

Reference: Boullerne, AI (Sep 2016, Epub 8 Jun 2016). The history of myelin. Experimental Neurology, 283(Pt B): 431-45. doi: 10.1016/j.expneurol.2016.06.005. ARTICLE

Original Reference: Virchow R. (Dec 1854). Ueber das ausgebreitete Vorkommen einer dem Nervenmark analogen Substanz in den thierischen Geweben. Archiv für pathologische Anatomie und Physiologie und für klinische Medicin, 6(4): 562–572. doi:10.1007/BF02116709. ARTICLE

P.S. I don’t think it’s right that Springer can retain the copyright for the Virchow paper and charge $39.95 for it. I don’t think they hold the copyright anyway, despite their claims, because the paper is 162 years old. I am aware of no German or American copyright law that extends for so long. So, if you need it for academic purposes, write to me and thou shalt have it.

By Neuronicus, 29 October 2016

Drink before sleep

Among the many humorous sayings, puns, and jokes that one inevitably encounters on any social medium account, one that was popular this year was about the similarity between putting a 2-year-old to bed and putting your drunk friend to bed, which went like this: they both sing to themselves, request water, mumble and blabber incoherently, do some weird yoga poses, cry, hiccup, and then they pass out. The joke manages to steal a smile only from someone who has been through both situations; otherwise it loses its appeal.

Having been exposed to both situations, I thought that while the water request from the drunk friend is a response to the dehydrating effects of alcohol, the water request from the toddler is probably nothing more than a delaying tactic to postpone bedtime. Whether or not there is some truth to my assumption in the case of the toddler, here is a paper showing that there is definitely more to the water request than meets the eye.

Generally, thirst is generated by the hypothalamus when its neurons, together with neurons from the organum vasculosum of the lamina terminalis (OVLT) in the forebrain, sense that the blood volume is too low (hypovolaemia) or the blood is too salty (hyperosmolality), both phenomena indicating a need for water. Ingesting water brings these indices back to homeostatic values.

More than a decade ago, researchers observed that rodents take a good gulp of water just before going to sleep. This surge was not motivated by thirst: the mice were not feverish, were not hungry, and their blood was neither too concentrated nor too salty. So why do it then? If the rodents are prevented from drinking that water, they get dehydrated, so the behavior obviously has a function. But it is not motivated by thirst, at least not the way we know it. Huh… The authors call this “anticipatory thirst”, because it keeps the animal from becoming dehydrated later on.

Since the behavior occurs with regularity, maybe the neurons that control circadian rhythms have something to do with it. So Gizowski et al. (2016) took a closer look at the activity of clock neurons in the suprachiasmatic nucleus (SCN), a well-known hypothalamic nucleus heavily involved in circadian rhythms. The authors did a lot of work on SCN and OVLT neurons: fluorescent labeling, c-fos expression, anatomical tracing, optogenetics, genetic knockouts, pharmacological manipulations, electrophysiological recordings, and behavioral experiments. All this to come to one conclusion:

SCN neurons release vasopressin and that excites the OVLT neurons via V1a receptors. This is necessary and sufficient to make the animal drink the water, even if it’s not thirsty.

That’s a lot of techniques used in a lot of experiments for only three authors. Ten years ago, you needed only one, maybe two techniques to prove the same point. Either there have been a lot of students and technicians who did not get credit (there isn’t even an Acknowledgements section. EDIT: yes, there is, see the comments below or, if they’re missing, the P.S.) or these three authors are experts in all these techniques. In this day and age, I wouldn’t be surprised by either option. No wonder small universities have difficulty publishing in Big Name journals; they don’t have the resources to compete. And without publishing, no tenure… And without tenure, less research… And thus shall the gap widen.

Musings about workload aside, this is a great paper, shedding light on yet another mysterious behavior and elucidating the mechanism behind it. There’s still work to be done, though, like answering how accurate the SCN is in predicting bedtime so it can activate the drinking behavior. Does it take its cues from light only? Does ambient temperature play a role? And so on. This line of work can help people who work in shifts to prevent certain health problems. Their SCN is out of rhythm, and that can deleteriously influence the activity of a whole slew of organs.

Summary of the findings of DOI: 10.1038/nature19756. 1) Light is a cue for the suprachiasmatic nucleus (SCN) that bedtime is near. 2) The SCN vasopressin neurons that project to the organum vasculosum of the lamina terminalis (OVLT) are activated. 3) The OVLT generates the anticipatory thirst. 4) The animal drinks fluids.

Reference: Gizowski C, Zaelzer C, & Bourque CW (28 Sep 2016). Clock-driven vasopressin neurotransmission mediates anticipatory thirst prior to sleep. Nature, 537(7622): 685-688. PMID: 27680940. DOI: 10.1038/nature19756. ARTICLE

By Neuronicus, 5 October 2016

EDIT (12 Oct 2016): P.S. The blog comments are automatically deleted after a period of time. In the case of this post, that would be a pity because I have been fortunate to receive comments from at least one of the authors of the paper, the PI, Dr. Charles Bourque, and, presumably under a pseudonym (though I don’t know that for sure), also from the first author, Claire Gizowski. So I will include here, in a post scriptum, the main idea of their comments. Here is an excerpt from Dr. Bourque’s comment:

“Let me state for the record that Claire accomplished pretty much ALL of the work in this paper (there is a description of who did what at the end of the paper). More importantly, there were no “unthanked” undergraduates, volunteers or other parties that contributed to this work.”

My hat, Ms. Gizowski. It is tipped. To you. Congratulations! With such impressive work I am sure I will hear about you again, and that pretty soon I will be blogging about Dr. Gizowski.

How do you remember?

Memory processes like formation, maintenance, and consolidation have been the subjects of extensive research and, as a result, we know quite a bit about them. And just when we thought we were getting a pretty clear picture of the memory tableau, and all that was left was a little dusting around the edges and getting rid of the pink elephant in the middle of the room, here comes a new player that muddies the waters again.

DNA methylation. The attaching of a methyl group (CH3) to the DNA’s cytosine by a DNA methyltransferase (Dnmt) was considered until very recently a process reserved for immature cells, helping them meet their final fate. In other words, DNA methylation plays a role in cell differentiation by suppressing gene expression. It has other roles in X-chromosome inactivation and cancer, but it was not suspected to play a role in memory until this decade.

Oliveira (2016) gives us a nice review of the role(s) of DNA methylation in memory formation and maintenance. First, we encounter the pharmacological studies that found that injecting Dnmt inhibitors into various parts of the brain in various species disrupted memory formation or maintenance. Next, we see the genetic studies, where mouse Dnmt knock-downs and knock-outs also show impaired memory formation and maintenance. Finally, knowing which genes’ transcription is essential for memory, the author takes us through several papers that examine the de novo DNA methylation and demethylation of these genes in response to learning events and its role in alternative splicing.

Based on these here available data, the author proposes that activity-induced DNA methylation serves two roles in memory: to “on the one hand, generate a primed and more permissive epigenome state that could facilitate future transcriptional responses and on the other hand, directly regulate the expression of genes that set the strength of the neuronal network connectivity, this way altering the probability of reactivation of the same network” (p. 590).

Here you go; another morsel of actual science brought to your fingertips by yours truly.

Reference: Oliveira AM (Oct 2016, Epub 15 Sep 2016). DNA methylation: a permissive mark in memory formation and maintenance. Learning & Memory,  23(10): 587-593. PMID: 27634149, DOI: 10.1101/lm.042739.116. ARTICLE

By Neuronicus, 22 September 2016

Pic of the day: Cortex

Reference: von Bonin, G. (1950). Essay on the cerebral cortex. Ed. Charles C. Thomas, Springfield. ISBN 10: 0398044252, ISBN 13: 9780398044251

Image credit: geralt. The whole image: Public Domain

By Neuronicus, 15 September 2016

Who invented optogenetics?

Wayne State University. Ever heard of it? Probably not. How about Zhuo-Hua Pan? No? No bell ringing? Let’s try a different approach: ever heard of Stanford University? Why, yes, it’s one of the most prestigious and famous universities in the world. And now the last question: do you know who Karl Deisseroth is? If you’re not a neuroscientist, probably not. But if you are, then you would know him as the father of optogenetics.

Optogenetics is the newest tool in the biology kit that allows you to control the way a cell behaves by shining a light on it (that’s the ‘opto’ part). Prior to that, the cell in question must be made to express a protein that is sensitive to light (i.e. a rhodopsin), either by injecting a virus or by breeding genetically modified animals that express that protein (that’s the ‘genetics’ part).

If you’re watching the Nobel Prizes for Medicine, then you would also be familiar with Deisseroth’s name, as he may be awarded the Nobel soon for inventing optogenetics. Only that, strictly speaking, he did not. Or, to be fair and precise at the same time, he did, but he was not the first one. Dr. Pan from Wayne State University was. And he got scooped.

The story is imparted to us at length by Anna Vlasits in STAT and republished in Scientific American. In short, Dr. Pan, an obscure name at an obscure university in an ill-famed city (Detroit), did research for years in the unglamorous field of retina and blindness. He figured, quite reasonably, that restoring the proteins which sense light in the human eye (i.e. photoreceptor proteins) could restore vision in the congenitally blind. The problem is that human photoreceptor proteins are very complicated, and efforts to introduce them into the retinas of blind people have proven unsuccessful. But in 2003 a paper was published showing how an algal protein that senses light, called channelrhodopsin (ChR), can be expressed in mammalian cells without loss of function.

So, in 2004, Pan got a colleague from Salus University (if Wayne State University is a medium-sized research university, then Salus is a really tiny, tiny little place) to engineer a ChR into a virus which Pan then injected in rodent retinal neurons, in vivo. After 3-4 weeks he obtained the expression of the protein and the expression was stable for at least 1 year, showing that the virus works nicely. Then his group did a bunch of electrophysiological recordings (whole cell patch-clamp and voltage clamp) to see if shining light on those neurons makes them fire. It did. Then, they wanted to see if ChR is for sure responsible for this firing and not some other proteins so they increased the intensity of the blue light that their ChR is known to sense and observed that the cell responded with increased firing. Now that they saw the ChR works in normal rodents, next they expressed the ChR by virally infecting mice who were congenitally blind and repeated their experiments. The electrophysiological experiments showed that it worked. But you see with your brain, not with your retina, so the researchers looked to see if these cells that express ChR project from the retina to the brain and they found their axons in lateral geniculate and superior colliculus, two major brain areas important for vision. Then, they recorded from these areas and the brain responded when blue light, but not yellow or other colors, was shone on the retina. The brain of congenitally blind mice without ChR does not respond regardless of the type of light shone on their retinas. But does that mean the mouse was able to see? That remains to be seen (har har) in future experiments. But the Pan group did demonstrate – without question or doubt – that they can control neurons by light.

All in all, a groundbreaking paper. So the Pan group was not off the mark when they submitted it to Nature on November 25, 2004. As Anna Vlasits reports in the Exclusive, Nature told Pan to submit to a more specialized journal, like Nature Neuroscience, which then rejected it. Pan then submitted to the Journal of Neuroscience, which also rejected it. He submitted it to Neuron on November 29, 2005, which finally accepted it on February 23, 2006. It got published on April 6, 2006. Deisseroth’s paper was submitted to Nature Neuroscience on May 12, 2005, accepted in July, and published on August 14, 2005… His group infected rat hippocampal neurons cultured in a Petri dish with a virus carrying the ChR and then did some electrophysiological recordings on those neurons while shining lights of different wavelengths on them, showing that these cells can be controlled by light.

There’s more to the saga, with patent filings and a conference where Pan showed the ChR data in May 2005 and so on; you can read all about it in Scientific American. The magazine is just hinting at what I will say outright, loud and clear: Pan didn’t get published because of his and his institution’s lack of fame. Deisseroth did because of the opposite. That’s all. This is not about squabbles over whose work is more elegant, who presented his work as a scientific discovery versus a technical report, whose title is catchier, whose language is more boisterous or more native-English-speaking, or luck, or anything like that. It is about bias and, why not? let’s call a spade a spade, discrimination. Nature and the Journal of Neuroscience are not caught doing this for the first time. Not by a long shot. The problem is that they are still doing it, that is: discriminating against scientific work presented to them based on the names of the authors and their institutions.

Personally, so I don’t get comments along the lines of the fox and the grapes, I have worked at both high profile and low profile institutions. And I have seen the difference not in the work, but in the reception.

That’s my piece for today.

Source:  STAT, Scientific American.

References:

1) Bi A, Cui J, Ma YP, Olshevskaya E, Pu M, Dizhoor AM, & Pan ZH (6 April 2006). Ectopic expression of a microbial-type rhodopsin restores visual responses in mice with photoreceptor degeneration. Neuron, 50(1): 23-33. PMID: 16600853. PMCID: PMC1459045. DOI: 10.1016/j.neuron.2006.02.026. ARTICLE | FREE FULLTEXT PDF

2) Boyden ES, Zhang F, Bamberg E, Nagel G, & Deisseroth K. (Sep 2005, Epub 14 Aug 2005). Millisecond-timescale, genetically targeted optical control of neural activity. Nature Neuroscience, 8(9):1263-1268. PMID: 16116447. DOI: 10.1038/nn1525. ARTICLE

By Neuronicus, 11 September 2016

Another puzzle piece in the autism mystery

Just like in the case of schizophrenia, hundreds of genes have been associated with autistic spectrum disorders (ASDs). Here is another candidate.

Féron et al. (2016) reasoned that most of the info we have about the genes that behave badly in ASDs comes from studies that used adult cells. Because ASDs are present before or very shortly after birth, they figured that looking for genetic abnormalities in cells at the very early stages of ontogenesis might prove enlightening. Those cells are stem cells. Of the pluripotent kind. FYI, based on what they can become (a.k.a. how potent they are), stem cells are divided into totipotent, pluripotent, multipotent, oligopotent, and unipotent. So the pluripotents are very ‘potent’ indeed, having the potential of producing a perfect person.

Tongue-twisters aside, the authors’ approach is sensible, albeit non-hypothesis-driven. Which means they didn’t have anything specific in mind when they started looking for differences in gene expression between olfactory nasal cells obtained from 11 adults with ASD and 11 age-matched normal controls. Luckily for them, as transcriptome studies have a tendency to be difficult to replicate, they found anomalies in the expression of genes that have already been associated with ASD. But they also found a new one, the MOCOS (MOlybdenum COfactor Sulfurase) gene, which was poorly expressed in ASDs (downregulated, in genetic speak). Its enzyme is also called MOCOS (am I the only one who thinks that MOCOS isolated from nasal cells sounds too similar to mucus? is the acronym actually a backronym?).

The enzyme is not known to play any role in the nervous system. Therefore, the researchers looked to see where the gene is expressed. Its enzyme could be found all over the brain of both mouse and human. Also, in the intestine, kidneys, and liver. So not much help there.

Next, the authors deleted this gene in a worm, Caenorhabditis elegans, and they found out that the worm’s cells have issues in dealing with oxidative stress (e.g. the toxic effects of free radicals). In addition, their neurons had abnormal synaptic transmission due to problems with vesicular packaging.

Then they managed – with great difficulty – to produce human induced pluripotent stem cells (iPSCs) in a Petri dish in which the MOCOS gene was partially knocked down. ‘Partially’, because the ‘totally’ did not survive. Which tells us that MOCOS is necessary for the survival of iPSCs. The mutant cells had fewer synaptic boutons than the normal cells, meaning they formed fewer synapses.

The study, besides identifying a new candidate for diagnosis and treatment, offers some potential explanations for some beguiling data that other studies have brought forth, like the fact that all sorts of neurotransmitter systems and all sorts of brain regions seem to be impaired in ASDs, making it very hard to grab the tiger by the tail if the tiger sprouts a new tail whenever you look at it, just like the Hydra’s heads. But discovering a molecule that is involved in a ubiquitous process like synapse formation may provide a way to leave the tiger’s tail(s) alone and focus on the teeth. In the authors’ words:

“As a molecule involved in the formation of dense core vesicles and, further down, neurotransmitter secretion, MOCOS seems to act on the container rather than the content, on the vehicle rather than one of the transported components” (p. 1123).

The knowledge uncovered by this paper makes a very good piece of the ASDs puzzle. Maybe not a corner, but a good edge. Alright, even if it’s not an edge, at least it’s a crucial piece full of details, not one of those sky pieces.

Reference: Féron F, Gepner B, Lacassagne E, Stephan D, Mesnage B, Blanchard MP, Boulanger N, Tardif C, Devèze A, Rousseau S, Suzuki K, Izpisua Belmonte JC, Khrestchatisky M, Nivet E, & Erard-Garcia M (Sep 2016, Epub 4 Aug 2016). Olfactory stem cells reveal MOCOS as a new player in autism spectrum disorders. Molecular Psychiatry, 21(9):1215-1224. PMID: 26239292, DOI: 10.1038/mp.2015.106. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 31 August 2016

Painful Pain Paper

There has been much hype over the new paper published in the latest Nature issue which claims to have discovered an opioid analgesic that doesn’t have most of the side effects of morphine. If the claim holds, the authors may have found the Holy Grail of pain research chased by too many for too long (besides being worth billions of dollars to its discoverers).

The drug, called PZM21, was discovered using structure-based drug design. This means that instead of taking a drug that works, say morphine, then tweaking its molecular structure in various ways and seeing if the resultant drugs work, you take the target of the drug, say the mu-opioid receptor, and design a drug that fits into that slot. The search and design are done initially with sophisticated software, and there are many millions of virtual candidates. So it takes a lot of work and ingenuity to select the few drugs that will be synthesized and tested in live animals.

Manglik et al. (2016) did just that, and they came up with PZM21 which, compared to morphine:

1) is selective for the mu-opioid receptors (i.e. it doesn’t bind to anything else)
2) produces no respiratory depression (maybe a touch in the opposite direction)
3) doesn’t affect locomotion
4) produces less constipation
5) produces long-lasting affective analgesia
6) and has less addictive liability

The Holy Grail, right? Weeell, I have some serious issues with number 5 and, to some extent, number 6 on this list.

Normally, I wouldn’t dissect a paper so thoroughly because, if there is one thing I learned by the end of grad school and postdoc, it is that there is no perfect paper out there. Consequently, anyone with scientific training can find issues with absolutely anything published. I once challenged someone to bring me any loved and cherished paper and I would tear it apart; it’s much easier to criticize than to come up with solutions. Probably that’s why everybody hates Reviewer No. 2…

But, for extraordinary claims, you need extraordinary evidence. And the evidence simply does not support points 5 and, to some extent, 6 above.

Let’s start with pain. The authors used 3 tests: hotplate (drop a mouse on a hot plate for 10 sec and see what it does), tail-flick (give an electric shock to the tail and see how fast the mouse flicks its tail), and formalin (inject an inflammatory painful substance into the mouse’s paw and see what the animal does). They used 3 doses of PZM21 in the hotplate test (10, 20, and 40 mg/kg), 2 doses in the tail-flick test (10 and 20), and 1 dose in the formalin test (20). Why? If you start with a dose-response in one test and want to convince me it works in the other tests, then do a dose-response for those too, so I have something to compare. These tests have been extensively used in pain research, and the standard drug used is morphine. Therefore, the literature is clear on how different doses of morphine perform in these tests. I need dose-responses for your new drug to be able to see how it measures up, since you claim it is “more efficacious than morphine”. If you don’t want to convince me there is a dose-response effect, that’s fine too, I’ll frown a little, but it’s your choice. However, then choose a dose and stick with it! Otherwise I cannot compare the behaviors across tests, rendering one or the other test meaningless. If you’re wondering, they used only one dose of morphine in all the tests, except the hotplate, where they used two.
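For the record, what a full dose-response buys you is a fitted curve per test and per drug, so potencies (ED50s) can be put side by side. A minimal sketch of such a fit, with entirely made-up doses and effect sizes (none of these numbers are from Manglik et al.):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical hotplate results: % of the maximum possible effect (%MPE)
# at each dose. The doses and %MPE values are illustrative only.
dose = np.array([1.0, 3.0, 10.0, 20.0, 40.0])   # mg/kg
mpe = np.array([5.0, 15.0, 45.0, 70.0, 85.0])   # % MPE

# Classic sigmoidal (Hill) dose-response model.
def hill(d, emax, ed50, n):
    return emax * d**n / (ed50**n + d**n)

params, _ = curve_fit(hill, dose, mpe, p0=[100.0, 10.0, 1.0])
emax, ed50, n = params
print(f"Emax ~ {emax:.0f}% MPE, ED50 ~ {ed50:.1f} mg/kg")
```

With one curve per test, an ED50 from the hotplate can be compared directly with an ED50 from the tail-flick; with one or two scattered doses per test, it can’t.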

Another thing, also related to doses. The authors found something really odd: PZM21 works (meaning produces analgesia) in the hotplate test, but not the tail-flick test. This is truly amazing because no opiate I know of can make such a clear-cut distinction between those two tests. Buuuuut, and here is a big ‘BUT’, they did not test their highest dose (40 mg/kg) in the tail-flick test! Why? I’ll tell you why, because I am oh sooo familiar with this argument. It goes like this:

Reviewer: Why didn’t you use the same doses in all your 3 pain tests?

Author: The middle and highest doses have similar effects in the hotplate test, ok? So it doesn’t matter which one of these doses I use in the tail-flick test.

Reviewer: Yeah, right, but, you have no proof that the effects of the two doses are indistinguishable because you don’t report any stats on them! Besides, even so, that argument applies only when a) you have ceiling effects (not the case here, your morphine hit it, at any rate) and b) the drug has the expected effects on both tests and thus you have some logical rationale behind it. Which is not the case here, again: your point is that the drug DOESN’T produce analgesia in the tail-flick test and yet you don’t wanna try its HIGHEST dose… REJECT AND RESUBMIT! Awesome drug discovery, by the way!

So how come the paper passed the reviewers?! Perhaps the fact that two of the reviewers are long-term publishing co-authors from the same university had something to do with it; you know, same views predispose them to the same biases and so on… But can you do that? I mean, have reviewers for Nature from the same department for the same paper?

Alrighty then… let’s move on to the stats. Or rather not. Because there aren’t any for the hotplate or tail-flick tests! Now, I know all about the “freedom from the tyranny of p” movement (that is: report only the means, standard errors of the mean, and confidence intervals, and let the reader judge the data) and about the fact that the average scientist today needs to know 100-fold more stats than their predecessors 20 years ago (although some biologists and chemists seem to be excused from this: things either turn color or they don’t, either are there or they aren’t, etc.), or about the fact that you cannot get away with only one experiment published these days, but need a lot of them, so you have to apply a lot of corrections to your stats to avoid Type 1 errors. I know all about that, but just like the case with the doses, choose one way or another and stick to it. Because there are ANOVAs run for the formalin test and the respiration, constipation, locomotion, and conditioned place preference tests, but none for the hotplate or tail-flick! I am also aware that to be published in Science or Nature you have to strip your work and wording to the bare minimum because of the insane wordcount limits, but you have free rein in the Supplementals. And I combed through those and there are no stats there either. Nor are there any power analyses… So, what’s going on here? Remember, the authors didn’t test the highest dose in the tail-flick test because – presumably – the highest and intermediary doses have indistinguishable effects, but where are the stats to prove it?
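To be concrete about what’s missing: the standard treatment for data like these would be a one-way ANOVA across dose groups, followed by the pairwise comparison that would actually justify dropping the highest dose. A sketch with fabricated latencies (the group means, SDs, and sample sizes are invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical hotplate latencies (seconds) for vehicle plus three doses;
# all numbers are illustrative, not from the paper.
groups = {
    "vehicle": rng.normal(8.0, 2.0, 10),
    "10 mg/kg": rng.normal(12.0, 2.0, 10),
    "20 mg/kg": rng.normal(18.0, 2.0, 10),
    "40 mg/kg": rng.normal(19.0, 2.0, 10),
}

# One-way ANOVA across the four dose groups.
f, p = stats.f_oneway(*groups.values())
print(f"F = {f:.2f}, p = {p:.2g}")

# The pairwise test the argument hinges on: are the middle and highest doses
# actually indistinguishable? (A real analysis would correct this p-value
# for the number of pairwise comparisons, e.g. Bonferroni.)
t, p_pair = stats.ttest_ind(groups["20 mg/kg"], groups["40 mg/kg"])
print(f"20 vs 40 mg/kg: t = {t:.2f}, p = {p_pair:.2g}")
```

A non-significant 20-vs-40 comparison, reported with its statistic, is exactly the kind of one-line result that would have defused the objection above.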

And now the thing that really, really bothered me: the claim that PZM21 takes away the affective dimension of pain but not the sensory one. Pain is a complex experience that, depending on your favourite pain researcher, has at least two dimensions: the sensory (also called ‘reflexive’ because it is the immediate response to the noxious stimulation that makes you retract, by reflex, the limb from whatever produces the tissue damage) and the affective (also called ‘motivational’ because it makes the pain unpleasant and motivates you to get away from whatever caused it and to seek alleviation and recovery). The first aspect of pain, the sensory, is relatively easy to measure, since you look at limb withdrawal (or tail withdrawal, in tailed animals). By contrast, the affective aspect is very hard to measure. In humans, you can ask them how unpleasant it is (and even those reports are unreliable), but how do you do it with animals? Well, you go back to humans and see what they do. Humans scream “Ouch!” or swear when they get hurt (so you can measure vocalizations in animals), and humans avoid places in which they got hurt because they remember the unpleasant pain (so in animals you run a test called Conditioned Place Avoidance – although if a drug shows positive results in this test, like morphine does, you don’t know whether you blocked the memory of the unpleasantness or the feeling of unpleasantness itself, but that’s a different can of worms). The authors did not use any of these tests, yet they claim that PZM21 takes away the unpleasantness of pain, i.e. that it is an affective analgesic!

What they did was this: they looked at the behaviors the animal performed on the hotplate and divided them into two categories: reflexive (the lifting of the paw) and affective (the licking of the paw and the jumping). Now, there are several issues with this dichotomy, and I’m not even going to go there; I’ll just say that there are prominent pain researchers who will scream at the top of their lungs that the so-called affective behaviors in the hotplate test cannot be indexes of pain affect, because pain affect requires forebrain structures and yet these behaviors persist in the decerebrated rodent, including the jumping. Anyway, leaving aside the theoretical debate about what those behaviors really mean, there is still the problem of the jumpers: namely, the authors excluded from the analysis the mice that tried to jump out of the hotplate when evaluating the potency of PZM21, but then left them in when comparing the two types of analgesia, because jumping is a sign of escape, an emotionally valenced behavior! Isn’t this the same test?! Seriously? Why are you using two different groups of mice and leaving the impression that there is only one? And oh yeah, they used only the middle dose for the affective evaluation, when they used all three doses for potency… And I’m not even gonna ask why they used the highest dose in the formalin test… but only for the normal mice; the knockouts in the same test got the middle dose! So we’re back to comparing pears with apples again!

Next (and last, I promise, this rant is way too long already), the non-addictive claim. The authors used the Conditioned Place Preference paradigm, an old and reliable method to test drug likeability. The idea is that you have a box with two chambers, X and Y. Give the animal saline in chamber X and let it stay there for some time. The next day, you give the animal the drug and confine it in chamber Y. Do this a few times, and on the test day you let the animal explore both chambers. If it spends more time in chamber Y, then it liked the drug, much like humans behave by seeking out a place in which they felt good and avoiding places in which they felt bad. All well and good, except that it is standard practice in this test to counterbalance the days and the chambers! I don’t know about the chambers, because they don’t say, but the days were not counterbalanced. I know it’s a petty little thing to bring up, but remember the saying about extraordinary claims… so I expect flawless methods. I would have also liked to see a far more convincing test of abuse liability, like self-administration, but that will be done later, if the drug holds up, I hope. Thankfully, unlike with the affective analgesia claims, the authors have been more restrained in their verbiage about addiction, much to their credit (and I have a nasty suspicion as to why).

I do sincerely think the drug shows decent promise as a painkiller. Kudos for discovering it! But, seriously, fellows, the behavioral portion of the paper could use some improvement.

Ok, rant over.

EDIT (Aug 25, 2016): I forgot to mention something, and that is the competing financial interests declared for this paper: some of its authors have already filed a provisional patent for PZM21 or are already founders of or consultants for Epiodyne (a company that wants to develop novel analgesics). Normally, that wouldn’t worry me unduly; people are allowed to make a buck from their discoveries (although it is billions in this case, and we could get into that debate as old as capitalism over whether it is moral to make billions on the suffering of other people, but that’s a different story). Anyway, combine the financial interests with the poor behavioral tests and you get a very shoddy thing indeed.

Reference: Manglik A, Lin H, Aryal DK, McCorvy JD, Dengler D, Corder G, Levit A, Kling RC, Bernat V, Hübner H, Huang XP, Sassano MF, Giguère PM, Löber S, Duan D, Scherrer G, Kobilka BK, Gmeiner P, Roth BL, & Shoichet BK (Epub 17 Aug 2016). Structure-based discovery of opioid analgesics with reduced side effects. Nature, 1–6. PMID: 27533032, DOI: 10.1038/nature19112.

By Neuronicus, 21 August 2016