The Atlantic cod is driven to extinction by overfishing and global warming

Say bye-bye to this tasty beauty: the Atlantic cod (Gadus morhua). Picture by Hans Hillewaert released under CC BY-SA 4.0 license.

Pershing et al. (2015) analyzed a wealth of data from 1982-2013 on Gulf of Maine temperatures, cod population metrics, and global warming indices. Global warming has hit the Gulf of Maine harder than anywhere else on the planet, with temperatures rising much faster than in the rest of the global ocean during the last decade (about 0.23 °C per YEAR). As a direct consequence, the cod population has declined very rapidly in the last two decades: “The most recent assessment found that the spawning biomass in this stock is now less than 3,000 mt, only 4% of the spawning stock biomass that gives the maximum sustainable yield” (p. 2), which means, to my non-marine-biologist understanding, that 96% of the little fishes required to make a sustainable pool for fishing are gone.

The authors go further and analyze predator behavior and zooplankton availability (which has declined due to… you are correct, that pesky global warming again), coupled with the recent heat waves, and they conclude that, despite the horribly rapid decline in the population, the cod would have bounced back if it weren’t for overfishing. That’s right, folks! It is not enough that global warming (which is also man-made; the nay-sayers are deluded, period) has jeopardized this species, but we made sure it is on the brink of extinction by overfishing it. The quotas set for the fishing industry failed to take into account the effect of global warming on the population, setting fishing quotas for a steady-state system, which the Gulf of Maine obviously is not.

You may say, “All righty, then. Let’s fish less cod until it bounces back. Some major fisheries will go bankrupt, but, hey, we’re saving the fishes so we can eat them later. Easy-peasy”. Not so fast. The gravity of the situation is further accentuated by the very doom-and-gloom predictions of a basic population dynamics model that the authors publish as Fig. 3 of the paper. The cod population may bounce back, if we stop fishing COMPLETELY, and now. Not a little bit, not a few here and there, not just the slow and the weak, but ALL fishing needs to stop now if we want to rebuild the cod stock. And you don’t get to say “damn the cod, I don’t eat it anyway”, because you don’t know what else might be driven to extinction by the disappearance of the cod.
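For readers who like to tinker, the gist of such a projection can be sketched with a toy surplus-production model. To be loud and clear: this is NOT the authors’ actual model (their Fig. 3 uses stock-assessment data and temperature-dependent recruitment); every number below is a hypothetical round figure, chosen only to illustrate why a stock at 4% of its sustainable level rebuilds under a fishing moratorium but keeps shrinking when fishing mortality exceeds its growth rate.

```python
# Toy surplus-production model -- NOT the model from Pershing et al. (2015);
# all parameter values here are hypothetical, for illustration only.

def project_stock(b0, r=0.2, k=70_000, f=0.0, years=20):
    """Project spawning biomass (mt) forward under fishing mortality rate f."""
    b = b0
    for _ in range(years):
        b += r * b * (1 - b / k) - f * b  # logistic growth minus catch
        b = max(b, 0.0)                   # biomass cannot go negative
    return b

start = 3_000                            # roughly the spawning biomass cited above
rebuilt = project_stock(start, f=0.0)    # moratorium: the stock grows
fished = project_stock(start, f=0.25)    # f above growth rate: keeps declining
```

With f = 0 the toy stock climbs back toward its carrying capacity; with a fishing mortality above the intrinsic growth rate it shrinks year after year, which is the qualitative point of the authors’ figure.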

I am not exaggerating here with metaphors. Read the paper and take a look at the scientists’ simulations and predictions yourselves.

Reference: Pershing AJ, Alexander MA, Hernandez CM, Kerr LA, Le Bris A, Mills KE, Nye JA, Record NR, Scannell HA, Scott JD, Sherwood GD, & Thomas AC (Epub 29 October 2015). Slow adaptation in the face of rapid warming leads to collapse of the Gulf of Maine cod fishery. Science, DOI: 10.1126/science.aac9819. Article | FREE FULLTEXT PDF

By Neuronicus, 30 October 2015

Grooming only one side of the body

Grooming only one side of the body. Credit: http://www.bajiroo.com/2013/04/23-guys-with-half-shaved-beard/

Contrary to popular belief, rats and mice are very fastidious animals; they keep themselves scrupulously clean by engaging in a very meticulous routine of self-grooming. The routine is so rigorous that it allows researchers to divide the grooming sequence into four distinct phases, starting with the nose and whiskers and ending with the genitalia and tail. It is also a symmetrical behavior (no whisker left ungroomed, no paw unlicked).

Grooming is sensitive to dopaminergic manipulations, so Pelosi, Girault, & Hervé (2015) sought to see what happens when the dopamine fibers in the mouse brain are destroyed. They lesioned the medial forebrain bundle, a bundle of axon fibers that contains over 80% of the midbrain dopaminergic axons. But they were tricky: they lesioned only one side.

And the results were that not only did the lesioned mice exhibit less self-grooming on the side opposite to the lesion, but the behavior was rescued by L-DOPA, a medication for Parkinson’s disease. That is, they gave the mice some L-DOPA and the mice began to merrily self-groom again on both sides of the body. The authors discuss in depth other findings, like the changes (or absence thereof) in grooming bouts, grooming time, completeness of grooming, etc.

The findings are significant for Parkinson’s research, since the mild to moderate phases of the disease often present with asymmetrical motor behavior.

Reference: Pelosi A, Girault J-A, & Hervé D (23 Sept 2015). Unilateral Lesion of Dopamine Neurons Induces Grooming Asymmetry in the Mouse. PLoS One. 2015; 10(9): e0137185. doi: 10.1371/journal.pone.0137185. PMCID: PMC4580614. Article | FREE FULLTEXT PDF

By Neuronicus, 29 October 2015

Fat & afraid or slim & brave (Leptin and anxiety in ventral tegmental area)

A comparison of a mouse unable to produce leptin thus resulting in obesity (left) and a normal mouse (right). Courtesy of Wikipedia. License: PD

Leptin is a small molecule produced mostly by the adipose tissue, whose absence is the cause of morbid obesity in the genetically engineered ob/ob mice. Here is a paper that gives us another reason to love this hormone.

Liu, Guo, & Lu (2015) build upon their previous work investigating leptin’s action(s) in the ventral tegmental area (VTA) of the brain, a region that houses dopamine neurons and is widely implicated in pleasure and drug addiction (among other things). They did a series of very straightforward experiments in which they either infused leptin directly into the mouse VTA or deleted the leptin receptors in this region (by using a virus in genetically engineered mice). Then they tested the mice on three different anxiety tests.

The results: leptin decreases anxiety; absence of leptin receptors increases anxiety. Simple and to the point. It also makes sense, given that leptin receptors are mostly located on the VTA neurons that project to the central amygdala, a region involved in fear and anxiety (curiously, the authors cite the amygdala papers but do not comment on the leptin-VTA-dopamine-amygdala connection). For the specialists, I would say that they are a little liberal with their VTA hit assessment (they are mostly targeting the posterior VTA) and their GFP (green fluorescent protein) is sparsely expressed.

Reference: Liu J, Guo M, & Lu XY (Epub ahead of print 5 Oct 2015). Leptin/LepRb in the Ventral Tegmental Area Mediates Anxiety-Related Behaviors. International Journal of Neuropsychopharmacology, 1–11. doi:10.1093/ijnp/pyv115. Article | FREE PDF

By Neuronicus, 28 October 2015

How grateful would you feel after watching a Holocaust documentary? (Before you comment, READ the post first)

from Fox et al. (2015)

How would you feel if one of your favourite scientists published a paper that is, to put it in mild terms, not to their very best? Disappointed? Or perhaps secretly gleeful that even “the big ones” are not always producing pearl after pearl?

This is what happened to me after reading the latest paper from the Damasio group. Fox et al. (2015) decided to look for the neural correlates of gratitude. That is, stick people in an fMRI scanner, make them feel grateful, and see what lights up. All well and good, except they decided to go with a second-hand approach: instead of making people feel grateful (I don’t know how, maybe by giving them something?), they made the participants watch stories in which gratitude may have been felt by other people (still not too bad, maybe watching somebody helping the elderly). But no: the researchers made an in-house documentary about the Holocaust and then had several actual Holocaust survivors tell their stories (taken from the USC Shoah Foundation Institute’s Visual History Archive), focusing on the parts where their lives were saved or they were helped by others with survival necessities. Then the subjects were asked to immerse themselves in the story and report how grateful they would have felt had they been the gift recipients.

I don’t know about you, but I don’t think that after watching a documentary about the Holocaust (done with powerfully evocative images and professional actor voice-overs, mind you!) and seeing people tell the horrors they’ve been through and then receiving some food or shelter from a Good Samaritan, gratitude would have been my first feeling. Anger, perhaps? That such an abominable thing as the Holocaust was perpetrated by my fellow humans? Sorrow? Sadness? Feeling sick to my stomach? Compassion for the survivors? Maybe I am blatantly off-Gauss here, but I don’t think Damasio & co. measured what they thought they were measuring.

Anyway, for what it’s worth, the task produced significant activity in the medial prefrontal cortex (which is involved in so many behaviors that it is not even worth listing them), along with the usual suspects in a task as ambiguous as this, like various portions of the anterior cingulate and orbitofrontal cortices.

Reference: Fox GR, Kaplan J, Damasio H, & Damasio A (30 September 2015). Neural correlates of gratitude. Frontiers in Psychology, 6:1491. doi: 10.3389/fpsyg.2015.01491. Article | FREE FULLTEXT PDF

By Neuronicus, 27 October 2015


The F in memory

“Figure 2. Ephs and ephrins mediate molecular events that may be involved in memory formation. Evidence shows that memory formation involves alterations of presynaptic neurotransmitter release, activation of glutamate receptors, and neuronal morphogenesis. Eph receptors regulate synaptic transmission by regulating synaptic release, glutamate reuptake from the synapse (via astrocytes), and glutamate receptor conductance and trafficking. Ephs and ephrins also regulate neuronal morphogenesis of axons and dendritic spines through controlling the actin cytoskeleton structure and dynamics” (Dines & Lamprecht, 2015, p. 3).

When thinking about long-term memory formation, most people immediately picture glutamate synapses. Dines & Lamprecht (2015) review the role of a family of little-known players with big roles in learning and long-term memory consolidation: the Ephs and the ephrins.

Ephs (the name comes from erythropoietin-producing human hepatocellular carcinoma, the cancer cell line from which the first member was isolated) are transmembrane tyrosine kinase receptors. Ephrins (Eph receptor interacting proteins) bind to them. Ephrins are also membrane-bound proteins, which means that in order for the aforementioned binding to happen, cells must touch each other, or at least be in a very, very cozy vicinity. They are expressed in many regions of the brain, like the hippocampus, amygdala, or cortex.

The authors show that “interruption of Ephs/ephrins mediated functions is sufficient for disruption of memory formation” (p. 7) by reviewing a great deal of genetic, pharmacological, and electrophysiological studies employing a variety of behavioral tasks, from spatial memory to fear conditioning. The final sections of the review focus on the involvement of Ephs/ephrins in Alzheimer’s disease and anxiety disorders, suggesting that drugs that reverse the impairment of Eph/ephrin signaling in these brain diseases may lead to an eventual cure.

Reference: Dines M & Lamprecht R (8 Oct 2015, Epub 13 Sept 2015). The Role of Ephs and Ephrins in Memory Formation. International Journal of Neuropsychopharmacology, 1-14. doi:10.1093/ijnp/pyv106. Article | FREE FULLTEXT PDF

By Neuronicus, 26 October 2015

The FIRSTS: the discovery of the telomerase (1985)

Telomerase at work. Credit: Fatma Uzbas. License: CC BY-SA 3.0

A telomere is a genetic sequence (TTAGGG in vertebrates) that is repeated at the end of the chromosomes many thousands of times and serves as a protective cap that keeps the chromosome stable and protected from degradation. Every time a cell divides, the telomere shortens. This shortening has been linked to aging; in other words, the shorter the telomere, the shorter the lifespan. But in some cells, like germ cells, stem cells, or malignant cells, there is an enzyme that adds the telomere sequence back onto the chromosome after the cell has divided.
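To make the arithmetic of the cap concrete, here is a toy sketch. The repeat unit is the real one from the paragraph above, but the starting length, the per-division loss, and the amount restored are hypothetical round numbers, not measured values:

```python
# Toy model of telomere dynamics; the repeat unit is real (TTAGGG), but all
# lengths and the per-division loss are hypothetical round numbers.
TELOMERE_REPEAT = "TTAGGG"  # the vertebrate repeat, 6 bp per copy

def after_division(length_bp, loss_bp=100, telomerase=False, added_bp=100):
    """Telomere length (bp) after one cell division."""
    length_bp -= loss_bp           # end-replication problem: ends shorten
    if telomerase:
        length_bp += added_bp      # telomerase adds TTAGGG repeats back
    return max(length_bp, 0)       # a telomere cannot be shorter than zero

somatic = 10_000
for _ in range(50):                # 50 divisions without telomerase
    somatic = after_division(somatic)
# somatic is now 5_000 bp: half the cap is gone

stem = 10_000
for _ in range(50):                # telomerase-positive cell: losses restored
    stem = after_division(stem, telomerase=True)
# stem is still 10_000 bp
```

The point of the sketch is only the bookkeeping: a cell that loses a fixed chunk per division has a hard cap on how many times it can divide, unless telomerase keeps topping the telomere back up.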

Telomerase was discovered in 1984 by Carol W. Greider and Elizabeth Blackburn in a protozoan (i.e., a unicellular eukaryotic organism) commonly found in puddles and ponds, called Tetrahymena. I wanted to give a synopsis of their experiments, but who better to explain the work than the authors themselves? Here is a video of Dr. Blackburn herself explaining, step by step in 20 minutes, the rationale and the findings of the experiments for which she and Carol W. Greider received the Nobel Prize in Physiology or Medicine in 2009. If 20 minutes of genetics just whets your appetite, perhaps you will want to watch the extended three-hour lecture (Part 1, Part 2, Part 3).

Reference: Greider, C.W. & Blackburn, E.H. (December 1985). Identification of a specific telomere terminal transferase activity in Tetrahymena extracts. Cell. Vol. 43, Issue 2, Part 1, pg. 405-413. DOI: 10.1016/0092-8674(85)90170-9. Article | FREE FULLTEXT PDF

By Neuronicus, 25 October 2015

The FIRSTS: Axon description (1865)

Human medulla oblongata sectioned at the level of the olivary nuclei. Drawing by Deiters (1865). Original caption: “Fig. 15. Durchschnitt der medulla oblongata des Menschen in der Höhe der Olive (OL). R.R Raphe, Hyi). Nervus hypoglossus. Vag. Nervus vagus, deren Kerne in V und H liegen, aber in der Zeichnung nicht feiner ausgeführt sind. Den Haupttlißil der Figur nimmt die Formatio reticularis ein mit ihren zerstreuten Ganglienzellen, und die Olive mit den zu ihr hinzutretenden Fasern des Stratum zonale; C.c crura cerebelli ad medullam oblongatam; P Pyramidenstrang”.

In 1863, using the microscope, a German neuroanatomist from the University of Bonn by the name of Otto Friedrich Karl Deiters described in exquisite detail the branch-like processes of the neuron (i.e., the dendrites) and the long, single “axis cylinder” (i.e., the axon). Deiters’ nucleus (the place where a good portion of cranial nerve VIII ends) is named after him.

The book with the findings was published in German, posthumously (in 1865), with a preface and under the editorial guidance of Max Schultze, another famous German anatomist. I got the information from Debanne et al. (2011), which is a nice review on axon physiology (my German is kinda rusty due to lack of use). But I got my hands on the original German book (see link below) and, like a kid who doesn’t know how to read yet, all I could do was marvel at the absolutely stunning drawings by O.F.K. Deiters. Which are truly and unequivocally beautiful. See for yourself.

Neurons with axons and dendrites. Drawings by Deiters (1865).

Reference: Debanne D, Campanac E, Bialowas A, Carlier E, & Alcaraz G (April 2011). Axon physiology. Physiological Reviews, 91(2):555-602. doi: 10.1152/physrev.00048.2009. Article | FREE FULLTEXT PDF

Original citation: Deiters OFK (1865). Untersuchungen über Gehirn und Rückenmark des Menschen und der Säugethiere. Ed. Max Schultze, Braunschweig: Vieweg, 1865. doi: 10.5962/bhl.title.15270. Book | PDF

By Neuronicus, 24 October 2015

You were not my first choice either!

Sexually receptive female mice prefer a Lego brick over a male if their prefrontal oxytocin neurons are silenced.

Over the past five years or so, dopamine has stepped down from the role of “love molecule” in favor of oxytocin, a hormone previously known mostly for its crucial roles in pregnancy, labor, delivery, lactation, and breastfeeding. Since some interesting discoveries in monogamous vs. polygamous voles (a type of rodent) pointed to oxytocin as essential for bonding, many studies have implicated the chemical in all sorts of behaviors, from autism to trust, from generosity to wound healing.

Nakajima, Görlich, & Heintz (2014) add to that body of knowledge by finding that only a small group of cells in the medial prefrontal cortex express oxytocin receptors: a subpopulation of somatostatin cortical interneurons. Moreover, these neurons are sexually dimorphic, meaning they differ between males and females: the female ones fire twice as many action potentials upon application of oxytocin as the male ones.

And here is the more interesting part:
– Females in the sexually receptive phase of their estrous cycle whose oxytocin neurons were silenced preferred to interact with a Lego brick over a male mouse (which, as you might have guessed, is not what they typically choose).
– Females that were not in their sexually receptive phase when their oxytocin neurons were silenced still preferred to interact with a mouse (male or female) over the Lego brick.
– Silencing of other neurons had no effect on their choice.
– Silencing had no effect on the males.

Hm… there are such things out there as oxytocin intranasal sprays… How soon do you think it will be until the homeopaths, naturopaths, and other charlatans market oxytocin as a potent aphrodisiac? And it will take some deaths until the slow machine of bureaucracy turns its wheels and tightens the regulations on access to the hormone. Until then… as the cartoons say, don’t try this at home! Go buy some flowers or something for your intended one… it would work better, trust me on this.

Reference: Nakajima M, Görlich A, & Heintz N (9 October 2014). Oxytocin modulates female sociosexual behavior through a specific class of prefrontal cortical interneurons. Cell. 159(2): 295–305. doi:10.1016/j.cell.2014.09.020. Article | FREE FULLTEXT PDF

By Neuronicus, 23 October 2015

Are you in love with an animal?

Sugar Candy Hearts by Petr Kratochvil, taken from publicdomainpictures. License: PD

Ren et al. (2015) gave a sweet drink (Fanta), sweet food (Oreos), salty-vinegar food (Lay’s chips), or water to 422 people and then asked them about their romantic relationship; or, if they didn’t have one, about a hypothetical relationship. For hitched people, the foods or drinks had no effect on the evaluation of their relationship. In contrast, the singles who received sweets were more eager to initiate a relationship with a potential partner and evaluated a hypothetical relationship more favorably (how do you do that? I mean, if it’s hypothetical… why wouldn’t you evaluate it favorably from your singleton perspective?). Anyway, the singles who got sweets tended to see things a little more on the rosy side, as opposed to the taken ones.

The rationale for doing this experiment is that metaphors alter our perceptions (fair enough). Given that many terms of endearment reference the taste of sweetness, like “Honey”, “Sugar”, or “Sweetie”, maybe this is not accidental or just a metaphor, and, if we manipulate the taste, we manipulate the perception. Wait, what? Now re-read the finding above.

The authors take their results as supporting the view that “metaphorical thinking is one fundamental way of perceiving the world; metaphors facilitate social cognition by applying concrete concepts (e.g., sweet taste) to understand abstract concepts (e.g., love)” (p. 916).

So… I am left with many questions, the first being: if the sweet appellatives in a romantic relationship stem from an extrapolation of the concrete taste of sweetness to an abstract concept like love, then, I wonder, what concrete concept underlies the prevalence of “baby” as a term of endearment? Do I dare speculate what the metaphor stands for? Should people who are referred to as “baby” by their partners alert the authorities about possible pedophilic ideation? And what do we do about the non-English cultures (apparently non-Germanic or non-Mandarin, too) in which the lovey-dovey terms tend to cluster around various small objects (e.g. tassels), vegetables (e.g. pumpkin), cute onomatopoeias (I am at a loss for transcription here), or baby animals (e.g. chick, kitten, puppy)? Believe me, such cultures do exist and are numerous. “Excuse me, officer, I suspect my partner is in love with an animal. Oh, wait, that didn’t come out right…”

Ok, maybe I missed something with this paper, as half-way through I failed to maintain proper focus due to an intruding – and disturbing! – image of a man, a chicken, and a tassel. So take with a grain of salt the authors’ words when they say that their study “not only contributes to the literature on metaphorical thinking but also sheds light on an understudied factor that influences relationship initiation, that of taste” (p. 918). Oh, metaphors, how sweetly misleading you are…

Please use the “Comments” section below to share the strangest metaphor used as term of endearment you have ever heard in a romantic relationship.

Reference: Ren D, Tan K, Arriaga XB, & Chan KQ (Nov 2015). Sweet love: The effects of sweet taste experience on romantic perceptions. Journal of Social and Personal Relationships, 32(7): 905 – 921. DOI: 10.1177/0265407514554512. Article | FREE FULLTEXT PDF

By Neuronicus, 21 October 2015

Giving up? Your parvalbumin neurons may have something to do with it

Cartoon from Photobucket, licensing unknown.

One of the most ecologically valid rodent models of depression is the learned helplessness paradigm. You take a rat or a mouse and confine it in a cage with an electrified grid floor. Then you apply mild foot shocks at random intervals and of random durations for an hour (which counts as one session). The mouse initially tries to escape, but there is no escape; the whole floor is electrified. After a couple of sessions, the mouse doesn’t try to escape anymore; it gives up. Even when you put the mouse in a cage with an open door, so it can flee to pain-free freedom, it doesn’t attempt to do so. The interpretation is that the mouse has learned that it cannot control the environment; no matter what it does, it’s helpless, so why bother? Hence the name of the behavioral paradigm: learned helplessness.

All antidepressants on the market have been tested at one point or another against this paradigm; if the drug got the mouse to try to escape more, then the drug passed the test.

Just like in the higher vertebrate realm, there are a few animals who keep trying to escape longer than the others before they, too, finally give up; we call these animals resilient.

Perova, Delevich, & Li (2015) looked at a type of neuron that may have something to do with the capacity of some mice to be resilient: the parvalbumin interneurons (PAIs) of the medial prefrontal cortex (mPFC). These neurons produce GABA, the major inhibitory neurotransmitter in the brain, and modulate the activity of nearby neurons. Thanks to the ability to genetically engineer mice so that a certain kind of cell fluoresces, the researchers were able to identify, record from, and manipulate the function of the PAIs. The PAIs’ response to stimulation was weaker in helpless animals compared to resilient animals or controls. Also, inactivation of the PAIs via a designer virus promoted helplessness.

Reference: Perova Z, Delevich K, & Li B (18 Feb 2015). Depression of Excitatory Synapses onto Parvalbumin Interneurons in the Medial Prefrontal Cortex in Susceptibility to Stress. The Journal of Neuroscience, 35(7):3201–3206. doi: 10.1523/JNEUROSCI.2670-14.2015. Article | FREE FULLTEXT PDF

By Neuronicus, 21 October 2015

Have we missed the miracle painkiller?

How a classic pain scale would look to a person with congenital insensitivity to pain.

Pain insensitivity has been introduced to the larger public via TV shows of the medical drama genre (House, ER, Grey’s Anatomy, and the like). It seems fascinating to explore the consequences of a life without pain. But these shows do not feature, quite understandably, the gruesome aspects of this rare and incredibly life-threatening disorder. For example, did you know that sometimes the baby teeth of these people are extracted before they reach one year of age so they stop biting their fingers and tongues off? Or that a good portion of the people born with pain insensitivity die before reaching adulthood?

Nahorski et al. (2015) discovered a new disorder that includes pain insensitivity, along with touch insensitivity, cognitive delay, and other severe disabilities. They investigated a family in which the husband and wife are double first cousins who produced offspring. The authors had access to all the family’s DNA, including the children’s. Extensive analyses revealed a mutation in the gene CLTCL1, which encodes the protein CHC22. This protein is required for the normal development of the cells that sense pain and touch, among other things.

Other genetic studies of various painlessness syndromes have produced data that led to the discovery of new analgesics. Therefore, the hope with this study is that CHC22 may become a target for future painkiller discovery.

But, as a side note, what made me feature this paper is more than just the potential for new analgesics; it is the last paragraph of the paper: “rodents have lost CLTCL1 and thus must have alternative pathway(s) to compensate for this. Thus, some pain research results generated in these animals may not be applicable to man” (p. 2159).

The overwhelming majority of pain research and painkiller searching is done in rodents. So… how much of what we know from rodents doesn’t really apply to humans? Worse yet, how many false negatives have we discarded already? What if the panaceum universalis has already been tried in mice and nobody knows what it is because it didn’t work? It’s not like there is a database of negative results published somewhere where we can all ferret around and, in the light of these new discoveries, give those loser chemicals another try… Food for thought and yet ANOTHER reason why all research should be published, not just the positive results.

Reference: Nahorski MS, Al-Gazali L, Hertecant J, Owen DJ, Borner GH, Chen YC, Benn CL, Carvalho OP, Shaikh SS, Phelan A, Robinson MS, Royle SJ, & Woods CG. (August 2015, Epub 11 Jun 2015). A novel disorder reveals clathrin heavy chain-22 is essential for human pain and touch development. Brain, 138(Pt 8):2147-2160. doi: 10.1093/brain/awv149. Article | FREE FULLTEXT PDF

By Neuronicus, 20 October 2015

The FIRSTS: Dopamine is a neurotransmitter (1957)

Swedish pharmacologist and Nobel laureate Arvid Carlsson giving a lecture at the 2011 Göteborg Science Festival. By Vogler, released under CC BY-SA 3.0 on Wikipedia.

Dopamine is probably the most investigated brain substance. Tens of thousands of researchers owe their careers to this small molecule. And quite a few non-scientists, too. The chemical rose to partial notoriety after several studies linked it to Parkinson’s disease in the ’60s, but its true fame came a decade later, when dopamine’s role in pleasure and addiction became apparent.

And yet, not so many years before that, not only did nobody think of dopamine as a central player in the neuronal chemical waltz, but even after it was shown to be a neurotransmitter, few believed it, and the authors of those studies were a target for ridicule. The main author, Arvid Carlsson, was finally vindicated over four decades later when he was awarded the Nobel Prize in Physiology or Medicine (2000) for showing that dopamine is not just a precursor of adrenaline and noradrenaline, but also a neurotransmitter in its own right. The work barely covered more than a column in Nature, in 1957. Basically, he and his colleagues did a series of pharmacological experiments in which they injected rabbits or mice with known antipsychotics, antidepressants, and various monoamine precursors. And then observed the critters’ behavior.

Dopamine

The crux of the rationale was (and still is, in pharmacological experiments): if drug X modifies behavior and this modification is reversed by drug Y, then drugs X and Y either interact directly or have a common target. If not, then they don’t, and some other substance, let’s call it Z, is responsible for the behavior. If you go and read the actual paper, for the modern neuroscientist who has been uprooted from his/her chemistry classes, remember that 3-hydroxytyramine is dopamine and 3,4-dihydroxyphenylalanine is L-DOPA, and y’all professionals should know that serotonin is 5-hydroxytryptamine, with its precursor 5-hydroxytryptophan.
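That inference rule can be written down as a tiny sketch, with the behavioral outcomes reduced to labels. The “observations” below are a schematic of the reserpine/precursor story as described in this post, not data from the paper, and the function name is mine:

```python
# Schematic of the pharmacological reversal logic described above.
# The 'observations' are illustrative labels, not data from Carlsson et al.

def shares_target(effect_of_x, effect_after_adding_y):
    """If drug Y reverses drug X's effect, infer X and Y interact or share a target."""
    return effect_of_x != "normal" and effect_after_adding_y == "normal"

effect_of_reserpine = "sedated"   # drug X: reserpine immobilizes the animal
after_l_dopa = "normal"           # drug Y = L-DOPA (dopamine precursor): reversal
after_5_htp = "sedated"           # drug Y = 5-HTP (serotonin precursor): no reversal

l_dopa_implicated = shares_target(effect_of_reserpine, after_l_dopa)    # True
serotonin_implicated = shares_target(effect_of_reserpine, after_5_htp)  # False
```

Because the dopamine precursor, and not the serotonin precursor, restored behavior, dopamine itself became the prime suspect, which is exactly the argument of the 1957 note.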

Reference: Carlsson, A., Lindquist, M., & Magnusson, T. (November 1957). 3,4-Dihydroxyphenylalanine and 5-hydroxytryptophan as reserpine antagonists. Nature, 180(4596): 1200. doi:10.1038/1801200a0. Article | FULLTEXT PDF | Nature cover

By Neuronicus, 19 October 2015

Making new neurons from glia. Fully functional, too!

Fig. 1. NeuroD1 transforms glial cells into neurons. Summary of the first portion of the Guo et al. (2014) paper.

Far more numerous than the neurons, the glial cells have many roles in the brain, one of which is protecting an injury site from infection. In doing so, they fill up the injury space, but they also prevent other neurons from growing there.

Guo et al. (2014) managed to turn these glial cells into neurons. Functioning neurons, that is, fully integrated within the rest of the brain network! They did it in a mouse model of stab injury and a mouse model of Alzheimer’s disease in vivo. And because a mouse is not a man, they also metamorphosed human astrocytes into functioning glutamatergic neurons in a Petri dish, that is, in vitro.

It is an elegant paper that crossed all the t’s and dotted all the i’s. They did a lot of double-checking in different ways (see Fig. 1) to make sure their fantastic claim is for real (this kind of double, triple, quadruple checking is what gets a paper into the Big Name journals, like Cell). Needless to say, the findings show tremendous therapeutic potential for people with central nervous system injuries and diseases, like paralysis, stroke, Alzheimer’s, Parkinson’s, Huntington’s, tumor resections, and many, many more. Certainly worth a read!

Reference: Guo Z, Zhang L, Wu Z, Chen Y, Wang F, & Chen G (6 Feb 2014, Epub 19 Dec 2013). In vivo direct reprogramming of reactive glial cells into functional neurons after brain injury and in an Alzheimer’s disease model. Cell Stem Cell, 14(2):188-202. doi: 10.1016/j.stem.2013.12.001. Article | FREE FULLTEXT PDF | Cell cover

By Neuronicus, 18 October 2015

Orgasm-inducing mushrooms? Not quite

Claims that there is an orgasm-inducing mushroom in Hawaii may not be entirely accurate. Author and licensing of the above drawing unknown.

A few weeks ago, social media bombarded us with the eye-catching news that there is a mushroom in Hawaii whose smell induces spontaneous orgasms in women, but not in men, who found its smell repugnant.

Except that it appears there is no such mushroom. It turns out the 14-year-old paper was written by the president of a Hawaiian company that sells organic medicinal mushrooms. Not only written, but funded as well. This is enough to damn the credibility of any study (that’s why scientists must declare competing interests when submitting a paper). But it also seems that the study has major fundamental flaws: it lacks a single objective measure (of the quantity of spores, for example), it was done under uncontrolled conditions (the participants seem to have known what was expected of them), there have been no replications, etc. Actually, it should have been suspicious to me from the start that nothing happened in the following 14 years; you would think that such claims would have been replicated, or at least the mushroom identified. But, as they say, hindsight is 20/20. Here is some nice little reporting exposing the business in the Huffington Post and ScienceAlert.

I am not blaming the science media outlets too much on this one, like IFL Science or the NBC affiliate, as I thought of covering this study myself, had I been able to get my hands on the full text of the paper. In all honesty, who wouldn’t want to read that paper, especially since the abstract speculated that the mushroom’s spores contain hormone-like chemicals that mimic the human neurotransmitters released during sexual encounters? But I (and others) have searched in vain for the full text, and the most parsimonious explanation is that it was buried or withdrawn.

The trite but true message is: even the science media (including this one) are prone to mistakes. Interested in something? Go to the source and read the whole paper yourself, even the small print (like the part with the competing interests), and only then form an opinion. That’s why I always post the links to the original article.

Reference: Holliday, J.C. & Soule, N. (2001). Spontaneous Female Orgasms Triggered by Smell of a Newly Found Tropical Dictyphora Species. International Journal of Medicinal Mushrooms, 3: 162-167. Abstract | Debunking in The Journal of Wild Mushrooming | Debunking in Discover Magazine

By Neuronicus, 17 October 2015

Nettles are good for you in more ways than one

Urtica Urens (the small nettle). Photo by H. Zell, released under CC BY-SA 3.0. Courtesy of Wikipedia.

Many cultures, especially Eastern European and North African, use nettles in their cuisine, as soups, creams, or teas. Nettles taste like spinach. Now Doukkali et al. (2015) have discovered a new use for the plant.

The authors harvested Urtica urens from northern Morocco, dried the plants, and prepared a methanolic extract (see p. 2 for the procedure. Don’t drink methanol!). Then they gave the extract in 3 different doses to mice and assessed its effects in two anxiety tests and one locomotor test, against saline (the control) and diazepam (Valium), a powerful anxiolytic from the benzodiazepine class. Like diazepam, the plant extract had anxiolytic properties, but unlike diazepam, it did not induce any locomotor effects. And this is where the big thing is: ALL benzos on the market have significant side effects in the form of drowsiness, impaired coordination, sedation, and so on. Having an anxiolytic without motor impairment would be wonderful.

This is a short, easy-to-read paper, clearly written, which covers some classic aspects of new drug discovery (like dose-response and lethal dose assessment). The reasons why I think it did not make it into one of the big journals are the small sample size, the relatively moderate effect, and the failure to identify the active compound (there are virtually no straightforward behavioral studies published in Nature or Science any more; you’ve got to have the molecules, or proteins, or cells, or what-have-yous as proof that you mean hard-science business).

Or maybe it’s the fact that the paper does not have any graphs: all data are presented in tables, which I personally enjoy, as it is oh so easy to mislead with a graph; and the fancier-looking the image, the better the chances that few people get it anyway. Give me tables with standard deviations any day, as I suspect is the position of the authors of the paper, too. But, for the visually inclined, I made a figure with some of their data; it took only 15 minutes in Excel.

Fig. 1. The effect of saline (S), diazepam (D), and nettle extract (N) on the light-dark test (left) and hole board test (right). Data from Doukkali et al. (2015), graph by Neuronicus.

Or, not to put too fine a point on it, the authors are from Morocco, so they don’t come shrouded in A-list university glamour. In any case, the next obvious step is to isolate the active compound and replicate its anxiolytic effects in other tests and other species.

Reference: Doukkali Z, Taghzouti K, Bouidida EL, Nadjmouddine M, Cherrah Y, & Alaoui K. (24 April 2015). Evaluation of anxiolytic activity of methanolic extract of Urtica urens in a mice model. Behavioral and Brain Functions, 11(19): 1-5. doi: 10.1186/s12993-015-0063-y. Article | FREE PDF

By Neuronicus, 16 October 2015

Really? That’s your argument?!

Photo by FreeStockPhotos.biz Collection. Released under FSP Standard License

I don’t believe there is a single human being who, during an argument, has not thought or exclaimed “Really? That’s your argument?” or something along those lines. The saying/attitude is meant to convey the emotional response (often contemptuous) to identifying the opponent’s argument as weak and unworthy of debate. We seem to be very critical of other people’s reasoning when it does not match our own. On the other hand, we also seem to be a little more indulgent with the strength of our own arguments. This phenomenon has been dubbed “selective laziness”, as one is not so diligent in applying the stringent rules of rational thinking to his/her own line of argumentation.

But what happens when the argument that one so easily dismisses as invalid is one’s own? Trouche et al. (2015) managed to fool 47% of participants (115 individuals) into believing that the arguments for a reasoning choice were their own when, in point of fact, they were not (see Fig. 1). When asked to evaluate the “other” argument (which was their own), 56% (65 people, 27% of the whole sample) “rejected their own argument, choosing instead to stick to the answer that had been attributed to them. Moreover, these participants (Non-Detectors) were more likely to accept their own argument for the valid than for an invalid answer. These results shows that people are more critical of their own arguments when they think they are someone else’s, since they rejected over half of their own arguments when they thought that they were someone else’s” (p. 8). I had to do this math on a PostIt, as the authors were a little bit… lazy in reporting anything but percentages, with no graphs.
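For the curious, the PostIt arithmetic goes something like this. The total sample size (~245) is my back-calculation from the reported percentages, not a number quoted in the paper:

```python
# Back-of-the-envelope check of the percentages reported by Trouche et al. (2015).
# The total sample size below is inferred from "115 individuals = 47%"; it is an
# estimate, not a figure taken directly from the paper.

fooled = 115                  # participants who accepted a swapped argument as their own (47%)
total = round(fooled / 0.47)  # implied whole sample: ~245 participants
rejected_own = 65             # Non-Detectors who rejected their own (swapped) argument

print(f"implied total sample: {total}")                    # ~245
print(f"rejected / fooled: {rejected_own / fooled:.0%}")   # ~56-57%, the reported 56%
print(f"rejected / total:  {rejected_own / total:.0%}")    # ~27%, the reported 27%
```

The small rounding wiggle (56% vs. 57%) is exactly why reporting raw counts alongside percentages would have been nicer.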

Fig. 1 from Trouche et al. (2015). © 2015 Cognitive Science Society, Inc.

The authors replicated their findings to address some limitations of the first experiment, with similar results. They also provide some speculation about the adaptive value of ‘selective laziness’, which, frankly, I think is baloney. Nevertheless, the paper quantifies and provides a way to study this reasoning bias we are all familiar with.

Reference: Trouche E, Johansson P, Hall L, & Mercier H. (9 October 2015). The Selective Laziness of Reasoning. Cognitive Science, 1-15. doi: 10.1111/cogs.12303. [Epub ahead of print]. Article | PDF

By Neuronicus, 15 October 2015

Cell phones give you hallucinations

Photo by Benjamin Miller. License: FSP Standard FreeStockPhotos.biz

Medical doctors (MDs) are overworked, particularly when they are hatchlings (i.e., medical school students) and fledglings (interns and residents). So overworked that in many countries it is routine for residents and interns to have 80-hour weeks and 30-hour shifts. This is a concern, as it has been shown that sleep deprivation impairs learning (which is the whole point of residency) and increases the number of medical mistakes (the absence of which is the whole point of their profession).

Lin et al. (2013) show that it can do more than that. Couple internship with cell phones and you get… hallucinations. That’s right. The authors asked 73 medical interns to complete some tests before their internship, at the third, sixth, and twelfth months of their internship, and after the internship. The questionnaires covered anxiety, depression, personality, and cell phone habits and hallucinations, that is, the sensation that your cell phone is vibrating or ringing when, in fact, it is not (which fully corresponds to the definition of a hallucination). And here is what they found:

 Before internship, 78% of MDs experienced phantom vibration and 27% experienced phantom ringing.
 During their 1-year internship, about 85 to 95% of MDs experienced phantom vibration and phantom ringing.
 After the internship when the MDs did no work for two weeks, 50% still had these hallucinations.

Fig. 1. Composite figure from Lin et al. (2013) showing the interns’ depression (above) and anxiety (below) scores before, during, and after internship. The differences are statistically significant.

The MDs’ depression and anxiety were also elevated more during the internship than before or after (see Fig. 1), but there was no correlation between the hallucinations and the depression and anxiety scores.

These findings are disturbing on so many levels… Should we be worried that prolonged exposure to cell phones can produce hallucinations? Or that a good portion of the MDs have hallucinations before even going to internship? Or that 90% of the people in charge of your life or your child’s life are so overworked that they are hallucinating on a regular basis? Fine, fine, believing that your phone is ringing or vibrating may not be such a big deal of a hallucination compared with, let’s say, “the voices told me to give you a lethal dose of morphine”, but as a neuroscientist I have to ask: is there a common mechanism between these two types of hallucinations and, if so, what ELSE is the MD hallucinating about while reassuring you that your CAT scan is normal? Or, forget about the hallucinations: should we worry that your MD is probably more depressed and anxious than you? Or, the “good” news, that medical interns provide “a model of stress-induced psychotic symptoms” better than previous models, as the authors put it (p. 5)? I really wish there was more research on positive things (… that was a pun; hallucinations are a positive schizophrenic symptom, look it up 🙂 ).

Reference: Lin YH, Lin SH, Li P, Huang WL, & Chen CY. (10 June 2013). Prevalent hallucinations during medical internships: phantom vibration and ringing syndromes. PLoS One, 8(6): e65152. doi: 10.1371/journal.pone.0065152. Article | FREE PDF | First time the phenomenon was documented in press

By Neuronicus, 14 October 2015

64% of psychology studies from 2008 could not be replicated

Free clipart from www.cliparthut.com

It’s not every day that you are told – nay, proven! – that you cannot trust more than half of the published peer-reviewed work in your field. For nitpickers, I am using the word “proven” in its scientific sense, and not the philosophical “well, nothing can be technically really proven, etc.”

In an astonishing feat of collaboration, 270 psychologists from all over the world replicated 100 of the most prominent studies in their field, as published in 2008 in 3 leading journals: Psychological Science (leading journal in all of psychology), Journal of Personality and Social Psychology (leading journal in social psychology), and Journal of Experimental Psychology: Learning, Memory, and Cognition (leading journal in cognitive psychology). All this without any formal funding! That’s right: no pay, no money, no grant (there was some philanthropy involved; after all, things cost money). Moreover, they invited the original authors to take part in the replication process. Replication is possibly the most important step in any scientific endeavor; without it, you may have an interesting observation, but not a scientific fact. (Yes, I know, the investigation of some weird things that happen only once is still science. But a psychology study does not a Comet Shoemaker–Levy 9 make.)

Results: 64% of the studies failed the replication test. Namely, 74% of social psychology studies and 50% of cognitive psychology studies failed to show significant results as originally published.
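As a sanity check, the overall 64% and the two subfield rates imply a particular split between social and cognitive studies. The roughly 58/42 split below is my back-calculation from the percentages, not a number quoted here:

```python
# Back-calculate the subfield split implied by the reported replication failure
# rates: 64% overall across 100 studies, 74% for social psychology, 50% for
# cognitive psychology. The split is inferred, not reported in the post.

n = 100
overall, social_rate, cognitive_rate = 0.64, 0.74, 0.50

# overall * n = social_rate * s + cognitive_rate * (n - s)  =>  solve for s
s = (overall * n - cognitive_rate * n) / (social_rate - cognitive_rate)

print(f"implied social psychology studies:   ~{s:.0f} of {n}")      # ~58
print(f"implied cognitive psychology studies: ~{n - s:.0f} of {n}")  # ~42
```

In other words, the overall failure rate is just the sample-size-weighted average of the two subfield rates.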

What does it mean? That the researchers intentionally faked their results? Not at all. Most likely the effects were very subtle and were inflated by reporting biases, fueled by academic pressure and the journals’ policy of publishing only positive results. Is this a plague that affects only psychology? Again, not at all; be on the lookout for a similar endeavor in cancer research, and rumor has it that the preliminary results are equally scary.

There would be more to say, but I will leave you in the eloquent words of the authors themselves (p. aac4716-7):

“Humans desire certainty, and science infrequently provides it. […]. Accumulating evidence is the scientific community’s method of self-correction and is the best available option for achieving that ultimate goal: truth.”

Reference: Open Science Collaboration (28 August 2015). PSYCHOLOGY. Estimating the reproducibility of psychological science. Science, 349(6251):aac4716. doi: 10.1126/science.aac4716. Article | PDF | Science Cover | The Guardian cover | IFLS cover | Decision Science cover

By Neuronicus, 13 October 2015

Stressed out? Blame your amygdalae

Clipart: Royalty free from http://www.cliparthut.com. Text: Neuronicus.

Sooner or later, everyone is exposed to high amounts of stress, whether in the form of losing someone dear, financial insecurity, health problems, and so on. Most of us manage to bounce right back and continue with our lives, but there is a considerable segment of the population who do not, and who develop all sorts of problems, from autoimmune disorders to severe depression and anxiety. What makes those people more susceptible to stress? And, more importantly, can we do something about it (yeah, besides making the world a less stressful place)?

Swartz et al. (2015) scanned the brains of 753 healthy young adults (18-22 yrs) while they performed a widely used paradigm that elicits amygdalar activation (the amygdala is a brain structure; see picture): the subjects had to match a face appearing in the upper part of the screen with one of the faces in the lower part of the screen. The faces looked fearful, angry, surprised, or neutral, and the amygdalae are robustly activated when matching the fearful faces. Then the authors had the participants fill out questionnaires regarding their life events and perceived stress level every 3 months over a period of 2 years (they say 4 years everywhere else in the paper except the Methods & Results, which are the sections that count if one wants to replicate; maybe this is only half of the study and they intend to follow up to 4 years?).

The higher your baseline amygdalar activation, the higher the risk of developing anxiety disorders later on if exposed to life stressors. Yellow = amygdala. Photo credit: https://www.youtube.com/watch?v=JD44PbAOTy8, presumably copyrighted to Duke University.

The finding of the study is this: baseline amygdalar activation can predict who will develop anxiety later on. In other words, if your natural, healthy, non-stressed self has an overactive amygdala, you will develop some anxiety disorder later on if exposed to stressors (and who isn’t?). The good news is that, knowing this, the owner of the super-sensitive amygdalae, even if s/he may not be able to avoid stressors, can at least engage in some preventative therapy or counseling to be better equipped with adaptive coping mechanisms when the bad things come. Probably we could all benefit from being “better equipped with adaptive coping mechanisms”, feisty amygdalae or not. Oh, well…

Reference: Swartz, J.R., Knodt, A.R., Radtke, S.R., & Hariri, A.R. (2015). A neural biomarker of psychological vulnerability to future life stress. Neuron, 85, 505-511. doi: 10.1016/j.neuron.2014.12.055. Article | PDF | Video

By Neuronicus, 12 October 2015

Is it what I like or what you like? I don’t know anymore…

The plasticity in medial prefrontal cortex (mPFC) underlies the changes in self preferences to match another’s, through learning. Modified from Fig. 2B from Garvert et al. (2015), which is an open access article under the CC BY license.

One obvious consequence of being a social mammal is that each individual wants to be accepted. Nobody likes rejection, be it from a family member, a friend or colleague, a job application, or even a stranger. So we try to mould our beliefs and behaviors to fit the social norms, a process called social conformity. But how does that happen?

Garvert et al. (2015) shed some light on the mechanism(s) underlying the malleability of personal preferences in response to information about other people’s preferences. Twenty-seven people had 48 chances to choose between gaining a small amount of money now or more money later, with “later” meaning from 1 day to 3 months later. Then the subjects were taught another person’s (their “partner’s”) choices, no strings attached, just so they knew. Then they were made to choose again. Then they got into the fMRI, and there things got complicated, as the subjects had to choose as they themselves would choose, as their partner would choose, or as an unknown person would choose. I skipped a few steps; the procedure is complicated and the paper is full of cumbersome verbiage (e.g., “We designed a contrast that measured the change in repetition suppression between self and novel other from block 1 to block 3, controlled for by the change in repetition suppression between self and familiar other over the same blocks”, p. 422).

Anyway, long story short, the behavioral results showed that the subjects tended to alter their preferences to match their partner’s (although they were not told to do so, it had no impact on their own monetary gain, there were no time constraints, and sometimes they were told that the “partner” was a computer).

These behavioral changes were matched by changes in the activation pattern of the medial prefrontal cortex (mPFC), in the sense that learning the preferences of another, which you can imagine as a specific neural pattern in your brain, changes the way your own preferences are encoded in that same neural pattern.

Reference: Garvert MM, Moutoussis M, Kurth-Nelson Z, Behrens TE, & Dolan RJ (21 January 2015). Learning-induced plasticity in medial prefrontal cortex predicts preference malleability. Neuron, 85(2):418-28. doi: 10.1016/j.neuron.2014.12.033. Article + FREE PDF

By Neuronicus, 11 October 2015