Epigenetics of BDNF in depression

Depression is the leading cause of disability worldwide, says the World Health Organization. The. The. I knew it was bad, but… ‘the’? More than 300 million people suffer from it worldwide and in many places fewer than 10% of them receive treatment. Lack of treatment is due to many things, from lack of access to healthcare to lack of proper diagnosis, and not least to social stigma.

To complicate matters, the etiology of depression is still not fully elucidated, despite the hundreds of thousands of experimental articles published out there. Perhaps millions. But because so many articles have been published, we know a helluva lot more about it than, say, 50 years ago. The enormous puzzle is being painstakingly assembled as we speak by scientists all over the world. I daresay we have a lot of pieces already, if not all then at least 3 out of 4 corners, so we have managed to build a not-so-foggy view of the general picture on the box lid. Here is one of the hottest pieces of the puzzle, one of those central pieces that bring the rabbit into focus.

Before I get to the rabbit, let me tell you about the corners. In the fifties, people thought that depression was caused by having too little of the monoamine class of neurotransmitters in the brain. This thought did not arise willy-nilly, but from the observation that drugs which increase monoamine levels in the brain alleviate depression symptoms and, correspondingly, drugs which deplete monoamines induce depression symptoms. A bit later on, the monoamine most culpable was found to be serotonin. All well and good: plenty of evidence, observational, correlational, causal, and mechanistic, supports the monoamine hypothesis of depression. But two more pieces of evidence kept nagging the researchers. The first one was that monoamine-enhancing drugs take days to weeks to start working. If low serotonin were the whole story, then a selective serotonin reuptake inhibitor (SSRI) should elevate serotonin levels within an hour of ingestion at most and lower symptom severity accordingly, so how come it takes weeks? The second was even more eyebrow-raising: these monoamine-enhancing drugs work in only about 50% of the cases. Why not all? Or, more pragmatically put, why not most of them if the underlying cause is the same?

It took decades to address these problems. The problem of having to wait weeks for antidepressants to show any beneficial effects has been explained away, at least partly, by quirks of serotonin regulation in the brain (e.g. autoreceptor sensitization, serotonin transporter abnormalities). As for the second problem, the most parsimonious answer is that that archeological site called the DSM (Diagnostic and Statistical Manual of Mental Disorders), which psychologists, psychiatrists, and scientists all over the world have to use to make a diagnosis, is nothing but a garbage bag of last-century relics with little to no resemblance to this century’s understanding of the brain and its disorders. In other words, what the DSM calls major depressive disorder (MDD) may well be more than one disorder, and then it’s no wonder antidepressants work in only half the people diagnosed with it. As Goldberg put it in 2011, “the DSM diagnosis of major depression is made when a patient has any 5 out of 9 symptoms, several of which are opposites [emphasis added]”! He was referring to DSM-IV, not that DSM-5 is much different. I mean, paraphrasing Goldberg, you really don’t need much of a degree beyond some basic intro class in the physiology of whatever, anything really, to suspect that someone who’s sleeping a lot, gains weight, has increased appetite, appears tired or slow to others, and feels worthless might have a different cause for these symptoms than someone who has daily insomnia, lost weight recently, has decreased appetite, is hyperagitated, irritable, and feels excessive guilt. Imagine how much more understanding we would have about depression if scientists didn’t use the DSM for research. No wonder there’s a lot of head-scratching when your hypothesis, which is logically correct, paradigmatically coherent, internally consistent, and flawlessly tested, turns out to be true only sometimes, because your ‘depressed’ subjects are about as homogeneous a group as a bag of trail mix.
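
To put a number on Goldberg’s point, here is a little back-of-the-envelope sketch in Python (mine, not Goldberg’s) counting how many distinct symptom profiles can earn the same MDD label under the 5-out-of-9 rule:

    from math import comb

    # The DSM criterion: any 5 (or more) of the 9 listed symptoms qualifies for MDD.
    # Count how many distinct symptom combinations satisfy that rule.
    profiles = sum(comb(9, k) for k in range(5, 10))
    print(profiles)  # 256 distinct symptom profiles share the same diagnosis

That is 256 different ways of meeting the same diagnosis, before you even get to the opposite-symptom pairs Goldberg complains about.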

I got sidetracked again, this time ranting against the DSM. No matter, I’m back on track. So. The good thing about all the work done trying to figure out how antidepressants work and how psychiatrists’ minds work (the DSM is written overwhelmingly by psychiatrists) is that scientists uncovered other things about depression along the way. Some of the findings became clumped under the name ‘the neurotrophic hypothesis of depression’ in the early noughties. It stems from the finding that some chemicals needed by neurons for their cellular happiness are in short supply in depression. Almost two decades later, the hypothesis has become mainstream theory, as it explains some other findings in depression and is not incompatible with the monoamines’ behavior. Another piece of the puzzle found.

One of these neurotrophins is called brain-derived neurotrophic factor (BDNF), and it promotes cell survival and growth. Crucially, it also regulates synaptic plasticity, without which there would be no learning and no memory. The idea is that exposure to adverse events generates stress. Stress is managed differently by different people, largely due to genetic factors. In those not so lucky at the genetic lottery (how hard they take a stressor, how they deal with it), and in those lucky enough at genetics but not so lucky in life (intense and/or many stressors hit the organism hard regardless of how well you take it or how good you are at dealing with it), stress literally kills a lot of neurons, prevents new ones from being born, and prevents the remaining ones from learning well. Including learning how to deal with stressors, present and future, so the next time an adverse event happens, even if it is a minor stressor, the person is affected far more drastically. In other words, stress makes you more vulnerable to stressors. One of the ways stress does all this is by suppressing BDNF synthesis. Without BDNF, the individual exposed to stress that is exacerbated either by genes or by environment ends up unable to self-regulate mood successfully. The more that mood goes unregulated, the worse the brain becomes at self-regulating, because the elements required for self-regulation, which include learning from experience, are busted. And so the vicious circle continues.

Maintaining this vicious circle is the ability of stressors to change the patterns of DNA expression, and, not surprisingly, one of the most common findings is that the BDNF gene is hypermethylated in depression. Hypermethylation is an epigenetic change (a change around the DNA, not in the DNA sequence itself) that leaves the gene in question less expressed. This means lower amounts of BDNF are produced in depression.
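
For the curious, here is a toy sketch (entirely mine, with made-up read counts, not data from any paper cited below) of how a promoter’s methylation level is typically summarized and compared between groups; higher promoter methylation generally tracks with lower transcription of the gene.

    # Toy example: percent methylation at the CpG sites of a promoter.
    # Each tuple is (methylated reads, total reads) at one CpG site; all counts invented.
    def percent_methylation(sites):
        meth = sum(m for m, _ in sites)
        total = sum(t for _, t in sites)
        return 100.0 * meth / total

    control_bdnf_promoter = [(12, 100), (9, 95), (15, 110)]
    depressed_bdnf_promoter = [(38, 100), (41, 98), (35, 105)]

    print(percent_methylation(control_bdnf_promoter))    # ~11.8 %
    print(percent_methylation(depressed_bdnf_promoter))  # ~37.6 %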

After this long introduction, today’s paper is a systematic review of one of the epigenetic changes in depression: methylation. The 67 articles that investigated the role of methylation in depression were too heterogeneous to build a meta-analysis out of, so Li et al. (2019) made a systematic review instead.

The main finding was that, overall, depression is associated with DNA methylation modifications. Two genes stood out as being hypermethylated: our friend BDNF and SLC6A4, a gene involved in the serotonin cycle. Now the question is which causes which: does stress methylate your DNA, or does your methylated DNA make you more vulnerable to stress? There’s evidence both ways. Vicious circle, as I said. I doubt that for the sufferer it matters which came first, but for the researchers it does.

A little disclaimer: the picture I painted above offers a non-exclusive view on the causes of depression(s). There’s more. There’s always more. Gut microbes are in the picture too. And circulatory problems. And more. But the picture is more than half done, I daresay. Continuing my puzzle metaphor, we got the rabbit by the ears. Now what to do with it…

Well, one thing we can do with it, even with only half-rabbit done, is shout loud and clear that depression is a physical disease. And those who claim it can be cured by a positive attitude and blame the sufferers for not ‘trying hard enough’ or not ‘smiling more’ or not ‘being more positive’ can bloody well shut up and crawl back in the medieval cave they came from.

REFERENCES:

1. Li M, D’Arcy C, Li X, Zhang T, Joober R, & Meng X (4 Feb 2019). What do DNA methylation studies tell us about depression? A systematic review. Translational Psychiatry, 9(1):68. PMID: 30718449, PMCID: PMC6362194, DOI: 10.1038/s41398-019-0412-y. ARTICLE | FREE FULLTEXT PDF

2. Goldberg D (Oct 2011). The heterogeneity of “major depression”. World Psychiatry, 10(3):226-8. PMID: 21991283, PMCID: PMC3188778. ARTICLE | FREE FULLTEXT PDF

3. World Health Organization Depression Fact Sheet

By Neuronicus, 23 April 2019

Pic of the day: Dopamine from a non-dopamine place

Reference: Beas BS, Wright BJ, Skirzewski M, Leng Y, Hyun JH, Koita O, Ringelberg N, Kwon HB, Buonanno A, & Penzo MA (Jul 2018, Epub 18 Jun 2018). The locus coeruleus drives disinhibition in the midline thalamus via a dopaminergic mechanism. Nature Neuroscience,21(7):963-973. PMID: 29915192, PMCID: PMC6035776 [Available on 2018-12-18], DOI:10.1038/s41593-018-0167-4. ARTICLE

Locus Coeruleus in mania

Of all the mental disorders, bipolar disorder, a.k.a. manic-depressive disorder, carries the highest risk of suicide attempt and completion. If the thought of suicide crosses your mind, stop reading this, it’s not that important; what’s important is for you to call the toll-free National Suicide Prevention Lifeline at 1-800-273-TALK (8255).

Bipolar disorder is defined by alternating manic episodes of elevated mood, activity, excitation, and energy and episodes of depression characterized by feelings of deep sadness, hopelessness, worthlessness, low energy, and decreased activity. It is also a more common disease than people usually expect, affecting about 1% or more of the world population. That means almost 80 million people! Therefore, it’s imperative to find out what’s causing it so we can treat it.

Unfortunately, the disease is very complex, with many brain parts, brain chemicals, and genes involved in its pathology. We don’t even fully comprehend how lithium, the best medication we have to lower the risk of suicide, works. The good news is that neuroscientists haven’t given up; they are grinding away at it, and with every study we get closer to subduing this monster.

One such study, freshly published last month, Cao et al. (2018), looked at a semi-obscure membrane protein, ErbB4. The protein is a tyrosine kinase receptor, which is a bit unfortunate because it means the protein is involved in ubiquitous cellular signaling, making it harder to pin down its exact role in a specific disorder. Indeed, ErbB4 has been found to play a role in neural development, schizophrenia, epilepsy, even ALS (Lou Gehrig’s disease).

Given that ErbB4 is found in some of the neurons involved in bipolar disorder, and that mutations in its gene are also found in some people with bipolar disorder, Cao et al. (2018) sought to find out more about it.

First, they produced mice that lacked the gene coding for ErbB4 in neurons of the locus coeruleus, the part of the brain that produces norepinephrine out of dopamine, a neurotransmitter better known to the European audience as noradrenaline. The mutant mice had a lot more norepinephrine and dopamine in their brains, which correlated with mania-like behaviors. You might have noticed that the term used is ‘mania-like’ and not ‘manic’ because we don’t know for sure how the mice feel; instead, we can see how they behave and from that infer how they feel. So the researchers put the mice through a battery of behavioral tests and observed that the mutant mice were hyperactive, showed fewer anxiety-like and depression-like behaviors, and liked their sugary drink more than their normal counterparts, which, taken together, are indices of mania.

Next, through a series of electrophysiological experiments, the scientists found that the mechanism through which the absence of ErbB4 leads to mania is that it makes another receptor, called NMDA, more active in that brain region. When this receptor is hyperactive, it causes neurons to fire, releasing their norepinephrine. But if given lithium, the mutant mice behaved like normal mice. Correspondingly, they also had a normally behaving NMDA receptor, which led to normal firing of the noradrenergic neurons.

So the mechanism looks like this (Jargon alert!):

No ErbB4 –> ↑ NR2B NMDAR subunit –> hyperactive NMDAR –> ↑ neuron firing –> ↑ catecholamines –> mania.
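
Purely as a cartoon of that chain (a toy of my own, with every number invented, not the authors’ model), you can walk the disinhibition through step by step like this:

    # Toy cartoon of the Cao et al. (2018) causal chain; all values invented, arbitrary units.
    def catecholamine_output(erbb4_present: bool) -> float:
        nr2b = 1.0 if erbb4_present else 1.8   # losing ErbB4 puts more NR2B at the synapse
        nmdar_activity = nr2b                  # more NR2B means a more active NMDAR
        firing_rate = 5.0 * nmdar_activity     # a hyperactive NMDAR drives more firing
        return 2.0 * firing_rate               # more firing releases more NE and DA

    print(catecholamine_output(True))   # wild-type: 10.0
    print(catecholamine_output(False))  # ErbB4 knockout: 18.0, i.e. the mania-like state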

In conclusion, another piece of the bipolar puzzle has been uncovered. The next obvious step will be for the researchers to figure out a medicine that targets ErbB4 and see if it could treat bipolar disorder. Good paper!

P.S. If you’re not familiar with the journal eLife, go check it out. For every study, the journal offers a half-page summary of the findings intended for the lay audience, called the eLife digest. I’ve seen this practice in other journals, but this one is generally very well written and truly for the lay audience and the non-specialist. Something like what I try to do here, minus the personal remarks and the parenthetical metacognitions that you’ll find in most of my posts. In short, the eLife digest is masterly done. As my continuous struggles on this blog show, it is tremendously difficult for a scientist to write concisely, precisely, and jargonlessly at the same time. But eLife is doing it. Check it out. Plus, if you care to take a look at how science is done and published, eLife publishes the editor’s rejection notes, the reviewers’ comments, and the author responses for each paper. Reading those is truly a teaching moment.

REFERENCE: Cao SX, Zhang Y, Hu XY, Hong B, Sun P, He HY, Geng HY, Bao AM, Duan SM, Yang JM, Gao TM, Lian H, Li XM (4 Sept 2018). ErbB4 deletion in noradrenergic neurons in the locus coeruleus induces mania-like behavior via elevated catecholamines. Elife, 7. pii: e39907. doi: 10.7554/eLife.39907. PMID: 30179154 ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 14 October 2018

Arc: mRNA & protein from one neuron to another

EDIT 1 [Jan 17, 2018]: I promised four days ago that I would post this while it was still hot, but my Internet was down, thanks to the only behemoth provider in the USA. Rated the worst company in the Nation, too. You definitely know by now whom I’m talking about. Grrrr… Anyway, here is the paper:

As promised, today’s paper talks about mRNA transfer between neurons.

Pastuzyn et al. (2018) looked at the gene Arc in neurons because they thought its Gag sequence looks suspiciously similar to some retroviruses. Could it be possible that it also behaves like a virus?

Arc is heavily involved in the immune system, is essential for the formation of long-term memories, and is involved in all sorts of diseases, like schizophrenia and Alzheimer’s, among other things.

Pastuzyn et al. (2018) is a relatively long and dense paper, albeit well written. So I thought that this time, instead of giving you a summary of their research, it would be better to give you the authors’ story directly, in their own words, as written in the subtitles of the Results section (bold letters – the authors’ words; normal font – mine). Warning: this is a much more jargon-dense blog post than my previous one on the same topic and, because there is so much material, I will not explain every term.

  • Fly and Tetrapod (us) Arc Genes Independently Originated from Distinct Lineages of Ty3/gypsy Retrotransposons, the phylogenomic analyses tell us, meaning the authors have done a lot of computer-assisted comparisons of similar forms of the gene in hundreds of species.
  • Arc Proteins Self-Assemble into Virus-like Capsids. Arc likes to oligomerize spontaneously (dimers and trimers). The oligomers resemble virus-like capsids, similar to HIV.
  • Arc Binds and Encapsulates RNA. Although it loves its own RNA about 10 times more than other RNAs, it’s a promiscuous protein (it doesn’t care which RNA, as long as the rules of stoichiometry are followed). Arc capsids encapsulate the Arc protein (maybe other proteins too?), its mRNA, and whatever mRNA happened to be in the vicinity at the time of encapsulation. Arc capsids are able to protect the mRNA from RNases.
  • Arc Capsid Assembly Requires RNA. If there is no RNA around, the capsids are few and poorly formed.
  • Arc Protein and Arc mRNA Are Released by Neurons in Extracellular Vesicles. Arc capsid packages Arc protein & Arc mRNA into extracellular vesicles (EV). The size of these EVs is < 100nm, putting them in the exosome category. This exosome, which the authors gave the unfortunate name of ACBAR (Arc Capsid Bearing Any RNA), is being expelled from cortical neurons in an activity-dependent manner. In other words, when neurons are stimulated, they release ACBARs.
  • Arc Mediates Intercellular Transfer of mRNA in Extracellular Vesicles. ACBARs dock onto the host cell and then undergo clathrin-dependent endocytosis, releasing their cargo into the host cell. The levels of Arc protein and Arc mRNA peak in a host hippocampal cell about four hours after incubation. The ACBARs tend to congregate around the donor cell’s dendrites.
  • Transferred Arc mRNA Can Undergo Activity-Dependent Translation. Activating the group 1 metabotropic glutamate receptor (mGluR1/5) by application of the agonist DHPG induces a significant increase of the amount of Arc protein in the host neurons.

This is a veritable tour de force of a paper. The Results section has 7 sub-sections, each with multiple experiments to dot every i and cross every t. I’m eyeballing about 40 experiments. It is true that there are 13 authors on the paper, from different institutions – yeay for collaboration! – but c’mon! Is this what you need to get into Cell these days? Apparently so. Don’t get me wrong, this is an outstanding paper. But in the end it is still only one paper, which means only one first author. The rest are there for the ride, because for a tenure-track application nobody cares about your papers in CNS (Cell, Nature, Science = the Central Nervous System of the scientific community, har, har) if you’re not the first author. The increasing amount of work you need to get published in top-tier journals these days is becoming a pet peeve of mine, as I keep mentioning it (for example, here).

My pet peeves aside, Pastuzyn et al. (2018) is an excellent paper that opens interesting practical (drug delivery) and theoretical (biological repurpose of ancient invaders) gates. Kudos!

REFERENCE: Pastuzyn ED, Day CE, Kearns RB, Kyrke-Smith M, Taibi AV, McCormick J, Yoder N, Belnap DM, Erlendsson S, Morado DR, Briggs JAG, Feschotte C, & Shepherd JD. (11 Jan 2018). The Neuronal Gene Arc Encodes a Repurposed Retrotransposon Gag Protein that Mediates Intercellular RNA Transfer. Cell, 172(1-2):275-288.e18. PMID: 29328916. doi: 10.1016/j.cell.2017.12.024. ARTICLE | FULLTEXT PDF via ResearchGate

P.S. I said that ACBAR is an unfortunate acronym because, I don’t know about you, but I for one wouldn’t want my discovery to be linked either with a religion or with terrorist cries, even if that link is made only by a small fraction of the population. Although I can totally see the naming-by-committee going: “ACBAR! Our exosome is the greatest! Yeay!” or “Arc Acbar! Our Arc is the greatest. Double yeay!”. On second thought, it’s kinda nerdy-geeky neat. I still wouldn’t have done it, though…

By Neuronicus, 14 January 2018

EDIT 2 [Jan 22, 2018]: There is another paper that discovered that Arc forms capsids that encapsulate RNA, then shuttles it across the neuromuscular junction in Drosophila (the fly). To their credit, Cell published the two papers back-to-back so neither group gets scooped of their discovery. From what I can see, the discovery really did happen simultaneously, so I modified my infopic to reflect that (both papers were submitted in January 2017, received in revised form on August 15, 2017, and published in the same issue on January 11, 2018). Here is the reference to the other article:

Ashley J, Cordy B, Lucia D, Fradkin LG, Budnik V, & Thomson T (11 Jan 2018). Retrovirus-like Gag Protein Arc1 Binds RNA and Traffics across Synaptic Boutons, Cell. 172(1-2): 262-274.e11. PMID: 29328915. doi: 10.1016/j.cell.2017.12.022. ARTICLE

EDIT 3 [Jan 29, 2018]: Dr. Shepherd, the last author of the paper I featured, was kind enough to answer a few of my questions about the implications of his and his team’s findings, answers which you will find here.

By Neuronicus, 22 January 2018

The third eye

The pineal gland has held fascination since Descartes’ nefarious claim that it is the seat of the soul. There is no evidence of that; he said it might be where the soul resides because he thought the pineal gland was the only solitary structure in the brain, so it must be special. By ‘solitary’ I mean that all other brain structures come in doublets: 2 amygdalae, 2 hippocampi, 2 thalami, 2 hemispheres etc. He was wrong about that as well, in that there are some other singletons in the brain besides the pineal, like the anterior and posterior commissures, the cerebellar vermis, some deep brainstem and medullary structures etc.

Descartes’ dualism was the only escape route the mystics of the time had from the demands for evidence made by the budding natural philosophers later known as scientists. So when some scientists noted that some lizards have a third eye on top of their head connected to the pineal gland, the mystics and, later, the conspiracy theorists went nuts. Here, see, if the seat of the soul is linked with the third eye, then the awakening of this eye in people would surely result in heightened awareness, closeness to the Divinity, oneness with the Universe, and other similar rubbish that can otherwise easily and reliably be achieved with a good dollop of magic mushrooms. Cheaper, too.

Back to the lizards. Yes, you read that right: some lizards and frogs have a third eye. This eye is not exactly like the other two, but it has cells sensitive to light, even if they do not perceive light in the same way the retinal cells of the lateral eyes do. It is located on the top of the skull, so it is sometimes called the parietal organ (because it sits in between the parietal skull bones, see pic).

Dorsal view of the head of the adult Carolina anole (Anolis carolinensis) clearly showing the parietal eye (small gray/clear oval) at the top of its head. Photo by TheAlphaWolf. License: CC BY-SA 3.0, courtesy of Wikipedia.

It is believed to be a vestigial organ, meaning that primitive vertebrates might have had it as a matter of course but it disappeared in more recently evolved animals. Importantly, birds and mammals don’t have it. Not at all, not a bit, not atrophied, not able to be “awakened” no matter what your favorite “lemme see your chakras” guru says. Go on, touch the top of your skull and see if you have some soft tissue peeking through there. And no, the soft spot that babies are born with right there on the top of the skull is not a third eye; it’s a fontanelle that allows for the rapid expansion of the brain during the first year of life.

The parietal organ’s anatomical connection to the pineal gland is not surprising at all for scientists, because the pineal’s role in every single animal that has it is the regulation of some circadian rhythms through the production of melatonin. In humans, the eyes tell the pineal that it is day or night and the pineal adjusts melatonin production accordingly, i.e. less melatonin produced during the day and more during the night. Likewise, the lizard third eye’s main role is to provide information to the pineal about the ambient light for thermoregulatory purposes.

After this long introduction, here is the point: almost twenty years ago, Xiong et al. (1998) looked at how this third eye perceives light. In the human eye, light hitting the rods and cones in the retina (reception) launches a biochemical cascade (transduction) that results in seeing (coding of the stimulus in the brain). Briefly, transduction goes thusly: the photon(s) cause(s) the light-sensitive component (11-cis-retinal) of a special protein in the retina’s photoreceptor cells (e.g. rhodopsin) to change its conformation (photoisomerization), which activates the protein and eventually makes it split into its components (photobleaching); the activated protein turns on a G-protein (transducin), which then activates the enzyme phosphodiesterase (PDE), which then destroys a nucleotide called cyclic guanosine monophosphate (cGMP), which results in the closing of the cell’s sodium ion channels, which leads to less neurotransmitter (glutamate) being released, which causes the nearby cells (bipolar cells) to release the same neurotransmitter, which now has the opposite effect, meaning it increases the firing rate of another set of cells (ganglion cells), and from there to the brain we go. Phew, visual transduction IS difficult. And this is the brief version.
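
If it helps, here is a deliberately dumbed-down toy model of that cascade (mine, not from the paper; every number is arbitrary), just to show the direction of each step:

    # Toy sketch of canonical (lateral-eye) phototransduction; arbitrary units throughout.
    def rod_response(light: float) -> dict:
        active_rhodopsin = light                           # photoisomerization activates rhodopsin
        transducin = active_rhodopsin                      # activated rhodopsin turns on the G-protein
        pde_activity = 1.0 + transducin                    # transducin activates PDE
        cGMP = 10.0 / pde_activity                         # active PDE destroys cGMP
        open_channels = cGMP / 10.0                        # less cGMP, fewer open cation channels
        membrane_potential = -70.0 + 30.0 * open_channels  # fewer open channels, hyperpolarization
        glutamate_release = open_channels                  # a hyperpolarized rod releases less glutamate
        return {"cGMP": cGMP, "Vm": membrane_potential, "glutamate": glutamate_release}

    print(rod_response(light=0.0))  # dark: Vm around -40 mV, plenty of glutamate
    print(rod_response(light=4.0))  # light: cGMP drops, Vm around -64 mV, little glutamate

Keep this sign convention in mind, because in the parietal-eye photoreceptors discussed below, Xiong et al. found the PDE step flipped: light ends up inhibiting PDE, cGMP rises, and the cell depolarizes instead of hyperpolarizing.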

It turns out that the third eye’s retina doesn’t have all the cell types the normal eyes have. Specifically, it is missing the bipolar, horizontal, and amacrine cells, having only ganglion and photoreceptor cells. So how does phototransduction go in the third eye’s retina, if it happens at all?

Xiong et al. (1998) isolated photoreceptor cells from the third eyes of the lizard Uta stansburiana. Then they did a bunch of electrophysiological recordings on those cells under different illumination and chemical conditions.

They found that phototransduction in the third eye differs from that in the lateral eyes: where they expected to see hyperpolarization of the cell, they observed depolarization instead. Also, where they expected the PDE to break down cGMP, they found that PDE is inhibited, thereby increasing the amount of cGMP. The fact that a G-protein can inhibit PDE was totally unexpected and revealed a novel way of cellular signaling. Moreover, they speculate that their results make sense only if not one but two G-proteins with opposite actions work in tandem.

A probably dumb technical question, though: human rhodopsin takes about 30 minutes to recover from photobleaching. Xiong et al. (1998) let the cells adapt to dark for 10 minutes before recordings. So I wonder if the results would have been slightly different if they had allowed the cells more time to adapt. But I’m not an expert in retina science, you’ve seen how difficult it is, right? Maybe the lizard proteins are different, or maybe rhodopsin adaptation time has little or nothing to do with their experiments. After all, later research has shown that the third eye has its own unique opsins, like the green-sensitive parietopsin discovered by Su et al. (2006).

REFERENCE:  Xiong WH, Solessio EC, & Yau KW (Sep 1998). An unusual cGMP pathway underlying depolarizing light response of the vertebrate parietal-eye photoreceptor. Nature Neuroscience, 1(5): 359-365. PMID: 10196524, DOI: 10.1038/1570. ARTICLE

Additional bibliography: Su CY, Luo DG, Terakita A, Shichida Y, Liao HW, Kazmi MA, Sakmar TP, Yau KW (17 Mar 2006). Parietal-eye phototransduction components and their potential evolutionary implications. Science, 311(5767): 1617-1621. PMID: 16543463, DOI: 10.1126/science.1123802. ARTICLE

By Neuronicus, 30 March 2017

Aging and its 11 hippocampal genes

Aging is being quite extensively studied these days and here is another advance in the field. Pardo et al. (2017) looked at what happens in the hippocampus of 2-month-old (young) and 28-month-old (old) female rats. The hippocampus is a seahorse-shaped structure, no more than 7 cm in length and 4 g in weight, situated at the level of your temples, deep in the brain, and absolutely necessary for memory.

First the researchers tested the rats in a classical maze test (Barnes maze) designed to assess their spatial memory performance. Not surprisingly, the old performed worse than the young.

Then they dissected the hippocampi, looked at neurogenesis, and saw that the young rats had more newborn neurons than the old ones. Also, the old rats had more reactive microglia, a sign of inflammation. Microglia are small cells in the brain that are not neurons but serve very important functions.

After that, the researchers looked at the hippocampal transcriptome, meaning they looked at which proteins are being expressed there (I know, transcription is not translation, but the general assumption of transcriptome studies is that the amount of protein X corresponds to the amount of RNA X). They found 210 genes that were differentially expressed in the old rats: 81 were upregulated and 129 were downregulated. Most of these genes, 170 to be exact, are also found in humans.

But after comparing male and female data, as well as human and mouse aging data, the authors came up with 11 genes that are deregulated (7 up- and 4 down-) in the aging hippocampus, regardless of species or gender. These genes are involved in the immune response to inflammation. In more detail, the immune system activates microglia, which stay activated, and this “prolonged microglial activation leads to the release of pro-inflammatory cytokines that exacerbate neuroinflammation, contributing to neuronal loss and impairment of cognitive function” (p. 17). Moreover, these 11 genes have been associated with neurodegenerative diseases and brain cancers.

These are the 11 genes: C3 (up), Cd74 (up), Cd4 (up), Gpr183 (up), Clec7a (up), Gpr34 (down), Gapt (down), Itgam (down), Itgb2 (up), Tyrobp (up), Pld4 (down). “Up” and “down” indicate the direction of deregulation: upregulation or downregulation.

I wish the above sentence had been stated as explicitly in the paper as I wrote it here, so I wouldn’t have had to comb through their supplemental Excel files to figure it out. Other than that, good paper, good work. It gets us closer to unraveling and maybe undoing some of the burdens of aging, because, as the actress Bette Davis said, “growing old isn’t for sissies”.
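
And since I had to comb through those Excel files anyway, here is the 11-gene signature above in a copy-paste-able form (an illustrative snippet of mine, not code from the paper):

    # The 11-gene aging signature reported by Pardo et al. (2017), with direction of deregulation.
    aging_signature = {
        "C3": "up", "Cd74": "up", "Cd4": "up", "Gpr183": "up", "Clec7a": "up",
        "Itgb2": "up", "Tyrobp": "up",
        "Gpr34": "down", "Gapt": "down", "Itgam": "down", "Pld4": "down",
    }

    upregulated = [gene for gene, direction in aging_signature.items() if direction == "up"]
    downregulated = [gene for gene, direction in aging_signature.items() if direction == "down"]
    print(upregulated)    # the 7 upregulated genes
    print(downregulated)  # the 4 downregulated genes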

Reference: Pardo J, Abba MC, Lacunza E, Francelle L, Morel GR, Outeiro TF, Goya RG. (13 Jan 2017, Epub ahead of print). Identification of a conserved gene signature associated with an exacerbated inflammatory environment in the hippocampus of aging rats. Hippocampus, doi: 10.1002/hipo.22703. ARTICLE

By Neuronicus, 25 January 2017

How do you remember?

Memory processes like formation, maintenance, and consolidation have been the subject of extensive research and, as a result, we know quite a bit about them. And just when we thought we were getting a pretty clear picture of the memory tableau, with nothing left but a little dusting around the edges and getting rid of the pink elephant in the middle of the room, here comes a new player that muddies the waters again.

DNA methylation. The attachment of a methyl group (CH3) to the DNA’s cytosine by a DNA methyltransferase (Dnmt) was, until very recently, considered a process reserved for immature cells, helping them meet their final fate. In other words, DNA methylation plays a role in cell differentiation by suppressing gene expression. It has other roles in X-chromosome inactivation and cancer, but it was not suspected of playing a role in memory until this decade.

Oliveira (2016) gives us a nice review of the role(s) of DNA methylation in memory formation and maintenance. First, we encounter the pharmacological studies, which found that injecting Dnmt inhibitors into various parts of the brain in various species disrupts memory formation or maintenance. Next come the genetic studies, where Dnmt knock-down and knock-out mice also show impaired memory formation and maintenance. Finally, knowing which genes’ transcription is essential for memory, the author takes us through several papers that examine the de novo methylation and demethylation of these genes in response to learning events, and its role in alternative splicing.

Based on these here available data, the author proposes that activity induced DNA methylation serves two roles in memory: to “on the one hand, generate a primed and more permissive epigenome state that could facilitate future transcriptional responses and on the other hand, directly regulate the expression of genes that set the strength of the neuronal network connectivity, this way altering the probability of reactivation of the same network” (p. 590).

Here you go; another morsel of actual science brought to your fingertips by yours truly.

Reference: Oliveira AM (Oct 2016, Epub 15 Sep 2016). DNA methylation: a permissive mark in memory formation and maintenance. Learning & Memory,  23(10): 587-593. PMID: 27634149, DOI: 10.1101/lm.042739.116. ARTICLE

By Neuronicus, 22 September 2016

Who invented optogenetics?

Wayne State University. Ever heard of it? Probably not. How about Zhuo-Hua Pan? No? No bell ringing? Let’s try a different approach: ever heard of Stanford University? Why, yes, it’s one of the most prestigious and famous universities in the world. And now the last question: do you know who Karl Deisseroth is? If you’re not a neuroscientist, probably not. But if you are, then you would know him as the father of optogenetics.

Optogenetics is the newest tool in the biology kit; it allows you to control the way a cell behaves by shining a light on it (that’s the opto part). Before that can happen, the cell in question must be made to express a protein that is sensitive to light (e.g. a rhodopsin), either by injecting a virus or by breeding genetically modified animals that express that protein (that’s the genetics part).

If you’re watching the Nobel Prizes for Medicine, then you would also be familiar with Deisseroth’s name, as he may be awarded the Nobel soon for inventing optogenetics. Only that, strictly speaking, he did not. Or, to be fair and precise at the same time, he did, but he was not the first one. Dr. Pan from Wayne State University was. And he got scooped.

The story is imparted to us at length by Anna Vlasits in STAT and republished in Scientific American. In short, Dr. Pan, an obscure name at an obscure university in an ill-famed city (Detroit), had been doing research for years in the unglamorous field of retina and blindness. He figured, quite reasonably, that restoring the proteins which sense light in the human eye (i.e. photoreceptor proteins) could restore vision in the congenitally blind. The problem is that human photoreceptor proteins are very complicated, and efforts to introduce them into the retinas of blind people have proven unsuccessful. But in 2003 a paper was published showing how an algal protein that senses light, called channelrhodopsin (ChR), can be expressed in mammalian cells without loss of function.

So, in 2004, Pan got a colleague from Salus University (if Wayne State University is a medium-sized research university, then Salus is a really tiny, tiny little place) to engineer ChR into a virus, which Pan then injected into rodent retinal neurons, in vivo. After 3-4 weeks he obtained expression of the protein, and the expression was stable for at least 1 year, showing that the virus works nicely. Then his group did a bunch of electrophysiological recordings (whole-cell patch clamp and voltage clamp) to see if shining light on those neurons makes them fire. It did. Then they wanted to make sure ChR, and not some other protein, was responsible for this firing, so they increased the intensity of the blue light that ChR is known to sense and observed that the cells responded with increased firing. Now that they saw that ChR works in normal rodents, they next expressed ChR by virally infecting mice that were congenitally blind and repeated their experiments. The electrophysiological experiments showed that it worked. But you see with your brain, not with your retina, so the researchers checked whether the cells expressing ChR project from the retina to the brain, and they found their axons in the lateral geniculate nucleus and the superior colliculus, two major brain areas important for vision. Then they recorded from these areas, and the brain responded when blue light, but not yellow or other colors, was shone on the retina. The brain of congenitally blind mice without ChR does not respond regardless of the type of light shone on their retinas. But does that mean the mouse was able to see? That remains to be seen (har har) in future experiments. But the Pan group did demonstrate – without question or doubt – that they can control neurons with light.

All in all, a groundbreaking paper. So the Pan group was not off the mark when they submitted it to Nature on November 25, 2004. As Anna Vlasits reports in the Exclusive, Nature told Pan to submit to a more specialized journal, like Nature Neuroscience, which then rejected it. Pan then submitted to the Journal of Neuroscience, which also rejected it. He submitted it to Neuron on November 29, 2005, which finally accepted it on February 23, 2006. It got published on April 6, 2006. Deisseroth’s paper was submitted to Nature Neuroscience on May 12, 2005, accepted in July, and published on August 14, 2005… His group infected rat hippocampal neurons cultured in a Petri dish with a virus carrying ChR and then did some electrophysiological recordings on those neurons while shining lights of different wavelengths on them, showing that these cells can be controlled by light.

There’s more to the saga, with patent filings and a conference where Pan showed the ChR data in May 2005 and so on; you can read all about it in Scientific American. The magazine is just hinting at what I will say outright, loud and clear: Pan didn’t get published because of his and his institution’s lack of fame. Deisseroth did because of the opposite. That’s all. This is not about squabbles over whose work is more elegant, who presented his work as a scientific discovery versus a technical report, whose title is catchier, whose language is more boisterous or native-English-speaking, or luck, or anything like that. It is about bias and, why not?, let’s call a spade a spade, discrimination. This is not the first time Nature and the Journal of Neuroscience have been caught doing this. Not by a long shot. The problem is that they are still doing it, that is: discriminating against scientific work presented to them based on the names of the authors and their institutions.

Personally, so I don’t get comments along the lines of the fox and the grapes, I have worked at both high-profile and low-profile institutions. And I have seen the difference not in the work, but in the reception.

That’s my piece for today.

Source:  STAT, Scientific American.

References:

1) Bi A, Cui J, Ma YP, Olshevskaya E, Pu M, Dizhoor AM, & Pan ZH (6 April 2006). Ectopic expression of a microbial-type rhodopsin restores visual responses in mice with photoreceptor degeneration. Neuron, 50(1): 23-33. PMID: 16600853. PMCID: PMC1459045. DOI: 10.1016/j.neuron.2006.02.026. ARTICLE | FREE FULLTEXT PDF

2) Boyden ES, Zhang F, Bamberg E, Nagel G, & Deisseroth K (Sep 2005, Epub 14 Aug 2005). Millisecond-timescale, genetically targeted optical control of neural activity. Nature Neuroscience, 8(9):1263-1268. PMID: 16116447. DOI: 10.1038/nn1525. ARTICLE

By Neuronicus, 11 September 2016

Another puzzle piece in the autism mystery

Just like in the case of schizophrenia, hundreds of genes have been associated with autism spectrum disorders (ASDs). Here is another candidate.

Féron et al. (2016) reasoned that most of the info we have about the genes that behave badly in ASDs comes from studies that used adult cells. Because ASDs are present before or very shortly after birth, they figured that looking for genetic abnormalities in cells at the very early stages of ontogenesis might prove enlightening. Those cells are stem cells. Of the pluripotent kind. FYI, based on what they can become (a.k.a. how potent they are), stem cells are divided into totipotent, pluripotent, multipotent, oligopotent, and unipotent. So the pluripotents are very ‘potent’ indeed, having the potential of producing a perfect person.

Tongue-twisters aside, the authors’ approach is sensible, albeit not hypothesis-driven. Which means they didn’t have anything specific in mind when they started looking for differences in gene expression between olfactory nasal cells obtained from 11 adults with ASD and 11 age-matched normal controls. Luckily for them, as transcriptome studies have a tendency to be difficult to replicate, they found anomalies in the expression of genes that have already been associated with ASD. But they also found a new one, the MOCOS (MOlybdenum COfactor Sulfurase) gene, which was poorly expressed in ASDs (downregulated, in genetic speak). The enzyme it encodes is also called MOCOS (am I the only one who thinks that MOCOS isolated from nasal cells sounds too similar to mucus? is the acronym actually a backronym?).

The enzyme is not known to play any role in the nervous system. Therefore, the researchers looked to see where the gene is expressed. Its enzyme could be found all over the brain of both mouse and human. Also, in the intestine, kidneys, and liver. So not much help there.

Next, the authors deleted this gene in a worm, Caenorhabditis elegans, and they found that the worm’s cells have issues dealing with oxidative stress (e.g. the toxic effects of free radicals). In addition, its neurons had abnormal synaptic transmission due to problems with vesicular packaging.

Then they managed – with great difficulty – to produce human induced pluripotent stem cells (iPSCs) in a Petri dish in which the MOCOS gene was partially knocked down. ‘Partially’, because the total knock-down did not survive. Which tells us that MOCOS is necessary for the survival of iPSCs. The mutant cells had fewer synaptic boutons than the normal cells, meaning they formed fewer synapses.

The study, besides identifying a new candidate for diagnosis and treatment, offers some potential explanations for some beguiling data that other studies have brought forth, like the fact that all sorts of neurotransmitter systems and all sorts of brain regions seem to be impaired in ASDs, making it very hard to grab the tiger by the tail when the tiger sprouts a new tail every time you look at it, just like the Hydra’s heads. But discovering a molecule that is involved in a process as ubiquitous as synapse formation may provide a way to leave the tiger’s tail(s) alone and focus on the teeth. In the authors’ words:

“As a molecule involved in the formation of dense core vesicles and, further down, neurotransmitter secretion, MOCOS seems to act on the container rather than the content, on the vehicle rather than one of the transported components” (p. 1123).

The knowledge uncovered by this paper makes a very good piece of the ASDs puzzle. Maybe not a corner, but a good edge. Alright, even if it’s not an edge, at least it’s a crucial piece full of details, not one of those sky pieces.

Reference: Féron F, Gepner B, Lacassagne E, Stephan D, Mesnage B, Blanchard MP, Boulanger N, Tardif C, Devèze A, Rousseau S, Suzuki K, Izpisua Belmonte JC, Khrestchatisky M, Nivet E, & Erard-Garcia M (Sep 2016, Epub 4 Aug 2016). Olfactory stem cells reveal MOCOS as a new player in autism spectrum disorders. Molecular Psychiatry, 21(9):1215-1224. PMID: 26239292, DOI: 10.1038/mp.2015.106. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 31 August 2016

Painful Pain Paper

There has been much hype over the new paper published in the latest issue of Nature, which claims to have discovered an opioid analgesic that doesn’t have most of the side effects of morphine. If the claim holds, the authors may have found the Holy Grail of pain research, chased by too many for too long (besides being worth billions of dollars to its discoverers).

The drug, called PZM21, was discovered using structure-based drug design. This means that instead of taking a drug that works, say morphine, tweaking its molecular structure in various ways, and seeing if the resultant drugs work, you take the target of the drug, say the mu-opioid receptor, and design a drug that fits that slot. The search and design are done initially with sophisticated software and there are many millions of virtual candidates. So it takes a lot of work and ingenuity to select the few drugs that will be synthesized and tested in live animals.
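
As a rough illustration of the screening step only (a toy of mine, with fabricated molecule names and docking scores, nothing to do with the authors’ actual pipeline), the logic is ‘score millions of candidates in silico, keep the best few for the bench’:

    # Toy virtual screening: rank candidates by a fabricated docking score against the target
    # pocket and keep the top handful for synthesis. Lower (more negative) score = better fit.
    candidates = {
        "mol_0001": -7.2, "mol_0002": -5.9, "mol_0003": -9.4,
        "mol_0004": -8.8, "mol_0005": -6.1, "mol_0006": -9.1,
    }

    top_hits = sorted(candidates, key=candidates.get)[:3]
    print(top_hits)  # the best-fitting virtual candidates go on to synthesis and animal testing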

Manglik et al. (2016) did just that and they came up with PZM21 which, compared to morphine, is:

1) selective for the mu-opioid receptors (i.e. it doesn’t bind to anything else)
2) produces no respiratory depression (maybe a touch on the opposite side)
3) doesn’t affect locomotion
4) produces less constipation
5) produces long-lasting affective analgesia
6) and has less addictive liability

The Holy Grail, right? Weeell, I have some serious issues with number 5 and, to some extent, number 6 on this list.

Normally, I wouldn’t dissect a paper so thoroughly because, if there is one thing I learned by the end of grad school and postdoc, it is that there is no perfect paper out there. Consequently, anyone with scientific training can find issues with absolutely anything published. I once challenged someone to bring me any loved and cherished paper and I would tear it apart; it’s much easier to criticize than to come up with solutions. Probably that’s why everybody hates Reviewer No. 2…

But, for extraordinary claims, you need extraordinary evidence. And the evidence simply does not support number 5, and maybe number 6, above.

Let’s start with pain. The authors used 3 tests: hotplate (drop a mouse on a hot plate for 10 sec and see what it does), tail-flick (give an electric shock to the tail and see how fast the mouse flicks its tail), and formalin (inject an inflammatory, painful substance into the mouse’s paw and see what the animal does). They used 3 doses of PZM21 in the hotplate test (10, 20, and 40 mg/kg), 2 doses in the tail-flick test (10 and 20), and 1 dose in the formalin test (20). Why? If you start with a dose-response in one test and want to convince me the drug works in the other tests, then do a dose-response for those too, so I have something to compare. These tests have been used extensively in pain research and the standard drug used is morphine. Therefore, the literature is clear on how different doses of morphine behave in these tests. I need your dose-responses for your new drug to be able to see how it measures up, since you claim it is “more efficacious than morphine”. If you don’t want to convince me there is a dose-response effect, that’s fine too, I’ll frown a little, but it’s your choice. However, then choose a dose and stick with it! Otherwise I cannot compare the behaviors across tests, rendering one or the other test meaningless. If you’re wondering, they used only one dose of morphine in all the tests, except the hotplate, where they used two.

Another thing, also related to doses. The authors found something really odd: PZM21 works (meaning it produces analgesia) in the hotplate test, but not the tail-flick test. This is truly amazing, because no opiate I know of can make such a clear-cut distinction between those two tests. Buuuuut, and here is a big ‘BUT’, they did not test their highest dose (40 mg/kg) in the tail-flick test! Why? I’ll tell you why, because I am oh sooo familiar with this argument. It goes like this:

Reviewer: Why didn’t you use the same doses in all your 3 pain tests?

Author: The middle and highest doses have similar effects in the hotplate test, ok? So it doesn’t matter which one of these doses I’ll use in the tail-flick test.

Reviewer: Yeah, right, but, you have no proof that the effects of the two doses are indistinguishable because you don’t report any stats on them! Besides, even so, that argument applies only when a) you have ceiling effects (not the case here, your morphine hit it, at any rate) and b) the drug has the expected effects on both tests and thus you have some logical rationale behind it. Which is not the case here, again: your point is that the drug DOESN’T produce analgesia in the tail-flick test and yet you don’t wanna try its HIGHEST dose… REJECT AND RESUBMIT! Awesome drug discovery, by the way!

So how come the paper passed the reviewers?! Perhaps the fact that two of the reviewers are long-term publishing co-authors from the same university had something to do with it; you know, same views predispose them to the same biases and so on… But can you do that? I mean, have reviewers for Nature from the same department for the same paper?

Alrighty then… let’s move on to the stats. Or rather not. Because there aren’t any for the hotplate or tail-flick tests! Now, I know all about the “freedom from the tyranny of p” movement (that is: report only the means, standard errors of the mean, and confidence intervals and let the reader judge the data), and about the fact that the average scientist today needs to know 100-fold more stats than his predecessors did 20 years ago (although some biologists and chemists seem to be excused from this; things either turn color or not, are either there or not, etc.), and about the fact that you cannot get away with only one experiment published these days, but need a lot of them, so you have to do a lot of corrections to your stats so you don’t commit a Type 1 error. I know all about that, but just like with the doses, choose one way or another and stick to it. Because there are ANOVAs run for the formalin, respiration, constipation, locomotion, and conditioned place preference tests, but none for the hotplate or tail-flick! I am also aware that to be published in Science or Nature you have to strip your work and wording to the bare minimum because of the insane word-count limits, but you have free rein in the Supplementals. And I combed through those, and there are no stats there either. Nor are there any power analyses… So, what’s going on here? Remember, the authors didn’t test the highest dose in the tail-flick test because – presumably – the highest and intermediate doses have indistinguishable effects, but where are the stats to prove it?
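
For reference, the missing analysis is nothing exotic; something along these lines (a sketch with made-up latencies, not the paper’s data) would settle whether the doses actually differ:

    # Sketch: one-way ANOVA across PZM21 doses on hotplate latency; all latencies are invented.
    from scipy.stats import f_oneway

    latency_10mgkg = [8.1, 9.4, 7.6, 10.2, 8.8]     # seconds, hypothetical mice
    latency_20mgkg = [12.5, 11.9, 13.3, 12.1, 14.0]
    latency_40mgkg = [13.1, 12.7, 14.2, 13.6, 12.9]

    F, p = f_oneway(latency_10mgkg, latency_20mgkg, latency_40mgkg)
    print(F, p)  # a significant F would then warrant post-hoc tests, e.g. 20 vs 40 mg/kg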

And now the thing that really, really bothered me: the claim that PZM21 takes away the affective dimension of pain but not the sensory one. Pain is a complex experience that, depending on your favourite pain researcher, has at least 2 dimensions: the sensory (also called ‘reflexive’ because it is the immediate response to the noxious stimulation that makes you retract the limb by reflex from whatever produces the tissue damage) and the affective (also called ‘motivational’ because it makes the pain unpleasant and motivates you to get away from whatever caused it and to seek alleviation and recovery). The first aspect of pain, the sensory, is relatively easy to measure, since you look at the limb withdrawal (or the tail, in the case of animals with a prolonged spinal column). By contrast, the affective aspect is very hard to measure. In humans, you can ask them how unpleasant it is (and even those reports are unreliable), but how do you do it with animals? Well, you go back to humans and see what they do. Humans scream “Ouch!” or swear when they get hurt (so you can measure vocalizations in animals), or humans avoid places in which they got hurt because they remember the unpleasant pain (so for animals you run a test called Conditioned Place Avoidance; although, if you get a drug that shows positive results in this test, like morphine, you don’t know whether you blocked the memory of the unpleasantness or the feeling of unpleasantness itself, but that’s a different can of worms). The authors did not use any of these tests, yet they claim that PZM21 takes away the unpleasantness of pain, i.e. that it is an affective analgesic!

What they did was this: they looked at the behaviors the animal displays on the hotplate and divided them into two categories: reflexive (the lifting of the paw) and affective (the licking of the paw and the jumping). Now, there are several issues with this dichotomy, and I’m not even going to go there; I’ll just say that there are prominent pain researchers who will scream from the top of their lungs that the so-called affective behaviors from the hotplate test cannot be indexes of pain affect, because pain affect requires forebrain structures and yet these behaviors persist in the decerebrated rodent, including the jumping. Anyway, leaving aside the theoretical debate about what the behaviors they measured really mean, there is still the problem of the jumpers: namely, the authors excluded the mice that tried to jump out of the hotplate apparatus from the analysis when evaluating the potency of PZM21, but then left them in when comparing the two types of analgesia, because jumping is a sign of escaping, an emotionally-valenced behavior! Isn’t this the same test?! Seriously? Why are you using two different groups of mice and leaving the impression that it is only one? And oh, yeah, they used only the middle dose for the affective evaluation, when they used all three doses for potency…. And I’m not even gonna ask why they used the highest dose in the formalin test… but only for the normal mice; the knockouts in the same test got the middle dose! So we’re back to comparing pears with apples again!

Next (and last, I promise, this rant is way too long already), the non-addictive claim. The authors used the Conditioned Place Preference paradigm, an old and reliable method to test drug likeability. The idea is that you have a box with 2 chambers, X and Y. Give the animal saline in chamber X and let it stay there for some time. The next day, you give the animal the drug and confine it to chamber Y. Do this a few times, and on the test day you let the animal explore both chambers. If it stays longer in chamber Y, then it liked the drug, much like humans seek out places in which they felt good and avoid places in which they felt bad. All well and good, only that it is standard practice in this test to counterbalance the days and the chambers! I don’t know about the chambers, because they don’t say, but the days were not counterbalanced. I know, it’s a petty little thing for me to bring up, but remember the saying about extraordinary claims… so I expect flawless methods. I would have also liked to see a much more convincing test for addictive liability, like self-administration, but that will be done later, if the drug holds up, I hope. Thankfully, unlike with the affective analgesia claims, the authors have been more restrained in their verbiage about addiction, much to their credit (and I have a nasty suspicion as to why).
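
For readers unfamiliar with what ‘counterbalance the days and the chambers’ means in practice, here is a minimal sketch of such an assignment (my own illustration, not the paper’s design):

    # Minimal counterbalancing sketch for a two-chamber conditioned place preference design.
    # Half the animals get the drug in chamber X and saline in Y, the other half the reverse,
    # and whether the drug or the saline session comes first also alternates across animals.
    animals = [f"mouse_{i:02d}" for i in range(1, 9)]  # hypothetical subject IDs

    assignments = []
    for i, animal in enumerate(animals):
        drug_chamber = "X" if i % 2 == 0 else "Y"
        first_session = "drug" if (i // 2) % 2 == 0 else "saline"
        assignments.append((animal, drug_chamber, first_session))

    for row in assignments:
        print(row)  # e.g. ('mouse_01', 'X', 'drug'), ('mouse_02', 'Y', 'drug'), ...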

I do sincerely think the drug shows decent promise as a painkiller. Kudos for discovering it! But, seriously, fellows, the behavioral portion of the paper could use some improvements.

Ok, rant over.

EDIT (Aug 25, 2016): I forgot to mention something, and that is the competing financial interests declared for this paper: some of its authors have already filed a provisional patent for PZM21 or are founders or consultants for Epiodyne (a company that wants to develop novel analgesics). Normally, that wouldn’t worry me unduly; people are allowed to make a buck from their discoveries (although it is billions in this case, and we could get into that age-old debate about whether it is moral to make billions on the suffering of other people, but that’s a different story). Anyway, combine the financial interests with the poor behavioral tests and you get a very shoddy thing indeed.

Reference: Manglik A, Lin H, Aryal DK, McCorvy JD, Dengler D, Corder G, Levit A, Kling RC, Bernat V, Hübner H, Huang XP, Sassano MF, Giguère PM, Löber S, Da Duan, Scherrer G, Kobilka BK, Gmeiner P, Roth BL, & Shoichet BK (Epub 17 Aug 2016). Structure-based discovery of opioid analgesics with reduced side effects. Nature, 1-6. PMID: 27533032, DOI: 10.1038/nature19112. ARTICLE 

By Neuronicus, 21 August 2016

Fructose bad effects reversed by DHA, an omega-3 fatty acid

Despite alarm signals raised by various groups and organizations regarding the dangers of the presence of sugars – particularly fructose derived from corn syrup – in almost every food on the market, only in the past decade has there been some serious evidence against high consumption of fructose.

A bitter-sweet (sic!) paper comes from Meng et al. (2016), who, in addition to showing some of the bad things that fructose does to the brain and body, also show some rescue from its deleterious effects by DHA (docosahexaenoic acid), an omega-3 fatty acid.

The authors had 3 groups of rodents: one group got fructose in their water for 6 weeks, another group got fructose and DHA, and another group got their normal chow. The amount of fructose was calculated to be ecologically valid, meaning that they fed the animals the equivalent of a 1-litre soda bottle per day (130 g of sugar for a 60 kg human).
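
For a sense of the arithmetic behind “ecologically valid”, here is a back-of-the-envelope sketch using simple per-body-weight scaling; the rat weight and water intake values are my assumptions for illustration, not the paper’s exact protocol (which may well have used a different conversion).

```python
# Back-of-the-envelope sketch (assumed numbers, not the paper's protocol):
# scale a human soda habit (~130 g sugar/day for a 60 kg person) to a rat
# by simple per-body-weight conversion.
human_sugar_g_per_day = 130.0   # roughly one 1-litre soda bottle
human_weight_kg = 60.0
rat_weight_kg = 0.35            # assumed adult rat weight
rat_water_intake_ml = 30.0      # assumed daily water intake

dose_g_per_kg = human_sugar_g_per_day / human_weight_kg             # ~2.2 g/kg/day
rat_dose_g = dose_g_per_kg * rat_weight_kg                          # ~0.76 g/day
fructose_percent_in_water = 100 * rat_dose_g / rat_water_intake_ml  # % w/v

print(f"{dose_g_per_kg:.1f} g/kg/day -> {rat_dose_g:.2f} g/day "
      f"(~{fructose_percent_in_water:.1f}% fructose in the drinking water)")
```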

The rats that got fructose had worse learning and memory performance on a maze test compared to the other two groups.

The rats that got fructose had altered gene expression in two brain areas: hypothalamus (involved in metabolism) and hippocampus (involved in learning and memory) compared to the other two groups.

The rats that got fructose showed metabolic changes that are precursors of type 2 diabetes, obesity, and other metabolic disorders (high blood glucose, triglycerides, insulin, and insulin resistance index) compared to the other two groups.


The genetic analyses that the researchers did (sequencing the RNA and analyzing the DNA methylation) revealed a whole slew of genes that had been affected by the fructose treatment. So they did some computer work that involved Bayesian modeling and gene library searching, and they selected two genes (Bgn and Fmod), out of almost a thousand possible candidates, that seemed to be the drivers of these changes. Then they engineered mice that lacked these genes. The resultant mice had the same metabolic changes as the rats that got fructose, but… their learning and memory was even better than that of the normals? I must have missed something here. EDIT: Well… yes and no. Please read the comment below from the Principal Investigator of the study.
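
The actual pipeline (Bayesian network modeling plus gene-set searching) is far more involved than I can show here, but the intuition of picking “drivers” out of ~1,000 altered genes can be caricatured in a few lines: favor genes that are both strongly altered and highly connected to other altered genes. The numbers and the scoring rule below are entirely made up for illustration; only Bgn and Fmod are real names from the paper.

```python
# Toy illustration (not the authors' pipeline): rank candidate "driver" genes
# by combining how strongly fructose changed their expression with how many
# other altered genes they are connected to in a (hypothetical) gene network.
candidates = {
    # gene: (absolute log2 fold-change, number of altered network neighbors)
    "Bgn":   (1.8, 42),   # numbers are made up for illustration
    "Fmod":  (1.5, 37),
    "GeneA": (2.1, 3),    # strongly altered but poorly connected
    "GeneB": (0.4, 50),   # well connected but barely altered
}

def driver_score(fold_change, neighbors):
    # A gene is a plausible driver if it is both altered and well connected.
    return fold_change * neighbors

ranked = sorted(candidates.items(),
                key=lambda kv: driver_score(*kv[1]),
                reverse=True)

for gene, (fc, nb) in ranked:
    print(f"{gene}: |log2FC|={fc}, altered neighbors={nb}, "
          f"score={driver_score(fc, nb):.1f}")
```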

It is an OK paper, done by a collaboration of 7 laboratories from 3 countries. But there are a few things that bother me, as a neuroscientist, about it. First is the behavior of the genetic knockouts. Do they really learn faster? The behavioral results are not addressed in the discussion. Granted, a genetic knockout deletes that gene everywhere in the brain and in the body, whereas the genetic alterations induced by fructose are almost certainly location-specific.

Which brings me to the second bother: nowhere in the paper (including the supplemental materials, yes, I went through those) are there any brain pictures or diagrams or anything that can tell us which nuclei of the hypothalamus the samples came from. The hypothalamus is a relatively small structure with drastically different functional nuclei very close to one another. For example, the medial preoptic nucleus, which deals with sexual hormones, is just above the suprachiasmatic nucleus, which deals with circadian rhythms, and near the preoptic is the anterior nucleus, which deals mainly with thermoregulation. The nuclei that deal with hunger and satiety (the lateral and the ventromedial nucleus, respectively) are located in different parts of the hypothalamus. In short, it would matter very much where they got their samples from, because the transcriptome and methylome would differ substantially from nucleus to nucleus. The hippocampus is not as complicated as that, but it also has areas with specialized functions. So maybe they messed up the identification of the two genes Bgn and Fmod as drivers of the changes; after all, they found almost 1,000 genes altered by fructose. And that mess-up might have stemmed from their blind hypothalamic and hippocampal sampling. EDIT: They didn’t mess up, per se. Turns out there were technical difficulties in extracting enough nucleic acids from specific parts of the hypothalamus for analyses. I told you them nuclei are small…

Anyway, the good news comes from the first experiment, where DHA reverses the bad effects of fructose. Yeay! As a side note, the fructose from corn syrup is metabolized differently than the fructose from fruits. So you are far better off getting the amount of fructose in a litre of soda from fruits instead. And DHA comes either manufactured from algae or extracted from cold-water oceanic fish oils (but not farmed fish, apparently).

If anybody who read the paper has some info that can help clarify my “bothers”, please do so in the Comments section below. The other media outlets covering this paper do not mention anything about the knockouts. Thanks! EDIT: The last author of the paper, Dr. Yang, was very kind and clarified some of my “bothers” in the Comments section. Thanks again!

Reference: Meng Q, Ying Z, Noble E, Zhao Y, Agrawal R, Mikhail A, Zhuang Y, Tyagi E, Zhang Q, Lee J-H, Morselli M, Orozco L, Guo W, Kilts TM, Zhu J, Zhang B, Pellegrini M, Xiao X, Young MF, Gomez-Pinilla F, Yang X (2016). Systems Nutrigenomics Reveals Brain Gene Networks Linking Metabolic and Brain Disorders. EBioMedicine, doi: 10.1016/j.ebiom.2016.04.008. Article | FREE fulltext PDF | Supplementals | Science Daily cover | NeuroscienceNews cover

By Neuronicus, 24 April 2016

Autism cure by gene therapy


Nothing short of an autism cure is promised by this hot new research paper.

Among the many thousands of proteins that a neuron needs to make in order to function properly there is one called SHANK3, made from the gene shank3. (Note the customary writing: by consensus, a gene’s name is written in lower-case letters and italicized, whereas the name of the protein that results from that gene’s expression is written in capitals.)

This protein is important for the correct assembly of synapses, and previous work has shown that if you delete its gene in mice they show autistic-like behavior. Similarly, some people with autism, though by no means all, have a deletion on chromosome 22, where the protein’s gene is located.

The straightforward approach would be to restore protein production in the adult autistic mouse and see what happens. Well, one problem with that is keeping the concentration of the protein at the optimum level, because if the mouse makes too much of it, then the mouse develops ADHD-like and bipolar-like behaviors.

So the researchers developed a really neat genetic model in which they managed to turn the shank3 gene on and off at will by giving the mouse a drug called tamoxifen (don’t take this drug for autism! Besides the fact that it is not going to work, because you’re not a genetically engineered mouse with a Cre-dependent genetic switch on your shank3, it is also very toxic and used only in some forms of cancer, when it is believed that the benefits outweigh the horrible side effects).

In young adult mice, turning on the gene resulted in normalization of synapses in the striatum, a brain region heavily involved in autistic behaviors. The synapses were comparable to normal synapses in some aspects (from the looks, i.e. postsynaptic density scaffolding, to the works, i.e. electrophysiological properties) and even exceeded them in others (more dendritic spines than normal, meaning more synapses, presumably). This molecular repair was mirrored by some behavioral rescue: although these mice still had more anxiety and more coordination problems than the control mice, their social aversion and repetitive behaviors disappeared. And the really, really cool part of all this is that this reversal of autistic behaviors was done in ADULT mice.

Now, when the researchers turned the gene on in 20-day-old mice (which is, roughly, the equivalent of entering the toddler stage in humans), all four behaviors were rescued: social aversion, repetitive behaviors, coordination, and anxiety. Which tells us two things: first, the younger you intervene, the more improvement you get and, second and equally important, while in the adult some circuits seem to be irreversibly developed in a certain way, other neural pathways are still plastic enough to be amenable to change.

Awesome, awesome, awesome. Even if only a very small portion of people with autism have this genetic problem (about 1%), even if autism spectrum disorders encompass such a variety of behavioral abnormalities, this research may spark hope for a whole range of targeted gene therapies.

Reference: Mei Y, Monteiro P, Zhou Y, Kim JA, Gao X, Fu Z, Feng G. (Epub 17 Feb 2016). Adult restoration of Shank3 expression rescues selective autistic-like phenotypes. Nature. doi: 10.1038/nature16971. Article | MIT press release

By Neuronicus, 19 February 2016


The Firsts: Anandamide (1992)

Cannabis, the plant whose psychoactive tetrahydrocannabinol (THC) binds to the same receptors in the brain as anandamide.

A rare tragedy took place in France a few days ago when a Phase I clinical trial for a new drug intended to improve mood and alleviate pain resulted in one person dead and five others hospitalized. Phase I means that the drug successfully passed all animal tests and was being tried for the first time in humans to test its safety (efficacy and effectiveness in larger populations are tested in Phases II and III, respectively).

The trial has been suspended and an investigation is under way. So far, it appears that both the manufacturer (Bial) and the testing company (Biotrial) have followed all the guidelines and regulations. The running hypothesis is that the drug (BIA 10-2474) is acting on an unexpected target. What does that mean?

BIA 10-2474 is a FAAH (fatty acid amide hydrolase) inhibitor. This enzyme breaks down anandamide, which is an endocannabinoid. In other words, anandamide is a neurotransmitter in the brain that binds to the same receptors as THC, the main active component of marijuana. So, if you give someone BIA 10-2474, the result would be an increase in the availability of anandamide, presumably with anxiolytic and analgesic effects (yes, similar to smoking weed).

There are other FAAH inhibitors out there that have previously been tried in humans; they were never marketed not because they were unsafe, but because they were ineffective in producing the desired results, i.e. less pain and/or anxiety.

So we don’t know yet why BIA 10-2474 killed people, but the bet is that, in addition to FAAH, it also binds to some other protein. Why they didn’t discover this in animal trials is a mystery; perhaps the unknown protein is unique to humans? By the looks of the drug’s structure, I think it is computer generated, meaning it is composed of a bunch of functional groups that someone put together in the hopes that it would fit neatly on the target binding site; but so many functional groups thrown in together might bind unexpectedly to places other than the intended one. More on the story in Nature.

Anyway, that was the very long intro to today’s featured paper: the discovery of anandamide. Which happened very recently, in 1992, by the Mechoulam group at the Hebrew University of Jerusalem, Israel. Anandamide is the first endocannabinoid to be isolated. Mechoulam’s postdocs, William Devane and Lumir Hanus, used mass spectrometry and NMR (nuclear magnetic resonance; MRI is an application of the same principles) to identify and isolate the molecule from pig brain. And then they named it, fittingly, the “amide of bliss”…

Of note, members of the same Mechoulam group identified two more of the six known endocannabinoids. The three-page paper is highly technical, but I am assured (by a chemist) that it is an easy-peasy read for any organic chemist.

Reference: Devane WA, Hanus L, Breuer A, Pertwee RG, Stevenson LA, Griffin G, Gibson D, Mandelbaum A, Etinger A, & Mechoulam R (18 Dec 1992). Isolation and structure of a brain constituent that binds to the cannabinoid receptor. Science, 258(5090):1946-9. PMID: 1470919, DOI: 10.1126/science.1470919.  Article | Research Gate Full Text

By Neuronicus, 18 January 2016

CCL11 found in aged but not young blood inhibits adult neurogenesis

Portion of Fig. 1 from Villeda et al. (2011, doi: 10.1038/nature10357) describing the parabiosis procedure. Basically, under complete anesthesia, the peritoneal membranes and the skins of the two mice were sutured together. The young mice were 3–4 months (yellow) and old mice were 18–20 months old (grey).

My last post was about parabiosis and its sparse revival as a technique in physiology experiments. Parabiosis is the surgical procedure that joins two living animals allowing them to share their circulatory systems. Here is an interesting paper that used the method to tackle blood’s contribution to neurogenesis.

Adult neurogenesis, that is, the birth of new neurons in the adult brain, declines with age. This neurogenesis has been observed only in a few brain regions, called neurogenic niches.

Because these niches occur in blood-rich areas of the brain, Villeda et al. (2011) wondered if, in addition to the traditional factors that promote neurogenesis, like enrichment or running, blood-borne factors may also have something to do with it. The authors made a young and an old mouse share their blood via parabiosis (see pic.).

Five weeks after the parabiosis procedure, the young mouse had decreased neurogenesis and the old mouse had increased neurogenesis compared to age-matched controls. To make sure their results were due to something in the blood, the authors injected plasma from an old mouse into a young mouse, and that also resulted in reduced neurogenesis. Moreover, the reduced neurogenesis was correlated with impaired learning, as shown by electrophysiological recordings from the hippocampus and by behavioral fear conditioning.

So what in the blood does it? The authors looked at 66 proteins found in the blood (I don’t know the blood make-up, so I can’t tell if 66 is a lot or not) and noticed that 6 of these had increased levels in the blood of ageing mice, whether linked by parabiosis or not. Out of these six, the authors focused on CCL11 (unclear to me why that one; my bet is that they tried the others too but didn’t have enough data). CCL11 is a small signaling protein involved in allergies. So the authors injected it into young mice and, lo and behold, there was decreased neurogenesis in their hippocampus. Maybe the vampires were onto something, whadda ya know? Just kidding… don’t go around sucking young people’s blood!

This paper covers a lot of work and, correspondingly, has no less than 23 authors and almost 20 Mb of supplemental documents! The story it tells is very interesting and as complete as it gets, covering many aspects of the problems investigated and many techniques to address those problems. Good read.

Reference: Villeda SA, Luo J, Mosher KI, Zou B, Britschgi M, Bieri G, Stan TM, Fainberg N, Ding Z, Eggel A, Lucin KM, Czirr E, Park JS, Couillard-Després S, Aigner L, Li G, Peskind ER, Kaye JA, Quinn JF, Galasko DR, Xie XS, Rando TA, Wyss-Coray T. (31 Aug 2011). The ageing systemic milieu negatively regulates neurogenesis and cognitive function. Nature. 477(7362):90-94. doi: 10.1038/nature10357. Article | FREE Fulltext PDF

By Neuronicus, 6 January 2016

Herpes viruses infect neurons

FIG. 3 from Jha et al. (2015). Wild-type EBV infection of primary human fetal neurons. Fluorescence microscopy was carried out at 2, 4, 6, and 8 days post infection to monitor for GFP expression (the fluorescent label). Microscopy images were captured at x20 magnification.

For some mysterious reason, whether Epstein-Barr virus (EBV) and Kaposi’s sarcoma-associated herpesvirus (KSHV) can infect neurons had not been established until now. Probably because some viruses from the same family do not infect neurons, so it was assumed that EBV and KSHV do not either.

Jha et al. (2015) cultured Sh-Sy5y neuroblastoma cells, teratocarcinoma Ntera2 neurons, and primary human fetal neurons in Petri dishes and then exposed them to these viruses. After infection, the authors did some fluorescence microscopy (they tagged the viruses with fluorescent dyes), real-time PCR (to confirm there was viral RNA in the cells), immunofluorescence assays (to see if the viral proteins are expressed) and Western blot analyses (to see if the specific viral antigens are made). All these showed that the viruses were happily multiplying in the cells.

Now here comes the significance of the study: EBV and KSHV are viruses associated with all sorts of nasty diseases, like mononucleosis and cancers. EBV has also been associated with neurological disorders, like multiple sclerosis, Alzheimer’s, neuropathies, lymphomas, etc. But the critical word here is “associated”. That is, they found these viruses in people suffering from those diseases. So the knowledge that these viruses infect neurons could point to a mechanism behind these associations. Unfortunately, EBV is present in 90-95% of the population of the world. Which means that you will find this virus in, let’s say, 9 out of 10 people suffering from Alzheimer’s, assuming the infection is independent of the disease and the sampling is random. So the virus’s presence may be completely unrelated to the disease. By the same rationale, you would find the virus in 9 out of 10 people found guilty of theft, for example. It would then be interesting to find out what it is NOT associated with.

Caveat: I have not read the association studies, so my argument holds only if what they report is that people with disease X also have EBV. If, however, they made a comparison and found that people with disease X are significantly more likely to be infected with EBV than the ones without the disease, then the argument does not hold.
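
To make the base-rate argument concrete, here is a small worked sketch; all the counts are invented for illustration. The point is that with ~92% background prevalence you expect most patients to be EBV-positive anyway, and only a case-control comparison (e.g. an odds ratio) can tell you whether the infection actually tracks the disease.

```python
# Worked sketch of the base-rate argument (all counts are invented).
# With ~92% background prevalence, finding EBV in most patients with a disease
# is expected by chance alone; only a case-control comparison is informative.
prevalence = 0.92
patients = 100
print(f"Expected EBV+ among {patients} patients under independence: "
      f"{prevalence * patients:.0f}")

# Hypothetical 2x2 table: rows = disease yes/no, columns = EBV+ / EBV-
cases_pos, cases_neg = 93, 7
controls_pos, controls_neg = 92, 8

odds_ratio = (cases_pos / cases_neg) / (controls_pos / controls_neg)
print(f"Odds ratio = {odds_ratio:.2f} (values near 1 mean no association)")
```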

Reference: Jha HC, Mehta D, Lu J, El-Naccache D, Shukla SK, Kovacsics C, Kolson D, Robertson ES (1 Dec 2015). Gammaherpesvirus Infection of Human Neuronal Cells. MBio,  6(6). pii: e01844-15. doi: 10.1128/mBio.01844-15. Article | FREE FULLTEXT PDF | PsyPost cover

By Neuronicus, 7 December 2015

The FIRSTS: the isolation of tryptophan (1901)

The post-Thanksgiving dinner drowsiness is due to the very carbohydrate-rich meal and not to the amounts of tryptophan in the turkey meat, which are not higher than those in chicken.

There is a myth that says the post-Thanksgiving dinner drowsiness is due to the high amounts of tryptophan found in turkey meat. Nothing could be further from the truth; in fact, it is due to the high amounts of carbohydrates in the Thanksgiving dinner, which trigger massive insulin production. Anyway, the myth still goes on, despite evidence that turkey has about the same amount of tryptophan as chicken. That being said, what’s this tryptophan business?

Tryptophan is an amino acid necessary for many things in the body, including the production of serotonin, a brain neurotransmitter. You cannot live without it and your body cannot make it. Thus, you need to eat it. There are many sources of tryptophan, like eggs, soybeans, cheeses, various meats and so on.

Tryptophan was first isolated by Hopkins & Cole (1901) through hydrolysis of casein, a protein found in milk. And there were no two ways about it: “there is indeed not the smallest doubt that our substance is the much-sought tryptophane” (p. 427). No “we’re confident that…”, no “we’re suggesting this…”, none of the maybe’s, possibly’s, probably’s, and most likely’s that one finds in overwhelming abundance in the cautious tone adopted by today’s studies. Many more scientists today, fewer job openings, one has a career to think about…

Digression aside, Hopkins went on later to prove that tryptophan is an essential amino acid by feeding mice a tryptophan-free diet (and the mice died). By 1929 he had been knighted and had received the Nobel Prize for his contributions to the vitamin field. Also, a little-known fact for you, butter lovers: Hopkins proved that margarine is worse than butter because it lacks certain vitamins, and you have him to thank for the vitamin-enriched margarine that you find today.

Reference: Hopkins FG & Cole SW (Dec 1901). A contribution to the chemistry of proteids: Part I. A preliminary study of a hitherto undescribed product of tryptic digestion. The Journal of Physiology, 27 (4-5): 418–28. doi:10.1113/jphysiol.1901.sp000880. PMC 1540554. PMID 16992614. Article | FREE FULLTEXT PDF

By Neuronicus, 27 November 2015

Kinesin in axon regeneration

Fig. 8 from Lu, Lakonishok, & Gelfand (2015). License: Creative Commons 2.

The longest neuron a human has stretches from the spinal cord to the tip of the toes. As a cell, it needs various proteins in various places. How is this transport done? Surely not by diffusion: the proteins would degrade or would arrive at inopportune membrane-moments (I just coined that). Molecular motors, on the other hand, are toiling proteins which haul huge cargoes for the benefit of the cell in an incredibly ingenious manner (they have feet and sticky soles and gears and so on). Notable motors are kinesin and dynein: the former brings stuff to the terminal buttons of the axon, the latter goes in the opposite direction, to the soma. They walk on a railway-like scaffold in a very funny manner, if you are to believe the simulations. Go on, I dare you, search kinesin or dynein animation on Google or YouTube and tell me then that biology is not funny.

And because no self-respecting scientist can work with molecular motors without adding his/her contribution to the above-mentioned wealth of animations, the paper below comes with no less than 9 movies (as online supplemental material)! Lu et al. (2015) focused their attention on the role of kinesin in injured neurons. The authors fluorescently labeled several types of proteins in fly neurons and then cultured the cells in a Petri dish. And then cut their axons with a glass needle. After that, they used a really fancy microscope (and a good microscopist, you should look at their pictures) to look at what happens. Which is this: the cut activates a c-Jun N-terminal kinase cascade (the cell’s response to stress), which leads to sliding of microtubules (part of the cell’s cytoskeleton), which is completely dependent on kinesin-1 heavy chain. This sliding initiates axonal regeneration (see picture).

I believe the kinesins and dyneins are the most charming, funny, and endearing proteins out there. Yes, I’m anthropomorphizing clumps of amino acids. I know, I’m a geek.

Reference: Lu W, Lakonishok M, & Gelfand VI (1 Apr 2015, Epub 5 Feb 2015). Kinesin-1–powered microtubule sliding initiates axonal regeneration in Drosophila cultured neurons. Molecular Biology of the Cell, 26(7):1296-307. doi: 10.1091/mbc.E14-10-1423. Article | FREE FULLTEXT PDF | Supplemental movies

Some youtube videos I mentioned before, quite accurate, too: best in show

by Neuronicus, 12 November 2015

Putative mechanism for decreased spermatogenesis following SSRI

The SSRIs (selective serotonin reuptake inhibitors) are the most commonly prescribed antidepressants around the world. Whether it is Prozac, Zoloft or Celexa, chances are that 1 in 4 Americans (or 1 in 10, depending on the study) will be making a decision during their lifetime on whether to start an antidepressant course or not. And yet adherence to treatment is significantly low, as many people get off the SSRI due to its side effects, one of the main complaints being sexual dysfunction in the form of low libido and reduced pleasure.

Now a new study finds a mechanism for an even more worrisome effect of citalopram (Celexa), an SSRI: the reduction of spermatogenesis. Prasad et al. (2015) used male zebrafish as a model and exposed them to citalopram in 3 different doses for 2- or 4-week periods. They found that the expression in the brain of serotonin-related genes (trp2 and sert) and gonadotropin genes (lhb, sdhb, gnrh2, and gnrh3) was differently affected depending on the dose and duration of treatment. In the testes, the “long-term medium- and high-dose citalopram treatments displayed a drastic decrease in the developmental stages of spermatogenesis as well as in the matured sperm cell count” (p. 5). The authors also looked at how the neurons are organized and found that the serotonin fibers are closely associated with the fibers of the neurons that release gonadotropin-releasing hormone 3 (GnRH3) in the preoptic area, a brain region in the hypothalamus heavily involved in sexual and parental behavior in both humans and fish.

In short: in the brain, citalopram affects the gene expression profiles and fiber density of the serotonin neurons, which in turn decreases the production of GnRH3, which may account for the sexual dysfunctions that follow citalopram. In the testes, citalopram may act directly, by binding to the local serotonin receptors, and decrease spermatogenesis.

Reference: Prasad P, Ogawa S, & Parhar IS. (Oct 2015, Epub 8 Jul 2015). Serotonin Reuptake Inhibitor Citalopram Inhibits GnRH Synthesis and Spermatogenesis in the Male Zebrafish. Biololy of Reproduction. 93(4):102, 1-10. doi: 10.1095/biolreprod.115.129965. Article | FREE FULLTEXT PDF

By Neuronicus, 11 November 2015

The culprit in methamphetamine-induced psychosis is very likely BDNF

Psychoses. Credit: NIH (Publication Number 15-4209) & Neuronicus. License: PD.

Prolonged methamphetamine use may lead to psychotic episodes in the absence of the drug. These episodes are persistent and closely resemble schizophrenia. One of the (many) molecules involved in both schizophrenia and meth abuse is BDNF (brain-derived neurotrophic factor), a protein mainly known for its role in neurogenesis and long-term memory.

Lower BDNF levels have been observed in schizophrenia, so Manning et al. (2015) wondered if BDNF is also involved in meth-induced psychosis. They got normal mice and mice that were genetically engineered to express lower levels of BDNF. They gave them meth for 3 weeks, with escalating doses from one week to the next. Interestingly, no meth on weekends, which made me rapidly scroll to the beginning of the paper and confirm my suspicion that the experiments were not done in the USA; if they were, the grad students would not have had the weekends off and the mice would have received meth every day, including weekends. Look how social customs can influence research! Anyway, social commentary aside, after the meth injections the researchers left the mice untroubled for 2 more weeks. And then they tested them for psychosis.

How do you measure psychosis in rodents? By inference, since the mouse will not grab your coat and tell you about the newly appeared hypnotizing wall pattern and the like. Basically, it was observed that psychotic people have a tendency to walk in a disorganized manner when given the opportunity to explore, a behavior that was also observed in rodents on amphetamines. This disorganized walk can be quantified into an entropic index, which is thought to reflect the occurrence of psychosis (I know, a lot of inferring. But you try to come up with a better model of psychosis in rodents!).
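
To give a flavor of what such an entropic index might look like, here is a hypothetical sketch: bin the animal’s x,y trajectory into grid cells and take the Shannon entropy of the occupancy distribution, so that a more disorganized, spread-out walk yields a higher value. This is my own toy illustration, not the measure used by Manning et al.; the arena size, grid, and fake trajectory are all assumptions.

```python
# Hypothetical sketch (not the authors' exact measure): one way to turn a
# locomotor path into an "entropic index" is to bin the x,y trajectory into
# grid cells and compute the Shannon entropy of the occupancy distribution.
import math
import random
from collections import Counter

random.seed(0)
# Fake open-field trajectory: 1000 (x, y) samples in a 40 x 40 cm arena.
path = [(random.uniform(0, 40), random.uniform(0, 40)) for _ in range(1000)]

def spatial_entropy(path, arena_cm=40.0, bins=8):
    cell = arena_cm / bins
    counts = Counter((min(int(x // cell), bins - 1),
                      min(int(y // cell), bins - 1)) for x, y in path)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    return -sum(p * math.log2(p) for p in probs)  # bits; higher = more spread out

print(f"Entropy: {spatial_entropy(path):.2f} bits "
      f"(max for an 8x8 grid is {math.log2(64):.1f})")
```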

Manning et al. (2015) gave their mice amphetamine to mimic psychosis and then observed their behavior. And the results were that the mice genetically engineered to express less BDNF showed reduced psychosis-like behavior (i.e. had a lower entropic index). In conclusion, the alteration of the BDNF pathway may be responsible for the development of psychosis in methamphetamine users.

Reference: Manning EE, Halberstadt AL, & van den Buuse M. (Epub 9 Oct 2015). BDNF-Deficient Mice Show Reduced Psychosis-Related Behaviors Following Chronic Methamphetamine. International Journal of Neuropsychopharmacology, 1–5. doi: 10.1093/ijnp/pyv116. Article | FREE FULLTEXT PDF

By Neuronicus, 9 November 2015

Hope for a new migraine medication

Headache. Courtesy of ClipArtHut.

The best current anti-migraine medications are the triptans (5-HT1B/1D receptor agonists). Because these medications are contraindicated in patients with a variety of other diseases (cardiovascular, renal, hepatic, etc.), the search for alternative drugs continues.

The heat- and pain-sensitive TRPV1 receptors (Transient Receptor Potential Vanilloid 1) localized on the trigeminal terminals (the fifth cranial nerve) have been implicated in the production of headaches. That is, if you activate them by, say, capsaicin, the same substance that gives chili peppers their hotness, you get headaches (you’d have to eat an awful lot of peppers to get the migraine, though). On the other hand, if you block these receptors, as triptans do, you alleviate the migraines. All well and good, so let’s hunt for some TRPV1 antagonists, i.e. blockers. But, as theory often doesn’t meet practice, the first two antagonists that were tried were dropped in the clinical trials for lack of efficacy.

Meents et al. (2015) are giving another try to two different TRPV1 antagonists, going by the fetching names of JNJ-38893777 and JNJ-17203212. Because you cannot ask a rat if it has a headache, it is very difficult to have a rodent model of migraine. Instead, the researchers focused on giving rats an inflammatory soup directly into the subarachnoid space or capsaicin directly into the carotid artery, interventions which they have reasons to believe produce severe headaches and some biological changes, like an increase in the expression of a certain gene (c-fos, if you must know) in the trigeminal brain stem complex and the release of the neurotransmitter calcitonin gene-related peptide (CGRP).

JNJ-17203212 got rid of all those physiological changes in a dose-dependent manner, and presumably of the migraine, too. The other drug, JNJ-38893777, was effective only at the highest dose. Give these drugs a few more tests to pass, and off to the clinical trials with them. I’m joking, it takes a lot more research than just one paper between discovery and human drug trials.

Reference: Meents JE, Hoffmann J, Chaplan SR, Neeb L, Schuh-Hofer S, Wickenden A, & Reuter U (December 2015, Epub 24 June 2015). Two TRPV1 receptor antagonists are effective in two different experimental models of migraine. The Journal of Headache and Pain. 16:57. doi: 10.1186/s10194-015-0539-z. Article | FREE FULLTEXT PDF

By Neuronicus, 8 November 2015