Epigenetics of BDNF in depression

Depression is the leading cause of disability worldwide, says the World Health Organization. The. I knew it was bad, but… ‘the’? More than 300 million people suffer from it worldwide, and in many places fewer than 10% of them receive treatment. Lack of treatment has many causes, from lack of access to healthcare to lack of proper diagnosis, and not least social stigma.

To complicate matters, the etiology of depression is still not fully elucidated, despite hundreds of thousands of experimental articles published out there. Perhaps millions. But, precisely because so many articles have been published, we know a helluva lot more about it than, say, 50 years ago. The enormous puzzle is being painstakingly assembled as we speak by scientists all over the world. I daresay we have a lot of pieces already, if not all then at least 3 out of 4 corners, so we have managed to build a not-so-foggy view of the general picture on the box lid. Here is one of the hottest pieces of the puzzle, one of those central pieces that bring the rabbit into focus.

Before I get to the rabbit, let me tell you about the corners. In the fifties people thought that depression was due to having too little of the monoamine class of neurotransmitters in the brain. This thought did not arise willy-nilly, but from the observation that drugs that increase monoamine levels in the brain alleviate depression symptoms and, correspondingly, drugs that deplete monoamines induce depression symptoms. A bit later on, the monoamine most culpable was found to be serotonin. All well and good, with plenty of evidence, observational, correlational, causal, and mechanistic, supporting the monoamine hypothesis of depression. But two more pieces of evidence kept nagging the researchers. The first was that the monoamine-enhancing drugs take days to weeks to start working. If low serotonin is the problem, then a selective serotonin reuptake inhibitor (SSRI), which elevates serotonin levels within an hour or so of ingestion, should lower symptom severity right away; so how come it takes weeks? The second was even more eyebrow-raising: these monoamine-enhancing drugs work in only about 50% of cases. Why not all? Or, more pragmatically put, why not most, if the underlying cause is the same?

It took decades to address these problems. The problem of having to wait weeks until some beneficial effects of antidepressants show up has been explained away, at least partly, by issues in serotonin regulation in the brain (e.g. autoreceptor sensitization, serotonin transporter abnormalities). As for the second problem, the most parsimonious answer is that that archeological site called the DSM (Diagnostic and Statistical Manual of Mental Disorders), which psychologists, psychiatrists, and scientists all over the world have to use to make a diagnosis, is nothing but a garbage bag of last-century relics with little to no resemblance to this century’s understanding of the brain and its disorders. In other words, what the DSM calls major depressive disorder (MDD) may well be more than one disorder, and then it is no wonder the antidepressants work in only half of the people diagnosed with it. As Goldberg put it in 2011, “the DSM diagnosis of major depression is made when a patient has any 5 out of 9 symptoms, several of which are opposites [emphasis added]”! He was referring to DSM-IV, not that DSM-5 is much different. I mean, paraphrasing Goldberg, you really don’t need much of a degree other than some basic intro class in the physiology of whatever, anything really, to suspect that someone who is sleeping a lot, gains weight, has increased appetite, appears tired or slow to others, and feels worthless might have a different cause for these symptoms than someone who has daily insomnia, lost weight recently, has decreased appetite, is hyperagitated, irritable, and feels excessive guilt. Imagine how much more understanding we would have about depression if scientists didn’t use the DSM for research. No wonder there’s a lot of head scratching when your hypothesis, which is logically correct, paradigmatically coherent, internally consistent, and flawlessly tested, turns out to be true only sometimes, because your ‘depressed’ subjects are about as homogeneous a group as a pack of trail mix.

I got sidetracked again, this time ranting against the DSM. No matter, I’m back on track. So. The good thing about all the work done trying to figure out how antidepressants work and how psychiatrists’ minds work (the DSM is written overwhelmingly by psychiatrists) is that scientists uncovered other things about depression along the way. Some of these findings became clumped together under the name ‘the neurotrophic hypothesis of depression’ in the early naughts. It stems from the finding that some chemicals that neurons need for their cellular happiness are in short supply in depression. Almost two decades later, the hypothesis has become mainstream theory, as it accounts for several other findings in depression and is not incompatible with the monoamines’ behavior. Another piece of the puzzle found.

One of these neurotrophins is called brain-derived neurotrophic factor (BDNF), and it promotes cell survival and growth. Crucially, it also regulates synaptic plasticity, without which there would be no learning and no memory. The idea is that exposure to adverse events generates stress. Stress is managed differently by different people, largely due to genetic factors. In those not so lucky at the genetic lottery (how hard they take a stressor, how they deal with it), and in those lucky enough at genetics but not so lucky in life (intense and/or numerous stressors hit the organism hard regardless of how well you take them or how good you are at dealing with them), stress literally kills a lot of neurons, prevents new ones from being born, and prevents the remaining ones from learning well. That includes learning how to deal with stressors, present and future, so the next time an adverse event happens, even if it is a minor stressor, the person is affected far more drastically. In other words, stress makes you more vulnerable to stressors. One of the ways stress does all this is by suppressing BDNF synthesis. Without BDNF, the individual exposed to stress that is exacerbated either by genes or by environment ends up unable to self-regulate mood successfully. The more mood goes unregulated, the worse the brain becomes at self-regulating, because the elements required for self-regulation, which include learning from experience, are busted. And so the vicious circle continues.

Maintaining this vicious circle is the ability of stressors to change patterns of gene expression and, not surprisingly, one of the most common findings is that the BDNF gene is hypermethylated in depression. Hypermethylation is an epigenetic change (a change around the DNA, not in the DNA sequence itself) that leaves the gene in question less expressed. This means lower amounts of BDNF are produced in depression.
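In case ‘hypermethylated’ sounds abstract, here is a minimal sketch of how such a claim is usually quantified: at each CpG site of a promoter you count how many sequencing reads carry the methyl mark out of the total, then average across sites. The read counts below are made up for illustration and are not taken from any of the studies discussed here.

```python
# Toy illustration (made-up counts, not data from any cited study): how percent
# methylation of a promoter region is typically quantified from bisulfite
# sequencing. Each CpG site contributes methylated_reads / total_reads, and the
# region's methylation level is the average across sites.

def mean_methylation(cpg_counts):
    """cpg_counts: list of (methylated_reads, total_reads) per CpG site."""
    levels = [meth / total for meth, total in cpg_counts if total > 0]
    return sum(levels) / len(levels)

# Hypothetical read counts over four CpG sites in a BDNF promoter region.
control   = [(12, 100), (8, 95), (15, 110), (10, 90)]
depressed = [(34, 100), (29, 95), (41, 110), (30, 90)]

print(f"control:   {mean_methylation(control):.1%}")    # ~11%
print(f"depressed: {mean_methylation(depressed):.1%}")  # ~34% -> 'hypermethylated'
```

A ‘hypermethylated’ gene is simply one where that average is reliably higher in the patient group than in controls.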

After this long introduction, today’s paper is a systematic review of one of the epigenetic changes in depression: methylation. The 67 articles that investigated the role of methylation in depression were too heterogeneous to build a meta-analysis out of, so Li et al. (2019) performed a systematic review instead.
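If you are wondering what ‘too heterogeneous’ means in practice, here is a minimal sketch of one common heterogeneity check, Cochran’s Q and the I² statistic, run on invented effect sizes. To be clear, this is not a calculation from Li et al. (2019); it only illustrates the kind of number that pushes reviewers toward a narrative synthesis instead of a pooled estimate.

```python
# Minimal sketch (invented numbers, not from Li et al. 2019): Cochran's Q and
# the I^2 statistic, a common way to quantify how much studies disagree beyond
# chance before deciding whether a pooled meta-analytic estimate makes sense.

def heterogeneity(effects, variances):
    """Fixed-effect Q and I^2 (as a percentage) for per-study effect sizes."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical standardized effect sizes and their variances from five studies.
effects   = [0.10, 0.45, -0.05, 0.60, 0.20]
variances = [0.02, 0.03, 0.02, 0.04, 0.03]

q, i2 = heterogeneity(effects, variances)
print(f"Q = {q:.1f}, I^2 = {i2:.0f}%")  # substantial I^2 -> pooling is dubious
```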

The main finding was that, overall, depression is associated with DNA methylation modifications. Two genes stood out as being hypermethylated: our friend BDNF and SLC6A4, a gene involved in the serotonin cycle. Now the question is which causes which: does stress methylate your DNA, or does your methylated DNA make you more vulnerable to stress? There’s evidence both ways. Vicious circle, as I said. I doubt that it matters to the sufferer which came first, but for the researchers it does.


A little disclaimer: the picture I painted above offers a non-exclusive view of the causes of depression(s). There’s more. There’s always more. Gut microbes are in the picture too. And circulatory problems. And more. But the picture is more than half done, I daresay. Continuing my puzzle metaphor, we’ve got the rabbit by the ears. Now what to do with it…

Well, one thing we can do with it, even with only half the rabbit done, is shout loud and clear that depression is a physical disease. And those who claim it can be cured by a positive attitude and blame the sufferers for not ‘trying hard enough’ or not ‘smiling more’ or not ‘being more positive’ can bloody well shut up and crawl back into the medieval cave they came from.

REFERENCES:

1. Li M, D’Arcy C, Li X, Zhang T, Joober R, & Meng X (4 Feb 2019). What do DNA methylation studies tell us about depression? A systematic review. Translational Psychiatry, 9(1):68. PMID: 30718449, PMCID: PMC6362194, DOI: 10.1038/s41398-019-0412-y. ARTICLE | FREE FULLTEXT PDF

2. Goldberg D (Oct 2011). The heterogeneity of “major depression”. World Psychiatry, 10(3):226-8. PMID: 21991283, PMCID: PMC3188778. ARTICLE | FREE FULLTEXT PDF

3. World Health Organization Depression Fact Sheet

By Neuronicus, 23 April 2019

How do you remember?

Memory processes like formation, maintenance, and consolidation have been the subjects of extensive research and, as a result, we know quite a bit about them. And just when we thought we were getting a pretty clear picture of the memory tableau, and all that was left was a little dusting around the edges and getting rid of the pink elephant in the middle of the room, here comes a new player that muddies the waters again.

DNA methylation. The attaching of a methyl group (CH3) to the DNA’s cytosine by a DNA methyltransferase (Dnmt) was, until very recently, considered a process reserved for immature cells, helping them meet their final fate. In other words, DNA methylation plays a role in cell differentiation by suppressing gene expression. It also has roles in X-chromosome inactivation and cancer, but it was not suspected to play a role in memory until this decade.
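For the sequence-minded, here is a toy sketch of where that methyl group typically ends up: in mammals, DNA methylation occurs mostly on cytosines within CpG dinucleotides (a C immediately followed by a G). The promoter fragment below is invented; the point is only to show what counts as a potential methylation site at the sequence level.

```python
# Toy sketch: find potential methylation sites in a DNA sequence. In mammals,
# DNA methylation lands mostly on the cytosine of CpG dinucleotides, so the
# candidate sites are simply every 'C' immediately followed by a 'G'.

def cpg_positions(sequence):
    """Return 0-based positions of the C in every CpG dinucleotide."""
    seq = sequence.upper()
    return [i for i in range(len(seq) - 1) if seq[i:i + 2] == "CG"]

promoter = "ATGCGCTACGGATCGCGTTACGAATCG"   # hypothetical promoter fragment
sites = cpg_positions(promoter)
print(f"{len(sites)} CpG sites at positions {sites}")  # 6 sites
```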

Oliveira (2016) gives us a nice review of the role(s) of DNA methylation in memory formation and maintenance. First, we encounter the pharmacological studies, which found that injecting Dnmt inhibitors into various parts of the brain in various species disrupted memory formation or maintenance. Next, we see the genetic studies, in which Dnmt knock-down and knock-out mice also show impaired memory formation and maintenance. Finally, knowing which genes’ transcription is essential for memory, the author takes us through several papers that examine the de novo methylation and demethylation of these genes in response to learning events and its role in alternative splicing.

Based on the available data, the author proposes that activity-induced DNA methylation serves two roles in memory: to “on the one hand, generate a primed and more permissive epigenome state that could facilitate future transcriptional responses and on the other hand, directly regulate the expression of genes that set the strength of the neuronal network connectivity, this way altering the probability of reactivation of the same network” (p. 590).

Here you go; another morsel of actual science brought to your fingertips by yours truly.


Reference: Oliveira AM (Oct 2016, Epub 15 Sep 2016). DNA methylation: a permissive mark in memory formation and maintenance. Learning & Memory, 23(10): 587-593. PMID: 27634149, DOI: 10.1101/lm.042739.116. ARTICLE

By Neuronicus, 22 September 2016

One parent’s gene better than the other’s

Not all people with the same bad genetic makeup that predisposes them to a particular disease go and develop that disease or, at any rate, not with the same severity and prognosis. The question is why? After all, they have the same genes…

Here comes a study that answers that very important question. Eloy et al. (2016) looked at the most common pediatric eye cancer (1 in 15,000), called retinoblastoma (Rb). In the hereditary form of this cancer, the disease occurs if the child carries a mutant (i.e. bad) copy of the RB1 tumour suppressor gene located on chromosome 13 (13q14). This copy, called an allele, is inherited by the child from either the mother or the father. But some children with this genetic disadvantage do not develop Rb. They should, so why not?

The authors studied 57 families with Rb history. They took blood and tumour samples from the participants and then did a bunch of genetic tests: DNA, RNA, and methylation analyses.

They found that when the mutant RB1 allele is inherited from the mother, the child has only a 9.7% chance of developing Rb, but when it is inherited from the father, the chance jumps to 67.5%.
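Percentages like these are penetrance estimates: among children who carry the mutant allele, the fraction that actually develop the disease, split by which parent transmitted it. The carrier counts in the sketch below are hypothetical, chosen only so the ratios land near the reported figures; they are not the actual counts from Eloy et al. (2016).

```python
# How parent-of-origin penetrance figures are computed: affected carriers divided
# by all carriers, separately for maternally and paternally inherited alleles.
# The counts are HYPOTHETICAL, picked only to land near the reported 9.7% and
# 67.5%; they are not the actual numbers from Eloy et al. (2016).

def penetrance(affected_carriers, total_carriers):
    return affected_carriers / total_carriers

maternal = penetrance(affected_carriers=3,  total_carriers=31)   # ~9.7%
paternal = penetrance(affected_carriers=27, total_carriers=40)   # 67.5%

print(f"maternal origin: {maternal:.1%}")
print(f"paternal origin: {paternal:.1%}")
```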

The mechanism for these different outcomes may reside in the differential methylation of the gene. Methylation is a chemical process that suppresses the expression of a gene, meaning that less protein is produced from that gene. The maternal copy was less methylated, meaning that more protein was produced, which was able to offer some protection against the cancer. This seems counter-intuitive, since you’d think less bad protein is a good thing, but there is a long and complicated explanation for that, which, in a very simplified form, posits that other events influence the function of the resultant protein.

Again, epigenetics seems to offer explanations for pesky genetic inheritance questions. Epigenetic processes, like DNA methylation, are modalities through which traits not coded in the DNA sequence itself can be inherited.


Reference: Eloy P, Dehainault C, Sefta M, Aerts I, Doz F, Cassoux N, Lumbroso le Rouic L, Stoppa-Lyonnet D, Radvanyi F, Millot GA, Gauthier-Villars M, & Houdayer C (29 Feb 2016). A Parent-of-Origin Effect Impacts the Phenotype in Low Penetrance Retinoblastoma Families Segregating the c.1981C>T/p.Arg661Trp Mutation of RB1. PLoS Genetics, 12(2):e1005888. eCollection 2016. PMID: 26925970, PMCID: PMC4771840, DOI: 10.1371/journal.pgen.1005888. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 24 July 2016