The FIRSTS: Lack of happy events in depression (2003)

My last post focused on depression and it reminded me of something that I keep telling my students and they all react with disbelief. Well, I tell them a lot of things to which they react with disbelief, to be sure, but this one, I keep thinking, should not generate such disbelief. The thing is: depressed people perceive the same number of negative events happening to them as healthy people do, but far fewer positive ones. This seems counter-intuitive to non-professionals, who believe depressed people are sadder than normal and only see the half-empty side of the glass of life.

So I dug out the original paper that reported this… finding. It’s not as old as you might think. Peeters et al. (2003) paid $30/capita to 86 people, 46 of whom were diagnosed with Major Depressive Disorder and were seeking treatment in a community mental health center or outpatient clinic (this was in the Netherlands). None were taking antidepressants or any other drugs, except low-level anxiolytics. Each participant was given a wristwatch that beeped 10 times a day at semi-random intervals of approximately 90 min. When the watch beeped, the subjects had to complete a form within a maximum of 25 min, answering questions about their mood, current events, and their appraisal of those events. The experiment took 6 days, including the weekend.
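For the curious, here is a minimal sketch of how such a semi-random beeping schedule can be generated: ten beeps per waking day, spaced roughly 90 minutes apart, each nudged by a random jitter so the subject cannot anticipate the next one. This is my own illustration, not the authors’ actual protocol code, and the parameter names and default values are assumptions.

```python
import random
from datetime import datetime, timedelta

def beep_schedule(wake_hour=7.5, n_beeps=10, mean_gap_min=90, jitter_min=30, seed=None):
    """Generate one day's worth of semi-random beep times.

    Beeps are spaced ~mean_gap_min apart, each shifted by a uniform
    jitter of +/- jitter_min minutes. Illustrative only; the defaults
    (waking at 07:30, +/-30 min jitter) are my assumptions, not Peeters
    et al.'s exact settings.
    """
    rng = random.Random(seed)
    t = datetime(2003, 1, 1, int(wake_hour), int((wake_hour % 1) * 60))
    times = []
    for _ in range(n_beeps):
        t += timedelta(minutes=mean_gap_min + rng.uniform(-jitter_min, jitter_min))
        times.append(t)
    return times

for beep in beep_schedule(seed=42):
    print(beep.strftime("%H:%M"))  # ten beep times between waking and bedtime
```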

The results? Contrary to popular belief, people with depression “did not report more frequent negative events, although they did report fewer positive events and appraised both types of events as more stressful” (p. 208). In other words, depressed people are not seeing half-empty glasses all the time; instead, they don’t see the half-full glasses. Note that they appraised both negative and positive events as more stressful than the healthy participants did. We circle back to the ‘stress is the root of all evil‘ thing.

I would have liked to see whether the decrease in positive affect and perceived happy events correlates with increased sadness. The authors say that “negative events were appraised as more unpleasant, more important, and more stressful by the depressed than by the healthy participants” (p. 206), but, curiously, mood was assessed with ratings of feeling anxious, irritated, restless, tense, guilty, irritable, easily distracted, and agitated, and not a single item on depression-iconic feelings: sad, empty, hopeless, worthless.

Nevertheless, it’s a good psychological study with in-depth statistical analyses. I also found this paragraph thought-provoking: “The literature on mood changes in daily life is dominated by studies of daily hassles. The current results indicate that daily uplifts are also important determinants of mood, in both depressed and healthy people” (p. 209).


REFERENCE: Peeters F, Nicolson NA, Berkhof J, Delespaul P, & deVries M. (May 2003). Effects of daily events on mood states in major depressive disorder. Journal of Abnormal Psychology, 112(2):203-11. PMID: 12784829, DOI: 10.1037/0021-843X.112.2.203. ARTICLE

By Neuronicus, 4 May 2019

Epigenetics of BDNF in depression

Depression is the leading cause of disability worldwide, says the World Health Organization. The. The. I knew it was bad, but… ‘the’? More than 300 million people suffer from it worldwide and in many places fewer than 10% of these receive treatment. Lack of treatment is due to many things, from lack of access to healthcare to lack of proper diagnosis, and not least to social stigma.

To complicate matters, the etiology of depression is still not fully elucidated, despite hundreds of thousands of experimental articles published out there. Perhaps millions. But, because hundreds of thousands of experimental articles, perhaps millions, have been published, we know a helluva lot more about it than, say, 50 years ago. The enormous puzzle is being painstakingly assembled as we speak by scientists all over the world. I daresay we have a lot of pieces already, if not all of them then at least 3 out of 4 corners, so we have managed to build a not-so-foggy view of the general picture on the box lid. Here is one of the hottest pieces of the puzzle, one of those central pieces that bring the rabbit into focus.

Before I get to the rabbit, let me tell you about the corners. In the fifties people thought that depression was due to having too few neurotransmitters of the monoamine class in the brain. This thought did not arise willy-nilly, but from the observation that drugs that increase monoamine levels in the brain alleviate depression symptoms and, correspondingly, drugs that deplete monoamines induce depression symptoms. A bit later on, the monoamine most culpable was found to be serotonin. All well and good: plenty of evidence (observational, correlational, causal, and mechanistic) supports the monoamine hypothesis of depression. But two more pieces of evidence kept nagging the researchers. The first one was that the monoamine-enhancing drugs take days to weeks to start working. So, if low serotonin is the problem, then a selective serotonin reuptake inhibitor (SSRI), which elevates serotonin levels within an hour or so of ingestion, should lower symptom severity just as quickly, so how come it takes weeks? The second was even more eyebrow-raising: these monoamine-enhancing drugs work in only about 50% of cases. Why not all? Or, more pragmatically put, why not in most of them, if the underlying cause is the same?

It took decades to address these problems. The problem of having to wait weeks until some beneficial effects of antidepressants show up has been explained away, at least partly, by issues in serotonin regulation in the brain (e.g., autoreceptor sensitization, serotonin transporter abnormalities). As for the second problem, the most parsimonious answer is that that archeological site called the DSM (Diagnostic and Statistical Manual of Mental Disorders), which psychologists, psychiatrists, and scientists all over the world have to use to make a diagnosis, is nothing but a garbage bag of last-century relics with little to no resemblance to this century’s understanding of the brain and its disorders. In other words, what the DSM calls major depressive disorder (MDD) may well be more than one disorder, and then no wonder the antidepressants work in only half of the people diagnosed with it. As Goldberg put it in 2011, “the DSM diagnosis of major depression is made when a patient has any 5 out of 9 symptoms, several of which are opposites [emphasis added]”! He was referring to DSM-IV, not that DSM-5 is much different. I mean, paraphrasing Goldberg, you really don’t need much of a degree beyond some basic intro class in the physiology of whatever, anything really, to suspect that someone who is sleeping a lot, gaining weight, has increased appetite, appears tired or slow to others, and feels worthless might have a different cause for these symptoms than someone who has insomnia every night, lost weight recently, has decreased appetite, is hyperagitated and irritable, and feels excessive guilt. Imagine how much more understanding we would have about depression if scientists didn’t use the DSM for research. No wonder there’s a lot of head scratching when your hypothesis, which is logically correct, paradigmatically coherent, internally consistent, and flawlessly tested, turns out to be correct only sometimes, because your ‘depressed’ subjects are about as homogeneous a group as a pack of trail mix.

I got sidetracked again, this time ranting against the DSM. No matter, I’m back on track. So. The good thing is that, while trying to figure out how antidepressants work and how psychiatrists’ minds work (the DSM is written overwhelmingly by psychiatrists), scientists uncovered other things about depression. Some of these findings became clumped under the name ‘the neurotrophic hypothesis of depression’ in the early noughties. It stems from the finding that some chemicals needed by neurons for their cellular happiness are in short supply in depression. Almost two decades later, the hypothesis became mainstream theory, as it explains some other findings in depression and is not incompatible with the monoamines’ behavior. Another piece of the puzzle found.

One of these neurotrophins is called brain-derived neurotrophic factor (BDNF), which promotes cell survival and growth. Crucially, it also regulates synaptic plasticity, without which there would be no learning and no memory. The idea is that exposure to adverse events generates stress. Stress is managed differently by different people, largely due to genetic factors. In those not so lucky in the genetic lottery (how hard they take a stressor, how they deal with it), and in those lucky enough in genetics but not so lucky in life (intense and/or many stressors hit the organism hard regardless of how well you take it or how good you are at dealing with it), stress literally kills a lot of neurons, prevents new ones from being born, and prevents the remaining ones from learning well. That includes learning how to deal with stressors, present and future, so the next time an adverse event happens, even if it is a minor stressor, the person is far more drastically affected. In other words, stress makes you more vulnerable to stressors. One of the ways stress does all this is by suppressing BDNF synthesis. Without BDNF, the individual exposed to stress that is exacerbated either by genes or by environment ends up unable to self-regulate mood successfully. The more that mood is not regulated, the worse the brain becomes at self-regulating, because the elements required for self-regulation, which include learning from experience, are busted. And so the vicious circle continues.

Maintaining this vicious circle is the ability of stressors to change the patterns of gene expression and, not surprisingly, one of the most common findings is that the BDNF gene is hypermethylated in depression. Hypermethylation is an epigenetic change (a change around the DNA, not in the DNA itself) that results in the gene in question being less expressed. This means lower amounts of BDNF are produced in depression.

After this long introduction, today’s paper is a systematic review of one of the epigenetic changes in depression: methylation. The 67 articles that investigated the role of methylation in depression were too heterogeneous to make a meta-analysis out of, so Li et al. (2019) made a systematic review instead.

The main finding was that, overall, depression is associated with DNA methylation modifications. Two genes stood out as being hypermethylated: our friend BDNF and SLC6A4, a gene involved in the serotonin cycle. Now the question is who causes whom: is stress methylating your DNA, or does your methylated DNA make you more vulnerable to stress? There’s evidence both ways. Vicious circle, as I said. I doubt that it matters to the sufferer who started it, but it does matter to the researchers.


A little disclaimer: the picture I painted above offers a non-exclusive view on the causes of depression(s). There’s more. There’s always more. Gut microbes are in the picture too. And circulatory problems. And more. But the picture is more than half done, I daresay. Continuing my puzzle metaphor, we got the rabbit by the ears. Now what to do with it…

Well, one thing we can do with it, even with only half-rabbit done, is shout loud and clear that depression is a physical disease. And those who claim it can be cured by a positive attitude and blame the sufferers for not ‘trying hard enough’ or not ‘smiling more’ or not ‘being more positive’ can bloody well shut up and crawl back in the medieval cave they came from.

REFERENCES:

1. Li M, D’Arcy C, Li X, Zhang T, Joober R, & Meng X (4 Feb 2019). What do DNA methylation studies tell us about depression? A systematic review. Translational Psychiatry, 9(1):68. PMID: 30718449, PMCID: PMC6362194, DOI: 10.1038/s41398-019-0412-y. ARTICLE | FREE FULLTEXT PDF

2. Goldberg D (Oct 2011). The heterogeneity of “major depression”. World Psychiatry, 10(3):226-8. PMID: 21991283, PMCID: PMC3188778. ARTICLE | FREE FULLTEXT PDF

3. World Health Organization Depression Fact Sheet

By Neuronicus, 23 April 2019

High fructose corn syrup IS bad for you

Because I cannot leave controversial things well enough alone – at least not when I know there shouldn’t be any controversy – my ears caught up with my tongue yesterday when the latter sputtered: “There is strong evidence for eliminating sugar from commonly used food products like bread, cereal, cans, drinks, and so on, particularly against that awful high fructose corn syrup”. “Yeah? You “researched” that up, did you? Google is your bosom friend, ain’t it?” was the swift reply. Well, if you get rid of the ultra-emphatic air-quotes flanking the word ‘researched’ and replace ‘Google’ with ‘PubMed’, then, yes, I did research it and, yes, PubMed is my bosom friend.

Initially, I wanted to just give you all a list of peer-reviewed papers that found causal and/or correlational links between high fructose corn syrup (HFCS) and weight gain, obesity, type 2 diabetes, cardiovascular disease, fatty liver disease, metabolic and endocrine anomalies, and so on. But there are way too many of them; there are over 500 papers on the subject in PubMed alone. And most of them did find that HFCS does nasty stuff to you; look for yourselves here. Then I thought to feature a paper showing that HFCS is metabolized differently than the fructose from fruits, because I keep hearing that lie perpetrated by the sugar and corn industries that “sugar is sugar” (no, it’s not! Demonstrably so!), but I doubt my yesterday’s interlocutor would care about the liver’s enzymatic activity and other chemical processes with lots of acronyms. So, finally, I decided to feature a straightforward, no-nonsense paper, published recently, done at a top-tier university, with human subjects, so I won’t hear any squabbles.

Price et al. (2018) studied 49 healthy subjects aged 18–40 yr, of normal and stable body weight, and free from confounding medications or drugs, whose physical activity and energy-balanced meals were closely monitored. During the study, the subjects’ food and drink intake, as well as their timing, were rigorously controlled. The researchers varied only the beverages between groups: one group received a drink sweetened with HFCS-55 (55% fructose, 45% glucose, the formulation used in commercially available drinks) with every controlled meal, whereas the other group received a drink identical in size (the HFCS drink was adjusted to each subject’s energy requirements so that it provided 25% of them), but sweetened with aspartame. The study lasted two weeks. No other beverage was allowed, including fruit juice. Urine samples were collected daily and blood samples 4 times per day.
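To make the beverage dosing concrete, here is a back-of-the-envelope sketch of how a drink could be sized so that it delivers 25% of a subject’s daily energy requirement as HFCS-55, and how that splits into fructose and glucose. The function and numbers are mine, for illustration only, not the study’s actual recipe.

```python
def hfcs_beverage(daily_energy_kcal, energy_fraction=0.25,
                  kcal_per_g_sugar=4.0, fructose_share=0.55):
    """Size an HFCS-55-sweetened beverage providing a fixed fraction of
    daily energy. Illustrative assumptions: 4 kcal/g for the sugar and a
    55/45 fructose/glucose split of the sweetener's sugar content."""
    sugar_kcal = daily_energy_kcal * energy_fraction
    sugar_g = sugar_kcal / kcal_per_g_sugar
    return {
        "sugar_kcal": sugar_kcal,
        "sugar_g": round(sugar_g, 1),
        "fructose_g": round(sugar_g * fructose_share, 1),
        "glucose_g": round(sugar_g * (1 - fructose_share), 1),
    }

# Example: a hypothetical subject with a 2000 kcal/day energy requirement
# gets 500 kcal from sugar, i.e. 125 g: ~68.8 g fructose and ~56.2 g glucose.
print(hfcs_beverage(2000))
```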

There was a body weight increase of 810 grams (1.8 lb) in subjects consuming HFCS-sweetened beverages for 2 weeks when compared with aspartame controls. The researchers also found differences in the levels of a whole host of acronyms (ppTG, ApoCIII, ApoE, OEA, DHEA, DHG, if you must know) involved in a variety of nasty things, like obesity, fatty liver disease, atherosclerosis, cardiovascular disease, stroke, diabetes, even Alzheimer’s.

This study is the third part of a larger NIH-funded study which investigates the metabolic effects of consuming sugar-sweetened beverages in about 200 participants over 5 years, registered at clinicaltrials.gov as NCT01103921. The first part (Stanhope et al., 2009) reported that “consuming fructose-sweetened, not glucose-sweetened, beverages increases visceral adiposity and lipids and decreases insulin sensitivity in overweight/obese humans” (title), and the second part (Stanhope et al., 2015) found that “consuming beverages containing 10%, 17.5%, or 25% of energy requirements from HFCS produced dose-dependent increases in circulating lipid/lipoprotein risk factors for cardiovascular disease and uric acid within 2 weeks” (Abstract). They also found a dose-dependent increase in body weight, but in those subjects the results were not statistically significant (p = 0.09) after correcting for multiple comparisons. I’ll bet that if/when the authors publish all the data in one paper at the end of the clinical trial, they will have more statistical power and the trend in weight gain will be more obvious, as in the present paper. Besides, it looks like there may be more than three parts to this study anyway.

The adverse effects of a high-sugar diet, particularly one high in HFCS, are known to so many researchers in the field that they have actually been compiled under a name: the “American Lifestyle-Induced Obesity Syndrome model, which included consumption of a high-fructose corn syrup in amounts relevant to that consumed by some Americans” (Basaranoglu et al., 2013). It doesn’t refer only to increases in body weight, but also to type 2 diabetes, cardiovascular disease, hypertriglyceridemia, fatty liver disease, atherosclerosis, gout, etc.

The truly sad part is that avoiding added sugars in the USA is impossible unless you do all – and I mean all – your cooking at home, including canning, jamming, bread-making, condiment-making and so on, not just “Oh, I’ll cook some chicken or ham tonight”, because in that case you end up using canned tomato sauce (which has added sugar), bread crumbs (which have added sugar), ham (which has added sugar), salad dressing (which has added sugar), and so on. Go on, check your kitchen and see how many ingredients have sugar in them, including any meat products short of raw meat. If you never read the backs of the bottles, cans, or packages, oh my, are you in for a big surprise if you live in the USA…

There are a lot more studies out there on the subject, as I said, of various levels of reading difficulty. This paper is not easy to read for someone outside the field, that’s for sure. But the main gist of it is in the abstract, for all to see.


P.S. 1. Please don’t get me wrong: I am not against sugar in desserts, let it be clear. Nobody makes a meaner sweetalicious chocolate cake or carbolicious blueberry muffin than me (both terms coined by me!), as I have been reassured many times. But I am against sugar in everything. You know I haven’t found, in any store, including high-end and really high-end stores, a single box of cereal of any kind without sugar? Just for fun, I’d like to be a daredevil and try it once. But there ain’t. Not in the USA, anyway. I did find them in the EU though. But I cannot keep flying unsweetened corn flakes from Europe over the Atlantic in the already crammed-at-a-premium luggage space, corn flakes which are probably made locally, incidentally and ironically, with good old American corn.

P.S. 2. I am not so naive, blind, or zealous as to overlook the studies that did not find any deleterious effects of HFCS consumption. Actually, I was on the fence about HFCS until about 10 years ago, when the majority of papers (now the overwhelming majority) was showing that HFCS consumption not only increases weight gain but can also lead to more serious problems like the ones mentioned above. Nor do I overlook the few papers that say all added sugar is bad but that HFCS doesn’t stand out from the other sugars when it comes to disease or weight gain. But, like with most scientific things, the majority has its way and I bow to it democratically until the next paradigm shift. Besides, the exposés of Kearns et al. (2016a, b, 2017) showing in detail and with serious documentation how the sugar industry paid prominent researchers over the past 50 years to hide the deleterious effects of added sugar (including cancer!) further cemented my opinion about added sugar in foods, particularly HFCS.

References:

  1. Price CA, Argueta DA, Medici V, Bremer AA, Lee V, Nunez MV, Chen GX, Keim NL, Havel PJ, Stanhope KL, & DiPatrizio NV (1 Aug 2018, Epub 10 Apr 2018). Plasma fatty acid ethanolamides are associated with postprandial triglycerides, ApoCIII, and ApoE in humans consuming a high-fructose corn syrup-sweetened beverage. American Journal of Physiology. Endocrinology and Metabolism, 315(2): E141-E149. PMID: 29634315, PMCID: PMC6335011 [Available on 2019-08-01], DOI: 10.1152/ajpendo.00406.2017. ARTICLE | FREE FULLTEXT PDF
  2. Stanhope KL, Medici V, Bremer AA, Lee V, Lam HD, Nunez MV, Chen GX, Keim NL, & Havel PJ (Jun 2015, Epub 22 Apr 2015). A dose-response study of consuming high-fructose corn syrup-sweetened beverages on lipid/lipoprotein risk factors for cardiovascular disease in young adults. The American Journal of Clinical Nutrition, 101(6):1144-54. PMID: 25904601, PMCID: PMC4441807, DOI: 10.3945/ajcn.114.100461. ARTICLE | FREE FULLTEXT PDF
  3. Stanhope KL, Schwarz JM, Keim NL, Griffen SC, Bremer AA, Graham JL, Hatcher B, Cox CL, Dyachenko A, Zhang W, McGahan JP, Seibert A, Krauss RM, Chiu S, Schaefer EJ, Ai M, Otokozawa S, Nakajima K, Nakano T, Beysen C, Hellerstein MK, Berglund L, & Havel PJ (May 2009, Epub 20 Apr 2009). Consuming fructose-sweetened, not glucose-sweetened, beverages increases visceral adiposity and lipids and decreases insulin sensitivity in overweight/obese humans. The Journal of Clinical Investigation, 119(5):1322-34. PMID: 19381015, PMCID: PMC2673878, DOI: 10.1172/JCI37385. ARTICLE | FREE FULLTEXT PDF

(Very) Selected Bibliography:

Bocarsly ME, Powell ES, Avena NM, Hoebel BG. (Nov 2010, Epub 26 Feb 2010). High-fructose corn syrup causes characteristics of obesity in rats: increased body weight, body fat and triglyceride levels. Pharmacology, Biochemistry, and Behavior, 97(1):101-6. PMID: 20219526, PMCID: PMC3522469, DOI: 10.1016/j.pbb.2010.02.012. ARTICLE | FREE FULLTEXT PDF

Kearns CE, Apollonio D, Glantz SA (21 Nov 2017). Sugar industry sponsorship of germ-free rodent studies linking sucrose to hyperlipidemia and cancer: An historical analysis of internal documents. PLoS Biology, 15(11):e2003460. PMID: 29161267, PMCID: PMC5697802, DOI: 10.1371/journal.pbio.2003460. ARTICLE | FREE FULTEXT PDF

Kearns CE, Schmidt LA, Glantz SA (1 Nov 2016). Sugar Industry and Coronary Heart Disease Research: A Historical Analysis of Internal Industry Documents. JAMA Internal Medicine, 176(11):1680-1685. PMID: 27617709, PMCID: PMC5099084, DOI: 10.1001/jamainternmed.2016.5394. ARTICLE | FREE FULTEXT PDF

Mandrioli D, Kearns CE, Bero LA (8 Sep 2016). Relationship between Research Outcomes and Risk of Bias, Study Sponsorship, and Author Financial Conflicts of Interest in Reviews of the Effects of Artificially Sweetened Beverages on Weight Outcomes: A Systematic Review of Reviews. PLoS One, 11(9):e0162198. PMID: 27606602, PMCID: PMC5015869, DOI: 10.1371/journal.pone.0162198. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 22 March 2019

Yet another experiment showing that conscious “decisions” are made unconsciously, and in advance

It’s a well-done summary of a newer paper along the lines of Libet’s 1983 study that I covered in “The FIRSTS: brain active before conscious intent (1983)”.

Why Evolution Is True

In the last few years, neuroscience experiments have shown that some “conscious decisions” are actually made in the brain before the actor is conscious of them:  brain-scanning techniques can predict not only when a binary decision will be made, but what it will be (with accuracy between 55-70%)—several seconds before the actor reports being conscious of having made a decision.  The implications of this research are obvious: by the time we’re conscious of having made a “choice”, that choice has already been made for us—by our genes and our environments—and the consciousness is merely reporting something determined beforehand in the brain.  And that, in turn, suggests (as I’ve mentioned many times here) that all of our “choices” are really determined in advance, though some choices (e.g., whether to duck when a baseball is thrown at your head) can’t be made very far in advance!

Most readers here accept that our…


Love and the immune system

Valentine’s Day is a day when we celebrate romantic love (well, some of us tend to), and it has been so since long before the famous greeting card company Hallmark was established. Fittingly, I found the perfect paper to cover for this occasion.

In the past couple of decades it became clear to scientists that there is no such thing as a mental experience that doesn’t have corresponding physical changes. Why should falling in love be any different? Several groups have already found that levels of some chemicals (oxytocin, cortisol, testosterone, nerve growth factor, etc.) change when we fall in love. There might be other changes as well. So Murray et al. (2019) decided to dive right into it and check how the immune system responds to love, if at all.

For two years, the researchers looked at certain markers in the immune system of 47 women aged 20 or so. They drew blood when the women reported to be “not in love (but in a new romantic relationship), newly in love, and out-of-love” (p. 6). Then they sent their samples to their university’s core facility to toil over microarrays. Microarray techniques can be quickly summarized thusly: get a bunch of molecules of interest, in this case bits of single-stranded DNA, and stick them on a silicon plate or a glass slide in a specific order. Then you run your sample over it and what sticks, sticks, what doesn’t, doesn’t. Remember that DNA loves to be double-stranded, so any single strand will stick to its counterpart, called complementary DNA. You put some fluorescent dye on your genes of interest and voilà, here you have an array of genes expressed in a certain type of tissue in a certain condition.
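To give a flavor of what happens after the scanning, here is a toy sketch of the basic differential-expression logic: compare each gene’s signal between ‘in love’ and ‘not in love’ samples, compute a fold change and a p-value, and keep the genes that pass both thresholds. This is my own illustration, not the authors’ pipeline; the gene names are placeholders and all numbers are made up.

```python
import numpy as np
from scipy import stats

# Toy expression matrix: rows = genes, columns = samples (log2 intensities).
rng = np.random.default_rng(0)
genes = ["IFI44", "OAS1", "MX1", "ACTB", "GAPDH"]   # placeholder names
in_love = rng.normal(8.0, 0.5, size=(len(genes), 10))
not_in_love = rng.normal(8.0, 0.5, size=(len(genes), 10))
in_love[0:3] += 1.0  # pretend the first three genes are upregulated in love

for i, gene in enumerate(genes):
    log2_fc = in_love[i].mean() - not_in_love[i].mean()   # log2 fold change
    _, p = stats.ttest_ind(in_love[i], not_in_love[i])    # two-sample t-test
    flag = "UP" if (log2_fc > 0.5 and p < 0.05) else ""
    print(f"{gene:6s} log2FC={log2_fc:+.2f} p={p:.3f} {flag}")
```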

Talking about microarrays took me down memory lane a bit. When fMRI started to be a “must” in neuroscience, there followed a period when the science “market” was flooded by “salad” papers. We called them that because there were so many parts of the brain reported as “lit up” in a certain task that it made a veritable “salad of brain parts”, out of which it was very difficult to figure out what’s going on. I swear that now that the fMRI field has matured a bit and learned how to correct for multiple comparisons, as well as to use some other fancy stats, the place of honor in the vegetable-mix analogy has been relinquished to the ‘-omics’ studies. In other words, a big portion of the whole-genome or transcriptome studies became “salad” studies: too many things show up as statistically significant to make head or tail of it.
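Since ‘correcting for multiple comparisons’ keeps coming up, both in fMRI and in the ‘-omics’ world, here is a minimal sketch of one common fix, the Benjamini-Hochberg false discovery rate procedure. This is a generic illustration of the idea, not necessarily the correction any particular paper used.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean mask of which p-values survive FDR control.

    Sort the p-values, find the largest rank k with p_(k) <= (k/m)*alpha,
    and reject the null for every test ranked at or below k.
    """
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    thresholds = (np.arange(1, m + 1) / m) * alpha
    passing = np.nonzero(p[order] <= thresholds)[0]
    reject = np.zeros(m, dtype=bool)
    if passing.size:
        reject[order[: passing[-1] + 1]] = True
    return reject

# 10,000 tests where the null is true everywhere: uncorrected, roughly 500
# would look "significant" at p < 0.05; after FDR correction, essentially none.
p_vals = np.random.default_rng(1).uniform(size=10_000)
print("uncorrected hits:", int((p_vals < 0.05).sum()))
print("FDR-corrected hits:", int(benjamini_hochberg(p_vals).sum()))
```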

However, Murray et al. (2019) made a valiant – and successful – effort to figure out what those 61 up- or down-regulated gene transcripts in the immune system cells of 17 women falling in love actually mean. There’s quite a bit I am leaving out but, in a nutshell, love upregulated (that is, “increased”) the expression of genes involved in innate immunity to viruses, presumably to facilitate sexual reproduction, the authors say.

The paper is well written and the authors graciously remind us that there are some limitations to the study. Nevertheless, this is another fine addition to the unbelievably fast-growing body of knowledge regarding the human body and behavior.

Shame that this research was done only with women. I would have loved to see how men’s immune systems respond to falling in love.


REFERENCE: Murray DR, Haselton MG, Fales M, & Cole SW. (Feb 2019, Epub 2 Oct 2018). Falling in love is associated with immune system gene regulation. Psychoneuroendocrinology, 100:120-126. PMID: 30299259, PMCID: PMC6333523 [Available on 2020-02-01], DOI: 10.1016/j.psyneuen.2018.09.043. ARTICLE

FYI: PMC6333523 [Available on 2020-02-01] means that the fulltext will be available for free to the public one year after publication on the US governmental website PubMed (https://www.ncbi.nlm.nih.gov/pubmed/), no matter how much Elsevier charges for it. Always, always check the PMC library (https://www.ncbi.nlm.nih.gov/pmc/) on PubMed to see whether a paper you saw in Nature or Elsevier is free there, because it is, more often than you’d think.

PubMed = the U.S. National Institutes of Health’s National Library of Medicine (NIH/NLM), comprising “more than 29 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full-text content from PubMed Central and publisher web sites”.

PMC = “PubMed Central® (PMC) is a free fulltext archive of biomedical and life sciences journal literature at the U.S. National Institutes of Health’s National Library of Medicine (NIH/NLM)”, with a whopping fulltext library of over 5 million papers and growing rapidly. Love PubMed!

By Neuronicus, 14 February 2019

Milk-producing spider

In biology, organizing living things in categories is called taxonomy. Such categories are established based on shared characteristics of the members. These characteristics were usually visual attributes. For example, a red-footed booby (it’s a bird, silly!) is obviously different than a blue-footed booby, so we put them in different categories, which Aristotle called in Greek something like species.

Biological taxonomy is very useful, not only to provide countless hours of fighting (both verbal and physical!) for biologists, but also to inform us of all sorts of unexpected relationships between living things. These relationships, in turn, can give us insights into our own evolution, but also into the evolution of things inimical to us, like diseases, and, perhaps, their cure. Also extremely important, it allows scientists from all over the world to have a common language, thus maximizing information sharing and minimizing misunderstandings.


All well and good. And it was all well and good since Carl Linnaeus introduced his famous taxonomy system in the 18th century, the one we still use today with species, genus, family, order, and kingdom. Then we figured out how to map the DNA of the things around us, and this information threw a lot of Linnaean classifications out the window. Because it turns out that some things that look similar are not genetically similar; likewise, some living things that we thought were very different from one another turned out, genetically speaking, to be not so different.

You will say, then, alright, out with visual taxonomy, in with phylogenetic taxonomy. This would be absolutely peachy for a minority of the organisms on the planet, like animals and plants, but a nightmare for the more promiscuous organisms which have no problem swapping bits of DNA back and forth, like some bacteria, so you don’t know anymore who’s who. And don’t even get me started on the viruses, which we are still trying to figure out whether or not they are alive in the first place.

When I grew up there were 5 regna or kingdoms in our tree of life – Monera, Protista, Fungi, Plantae, Animalia – each with very distinctive characteristics. Likewise, the class Mammalia from the Animal Kingdom was characterized by the females feeding their offspring with milk from mammary glands. Period. No confusion. But now I have no idea (nor do many other biologists, rest assured) how many domains or kingdoms or empires we have, nor even what the definition of a species is anymore.

As if that’s not enough, even those Linnaean characteristics that we thought were set in stone are amenable to change. Which is good, it shows the progress of science. But I didn’t think that something like the definition of a mammal would change. Mammals are organisms whose females feed their offspring with milk from mammary glands, as I vouchsafed above. Pretty straightforward. And not spiders. Let me be clear on this: spiders did not feature in my – or anyone’s! – definition of mammals.

Until Chen et al. (2018) published their weird article a couple of weeks ago. The abstract is free for all to see and states that the females of a jumping spider species feed their young with milk secreted by their body until the age of subadulthood. Mothers continue to offer parental care past the maturity threshold. The milk is necessary for the spiderlings because without it they die. That’s all.

I read the whole paper, since it was only 4 pages long, and here are some more details about their discovery. The species of spider they looked at is Toxeus magnus, a jumping spider that looks like an ant. The mother produces milk from her epigastric furrow and deposits it on the nest floor and walls, from where the spiderlings ingest it (0-7 days). After the first week of this, the spiderlings suck the milk directly from the mother’s body and continue to do so for the next two weeks (7-20 days), when they start leaving the nest and foraging for themselves. But they return, and for the next period (20-40 days) they get their food both from the mother’s milk and from independent foraging. Spiderlings get weaned by day 40, but they still come home to sleep at night. At day 52 they are officially considered adults. Interestingly, “although the mother apparently treated all juveniles the same, only daughters were allowed to return to the breeding nest after sexual maturity. Adult sons were attacked if they tried to return. This may reduce inbreeding depression, which is considered to be a major selective agent for the evolution of mating systems” (p. 1053).

During all this time, including during the offspring’s emergence into adulthood, the mother also supplied house maintenance, carrying out her children’s exuviae (shed exoskeletons) and repairing the nest.

The authors then did a series of experiments to see what role the nursing and other maternal care at different stages play in the fitness and survival of the offspring. Blocking the mother’s milk production with correction fluid immediately after hatching killed all the spiderlings, showing that they are completely dependent on the mother’s milk. Removing the mother after the spiderlings start foraging (day 20) drastically reduces survivorship and body size, showing that the mother’s care is essential for her offspring’s success. Moreover, the mother taking care of the nest and keeping it clean reduced the occurrence of parasite infections in the juveniles.

The authors analyzed the milk and it’s highly nutritious: “spider milk total sugar content was 2.0 mg/ml, total fat 5.3 mg/ml, and total protein 123.9 mg/ml, with the protein content around four times that of cow’s milk” (p. 1053).

Speechless I am. Good for the spider, I guess. Spider milk would surely have exorbitant costs (apparently, a slight finger pressure on the milk-secreting region makes the mother spider secrete the milk, not at all unlike a human mother). Spiderlings die without the mother’s milk. Responsible farming? Spider-milker qualifications? I’m gonna lie down, I got a headache.


REFERENCE: Chen Z, Corlett RT, Jiao X, Liu SJ, Charles-Dominique T, Zhang S, Li H, Lai R, Long C, & Quan RC (30 Nov. 2018). Prolonged milk provisioning in a jumping spider. Science, 362(6418):1052-1055. PMID: 30498127, DOI: 10.1126/science.aat3692. ARTICLE | Supplemental info (check out the videos)

By Neuronicus, 13 December 2018

Pic of the day: Dopamine from a non-dopamine place


Reference: Beas BS, Wright BJ, Skirzewski M, Leng Y, Hyun JH, Koita O, Ringelberg N, Kwon HB, Buonanno A, & Penzo MA (Jul 2018, Epub 18 Jun 2018). The locus coeruleus drives disinhibition in the midline thalamus via a dopaminergic mechanism. Nature Neuroscience, 21(7):963-973. PMID: 29915192, PMCID: PMC6035776 [Available on 2018-12-18], DOI: 10.1038/s41593-018-0167-4. ARTICLE

Pooping Legos

Yeah, alright… uhm… how exactly should I approach this paper? I’d better just dive into it (oh boy! I shouldn’t have said that).

The authors of this paper were adult health-care professionals in the pediatric field. These three males and three females were also the participants in the study. They kept a poop-diary noting the frequency and volume of bowel movements (Did they poop directly on a scale or did they have to scoop it out in a bag?). The researchers/subjects developed a Stool Hardness and Transit (SHAT) metric to… um… “standardize bowel habit between participants” (p. 1). In other words, to put the participants’ bowel movements on the same level (please, no need to visualize, I am still stuck at the poop-on-a-scale phase), the authors looked – quite literally – at the consistency of the poop and gave it a rating. I wonder if they checked for inter-rater reliability… meaning did they check each other’s poops?…

Then the researchers/subjects ingested a Lego figurine head, on purpose, somewhere between 7 and 9 a.m. Then they timed how long it took to exit. The average FART score (Found and Retrieved Time) was 1.71 days. “There was some evidence that females may be more accomplished at searching through their stools than males, but this could not be statistically validated” due to the small sample size, if not the poops’. It took 1 to 3 stools for the object to be found, although poor subject B had to search through his 13 stools over a period of 2 weeks to no avail. I suppose that’s what you get if you miss the target, even if you have a PhD.

The pre-SHAT and SHAT score of the participants did not differ, suggesting that the Lego head did not alter the poop consistency (I got nothin’ here; the authors’ acronyms are sufficient scatological allusion). From a statistical standpoint, the one who couldn’t find his head in his poop (!) should not have been included in the pre-SHAT score group. Serves him right.

I wonder how they searched through the poop… A knife? A sieve? A squashing spatula? Gloved hands? Were they floaters or did the poop sink to the base of the toilet? Then how was it retrieved? Did the researchers have to poop in a bucket so no loss of data would occur? Upon direct experimentation 1 minute ago, I vouchsafe that a Lego head is completely buoyant. Would that affect the floatability of the stool in question? That’s what I’d like to know. Although, to be fair, no, that’s not what I want to know; what I desire the most is a far larger sample size so some serious stats can be conducted. With different Lego parts. So they can poop bricks. Or, as suggested by the authors, “one study arm including swallowing a Lego figurine holding a coin” (p. 3), so one can draw parallels between Lego ingestion and coin ingestion research, the latter being, apparently, far more prevalent. So many questions that still need to be answered! More research is needed, if only grants were as… regular as the raw data.

The paper, albeit short and to the point, fills a gap in our scatological knowledge database (Oh dear Lord, stop me!). The aim of the paper was to show that objects ingested by children tend to pass without a problem. Also of value, the paper asks pediatricians to counsel parents not to search for the object in the faeces to prove object retrieval because “if an experienced clinician with a PhD is unable to adequately find objects in their own stool, it seems clear that we should not be expecting parents to do so” (p. 3). Seems fair.


REFERENCE: Tagg, A., Roland, D., Leo, G. S., Knight, K., Goldstein, H., Davis, T. and Don’t Forget The Bubbles (22 November 2018). Everything is awesome: Don’t forget the Lego. Journal of Paediatrics and Child Health, doi: 10.1111/jpc.14309. ARTICLE

By Neuronicus, 27 November 2018

Apathy

Le Heron et al. (2018) define apathy as a marked reduction in goal-directed behavior. But in order to move, one must be motivated to do so. Therefore, a generalized form of impaired motivation is also a hallmark of apathy.

The authors compiled for us a nice mini-review combing through the literature on motivation in order to identify, if possible, the neurobiological mechanism(s) of apathy. First, they go very succinctly through the neuroscience of motivated behavior. Very succinctly, because there are literally hundreds of thousands of worthwhile pages out there on this subject. Although several other models have been proposed out there, the authors’ new model of motivation includes the usual suspects (dopamine, striatum, prefrontal cortex, anterior cingulate cortex) and you can see it in Fig. 1.

Fig. 1 from Le Heron et al. (2018). The red underlining is mine because I really liked how well and succinctly the authors put a universal truth about the brain: “A single brain region likely contributes to more than one process, but with specialisation”. © Author(s) (or their employer(s)) 2018.

After this intro, the authors go on to showcase findings from the effort-based decision-making field, which suggest that the dopamine-producing neurons from the ventral tegmental area (VTA) are fundamental in choosing a high-effort, high-reward action over a low-effort, low-reward one. Contrary to what Wikipedia tells you, a reduction, not an increase, in mesolimbic dopamine is associated with apathy, i.e. preferring the low-effort, low-reward activity.

Next, the authors focus on why the apathetic are… apathetic. Basically, they asked the question: “For the apathetic, is the reward too little or is the effort too high?” By looking at some cleverly designed experiments meant to parse out sensitivity to reward versus sensitivity to effort costs, the authors conclude that for the apathetic the problem lies with the reward, meaning they don’t find the rewards good enough for them to move. Therefore, the answer is: the reward is too little.

In a nutshell, apathetic people think “It’s not worth it, so I’m not willing to put in the effort to get it”. But if somehow they are made to judge the reward as good enough, to think “it’s worth it”, they are willing to work their darndest to get it, like everybody else.
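A toy model of that idea, in case it helps (my own illustration, not the authors’ computational model): each option’s subjective value is its reward scaled by a person-specific reward sensitivity, minus an effort cost; apathy then falls out of a lowered reward sensitivity rather than an inflated effort cost.

```python
from dataclasses import dataclass

@dataclass
class Option:
    reward: float   # e.g. points or money on offer
    effort: float   # e.g. required grip force or number of lever presses

def subjective_value(option, reward_sensitivity=1.0, effort_cost=1.0):
    """Toy effort-discounting rule: value = sensitivity*reward - cost*effort."""
    return reward_sensitivity * option.reward - effort_cost * option.effort

def choose(options, reward_sensitivity=1.0, effort_cost=1.0):
    return max(options, key=lambda o: subjective_value(o, reward_sensitivity, effort_cost))

low = Option(reward=2, effort=1)     # low effort, low reward
high = Option(reward=10, effort=6)   # high effort, high reward

print("healthy:  ", choose([low, high]))                           # picks high effort / high reward
print("apathetic:", choose([low, high], reward_sensitivity=0.3))   # reward feels too small: picks low effort
```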

The application of this is that in order to get people off the couch and doing stuff, you have to present them with a reward that they consider worth moving for, in other words, to motivate them. To which any practicing psychologist or counselor would say: “Duh! We’ve been saying that for ages. Glad that neuroscience finally caught up”. Because it’s easy to say people need to get motivated, but much, much harder to figure out how.

This was a difficult write for me and even I recognize the quality of this blogpost as crappy. That’s because, more or less, this paper is within my narrow specialization field. There are points where I disagree with the authors (some definitions of terms), there are points where things are way more nuanced than presented (dopamine findings in reward), and finally there are personal preferences (the interpretation of data from Parkinson’s disease studies). Plus, Salamone (the second-to-last author) is a big name in dopamine research, meaning I’m familiar with his past 20 years or so worth of publications, so I can infer certain salient implications (one dopamine hypothesis is about saliency, get it?).

It’s an interesting paper, but it’s definitely written for the specialist. Hurray (or boo, whatever would be your preference) for another model of dopamine function(s).

REFERENCE: Le Heron C, Holroyd CB, Salamone J, & Husain M (26 Oct 2018, Epub ahead of print). Brain mechanisms underlying apathy. Journal of Neurology, Neurosurgery & Psychiatry. pii: jnnp-2018-318265. doi: 10.1136/jnnp-2018-318265. PMID: 30366958 ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 24 November 2018

No licorice for you

I never liked licorice. And that turns out to be a good thing. Given that Halloween just happened yesterday and licorice candy is still sold in the USA, I remembered the FDA’s warning from a year ago against the consumption of licorice.

So I dug out the data supporting this recommendation. It’s a review paper published 6 years ago by Omar et al. (2012), meant to raise awareness of the risks of licorice consumption and to urge the FDA to take regulatory steps.

The active ingredient in licorice is glycyrrhizic acid. This is hydrolyzed to glycyrrhetic acid by intestinal bacteria possessing a specialized ß-glucuronidase. Glycyrrhetic acid, in turn, inhibits 11-ß-hydroxysteroid dehydrogenase (11-ß-HSD), which results in increased cortisol activity; cortisol then binds to the mineralocorticoid receptors in the kidneys, leading to low potassium levels (called hypokalemia). Additionally, licorice components can also bind directly to the mineralocorticoid receptors.

Eating 2 ounces of black licorice a day for at least two weeks (which is roughly equivalent to 2 mg/kg/day of pure glycyrrhizic acid; see the quick arithmetic sketch after the list) is enough to produce disturbances in the following systems:

  • cardiovascular (hypertension, arrhythmias, heart failure, edemas)
  • neurological (stroke, myoclonia, ocular deficits, Carpal tunnel, muscle weakness)
  • renal (low potassium, myoglobinuria, alkalosis)
  • and others
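Just to unpack that dose for a ballpark feel, here is the arithmetic, assuming a 70-kg adult (my assumption; the glycyrrhizin content of real candy varies a lot between brands):

```python
OZ_TO_G = 28.35  # grams per ounce

def implied_glycyrrhizin_content(candy_oz_per_day=2.0, dose_mg_per_kg_day=2.0,
                                 body_weight_kg=70.0):
    """Back out how much glycyrrhizic acid per gram of candy the '2 oz/day
    ~ 2 mg/kg/day' equivalence implies, assuming a 70-kg adult."""
    candy_g = candy_oz_per_day * OZ_TO_G                  # ~56.7 g of licorice a day
    total_dose_mg = dose_mg_per_kg_day * body_weight_kg   # ~140 mg/day
    return total_dose_mg / candy_g                        # ~2.5 mg per gram of candy

print(f"{implied_glycyrrhizin_content():.1f} mg glycyrrhizic acid per gram of candy")
```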


Although everybody is affected by licorice consumption, the most vulnerable populations are those over 40 years old, those who don’t poop every day, and those who are hypertensive, anorexic, or of the female persuasion.

Unfortunately, even if one doesn’t enjoy licorice candy, they can still consume it, as it is used as a sweetener or flavoring agent in many foods, like sodas and snacks. It is also used in naturopathic crap, herbal remedies, and other dangerous scams of that ilk. So beware of licorice and read the label, assuming the makers label it.

Licorice products (Images: PD, Collage: Neuronicus)

REFERENCE: Omar HR, Komarova I, El-Ghonemi M, Fathy A, Rashad R, Abdelmalak HD, Yerramadha MR, Ali Y, Helal E, & Camporesi EM. (Aug 2012). Licorice abuse: time to send a warning message. Therapeutic Advances in Endocrinology and Metabolism, 3(4):125-38. PMID: 23185686, PMCID: PMC3498851, DOI: 10.1177/2042018812454322. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 1 November 2018