How many people do doctors kill?


The authors define medical error as “death due to

1) an error in judgment, skill, or coordination of care,

2) a diagnostic error,

3) a system defect resulting in death or a failure to rescue a patient from death, or

4) a preventable adverse event.” (Letter to CDC by Makary et al., 2016)

I reproduced the authors’ definition because there is a hot debate in the medical field as to what constitutes a medical error and what is preventable vs. unpreventable. It might seem clear-cut to you and me, but after I perused a few papers from both sides I must admit that things seem (a bit) more complicated than I thought. Personally, I’m all on board with the above definition.

Also, there is an ongoing fight about the actual number of deaths attributable to medical errors. I don’t have the time to read or get into that fight. So I’ll ask only one question: does it matter if the number is in the hundreds of thousands or merely tens of thousands? No, it doesn’t; medical errors need to be tackled head-on, no matter how many people they kill. There will always be victims because doctors are humans and they make mistakes, like everybody else. But that doesn’t mean that they and their hospitals shouldn’t be held accountable. We, as patients, children and parents of patients, want that number to be as small as possible; it’s as simple as that. If the processes and methods of counting, assessing, and judging medical errors are kept hidden or, worse, buried through misleading or downright false paperwork, then how can we trust the judgment of medical professionals? The authors’ letter to the CDC attempts to address exactly this: by getting hospitals to acknowledge medical errors on death certificates, the issue becomes more visible. Where there is visibility and transparency, programs can then be implemented to reduce those numbers, whatever they may be.

Actually, the fact that the number of deaths attributable to medical error is disputable is a case in point; if there were a clear definition of what medical error is and a clear way of tracking it, then we would have a starting point on how to reduce its occurrence. And that’s why I will leave my picture where it is: to support the conversation around the need to better track medical error.

P.S.1 A newer paper, Stockwell et al. (2018), found that 10% of pediatric admissions in US hospitals end up with preventable adverse events, most frequently as a result of hospital-acquired infections, followed by intravenous line complications, gastrointestinal harms, respiratory-related harms, and other causes (p. 4). The more worrisome fact is that this percentage remained unchanged, at least between 2007 and 2012.

P.S.2 Just to make it clear, I will always go to doctors with an MD after their name, even if they make mistakes, because they give me and my loved ones the best chance of healing and survival. Calling out that there is more work to be done to improve our safety, particularly in the hand-washing department (can’t believe this is still a thing!), doesn’t mean that I will go into the cuckoo land of homeopathy, chiropractic, and other “alternative” medicine.

REFERENCES:

  1. Makary MA, & Daniel M. (3 May 2016). Medical error – the third leading cause of death in the US. BMJ, 353:i2139. doi: 10.1136/bmj.i2139, PMID: 27143499. ARTICLE | NPR cover
  2. Joo S, Daniel M, Xu T, & Makary MA (1 May 2016). RE: Methodology used for collecting national health statistics, Open Letter to the U.S. Centers for Disease Control and Prevention. FREE FULLTEXT PDF

By Neuronicus, 13 October 2019

Teach handwriting in schools!

I have begun this blogpost many times. I have erased it many times. That is because today’s subject – handwriting – is very sensitive for me. Most of what I wrote and subsequently erased was a rant: angry at times, full of profanity at other times. The rest were paragraphs that can be easily categorized as pleading, bargaining, imploring to teach handwriting in American schools. Or, if they already do, to do it less chaotically, more seriously, more consistently, with a LOT more practice and hopefully before the child hits puberty.

Because, contrary to most educators’ beliefs, handwriting is not the same as typing. Nor is printing / manuscript writing the same as cursive writing, but that’s another kettle of fish.

Somehow, sometime, a huge disconnect appeared between scholarly researchers and educators. In medicine, the findings of researchers tend to take 10-15 years until they start to be believed and implemented in medical practice. In education… it seems that even findings cemented by Nobel prizes 100 years ago are alien to the ranks of educators. It didn’t use to be like that. I don’t know when educators became distrustful of data and science. When exactly did they start to substitute evidence with “feels right” and “it’s our school’s philosophy”? When did they start using “research shows…” every other sentence without being able to produce a single item, name, citation, paper, anything of said research? When did the educators become so… uneducated? I could write (and rant!) a lot about the subject of handwriting or about what exactly a Masters in Education teaches the educators. But I’m so tired of it before I even begin, because I’ve been doing this for a while now and it’s exhausting. It takes an incredible amount of effort, at least for me, to bring the matter of writing so genteelly, tactfully, and non-threateningly to the attention of the fragile egos of the powers that be in charge of the education of the next generation. Yes, yes, there must be rarae aves among the educators who actually teach and do listen to or read papers on education from peer-reviewed journals; but I didn’t find them. I wonder who the research in education is for, if neither the educators nor the policy makers have any clue about it…

Here is another piece of education research which will probably go unremarked by the ones it is intended for, i.e. educators and policy makers. Mueller & Oppenheimer (2014) took a closer look at the note-taking habits of 65 Princeton and 260 UCLA students. The students were instructed to take notes in their usual classroom style from five TED talks, each over 15 minutes long, which were “interesting but not common knowledge” (p. 1160). Afterwards, the subjects completed a hard working-memory task and answered factual and conceptual questions about the content of the “lectures”.

The students who took notes in writing (I’ll call them longhanders) performed significantly better at conceptual questions about the lecture content than the ones who typed on laptops (typers). The researchers noticed that the typers tend to write verbatim what is being said, whereas the longhanders don’t do that, which corresponds directly with their performance. In their words,

“laptop note takers’ tendency to transcribe lectures verbatim rather than processing information and reframing it in their own words is detrimental to learning.” (Abstract).

Because typing is faster than writing, the typers can afford to not think about what they type and be in full scribe mode with the brain elsewhere, not listening to a single word of the lecture (believe me, I know, both as a student and as a university professor). In contrast, the longhanders cannot write verbatim and must process the information to extract what’s relevant. In the words of cognitive psychologists everywhere and present in every cognitive psychology textbook written over the last 70 years: depth of processing facilitates learning. Maybe that could be taught in a Masters of Education…

Pet peeves aside, the next step in today’s paper was to see whether forcing the typers to forgo verbatim note-taking and do some information processing might improve learning. It did not, presumably because “the instruction to not take verbatim notes was completely ineffective at reducing verbatim content (p = .97)” (p. 1163).

The laptop typers did take more notes though, by word count. So in the next study, the researchers asked the question “If allowed to study their notes, will the typers benefit from their more voluminous notes and show better performance?” This time the researchers made four 7-minute-long lectures on bats, bread, vaccines, and respiration and tested the subjects 1 week later. The results? The longhanders who studied performed the best. The verbatim typers performed the worst, particularly on conceptual versus factual questions, despite having more notes.

For the sake of truth and in the spirit of the overall objectivity of this blog, I should note that the paper is not very well done. It has many errors, some of which were statistical and corrected in a Corrigendum, some of which are methodological and can be addressed by a bigger study with more carefully parsed out controls and more controlled conditions, or at least using the same stimuli across studies. Nevertheless, at least one finding is robust as it was replicated across all their studies:

“In three studies, we found that students who took notes on laptops performed worse on conceptual questions than students who took notes longhand” (Abstract)

Teachers, teach handwriting! No more “Of course we teach writing, just…, just not now, not today, not this year, not so soon, perhaps not until the child is a teenager, not this grade, not my responsibility, not required, not me…”.


REFERENCE: Mueller, PA & Oppenheimer, DM (2014). The Pen Is Mightier Than the Keyboard: Advantages of Longhand Over Laptop Note Taking. Psychological Science, 25(6): 1159–1168. DOI: 10.1177/0956797614524581. ARTICLE | FULLTEXT PDF | NPR cover

By Neuronicus, 1 Sept. 2019

P.S. Some of my followers pointed me to a new preregistered study that failed to replicate this paper (thanks, followers!). Urry et al. (2019) found that the typers wrote more words and took notes verbatim, just as Mueller & Oppenheimer (2014) found, but this did not benefit the typers, as there wasn’t any difference between conditions when it came to learning without studying the notes.

The authors did not address the notion that “depth of processing facilitates learning” though, a notion which is now theory because it has been replicated ad nauseam in hundreds of thousands of papers. Perhaps both papers can be reconciled if a third study were to parse out the attention component of the experiments by, say, introspection questionnaires. What I mean is that the typers can do mindless transcription, in which case there is no depth of processing, resulting in the Mueller & Oppenheimer (2014) observation, or they can actually pay attention to what they type, in which case there is depth of processing and we have the Urry et al. (2019) findings. But the longhanders have no choice but to pay attention because they cannot write verbatim, so we’re back to square one, in my mind, that longhanders will do better overall. Handwriting your notes is the safer bet for retention then, because your attention component is not voluntary, but required for the task, as it were, at hand.

REFERENCE: Urry, H. L. (2019, February 9). Don’t Ditch the Laptop Just Yet: A Direct Replication of Mueller and Oppenheimer’s (2014) Study 1 Plus Mini-Meta-Analyses Across Similar Studies. PsyArXiv. doi:10.31234/osf.io/vqyw6. FREE FULLTEXT PDF

By Neuronicus, 2 Sept. 2019

Pic of the day: African dogs sneeze to vote

Excerpt from Walker et al. (2017), p. 5:

“We also find an interaction between total sneezes and initiator POA in rallies (table 1) indicating that the number of sneezes required to initiate a collective movement differed according to the dominance of individuals involved in the rally. Specifically, we found that the likelihood of rally success increases with the dominance of the initiator (i.e. for lower POA categories) with lower-ranking initiators requiring more sneezes in the rally for it to be successful (figure 2d). In fact, our raw data and the resultant model showed that rallies never failed when a dominant (POA1) individual initiated and there were at least three sneezes, whereas rallies initiated by lower ranking individuals required a minimum of 10 sneezes to achieve the same level of success. Together these data suggest that wild dogs use a specific vocalization (the sneeze) along with a variable quorum response mechanism in the decision-making process. […]. We found that sneezes, a previously undocumented unvoiced sound in the species, are positively correlated with the likelihood of rally success preceding group movements and may function as a voting mechanism to establish group consensus in an otherwise despotically driven social system.”
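
If you like rules spelled out, here is a toy sketch in Python of how such a sneeze quorum could work. The only numbers taken from the excerpt above are the thresholds (three sneezes when a dominant, POA1, individual initiates; about ten when a low-ranking one does); the in-between interpolation and the function itself are my own illustration, not the authors’ statistical model.

```python
# Toy model of the wild dogs' sneeze quorum, based only on the excerpt above.
# The thresholds (3 sneezes for a dominant initiator, 10 for a low-ranking one)
# come from the quoted raw-data observation; the interpolation in between is made up.

def rally_succeeds(initiator_rank: int, sneezes: int) -> bool:
    """Return True if a rally is predicted to end in collective movement.

    initiator_rank: 1 = most dominant (POA1); larger numbers = lower rank.
    sneezes: total number of sneezes recorded during the rally.
    """
    # Hypothetical interpolation: the required quorum grows as the initiator's rank drops.
    required = 3 if initiator_rank == 1 else min(10, 3 + (initiator_rank - 1) * 2)
    return sneezes >= required

if __name__ == "__main__":
    print(rally_succeeds(initiator_rank=1, sneezes=3))   # dominant initiator, quorum met -> True
    print(rally_succeeds(initiator_rank=5, sneezes=6))   # low-ranking initiator, too few sneezes -> False
    print(rally_succeeds(initiator_rank=5, sneezes=10))  # low-ranking initiator, quorum met -> True
```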

REFERENCE: Walker RH, King AJ, McNutt JW, & Jordan NR (6 Sept. 2017). Sneeze to leave: African wild dogs (Lycaon pictus) use variable quorum thresholds facilitated by sneezes in collective decisions. Proceedings of the Royal Society B: Biological Sciences, 284(1862): 20170347. PMID: 28878054, PMCID: PMC5597819, DOI: 10.1098/rspb.2017.0347. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 1 August 2019

Education raises intelligence

Intelligence is a dubious concept in psychology and biology because it is difficult to define. In any science, something has a workable definition when it is described by unique testable operations or observations. But “intelligence” has eluded that workable definition, having gone through multiple transformations in the past hundred years or so, perhaps more than any other psychological construct (except “mind”). Despite Binet first claiming, more than a century ago, that there is such a thing as IQ and that he had a way to test for it, many psychologists and, to a lesser extent, neuroscientists are still trying to figure out what it is. Neuroscientists to a lesser extent because, once the field as a whole could not agree upon a good definition, it moved on to things it could agree upon, i.e. executive functions.

Of course, I generalize trends to entire disciplines and I shouldn’t; not all psychology has a problem with operationalizations and replicability, just as not all neuroscientists are paragons of clarity and good science. In fact, intelligence research seems to be rather vibrant, judging by the number of publications. Who knows, maybe the psychologists have reached a consensus about what the thing is. I haven’t truly kept up with the IQ research, partly because I think the tests used for assessing it are flawed (therefore you don’t know what exactly you are measuring) and tailored for a small segment of the population (Western society, culturally embedded, English-language conceptualizations, etc.) and partly because of the circularity of definitions (e.g. How do I know you are highly intelligent? You scored well on IQ tests. What is IQ? Whatever the IQ tests measure).

But the final nail in the coffin of intelligence research for me was a very popular definition by Legg & Hutter in 2007: intelligence is “the ability to achieve goals”. So the poor, sick, and unlucky are just dumb? I find this definition incredibly insulting to the sheer diversity within the human species. Also, this definition is blatantly discriminatory, particularly towards the poor, whose lack of options and of access to good education or even a plain healthy meal puts a serious brake on goal achievement. Conversely, there are people who want for nothing, having been born into opulence and fame, but whose intellectual prowess seems to be lacking, to put it mildly, and who owe their “goal achievement” to an accident of birth or circumstance. The fact that this definition is so accepted for human research soured me on the entire field. But I’m hopeful that the researchers will abandon this definition, more suited for computer programs than for human beings; after all, paradigmatic shifts happen all the time.

In contrast, executive functions are more clearly defined. The one I like the most is that given by Banich (2009): “the set of abilities required to effortfully guide behavior toward a goal”. Not to achieve a goal, but to work toward a goal. With effort. Big difference.

So what are those abilities? As I said in the previous post, there are three core executive functions: inhibition/control (both behavioral and cognitive), working memory (the ability to temporarily hold information active), and cognitive flexibility (the ability to think about and switch between two different concepts simultaneously). From these three core executive functions, higher-order executive functions are built, such as reasoning (critical thinking), problem solving (decision-making) and planning.

Now I might have left you with the impression that intelligence = executive functioning, and that wouldn’t be true. There is a clear correspondence between executive functioning and intelligence, but it is not a perfect correspondence, and many a paper (and a book or two) has been written to parse out which is which. For me, the most compelling argument that executive functions and whatever it is that the IQ tests measure are at least partly distinct is that brain lesions that affect one may not affect the other. It is beyond the scope of this blogpost to analyze the differences and similarities between intelligence and executive functions. But to clear up just a bit of the confusion I will make this broad statement: executive functions are the foundation of intelligence.

There is another qualm I have with the psychological research into intelligence: a large number of psychologists believe intelligence is a fixed value. In other words, you are born with a certain amount of it and that’s it. It may vary a bit, depending on your life experiences, either increasing or decreasing the IQ, but by and large you stay in the same ballpark. In contrast, most neuroscientists believe all executive functions can be drastically improved with training. All of them.

After this much semi-coherent rambling, here is the actual crux of the post: intelligence can be trained too. Or I should say the IQ can be raised with training. Ritchie & Tucker-Drob (2018) performed a meta-analysis looking at over 600,000 healthy participants’ IQ and their education. They confirmed a previously known observation that people who score higher on IQ tests complete more years of education. But why? Is it because highly intelligent people like to learn or because longer education increases IQ? After carefully and statistically analyzing 42 studies on the subject, the authors conclude that the more educated you are, the more intelligent you become. How much more? About 1 to 5 IQ points per additional year of education, to be precise. Moreover, this effect persists for a lifetime; the gain in intelligence does not diminish with the passage of time or after exiting school.

This is a good paper, its conclusions are statistically robust and consistent. Anybody can check it out as this article is an open access paper, meaning that not only the text but its entire raw data, methods, everything about it is free for everybody.


For me, the conclusion is inescapable: if you think that we, as a society, or you, as an individual, would benefit from having more intelligent people around you, then you should support free access to good education. Not exactly where you thought I was going with this, eh ;)?

REFERENCE: Ritchie SJ & Tucker-Drob EM. (Aug, 2018, Epub 18 Jun 2018). How Much Does Education Improve Intelligence? A Meta-Analysis. Psychological Science, 29(8):1358-1369. PMID: 29911926, PMCID: PMC6088505, DOI: 10.1177/0956797618774253. ARTICLE | FREE FULLTEXT PDF | SUPPLEMENTAL DATA  | Data, codebooks, scripts (Mplus and R), outputs

Nota bene: I’d been asked what that “1 additional year” of education means. Do you gain up to 5 IQ points with every year of education? No, not quite. Assuming I started with a normal IQ of 100, then 26 years of education (not counting postdoc) multiplied by, let’s say, 3 IQ points would make me a 178. Not bad, not bad at all. :))). No, what the authors mean is that they had access to, among other datasets, a huge cohort dataset from Norway from the time when that country increased compulsory education by 2 years. So the researchers could look at the IQ tests of people before and after the policy change, tests which were administered to all males at the same age, when they entered compulsory military service. They saw an increase of 1 to 5 IQ points per each extra year of education.
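
If the arithmetic above is confusing, here is a minimal sketch of the two readings in Python. The baseline of 100 and the 3 points per year are placeholder values of mine, not the paper’s estimates; the point is that the gain applies only to extra years of schooling, such as Norway’s 2 added compulsory years, not to every year you ever sat in a classroom.

```python
# Two ways to read "1-5 IQ points per additional year of education".
# The baseline of 100 and 3 points/year are illustrative placeholders, not the paper's estimates.

BASELINE_IQ = 100
POINTS_PER_EXTRA_YEAR = 3

def naive_reading(total_years_of_education: int) -> int:
    """The (wrong) reading: every year of schooling you ever completed adds points."""
    return BASELINE_IQ + POINTS_PER_EXTRA_YEAR * total_years_of_education

def policy_change_reading(extra_compulsory_years: int) -> int:
    """The paper's reading: only years added on top of what you would have completed
    anyway (e.g. Norway lengthening compulsory school by 2 years) add points."""
    return BASELINE_IQ + POINTS_PER_EXTRA_YEAR * extra_compulsory_years

print(naive_reading(26))          # 178 -- the joke calculation from the paragraph above
print(policy_change_reading(2))   # 106 -- the far more modest, realistic gain
```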

By Neuronicus, 14 July 2019

Gaming can improve cognitive flexibility

It occurred to me that my blog is becoming more sanctimonious than I’d like. I have many posts about stuff that’s bad for you: stress, high fructose corn syrup, snow, playing soccer, cats, pesticides, religion, climate change, even licorice. So I thought to balance it a bit with stuff that is good for you. To wit, computer games; albeit not all, of course.

An avid gamer myself, those who know me would hardly be surprised that I found a paper cheering StarCraft. A bit of an old game, but still a solid representative of the real-time strategy (RTS) genre.

About a decade ago, a series of papers emerged which showed that first-person shooters and action games in general improve various aspects of perceptual processing. It makes sense, because in these games split-second decisions and actions make the difference between winning and losing, so the games act as training experience for increased sensitivity to cues that facilitate said decisions. But what about games where the overall strategy and micromanagement skills are a bit more important than the perceptual skills, a.k.a. RTS? Would these games improve the processes underlying strategic thinking in a changing environment?

Glass, Maddox, & Love (2013) sought to answer this question by asking a few dozen undergraduates with little gaming experience to play a slightly modified StarCraft game for 40 hours (1 hour per day). “StarCraft (published by Blizzard Entertainment, Inc. in 1998) (…) involves the creation, organization, and command of an army against an enemy army in a real-time map-based setting (…) while managing funds, resources, and information regarding the opponent” (p. 2). The participants were all female because the researchers couldn’t find enough male undergraduates who played computer games less than 2 hours per day. The control group had to play The Sims 2 for the same amount of time, a game where “participants controlled and developed a single ‘family household’ in a virtual neighborhood” (p. 3). The researchers cleverly modified the StarCraft game in such a way that they replaced a perceptual component with a memory component (disabled some maps) and created two versions: one more complex (full map, two friendly and two enemy bases) and one less so (half map, one friendly and one enemy base). The difficulty for all games was set at a win rate of 50%.

Before and after the game-playing, the subjects were asked to complete a huge battery of tests designed to test their memory and various other cognitive processes. By carefully parsing these out, the authors conclude that “forty hours of training within an RTS game that stresses rapid and simultaneous maintenance, assessment, and coordination between multiple information and action sources was sufficient” to improve cognitive flexibility. Moreover, the authors point out that playing on a full map with multiple allies and enemies is conducive to such improvement, whereas playing a less cognitively demanding game, despite similar difficulty levels, is not. Basically, the more stuff you have to juggle, the better your flexibility will be. Makes sense.

My favorite take from this paper though is not only that StarCraft is awesome, obviously, but that “cognitive flexibility is a trainable skill” (p. 5). Let me tell you why that is so grand.

Cognitive flexibility is an important concept in the neuroscience of executive functioning. The same year that this paper was published, Diamond was publishing an excellent review paper in which she neatly identified three core executive functions: inhibition/control (both behavioral and cognitive), working memory (the ability to temporarily hold information active), and cognitive flexibility (the ability to think about and switch between two different concepts simultaneously). From these three core executive functions, higher-order executive functions are built, such as reasoning (critical thinking), problem solving (decision-making) and planning.

Unlike some old views on the immutability of the inborn IQ, each one of the core and higher-order executive functions can be improved upon with training at any point in life and can suffer if something is not right in your life (stress, loneliness, sleep deprivation, or sickness). This paper adds to the growing body of evidence showing that executive functions are trainable. Intelligence, however you want to define it, relies upon executive functions, at least some of them, and perhaps boosting cognitive flexibility might result in a slight increase in the IQ, methinks.

Bottom line: real-time strategy games with huge maps and tons of stuff to do are good for you. Here you go.

The StarCraft images, both foreground and background, are copyrighted to © 1998 Blizzard Entertainment.

REFERENCES:

  1. Glass BD, Maddox WT, Love BC. (7 Aug 2013). Real-time strategy game training: emergence of a cognitive flexibility trait. PLoS One, 8(8):e70350. eCollection 2013. PMID: 23950921, PMCID: PMC3737212, DOI: 10.1371/journal.pone.0070350. ARTICLE | FREE FULLTEXT PDF
  2. Diamond A (2013, Epub 27 Sept. 2012). Executive Functions. Annual Review of Psychology, 64:135-68. PMID: 23020641, PMCID: PMC4084861, DOI: 10.1146/annurev-psych-113011-143750. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 15 June 2019

The FIRSTS: Guanine comes from guano (1846)

This is about naming things in science. You have been warned!

DNA is made of four nucleobases: adenine (A), thymine (T), cytosine (C) and guanine (G). The “letters” of the code. Each of them was named after the source from which it was originally obtained by the scientists who first identified and/or isolated it.

Adenine was named thus because it was extracted from the pancreas of an ox, a gland, which is aden in Greek (the gland, not the ox), by the Nobel laureate Albrecht Kossel in 1885.

Thymine comes from thymic acid, which was extracted from the thymus gland of calves by the same Albrecht Kossel and Albert Neumann in 1893.

A year later, the duo named cytosine, another base obtained from the same thymus tissue. Cyto- pertains to cells in Greek.

Fifty years before that, Julius Bodo Unger, a German chemist, extracted guanine from the guano of sea birds. Why was he looking at bird poop, curious minds inquire? Because he was studying it for its uses as fertilizer. The year of discovery was 1844 and the year of the naming was 1846.

And now you know…


REFERENCE: Unger, JB (1846). Bemerkungen zu obiger Notiz (Comments on the above notice), Annalen der Chemie und Pharmacie, 58: 18-20. From page 20: “… desshalb möchte ich den Namen Guanin vorschlagen, welcher an seine Herkunft erinnert.” (“… therefore I would like to suggest the name guanine, which is reminiscent of its origin.”) (Wikipedia translation). Google Books | Google Book PDF

By Neuronicus, 3 June 2019

The FIRSTS: Lack of happy events in depression (2003)

My last post focused on depression and it reminded me of something that I keep telling my students and they all react with disbelief. Well, I tell them a lot of things to which they react with disbelief, to be sure, but this one, I keep thinking, should not generate such incredulity. The thing is: depressed people perceive the same number of negative events happening to them as healthy people, but far fewer positive ones. This seems to be counter-intuitive to non-professionals, who believe depressed people are just generally sadder than average and that’s why they see the half-empty side of the glass of life.

So I dug out the original paper that found this… finding. It’s not as old as you might think. Peeters et al. (2003) paid $30/capita to 86 people, 46 of whom were diagnosed with Major Depressive Disorder and seeking treatment in a community mental health center or outpatient clinic (this was in the Netherlands). None were taking antidepressants or any other drugs, except low-level anxiolytics. Each participant was given a wristwatch that beeped 10 times a day at semi-random intervals of approximately 90 min. When the watch beeped, the subjects had to complete a form within a maximum of 25 min, answering questions about their mood, current events, and their appraisal of those events. The experiment took 6 days, including the weekend.
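
This kind of design is called experience sampling, by the way, and the beeping schedule is easy to picture with a few lines of code. A minimal sketch, assuming a 07:30 start and a jitter of plus or minus 30 minutes around the 90-minute average, neither of which is specified in the paper:

```python
# Rough sketch of an experience-sampling beep schedule like the one in Peeters et al. (2003):
# 10 beeps per day at semi-random intervals of roughly 90 minutes.
# The 07:30 start and the +/- 30 min jitter are my own assumptions for illustration.
import random
from datetime import datetime, timedelta

def beep_schedule(day, n_beeps=10, mean_gap_min=90, jitter_min=30):
    t = day.replace(hour=7, minute=30)
    beeps = []
    for _ in range(n_beeps):
        t += timedelta(minutes=mean_gap_min + random.randint(-jitter_min, jitter_min))
        beeps.append(t)
    return beeps

for beep in beep_schedule(datetime(2003, 5, 1)):
    print(beep.strftime("%H:%M"))   # the subject fills in the mood/event form at each beep
```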

The results? Contrary to popular belief, people with depression “did not report more frequent negative events, although they did report fewer positive events and appraised both types of events as more stressful” (p. 208). In other words, depressed people are not seeing half-empty glasses all the time; instead, they don’t see the half-full glasses. Note that they regarded both negative and positive events as stressful. We circle back to the ‘stress is the root of all evil‘ thing.

I would have liked to see if the decrease in positive affect and perceived happy events correlates with increased sadness. The authors say that “negative events were appraised as more unpleasant, more important, and more stressful by the depressed than by the healthy participants” (p. 206), but, curiously, mood was assessed with ratings on feeling anxious, irritated, restless, tense, guilty, irritable, easily distracted, and agitated, with not a single item on depression-iconic feelings: sad, empty, hopeless, worthless.

Nevertheless, it’s a good psychological study with in-depth statistical analyses. I also found this paragraph thought-provoking: “The literature on mood changes in daily life is dominated by studies of daily hassles. The current results indicate that daily uplifts are also important determinants of mood, in both depressed and healthy people” (p. 209).


REFERENCE: Peeters F, Nicolson NA, Berkhof J, Delespaul P, & deVries M. (May 2003). Effects of daily events on mood states in major depressive disorder. Journal of Abnormal Psychology, 112(2):203-11. PMID: 12784829, DOI: 10.1037/0021-843X.112.2.203. ARTICLE

By Neuronicus, 4 May 2019

Epigenetics of BDNF in depression

Depression is the leading cause of disability worldwide, says the World Health Organization. The. The. I knew it was bad, but… ‘the’? More than 300 million people suffer from it worldwide and in many places fewer than 10% of these receive treatment. Lack of treatment is due to many things, from lack of access to healthcare to lack of proper diagnosis, and not least due to social stigma.

To complicate matters, the etiology of depression is still not fully elucidated, despite hundreds of thousands of experimental articles published out there. Perhaps millions. But, because hundreds of thousands of experimental articles, perhaps millions, have been published, we know a helluva lot more about it than, say, 50 years ago. The enormous puzzle is being painstakingly assembled as we speak by scientists all over the world. I daresay we have a lot of pieces already, if not all, at least 3 out of 4 corners, so we managed to build a not-so-foggy view of the general picture on the box lid. Here is one of the hottest pieces of the puzzle, one of those central pieces that bring the rabbit into focus.

Before I get to the rabbit, let me tell you about the corners. In the fifties people thought that depression is due to having too little of the monoamine class of neurotransmitters in the brain. This thought did not arise willy-nilly, but from the observation that drugs that increase monoamine levels in the brain alleviate depression symptoms, and, correspondingly, drugs which deplete monoamines induce depression symptoms. A bit later on, the monoamine most culpable was found to be serotonin. All well and good, plenty of evidence, observational, correlational, causational, and mechanistic, supporting the monoamine hypothesis of depression. But two more pieces of evidence kept nagging the researchers. The first one was that the monoamine-enhancing drugs take days to weeks to start working. So, if low serotonin is the problem, then a selective serotonin reuptake inhibitor (SSRI) should elevate serotonin levels within an hour of ingestion at most and lower symptom severity, so how come it takes weeks? The second was even more eyebrow-raising: these monoamine-enhancing drugs work in about 50% of cases. Why not all? Or, more pragmatically put, why not in most of them, if the underlying cause is the same?

It took decades to address these problems. The problem of having to wait weeks until some beneficial effects of antidepressants show up has been explained away, at least partly, by issues in serotonin regulation in the brain (e.g. autoreceptor sensitization, serotonin transporter abnormalities). As for the second problem, the most parsimonious answer is that that archeological site called the DSM (Diagnostic and Statistical Manual of Mental Disorders), which psychologists, psychiatrists, and scientists all over the world have to use to make a diagnosis, is nothing but a garbage bag of last-century relics with little to no resemblance to this century’s understanding of the brain and its disorders. In other words, what the DSM calls major depressive disorder (MDD) may as well be more than one disorder, and then no wonder the antidepressants work only in half of the people diagnosed with it. As Goldberg put it in 2011, “the DSM diagnosis of major depression is made when a patient has any 5 out of 9 symptoms, several of which are opposites [emphasis added]”! He was referring to DSM-IV, not that DSM-5 is much different. I mean, paraphrasing Goldberg, you really don’t need much of a degree other than some basic intro class in the physiology of whatever, anything really, to suspect that someone who’s sleeping a lot, gains weight, has increased appetite, appears tired or slow to others, and feels worthless might have a different cause for these symptoms than someone who has daily insomnia, lost weight recently, has decreased appetite, is hyperagitated, irritable, and feels excessive guilt. Imagine how much more understanding we would have about depression if scientists didn’t use the DSM for research. No wonder that there’s a lot of head scratching when your hypothesis, which is logically correct, paradigmatically coherent, internally consistent, flawlessly tested, turns out to be true only sometimes, because your ‘depressed’ subjects are as homogeneous a group as a pack of trail mix.

I got sidetracked again. This time ranting against the DSM. No matter, I’m back on track. So. The good thing is that, through the work done trying to figure out how antidepressants work and how psychiatrists’ minds work (the DSM is written overwhelmingly by psychiatrists), scientists uncovered other things about depression. Some of the findings became clumped under the name ‘the neurotrophic hypothesis of depression’ in the early noughties. It stems from the finding that some chemicals needed by neurons for their cellular happiness are in low amounts in depression. Almost two decades later, the hypothesis became mainstream theory as it explains some other findings in depression and is not incompatible with the monoamines’ behavior. Another piece of the puzzle found.

One of these neurotrophins is called brain-derived neurotrophic factor (BDNF), which promotes cell survival and growth. Crucially, it also regulates synaptic plasticity, without which there would be no learning and no memory. The idea is that exposure to adverse events generates stress. Stress is differently managed by different people, largely due to genetic factors. In those not so lucky at the genetic lottery (how hard they take a stressor, how they deal with it), and in those lucky enough at genetics but not so lucky in life (intense and/or many stressors hit the organism hard regardless of how well you take it or how good you are at dealing with it), stress kills a lot of neurons, literally, prevents new ones from being born, and prevents the remaining ones from learning well. Including learning how to deal with stressors, present and future, so the next time an adverse event happens, even if it is a minor stressor, the person is way more drastically affected. In other words, stress makes you more vulnerable to stressors. One of the ways stress does all this is by suppressing BDNF synthesis. Without BDNF, the individual exposed to stress that is exacerbated either by genes or environment ends up unable to self-regulate mood successfully. The more that mood is not regulated, the worse the brain becomes at self-regulating, because the elements required for self-regulation, which include learning from experience, are busted. And so the vicious circle continues.

Maintaining this vicious circle is the ability of stressors to change the patterns of DNA expression and, not surprisingly, one of the most common findings is that the BDNF gene is hypermethylated in depression. Hypermethylation is an epigenetic change (a change around the DNA, not in the DNA itself), meaning that the gene in question is less expressed. This means lower amounts of BDNF are produced in depression.

After this long introduction, today’s paper is a systematic review of one of the epigenetic changes in depression: methylation. The 67 articles that investigated the role of methylation in depression were too heterogeneous to make a meta-analysis out of them, so Li et al. (2019) made a systematic review.

The main finding was that, overall, depression is associated with DNA methylation modifications. Two genes stood out as being hypermethylated: our friend BDNF and SLC6A4, a gene involved in the serotonin cycle. Now the question is who causes whom: is stress methylating your DNA or does your methylated DNA make you more vulnerable to stress? There’s evidence both ways. Vicious circle, as I said. I doubt that for the sufferer it matters which started it, but for the researchers it does.


A little disclaimer: the picture I painted above offers a non-exclusive view on the causes of depression(s). There’s more. There’s always more. Gut microbes are in the picture too. And circulatory problems. And more. But the picture is more than half done, I daresay. Continuing my puzzle metaphor, we got the rabbit by the ears. Now what to do with it…

Well, one thing we can do with it, even with only half-rabbit done, is shout loud and clear that depression is a physical disease. And those who claim it can be cured by a positive attitude and blame the sufferers for not ‘trying hard enough’ or not ‘smiling more’ or not ‘being more positive’ can bloody well shut up and crawl back in the medieval cave they came from.

REFERENCES:

1. Li M, D’Arcy C, Li X, Zhang T, Joober R, & Meng X (4 Feb 2019). What do DNA methylation studies tell us about depression? A systematic review. Translational Psychiatry, 9(1):68. PMID: 30718449, PMCID: PMC6362194, DOI: 10.1038/s41398-019-0412-y. ARTICLE | FREE FULLTEXT PDF

2. Goldberg D (Oct 2011). The heterogeneity of “major depression”. World Psychiatry, 10(3):226-8. PMID: 21991283, PMCID: PMC3188778. ARTICLE | FREE FULLTEXT PDF

3. World Health Organization Depression Fact Sheet

By Neuronicus, 23 April 2019

High fructose corn syrup IS bad for you

Because I cannot leave controversial things well enough alone – at least not when I know there shouldn’t be any controversy – my ears caught up with my tongue yesterday when the latter sputtered: “There is strong evidence for eliminating sugar from commonly used food products like bread, cereal, cans, drinks, and so on, particularly against that awful high fructose corn syrup”. “Yeah? You ‘researched’ that up, haven’t you? Google is your bosom friend, ain’t it?” was the swift reply. Well, if you get rid of the ultra-emphatic air-quotes flanking the word ‘researched’ and replace ‘Google’ with ‘PubMed’, then, yes, I did research it and yes, PubMed is my bosom friend.

Initially, I wanted to just give you all a list with peer-reviewed papers that found causal and/or correlational links between high fructose corn syrup (HFCS) and weight gain, obesity, type 2 diabetes, cardiovascular disease, fatty liver disease, metabolic and endocrine anomalies and so on. But there are way too many of them; there are over 500 papers on the subject in PubMed alone. And most of them did find that HFCS does nasty stuff to you, look for yourselves here. Then I thought to feature a paper showing that HFCS is differently metabolized than the fructose from fruits, because I keep hearing that lie perpetrated by the sugar and corn industries that “sugar is sugar” (no, it’s not! Demonstrably so!), but I doubt my yesterday’s interlocutor would care about the liver’s enzymatic activity and other chemical processes with lots of acronyms. So, finally, I decided to feature a straightforward, no-nonsense paper, published recently, done at a top-tier university, with human subjects, so I won’t hear any squabbles.

Price et al. (2018) studied 49 healthy subjects aged 18–40 years, of normal and stable body weight, and free from confounding medications or drugs, whose physical activity and energy-balanced meals were closely monitored. During the study, the subjects’ food and drink intake as well as their timing were rigorously controlled. The researchers varied only the beverages between groups: one group received with every controlled meal a drink sweetened with HFCS-55 (55% fructose, 45% glucose, like the one used in commercially available drinks), whereas the other group received a drink identical in size but sweetened with aspartame (drink sizes were adjusted to each subject’s energy requirements so that the HFCS beverages provided 25% of them). The study lasted two weeks. No other beverage was allowed, including fruit juice. Urine samples were collected daily and blood samples 4 times per day.

There was a body weight increase of 810 grams (1.8 lb) in subjects consuming HFCS-sweetened beverages for 2 weeks when compared with aspartame controls. The researchers also found differences in the levels of a whole host of acronyms (ppTG, ApoCIII, ApoE, OEA, DHEA, DHG, if you must know) involved in a variety of nasty things, like obesity, fatty liver disease, atherosclerosis, cardiovascular disease, stroke, diabetes, even Alzheimer’s.

This study is the third part of a larger NIH-funded study which investigates the metabolic effects of consuming sugar-sweetened beverages in about 200 participants over 5 years, registered at clinicaltrials.gov as NCT01103921. The first part (Stanhope et al., 2009) reported that “consuming fructose-sweetened, not glucose-sweetened, beverages increases visceral adiposity and lipids and decreases insulin sensitivity in overweight/obese humans” (title), and the second part (Stanhope et al., 2015) found that “consuming beverages containing 10%, 17.5%, or 25% of energy requirements from HFCS produced dose-dependent increases in circulating lipid/lipoprotein risk factors for cardiovascular disease and uric acid within 2 weeks” (Abstract). They also found a dose-dependent increase in body weight, but in those subjects the results were not statistically significant (p = 0.09) after correcting for multiple comparisons. But I’ll bet that if/when the authors publish all the data in one paper at the end of the clinical trial, they will have more statistical power and the trend in weight gain will be more obvious, as in the present paper. Besides, it looks like there may be more than three parts to this study anyway.
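
To get a sense of what those dose levels mean in practice, here is a back-of-the-envelope calculation. The 2,000 kcal/day requirement, the roughly 4 kcal per gram of sugar, and the ~39 g of sugar in a 12-oz can of soda are generic approximations of mine, not the individualized values used in the studies:

```python
# Back-of-the-envelope: how much HFCS is 10%, 17.5%, or 25% of a generic 2,000 kcal/day requirement?
# All numbers are rough textbook approximations, not the individualized values from the studies.

DAILY_KCAL = 2000          # generic adult energy requirement
KCAL_PER_GRAM_SUGAR = 4    # approximate energy density of HFCS and other sugars
SUGAR_PER_CAN_G = 39       # roughly one 12-oz can of soda

for fraction in (0.10, 0.175, 0.25):   # the dose levels mentioned above
    kcal = DAILY_KCAL * fraction
    grams = kcal / KCAL_PER_GRAM_SUGAR
    print(f"{fraction:>5.1%} of energy = {kcal:5.0f} kcal = {grams:5.0f} g HFCS "
          f"= about {grams / SUGAR_PER_CAN_G:.1f} cans of soda per day")
```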

The adverse effects of a high sugar diet, particularly in HFCS, are known to so many researchers in the field that they have been actually compiled in a name: the “American Lifestyle-Induced Obesity Syndrome model, which included consumption of a high-fructose corn syrup in amounts relevant to that consumed by some Americans” (Basaranoglu et al., 2013). It doesn’t refer only to increases in body weight, but also type 2 diabetes, cardiovascular disease, hypertriglyceridemia, fatty liver disease, atherosclerosis, gout, etc.

The truly sad part is that avoiding added sugars in the USA is impossible unless you do all – and I mean all – your cooking at home, including canning, jamming, bread-making, condiment-making and so on, not just “Oh, I’ll cook some chicken or ham tonight”, because in that case you end up using canned tomato sauce (which has added sugar), bread crumbs (which have added sugar), ham (which has added sugar), salad dressing (which has sugar) and so on. Go on, check your kitchen and see how many ingredients have sugar in them, including any meat products short of raw meat. If you never read the backs of the bottles, cans, or packages, oh my, are you in for a big surprise if you live in the USA…

There are lot more studies out there on the subject, as I said, of various levels of reading difficulty. This paper is not easy to read for someone outside the field, that’s for sure. But the main gist of it is in the abstract, for all to see.


P.S. 1. Please don’t get me wrong: I am not against sugar in desserts, let it be clear. Nobody makes a meaner sweetalicious chocolate cake or carbolicious blueberry muffin than me, as I have been reassured many times. But I am against sugar in everything. You know I haven’t found, in any store, including high-end and really high-end stores, a single box of cereal of any kind without sugar? Just for fun, I’d like to be a daredevil and try it once. But there ain’t. Not in the USA, anyway. I did find them in the EU though. But I cannot keep flying unsweetened corn flakes from Europe over the Atlantic in my already crammed-at-a-premium luggage space, corn flakes which are probably made locally, incidentally and ironically, with good old American corn.

P.S. 2 I am not so naive, blind, or zealous as to overlook the studies that did not find any deleterious effects of HFCS consumption. Actually, I was on the fence about HFCS until about 10 years ago, when the majority of papers (now the overwhelming majority) were showing that HFCS consumption not only increases weight gain, but can also lead to more serious problems like the ones mentioned above. Nor do I overlook the few papers that say all added sugar is bad, but that HFCS doesn’t stand out from the other sugars when it comes to disease or weight gain. But, like with most scientific things, the majority has its way and I bow to it democratically until the next paradigm shift. Besides, the exposés of Kearns et al. (2016a, b, 2017) showing in detail and with serious documentation how the sugar industry paid prominent researchers for the past 50 years to hide the deleterious effects of added sugar (including cancer!) further cemented my opinion about added sugar in foods, particularly HFCS.

REFERENCES:

  1. Price CA, Argueta DA, Medici V, Bremer AA, Lee V, Nunez MV, Chen GX, Keim NL, Havel PJ, Stanhope KL, & DiPatrizio NV (1 Aug 2018, Epub 10 Apr 2018). Plasma fatty acid ethanolamides are associated with postprandial triglycerides, ApoCIII, and ApoE in humans consuming a high-fructose corn syrup-sweetened beverage. American Journal of Physiology. Endocrinology and Metabolism, 315(2): E141-E149. PMID: 29634315, PMCID: PMC6335011 [Available on 2019-08-01], DOI: 10.1152/ajpendo.00406.2017. ARTICLE | FREE FULLTEXT PDF
  2. Stanhope KL, Medici V, Bremer AA, Lee V, Lam HD, Nunez MV, Chen GX, Keim NL, & Havel PJ (Jun 2015, Epub 22 Apr 2015). A dose-response study of consuming high-fructose corn syrup-sweetened beverages on lipid/lipoprotein risk factors for cardiovascular disease in young adults. The American Journal of Clinical Nutrition, 101(6):1144-54. PMID: 25904601, PMCID: PMC4441807, DOI: 10.3945/ajcn.114.100461. ARTICLE | FREE FULLTEXT PDF
  3. Stanhope KL, Schwarz JM, Keim NL, Griffen SC, Bremer AA, Graham JL, Hatcher B, Cox CL, Dyachenko A, Zhang W, McGahan JP, Seibert A, Krauss RM, Chiu S, Schaefer EJ, Ai M, Otokozawa S, Nakajima K, Nakano T, Beysen C, Hellerstein MK, Berglund L, & Havel PJ (May 2009, Epub 20 Apr 2009). Consuming fructose-sweetened, not glucose-sweetened, beverages increases visceral adiposity and lipids and decreases insulin sensitivity in overweight/obese humans. The Journal of Clinical Investigation, 119(5):1322-34. PMID: 19381015, PMCID: PMC2673878, DOI: 10.1172/JCI37385. ARTICLE | FREE FULLTEXT PDF

(Very) Selected Bibliography:

Bocarsly ME, Powell ES, Avena NM, Hoebel BG. (Nov 2010, Epub 26 Feb 2010). High-fructose corn syrup causes characteristics of obesity in rats: increased body weight, body fat and triglyceride levels. Pharmacology, Biochemistry, and Behavior, 97(1):101-6. PMID: 20219526, PMCID: PMC3522469, DOI: 10.1016/j.pbb.2010.02.012. ARTICLE | FREE FULLTEXT PDF

Kearns CE, Apollonio D, Glantz SA (21 Nov 2017). Sugar industry sponsorship of germ-free rodent studies linking sucrose to hyperlipidemia and cancer: An historical analysis of internal documents. PLoS Biology, 15(11):e2003460. PMID: 29161267, PMCID: PMC5697802, DOI: 10.1371/journal.pbio.2003460. ARTICLE | FREE FULLTEXT PDF

Kearns CE, Schmidt LA, Glantz SA (1 Nov 2016). Sugar Industry and Coronary Heart Disease Research: A Historical Analysis of Internal Industry Documents. JAMA Internal Medicine, 176(11):1680-1685. PMID: 27617709, PMCID: PMC5099084, DOI: 10.1001/jamainternmed.2016.5394. ARTICLE | FREE FULLTEXT PDF

Mandrioli D, Kearns CE, Bero LA (8 Sep 2016). Relationship between Research Outcomes and Risk of Bias, Study Sponsorship, and Author Financial Conflicts of Interest in Reviews of the Effects of Artificially Sweetened Beverages on Weight Outcomes: A Systematic Review of Reviews. PLoS One, 11(9):e0162198. PMID: 27606602, PMCID: PMC5015869, DOI: 10.1371/journal.pone.0162198. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 22 March 2019

Love and the immune system

Valentine’s Day is a day when we celebrate romantic love (well, some of us tend to), and we did so long before the famous greeting card company Hallmark was established. Fittingly, I found the perfect paper to cover for this occasion.

In the past couple of decades it became clear to scientists that there is no such thing as a mental experience that doesn’t have corresponding physical changes. Why should falling in love be any different? Several groups have already found that levels of some chemicals (oxytocin, cortisol, testosterone, nerve growth factor, etc.) change when we fall in love. There might be other changes as well. So Murray et al. (2019) decided to dive right into it and check how the immune system responds to love, if at all.

For two years, the researchers looked at certain markers in the immune system of 47 women aged 20 or so. They drew blood when the women reported to be “not in love (but in a new romantic relationship), newly in love, and out-of-love” (p. 6). Then they sent their samples to their university’s core facility to toil over microarrays. Microarray techniques can be quickly summarized thusly: get a bunch of molecules of interest, in this case bits of single-stranded DNA, and stick them on a silicon plate or a glass slide in a specific order. Then you run your sample over it and what sticks, sticks, what not, not. Remember that DNA loves to be double stranded, so any single strand will stick to its counterpart, called complementary DNA. You put some fluorescent dye on your genes of interest and voilà, here you have an array of genes expressed in a certain type of tissue in a certain condition.
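
In case the “what sticks, sticks” bit sounds hand-wavy, the rule underneath is just complementary base pairing, which is easy to spell out in code. This is only a cartoon of hybridization with made-up sequences; real microarray analysis adds normalization, background correction, and statistics on top:

```python
# Cartoon of the hybridization principle behind a microarray: a single-stranded probe
# "sticks" to a sample strand only if the sample is its exact complement.
# The probe and sample sequences below are made up for illustration.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    return "".join(COMPLEMENT[base] for base in reversed(strand))

def hybridizes(probe: str, sample: str) -> bool:
    """True if the sample strand is complementary to the probe (so the two would stick)."""
    return sample == reverse_complement(probe)

probe = "ATGCCGTA"                      # the single-stranded DNA spotted on the chip
print(hybridizes(probe, "TACGGCAT"))    # complementary strand -> True, it sticks
print(hybridizes(probe, "TACGGGAT"))    # one mismatch -> False, it washes off
```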

Talking about microarrays sent me a bit down memory lane. When fMRI started to be a “must” in neuroscience, there followed a period when the science “market” was flooded by “salad” papers. We called them that because there were so many parts of the brain reported as “lit up” in a certain task that it made a veritable “salad of brain parts” out of which it was very difficult to figure out what was going on. I swear that now that the fMRI field has matured a bit and learned how to correct for multiple comparisons as well as to use some other fancy stats, the place of honor in the vegetable-mix analogy has been relinquished to the ‘-omics’ studies. In other words, a big portion of the whole-genome or transcriptome studies became “salad” studies: too many things show up as statistically significant to make head or tail of it.

However, Murray et al. (2019) made a valiant – and successful – effort to figure out what those 61 up- or down-regulated gene transcripts in the immune system cells of 17 women falling in love actually mean. There’s quite a bit I am leaving out but, in a nutshell, love upregulated (that is, “increased”) the expression of genes involved in the innate immunity to viruses, presumably to facilitate sexual reproduction, the authors say.

The paper is well written and the authors graciously remind us that there are some limitations to the study. Nevertheless, this is another fine addition to the unbelievably fast growing body of knowledge regarding human body and behavior.

Pity that this research was done only with women. I would have loved to see how men’s immune systems respond to falling in love.


REFERENCE: Murray DR, Haselton MG, Fales M, & Cole SW. (Feb 2019, Epub 2 Oct 2018). Falling in love is associated with immune system gene regulation. Psychoneuroendocrinology, 100: 120-126. PMID: 30299259, PMCID: PMC6333523 [Available on 2020-02-01], DOI: 10.1016/j.psyneuen.2018.09.043. ARTICLE

FYI: PMC6333523 [Available on 2020-02-01] means that the fulltext will be available for free to the public one year after the publication on the US governmental website PubMed (https://www.ncbi.nlm.nih.gov/pubmed/), no matter how much Elsevier will charge for it. Always, always, check the PMC library (https://www.ncbi.nlm.nih.gov/pmc/) on PubMed to see if a paper you saw in Nature or Elsevier is for free there because more often than you’d think it is.

PubMed = the U.S. National Institutes of Health’s National Library of Medicine (NIH/NLM), comprising “more than 29 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full-text content from PubMed Central and publisher web sites”.

PMC = “PubMed Central® (PMC) is a free fulltext archive of biomedical and life sciences journal literature at the U.S. National Institutes of Health’s National Library of Medicine (NIH/NLM)” with a whopping fulltext library of over 5 million papers and growing rapidly. Love PubMed!

By Neuronicus, 14 February 2019