Amusia and stroke

Although I am a complete musical anti-talent myself, that doesn't stop me from fully enjoying the works of the masters of the art. When my family is out of earshot, I even bellow – because it cannot be called music – at the top of my lungs alongside the most famous tenors ever recorded. A couple of days ago I loaded one of my most eclectic playlists. While remembering my younger days as an Iron Maiden concert-goer (I never said I listen only to classical music :D) and screaming the "Fear of the Dark" chorus, I wondered what's new on the front of music processing in the brain.

And I found an interesting recent paper about amusia. Amusia is, as those of you with ancient Greek proclivities might have surmised, a deficit in the perception of music, mainly of pitch but sometimes of rhythm and other aspects of music. A small percentage of the population is born with it, but a whopping 35 to 69% of stroke survivors exhibit the disorder.

So Sihvonen et al. (2016) decided to take a closer look at this phenomenon with the help of 77 stroke patients. These patients had an MRI scan within the first 3 weeks following stroke and another one 6 months poststroke. They also completed a behavioral test for amusia within the first 3 weeks following stroke and again 3 months later. For reasons undisclosed, and thus raising my eyebrows, the behavioral assessment was not performed at 6 months poststroke, nor was an MRI done at the 3-month follow-up. It would have been nice to have behavioral assessments and brain images at the same time points, because a lot can happen in weeks, let alone months, after a stroke.

Nevertheless, the authors used a novel way to look at the brain pictures, called voxel-based lesion-symptom mapping (VLSM). Well, it's not really novel; it's been around for 15 years or so. Basically, to ascertain the function of a brain region, researchers either get people with a specific brain lesion and then look for a behavioral deficit, or they get a symptom and then look for a brain lesion. Both approaches have distinct advantages but also disadvantages (see Bates et al., 2003). To overcome the disadvantages of these methods, enter the scene VLSM, which is a mathematical/statistical gimmick that allows you to explore the relationship between brain and function without forming preconceived ideas, i.e. without forcing dichotomous categories. They also used voxel-based morphometry (VBM), which is a fancy way of saying they looked to see whether the grey and white matter differed over time in the brains of their subjects.
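For the curious, here is a minimal toy sketch of the VLSM logic (entirely made-up data and a bare t-test per voxel; this is my illustration, not the authors' actual pipeline):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_patients, n_voxels = 77, 1000                 # 77 patients, toy "brain" of 1000 voxels
lesion = rng.integers(0, 2, size=(n_patients, n_voxels))   # 1 = voxel lesioned in that patient
amusia_score = rng.normal(50, 10, n_patients)               # continuous deficit score per patient

t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    damaged = amusia_score[lesion[:, v] == 1]
    spared = amusia_score[lesion[:, v] == 0]
    if len(damaged) > 1 and len(spared) > 1:
        t_map[v], _ = stats.ttest_ind(damaged, spared)

# t_map holds one statistic per voxel; a real VLSM analysis then corrects for the
# thousands of comparisons (permutation testing, FDR) before calling any voxel "crucial".
```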

After much analysis, Sihvonen et al. (2016) conclude that damage to the right hemisphere is more likely to produce amusia, as opposed to aphasia, which is due mainly to damage to the left hemisphere. More specifically,

“damage to the right temporal areas, insula, and putamen forms the crucial neural substrate for acquired amusia after stroke. Persistent amusia is associated with further [grey matter] atrophy in the right superior temporal gyrus (STG) and middle temporal gyrus (MTG), locating more anteriorly for rhythm amusia and more posteriorly for pitch amusia.”

The more we know, the better chances we have to improve treatments for people.


unless you’re left-handed, then things are reversed.

References:

1. Sihvonen AJ, Ripollés P, Leo V, Rodríguez-Fornells A, Soinila S, & Särkämö T. (24 Aug 2016). Neural Basis of Acquired Amusia and Its Recovery after Stroke. Journal of Neuroscience, 36(34):8872-8881. PMID: 27559169, DOI: 10.1523/JNEUROSCI.0709-16.2016. ARTICLE  | FULLTEXT PDF

2. Bates E, Wilson SM, Saygin AP, Dick F, Sereno MI, Knight RT, & Dronkers NF (May 2003). Voxel-based lesion-symptom mapping. Nature Neuroscience, 6(5):448-50. PMID: 12704393, DOI: 10.1038/nn1050. ARTICLE

By Neuronicus, 9 November 2016


Video games and depression

There’s a lot of talk these days about the harm or benefit of playing video games, a lot of time ignoring the issue of what kind of video games we’re talking about.

Merry et al. (2012) designed a game for helping adolescents with depression. The game is called SPARX (Smart, Positive, Active, Realistic, X-factor thoughts) and is based on cognitive behavioral therapy (CBT) principles.

CBT has been proven to be more efficacious than other forms of therapy, like psychoanalysis, psychodynamic, transpersonal and so on, in treating (or at least alleviating) a variety of mental disorders, from depression to anxiety, from substance abuse to eating disorders. Its aim is to identify maladaptive thoughts (the 'cognitive' bit) and behaviors (the 'behavior' bit) and to change those thoughts and behaviors in order to feel better. It is more active and more focused than other therapies, in the sense that during the course of a CBT session, the patient and therapist discuss one problem and tackle it.

SPARX is a simple interactive fantasy game with 7 levels (Cave, Ice, Volcano, Mountain, Swamp, Bridgeland, Canyon) and the purpose is to fight the GNATs (Gloomy Negative Automatic Thoughts) by mastering several techniques, like breathing and progressive relaxation, and acquiring skills, like scheduling and problem solving. You can customize your avatar and you get a guide throughout the game that also assesses your progress and gives you real-life quests, a.k.a. therapeutic homework. If the player does not show the expected improvements after each level, s/he is directed to seek help from a real-life therapist. Luckily, the researchers also employed the help of true game designers, so the game looks at least half-decent and engaging, not the lame, worst-graphics-ever-bleah sort of thing I was kind of expecting.

To see if their game helps with depression, Merry et al. (2012) enrolled 187 adolescents (aged 12-19 years) who sought help for depression in an intervention program; half of the subjects played the game for about 4 – 7 weeks, and the other half did traditional CBT with a qualified therapist for the same amount of time. The patients were assessed for depression at regular intervals before, during and after the therapy, up to 3 months post therapy. The conclusion?

SPARX “was at least as good as treatment as usual in primary healthcare sites in New Zealand” (p. 8)

Not bad for an RPG! The remission rates were higher in the SPARX group than in the treatment-as-usual group. Also, the majority of participants liked the game and would recommend it. Additionally, SPARX was more effective than traditional CBT for the participants who were less depressed, i.e. those with lower scores on the depression scales.
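For the curious, "at least as good as" has a precise statistical meaning in a non-inferiority trial like this one: the confidence interval of the between-group difference must not cross a pre-specified margin. Here is a rough sketch of that check, with invented numbers (not the trial's data or its actual margin):

```python
import numpy as np

def noninferior(new, usual, margin, z=1.96):
    """True if the CI lower bound of (new - usual) stays above -margin
    (here, higher scores mean more improvement)."""
    diff = np.mean(new) - np.mean(usual)
    se = np.sqrt(np.var(new, ddof=1) / len(new) + np.var(usual, ddof=1) / len(usual))
    return diff - z * se > -margin, (diff - z * se, diff + z * se)

rng = np.random.default_rng(1)
sparx_improvement = rng.normal(10.3, 6.0, 94)   # invented improvement scores
usual_improvement = rng.normal(10.0, 6.0, 93)
print(noninferior(sparx_improvement, usual_improvement, margin=2.0))
```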

And now, coming back to my intro point, the fact that this game seems to be beneficial does not mean all of them are. There are studies that show that some games have deleterious effects on the developing brain. In the same vein, the fact that some shoddy company sells games that are supposed to boost your brain function (I always wondered which function…) doesn't mean they are actually good for you. Without research to back up the claims, anybody can say anything and it becomes a "Buyer Beware!" game. They may call it cognitive enhancement, memory boosting or some other brainy catch phrase, but without that research it's nothing but placebo in the best-case scenario. So it gives me hope – and great pleasure – that some real psychologists at a real university developed a video game and then did the necessary research to validate it as a helping tool before marketing it.


Oh, an afterthought: this paper is 4 years old, so I wondered what happened in the meantime – is it on the market or what? On the research databases I couldn't find much, except that it was tested this year on a Dutch population with pretty much similar results. But Wikipedia tells us that it was released in 2013 and is free online for New Zealanders! The game's website says it may become available to other countries as well.

Reference: Merry SN, Stasiak K, Shepherd M, Frampton C, Fleming T, & Lucassen MF. (18 Apr 2012). The effectiveness of SPARX, a computerised self help intervention for adolescents seeking help for depression: randomised controlled non-inferiority trial. The British Medical Journal, 344:e2598. doi: 10.1136/bmj.e2598. PMID: 22517917, PMCID: PMC3330131. ARTICLE | FREE FULLTEXT PDF  | Wikipedia page | Watch the authors talk about the game

By Neuronicus, 15 October 2016

Drink before sleep

Among the many humorous sayings, puns, and jokes that one inevitably encounters on any social medium account, one that was popular this year was about the similarity between putting a 2-year-old to bed and putting your drunk friend to bed, which went like this: they both sing to themselves, request water, mumble and blabber incoherently, do some weird yoga poses, cry, hiccup, and then they pass out. The joke manages to steal a smile only if someone has been through both situations; otherwise it loses its appeal.

Having been exposed to both situations, I thought that while the water request from the drunk friend is a response to the dehydrating effects of alcohol, the water request from the toddler is probably nothing more than a delaying tactic to postpone bedtime. Whether or not there is some truth to my assumption in the case of the toddler, here is a paper to show that there is definitely more to the water request than meets the eye.

Generally, thirst is generated by the hypothalamus when its neurons and neurons from the organum vasculosum of the lamina terminalis (OVLT), a nearby circumventricular organ, sense that the blood volume is too low (hypovolaemia) or the blood is too salty (hyperosmolality), both phenomena indicating a need for water. Ingesting water would bring these indices back to homeostatic values.

More than a decade ago, researchers observed that rodents take a good gulp of water just before going to sleep. This surge was not motivated by thirst, because the mice were not feverish, were not hungry, and their blood was neither too low in volume nor too salty. So why do it then? If the rodents are restricted from drinking this water, they get dehydrated, so obviously the behavior has a function. But it is not motivated by thirst, at least not the way we know it. Huh… The authors call this "anticipatory thirst", because it keeps the animal from becoming dehydrated later on.

Since the behavior occurs with regularity, maybe the neurons that control circadian rhythms have something to do with it. So Gizowski et al. (2016) took a closer look at the activity of clock neurons in the suprachiasmatic nucleus (SCN), a well-known hypothalamic nucleus heavily involved in circadian rhythms. The authors did a lot of work on SCN and OVLT neurons: fluorescent labeling, c-fos expression, anatomical tracing, optogenetics, genetic knockouts, pharmacological manipulations, electrophysiological recordings, and behavioral experiments. All this to come to this conclusion:

SCN neurons release vasopressin and that excites the OVLT neurons via V1a receptors. This is necessary and sufficient to make the animal drink the water, even if it’s not thirsty.

That’s a lot of techniques used in a lot of experiments for only three authors. Ten years ago, you needed only one, maybe two techniques to prove the same point. Either there have been a lot of students and technicians who did not get credit (there isn’t even an Acknowledgements section. EDIT: yes, there is, see the comments below or, if they’re missing, the P.S.) or these three authors are experts in all these techniques. In this day and age, I wouldn’t be surprised by either option. No wonder small universities have difficulty publishing in Big Name journals; they don’t have the resources to compete. And without publishing, no tenure… And without tenure, less research… And thus shall the gap widen.

Musings about workload aside, this is a great paper, shedding light on yet another mysterious behavior and elucidating the mechanism behind it. There's still work to be done though, like answering how accurately the SCN predicts bedtime to activate the drinking behavior. Does it take its cues from light only? Does ambient temperature play a role? And so on. This line of work can help people who work in shifts to prevent certain health problems. Their SCN is out of rhythm, and that can deleteriously influence the activity of a whole slew of organs.

Summary of the doi: 10.1038/nature19756 findings. 1) The light is a cue for the suprachiasmatic nucleus (SCN) that bedtime is near. 2) The SCN vasopressin neurons that project to the organum vasculosum of the lamina terminalis (OVLT) are activated. 3) The OVLT generates the anticipatory thirst. 4) The animal drinks fluids.

Reference: Gizowski C, Zaelzer C, & Bourque CW (28 Sep 2016). Clock-driven vasopressin neurotransmission mediates anticipatory thirst prior to sleep. Nature, 537(7622): 685-688. PMID: 27680940. DOI: 10.1038/nature19756. ARTICLE

By Neuronicus, 5 October 2016

EDIT (12 Oct 2016): P.S. The blog comments are automatically deleted after a period of time. In the case of this post that would be a pity, because I have been fortunate to receive comments from at least one of the authors of the paper: the PI, Dr. Charles Bourque, and – presumably under a pseudonym, though I don't know that for sure – also the first author, Claire Gizowski. So I will include here, in a post scriptum, the main idea of their comments. Here is an excerpt from Dr. Bourque's comment:

“Let me state for the record that Claire accomplished pretty much ALL of the work in this paper (there is a description of who did what at the end of the paper). More importantly, there were no “unthanked” undergraduates, volunteers or other parties that contributed to this work.”

My hat, Ms. Gizowski. It is tipped. To you. Congratulations! With such an impressive work I am sure I will hear about you again and that pretty soon I will blog about Dr. Gizowski.

Painful Pain Paper

There has been much hype over the new paper published in the latest Nature issue which claims to have discovered an opioid analgesic that doesn’t have most of the side effects of morphine. If the claim holds, the authors may have found the Holy Grail of pain research chased by too many for too long (besides being worth billions of dollars to its discoverers).

The drug, called PZM21, was discovered using structure-based drug design. This means that instead of taking a drug that works, say morphine, and then tweaking its molecular structure in various ways and seeing if the resultant drugs work, you take the target of the drug, say the mu-opioid receptor, and design a drug that fits in that slot. The search and design are done initially with sophisticated software and there are many millions of virtual candidates. So it takes a lot of work and ingenuity to select but a few drugs that will be synthesized and tested in live animals.
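To give a flavor of that screening step, here is a caricature of it (the library, the ligand names, and the scoring function below are entirely hypothetical; real campaigns use dedicated docking software against the receptor structure):

```python
import random

random.seed(42)
virtual_library = [f"ligand_{i}" for i in range(100_000)]   # stand-in for millions of candidates

def docking_score(ligand: str) -> float:
    """Stand-in for a real structure-based docking score (lower = better predicted fit)."""
    return random.uniform(-12.0, 0.0)

ranked = sorted(virtual_library, key=docking_score)   # score everything, best first
shortlist = ranked[:25]                               # only a handful go on to synthesis and animal testing
print(shortlist[:5])
```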

Manglik et al. (2016) did just that and they came up with PZM21 which, compared to morphine, is:

1) selective for the mu-opioid receptors (i.e. it doesn’t bind to anything else)
2) produces no respiratory depression (maybe a touch on the opposite side)
3) doesn’t affect locomotion
4) produces less constipation
5) produces long-lasting affective analgesia
6) and has less addictive liability

The Holy Grail, right? Weeell, I have some serious issues with number 5 and, to some extent, number 6 on this list.

Normally, I wouldn’t dissect a paper so thoroughly because, if there is one thing I learned by the end of GradSchool and PostDoc, is that there is no perfect paper out there. Consequently, anyone with scientific training can find issues with absolutely anything published. I once challenged someone to bring me any loved and cherished paper and I would tear it apart; it’s much easier to criticize than to come up with solutions. Probably that’s why everybody hates Reviewer No. 2…

But, for extraordinary claims, you need extraordinary evidence. And the evidence simply does not support number 5 and maybe number 6 above.

Let's start with pain. The authors used 3 tests: hotplate (drop a mouse on a hot plate for 10 sec and see what it does), tail-flick (give an electric shock to the tail and see how fast the mouse flicks its tail) and formalin (inject an inflammatory painful substance in the mouse paw and see what the animal does). They used 3 doses of PZM21 in the hotplate test (10, 20, and 40 mg/kg), 2 doses in the tail-flick test (10 and 20 mg/kg), and 1 dose in the formalin test (20 mg/kg). Why? If you start with a dose-response in a test and want to convince me it works in the other tests, then do a dose-response for those too, so I have something to compare. These tests have been extensively used in pain research and the standard drug used is morphine. Therefore, the literature is clear on how different doses of morphine work in these tests. I need your dose-responses for your new drug to be able to see how it measures up, since you claim it is "more efficacious than morphine". If you don't want to convince me there is a dose-response effect, that's fine too, I'll frown a little, but it's your choice. However, then choose a dose and stick with it! Otherwise I cannot compare the behaviors across tests, rendering one or the other test meaningless. If you're wondering, they used only one dose of morphine in all the tests, except the hotplate, where they used two.
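To make the asymmetry explicit, here is the dose coverage as described above, laid out as data (PZM21 in mg/kg; for morphine only the number of doses is given in the paragraph, so that is all I list):

```python
# dose coverage per test, as summarized in the paragraph above
pzm21_doses_mg_per_kg = {"hotplate": [10, 20, 40], "tail_flick": [10, 20], "formalin": [20]}
morphine_number_of_doses = {"hotplate": 2, "tail_flick": 1, "formalin": 1}
```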

Another thing, also related to doses. The authors found something really odd: PZM21 works (meaning produces analgesia) in the hotplate test, but not the tail-flick test. This is truly amazing because no opiate I know of can make such a clear-cut distinction between those two tests. Buuuuut, and here is a big 'BUT', they did not test their highest dose (40 mg/kg) in the tail-flick test! Why? I'll tell you why, because I am oh sooo familiar with this argument. It goes like this:

Reviewer: Why didn’t you use the same doses in all your 3 pain tests?

Author: The middle and highest doses have similar effects in the hotplate test, ok? So it doesn’t matter which one of these doses I’ll use in the tail-flick test.

Reviewer: Yeah, right, but, you have no proof that the effects of the two doses are indistinguishable because you don’t report any stats on them! Besides, even so, that argument applies only when a) you have ceiling effects (not the case here, your morphine hit it, at any rate) and b) the drug has the expected effects on both tests and thus you have some logical rationale behind it. Which is not the case here, again: your point is that the drug DOESN’T produce analgesia in the tail-flick test and yet you don’t wanna try its HIGHEST dose… REJECT AND RESUBMIT! Awesome drug discovery, by the way!

So how come the paper passed the reviewers?! Perhaps the fact that two of the reviewers are long-term publishing co-authors from the same university had something to do with it; you know, the same views predispose people to the same biases and so on… But can you do that? I mean, have reviewers for Nature from the same department for the same paper?

Alrighty then… let's move on to the stats. Or rather not. Because there aren't any for the hotplate or tail-flick tests! Now, I know all about the "freedom from the tyranny of p" movement (that is: report only the means, standard errors of the mean, and confidence intervals and let the reader judge the data), and about the fact that the average scientist today needs to know 100-fold more stats than his predecessors 20 years ago (although some biologists and chemists seem to be excused from this – things either turn color or not, either are there or not, etc.), and about the fact that you cannot get away with only one experiment published these days, but need a lot of them, so you have to do a lot of corrections to your stats so you don't commit a Type 1 error. I know all about that, but just like with the doses, choose one way or another and stick with it. Because there are ANOVAs run for the formalin test, the respiration, constipation, locomotion, and conditioned place preference tests, but none for the hotplate or tail-flick! I am also aware that to be published in Science or Nature you have to strip your work and wording to the bare minimum because of the insane word-count limits, but you have free rein in the Supplementals. And I combed through those and there are no stats there either. Nor are there any power analyses… So, what's going on here? Remember, the authors didn't test the highest dose in the tail-flick test because – presumably – the highest and intermediate doses have indistinguishable effects, but where are the stats to prove it?
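For illustration only, the kind of analysis I am saying is missing for the hotplate and tail-flick data would be a simple one-way ANOVA across dose groups, something like this sketch with invented latencies:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
latencies = {                                   # withdrawal latencies in seconds, invented
    "vehicle":  rng.normal(4.0, 1.0, 10),
    "10 mg/kg": rng.normal(6.0, 1.2, 10),
    "20 mg/kg": rng.normal(8.0, 1.2, 10),
    "40 mg/kg": rng.normal(8.5, 1.2, 10),
}
F, p = stats.f_oneway(*latencies.values())
print(f"F = {F:.2f}, p = {p:.4f}")
# a post-hoc comparison of the 20 vs 40 mg/kg groups would then back up (or not)
# the claim that the two doses are indistinguishable
```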

And now the thing that really, really bothered me: the claim that PZM21 takes away the affective dimension of pain but not the sensory one. Pain is a complex experience that, depending on your favourite pain researcher, has at least 2 dimensions: the sensory (also called 'reflexive' because it is the immediate response to the noxious stimulation that makes you retract by reflex the limb from whatever produces the tissue damage) and the affective (also called 'motivational' because it makes the pain unpleasant and motivates you to get away from whatever caused it and seek alleviation and recovery). The first aspect of pain, the sensory, is relatively easy to measure, since you look at the limb withdrawal (or tail withdrawal, in the case of animals with a prolonged spinal column). By contrast, the affective aspect is very hard to measure. In humans, you can ask them how unpleasant it is (and even those reports are unreliable), but how do you do it with animals? Well, you go back to humans and see what they do. Humans scream "Ouch!" or swear when they get hurt (so you can measure vocalizations in animals), or humans avoid places in which they got hurt because they remember the unpleasant pain (so you do a test called Conditioned Place Avoidance in animals, although if you get a drug that shows positive results in this test, like morphine, you don't know if you blocked the memory of unpleasantness or the feeling of unpleasantness itself, but that's a different can of worms). The authors did not use any of these tests, yet they claim that PZM21 takes away the unpleasantness of pain, i.e. is an affective analgesic!

What they did was this: they looked at the behaviors the animal displayed on the hotplate and divided them into two categories: reflexive (the lifting of the paw) and affective (the licking of the paw and the jumping). Now, there are several issues with this dichotomy, and I'm not even going to go there; I'll just say that there are prominent pain researchers who will scream at the top of their lungs that the so-called affective behaviors from the hotplate test cannot be indexes of pain affect, because pain affect requires forebrain structures and yet these behaviors persist in the decerebrated rodent, including the jumping. Anyway, leaving aside the theoretical debate about what those behaviors really mean, there still is the problem of the jumpers: namely, the authors excluded the mice that tried to jump off the hotplate from the analysis when evaluating the potency of PZM21, but then left them in when comparing the two types of analgesia, because jumping is a sign of escaping, an emotionally-valenced behavior! Isn't this the same test?! Seriously? Why are you using two different groups of mice and leaving the impression that it is only one? And oh, yeah, they used only the middle dose for the affective evaluation, when they used all three doses for potency…. And I'm not even gonna ask why they used the highest dose in the formalin test… but only for the normal mice; the knockouts in the same test got the middle dose! So we're back to comparing pears with apples again!

Next (and last, I promise, this rant is way too long already), the non-addictive claim. The authors used the conditioned place preference paradigm, an old and reliable method to test drug likeability. The idea is that you have a box with 2 chambers, X and Y. Give the animal saline in chamber X and let it stay there for some time. The next day, you give the animal the drug and confine it in chamber Y. Do this a few times and on the test day you let the animal explore both chambers. If it stays more in chamber Y, then it liked the drug, much like humans behave by seeking a place in which they felt good and avoiding places in which they felt bad. All well and good, only that it is standard practice in this test to counterbalance the days and the chambers! I don't know about the chambers, because they don't say, but the days were not counterbalanced. I know, it's a petty little thing for me to bring up, but remember the saying about extraordinary claims… so I expect flawless methods. I would have also liked to see a much more convincing test of addictive liability, like self-administration, but that will be done later, if the drug holds, I hope. Thankfully, unlike with the affective analgesia claims, the authors have been more restrained in their verbiage about addiction, much to their credit (and I have a nasty suspicion as to why).
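The usual readout of this test is simple enough to write down. A minimal sketch of how a conditioned place preference is typically scored (my illustration with invented numbers, not the paper's exact analysis):

```python
def cpp_score(time_in_drug_chamber_s: float, time_in_saline_chamber_s: float) -> float:
    """Positive values = preference for the drug-paired chamber (in seconds)."""
    return time_in_drug_chamber_s - time_in_saline_chamber_s

# invented test-day numbers: a rewarding drug like morphine typically gives a clearly positive score
print(cpp_score(time_in_drug_chamber_s=540.0, time_in_saline_chamber_s=320.0))   # 220.0
```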

I do sincerely think the drug shows decent promise as a painkiller. Kudos for discovering it! But, seriously, fellows, the behavioral portion of the paper could use some improvements.

Ok, rant over.

EDIT (Aug 25, 2016): I forgot to mention something, and that is the competing financial interests declared for this paper: some of its authors have already filed a provisional patent for PZM21 or are founders or consultants for Epiodyne (a company that wants to develop novel analgesics). Normally, that wouldn't worry me unduly – people are allowed to make a buck from their discoveries (although it is billions in this case, and we could get into the age-old capitalism debate about whether it is moral to make billions on the suffering of other people, but that's a different story). Anyway, combine the financial interests with the poor behavioral tests and you get a very shoddy thing indeed.

Reference: Manglik A, Lin H, Aryal DK, McCorvy JD, Dengler D, Corder G, Levit A, Kling RC, Bernat V, Hübner H, Huang XP, Sassano MF, Giguère PM, Löber S, Da Duan, Scherrer G, Kobilka BK, Gmeiner P, Roth BL, & Shoichet BK (Epub 17 Aug 2016). Structure-based discovery of opioid analgesics with reduced side effects. Nature, 1-6. PMID: 27533032, DOI: 10.1038/nature19112. ARTICLE 

By Neuronicus, 21 August 2016

The FIRSTS: Theory of Mind in non-humans (1978)

Although any farmer or pet owner throughout the ages would probably agree that animals can understand the intentions of their owners, not until 1978 was this knowledge scientifically proven.

Premack & Woodruff (1978) performed a very simple experiment in which they showed videos to a female adult chimpanzee named Sarah involving humans facing various problems, from simple (can't reach a banana) to complex (can't get out of the cage). Then, the chimp was shown pictures of the human with the tool that solved the problem (a stick to reach the banana, a key for the cage) along with pictures where the human was performing actions that were not conducive to solving his predicament. The experimenter left the room while the chimp made her choice. When she did, she rang a bell to summon the experimenter back into the room, who then examined the chimp's choice and told the chimp whether her choice was right or wrong. Regardless of the choice, the chimp was awarded her favorite food. The chimp's choices were almost always correct when the actor was her favourite trainer, but not so much when the actor was a person she disliked.

Because “no single experiment can be all things to all objections, but the proper combination of results from [more] experiments could decide the issue nicely” (p. 518), the researchers did some more experiments which were variations of the first one designed to figure out what the chimp was thinking. The authors go on next to discuss their findings at length in the light of two dominant theories of the time, mentalism and behaviorism, ruling in favor of the former.

Of course, the paper has some methodological flaws that would not pass the rigors of today’s reviewers. That’s why it has been replicated multiple times in more refined ways. Nor is the distinction between behaviorism and cognitivism a valid one anymore, things being found out to be, as usual, more complex and intertwined than that. Thirty years later, the consensus was that chimps do indeed have a theory of mind in that they understand intentions of others, but they lack understanding of false beliefs (Call & Tomasello, 2008).


References:

1. Premack D & Woodruff G (Dec. 1978). Does the chimpanzee have a theory of mind? The Behavioral and Brain Sciences, 1 (4): 515-526. DOI: 10.1017/S0140525X00076512. ARTICLE

2. Call J & Tomasello M (May 2008). Does the chimpanzee have a theory of mind? 30 years later. Trends in Cognitive Sciences, 12(5): 187-192. PMID: 18424224 DOI: 10.1016/j.tics.2008.02.010. ARTICLE  | FULLTEXT PDF

By Neuronicus, 20 August 2016

Can you tickle yourself?

As I said before, with so many science outlets out there, it's hard to find something new and interesting to cover that hasn't been covered already. Admittedly, sometimes a new paper comes out that is so funny or interesting that I too fall in line with the rest of them and cover it. But, most of the time, I try to bring you something that you won't find reported by other science journalists. So, I'm sacrificing novelty for originality by choosing something from my absolutely huge article folder (about 20 000 papers).

And here is the gem for today, titled enticingly "Why can't you tickle yourself?". Blakemore, Wolpert & Frith (2000) review several papers on the subject, including some of their own, and arrive at the conclusion that the reason you can't tickle yourself is that you expect it. Let me explain: when you make a movement that results in a sensation, you have a pretty accurate expectation of how it is going to feel. This expectation then dampens the sensation, a process that probably evolved to let you focus on more relevant things in the environment than on what you're doing to yourself (don't let your mind go all dirty now, ok?).

Mechanistically speaking, it goes like this: when you move your arm to tickle your foot, a copy of the motor command you gave to the arm (the authors call this the "efference copy") goes to a 'predictor' region of the brain (the authors believe this is the cerebellum) that generates an expectation (see Fig. 1). Once the movement has been completed, the actual sensation is compared to the expected one. If there is a discrepancy, you get tickled; if not, not so much. But, you might say, even when someone else is going to tickle me I have a pretty good idea what to expect, so where's the discrepancy? Why do I still get tickled when I expect it? Because you can't fool your brain that easily. The brain then says: "Alright, alright, we expect tickling. But do tell me this, where is that motor command? Hm? I didn't get any!" So here is your discrepancy: when someone tickles you, there is the sensation, but no motor command; signals 1 and 2 from the diagram are missing.
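Here is a toy version of that comparator logic (my own illustration of the model with made-up numbers, not the authors' formalism):

```python
def perceived_tickle(actual_touch: float, efference_copy_sent: bool,
                     prediction_gain: float = 0.9) -> float:
    """Subtract the forward model's prediction from the actual sensation;
    only the residual is felt as tickle."""
    predicted = prediction_gain * actual_touch if efference_copy_sent else 0.0
    return max(actual_touch - predicted, 0.0)

print(perceived_tickle(1.0, efference_copy_sent=True))    # self-touch: ~0.1, barely felt
print(perceived_tickle(1.0, efference_copy_sent=False))   # touch by someone else: 1.0, full tickle
```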

Fig. 1. My take on the tickling mechanism after Blakemore, Wolpert & Frith (2000). Credits. Picture: Sobotta 1909, Diagram: Neuronicus 2016. Data: Blakemore, Wolpert & Frith (2002). Overall: Public Domain

Likewise, when someone tickles you with your own hand, there is an attenuation of the sensation, but it does not disappear completely, because there is some registration in the brain of the movement of your own arm, even if the motor command was not initiated by you. So you get tickled just a little bit. The brain is no fool: it is aware of who has done what and with whose hands (your dirty mind thought that, I didn't say it!).

This mechanism of comparing sensation with movement of self and others appears to be impaired in schizophrenia. So when these patients say "I hear some voices and I can't shut them up" or "My hand moved of its own accord, I had no control over it", it may be that they are not aware of initiating those movements; the self-monitoring mechanism is all wacky. Supporting this hypothesis, the authors conducted an fMRI experiment (Reference 2) in which they showed that the somatosensory and the anterior cingulate cortices show reduced activation when attempting to self-tickle as opposed to being tickled by the experimenter (please, stop that line of thinking…). Correspondingly, the behavioral portion of the experiment showed that the schizophrenics can tickle themselves. Go figure!


Reference 1: Blakemore SJ, Wolpert D, & Frith C (3 Aug 2000). Why can’t you tickle yourself? Neuroreport, 11(11):R11-6. PMID: 10943682. ARTICLE FULLTEXT

Reference 2: Blakemore SJ, Smith J, Steel R, Johnstone CE, & Frith CD (Sep 2000, Epub 17 October 2000). The perception of self-produced sensory stimuli in patients with auditory hallucinations and passivity experiences: evidence for a breakdown in self-monitoring. Psychological Medicine, 30(5):1131-1139. PMID: 12027049. ARTICLE

By Neuronicus, 7 August 2016

Transcranial direct current stimulation & cognitive enhancement

There's so much research out there… So much that some time ago I learned that in science, as probably in other fields too, one has only to choose a side of an argument and then, provided that s/he has some good academic search engine skills and institutional access to journals, get the articles that support that side. Granted, that works for relatively small questions restricted to narrow domains, like "is that brain structure involved in x" or something like that; I doubt you would be able to find any paper that invalidates theories like gravity or the central dogma of molecular biology (DNA to RNA to protein).

If you're a scientist trying to answer a question, you'll probably comb through some dozens of papers and form an opinion of your own after weeding out the papers with small sample sizes, the ones with shoddy methodology, or simply the bad ones (yes, they do exist; even scientists are people and hence prone to mistakes). And if you're not a scientist, or the question you're trying to find an answer for is not from your field, then you'll probably go for reviews or meta-analyses.

Meta-analyses are studies that look at several papers (dozens or hundreds), pool their data together and then apply some complicated statistics to see the overall results. One such meta-analysis concerns the benefits, if any, of transcranial direct current stimulation (tDCS) on working memory (WM) in healthy people.
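That "pool the data together" step sounds mysterious, but at its simplest it is just inverse-variance weighting. A bare-bones sketch with invented study effect sizes (not the numbers from any of the meta-analyses discussed below):

```python
import numpy as np

effect_sizes = np.array([0.30, 0.05, 0.18, -0.02])   # e.g. Hedges' g from four studies (invented)
standard_errors = np.array([0.12, 0.20, 0.15, 0.10])

weights = 1.0 / standard_errors**2                    # fixed-effect, inverse-variance weights
pooled = np.sum(weights * effect_sizes) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"pooled effect = {pooled:.2f} ± {1.96 * pooled_se:.2f} (95% CI)")
# real meta-analyses also check heterogeneity (Q, I²), publication bias, and often
# use random-effects models instead of this fixed-effect version
```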

tDCS is a method of applying electrical current through some electrodes to your neurons to change how they work, thus changing some brain functions. It is similar to repetitive transcranial magnetic stimulation (rTMS), only in the latter case the change in neuronal activity is due to the application of a magnetic field.

These methods are viewed not only as possible treatments for a variety of disorders, but also as cognitive enhancement tools – and not only by researchers, but also by various companies that sell the relatively inexpensive equipment to gamers and others. But does tDCS work in the first place?


Mancuso et al. (2016) say that there have been 3 recent meta-analyses done on this issue and they found that “the effects [of tDCS on working memory in healthy volunteers] are reliable though small (Hill et al., 2016), partial (Brunoni & Vanderhasselt, 2014), or nonexistent (Horvath et al., 2015)” (p. 2). But they say these studies are somewhat flawed and that’s why they conducted their own meta-analysis, which concludes that “the true enhancement potential of tDCS for WM remains somewhat uncertain” (p.19). Maybe it works a little bit if used during the training phase of a working memory task, like n-back, and even then that’s a maybe…

Boring, you may say. I'll grant you that. So… all that work and it revealed virtually nothing new! I'll grant you that too. But what this meta-analysis brings that is new, besides some interesting statistics, like controlling for publication bias, is a nice discussion as to why they didn't find much, exploring possible causes, like the small sample and effect sizes, which seem to plague many behavioral studies. Another explanation which, to tell you the truth, the authors do not seem to be too enamored with is that, maybe, just maybe, simply, tDCS doesn't have any effect on working memory, period.

Besides, papers with seemingly boring findings do not catch the media eye, so I had to give it a little attention, didn’t I 😉 ?

Reference: Mancuso LE, Ilieva IP, Hamilton RH, & Farah MJ. (Epub 7 Apr 2016, Aug 2016) Does Transcranial Direct Current Stimulation Improve Healthy Working Memory?: A Meta-analytic Review. Journal of Cognitive Neuroscience, 28(8):1063-89. PMID: 27054400, DOI: 10.1162/jocn_a_00956. ARTICLE

By Neuronicus, 2 August 2016

Mu suppression and the mirror neurons

A few decades ago, Italian researchers from the University of Parma discovered some neurons in the monkey which were active not only when the monkey was performing an action, but also when it was watching the same action performed by someone else. This kind of neuron, or rather this particular neuronal behavior, was subsequently identified in humans, scattered mainly within the frontal and parietal cortices (front and top of your head), and called the mirror neuron system (MNS). Its role is to understand the intentions of others and thus facilitate learning. Mind you, there are, as there should be in any healthy, vigorous scientific endeavor, those who challenge this role and even the existence of the MNS.

Hobson & Bishop (2016) do not question the existence of the mirror neurons or their roles, but something else. You see, proper understanding of the intentions, actions and emotions of others is severely impaired in autism or some schizophrenias. Correspondingly, there have been reports saying that MNS function is abnormal in these disorders. So if we can manipulate the neurons that help us understand others, then we may be able to study those neurons better, and – who knows? – maybe even 'switch them on' and 'off' when needed (Ha! That's a scary thought!).

Human EEG waves (from Wikipedia, under CC BY-SA 3.0 license)

Anyway, previous work said that recording a weak Mu frequency in the brain regions with mirror neurons shows that these neurons are active. This frequency (between 8-13 Hz) is recorded through electroencephalography (EEG). The assumption is as follows: when resting, neurons fire synchronously; when busy, they each fire to their own rhythm, so they desynchronize, which leads to a reduction in the Mu intensity.

All well and good, but there is a problem. There is another frequency that overlaps with the Mu frequency, and that is the Alpha band. Alpha activity is highest when a person is awake with eyes closed, but diminishes when the person is drowsy or, importantly, when making a mental effort, like paying great attention to something. So, if I see a weak Mu/Alpha frequency when the subject is watching someone grabbing a pencil, is that because the mirror neurons are active or because he's sleepy? There are a few gimmicks to disentangle the two, from setting up the experiment in such a way that it requires the same attentional demand across tasks to carefully localizing the origin of the two waves (Mu is said to arise from sensorimotor regions, whereas Alpha comes from more posterior regions).
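To make the measurement concrete: mu suppression is typically quantified as band power in the 8-13 Hz range over sensorimotor electrodes, expressed relative to a baseline condition. A rough sketch of that calculation (synthetic noise instead of real EEG, and not the authors' exact pipeline):

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 250                                          # sampling rate (Hz)
rng = np.random.default_rng(7)
baseline_eeg = rng.normal(0, 1, fs * 30)          # 30 s of fake signal from a sensorimotor electrode
observation_eeg = rng.normal(0, 1, fs * 30)       # same electrode while watching an action

def band_power(signal, fs, band=(8, 13)):
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return trapezoid(psd[mask], freqs[mask])

mu_suppression = np.log(band_power(observation_eeg, fs) / band_power(baseline_eeg, fs))
print(mu_suppression)   # negative = suppression; the catch is that the same index computed
                        # over posterior (alpha) sites can move for purely attentional reasons
```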

But Hobson & Bishop (2016) argue that this disentangling is more difficult than previously thought by carrying out a series of experiments where they varied the baseline, in such a way that some were more attentionally demanding than others. After carefully analyzing various EEG waves and electrodes positions in these conditions, they conclude that “mu suppression can be used to index the human MNS, but the effect is weak and unreliable and easily confounded with alpha suppression“.


What makes this paper interesting to me, besides its empirical findings, is the way the experiment was conducted and published. This is a true hypothesis-driven study, following the scientific method step by step, a credit to us all scientists. In other words, a rare gem. A lot of other papers try to make a pretty story from crappy data or weave some story around the results as if that's what they went for all along, when in fact they did a bunch of stuff and chose what looked good on paper.

Let me explain. As a consequence of the incredible pressure put on researchers to publish or perish (which, believe me, is more than just a metaphor, your livelihood and career depend on it), there is an alarming increase in bad papers, which means

  • papers with inappropriate statistical analyses (p threshold curse, lack of multiple comparisons corrections, like the one brilliantly exposed here),
  • papers with huge databases in which some correlations are bound to appear by chance alone and are presented as meaningful (p-hacking or data fishing),
  • papers without enough data to make a meaningful conclusion (lack of statistical power),
  • papers that report only good-looking results (only positive results required by journals),
  • papers that seek only to provide data to reinforce previously held beliefs (confirmation bias)
  • and so on.

For these reasons (and more), there is a high rate of rejection of papers submitted to journals (about 90%), which means more than just a lack of publication in a good journal; it means wasted time, money and resources, shattered career prospects for the grad students who did the experiments and threatened job security for everybody involved, not to mention a promotion of distrust of science and a disservice to the scientific endeavor in general. So some journals, like Cortex, are moving toward a system called Registered Report, which asks for the rationale and the plan of the experiment before this is conducted, which should protect against many of the above-mentioned plagues. If the plan is approved, the chances to get the results published in that journal are 90%.

This is one of those Registered Report papers. Good for you, Hobson & Bishop!

REFERENCE: Hobson HM & Bishop DVM (Epub April 2016). Mu suppression – A good measure of the human mirror neuron system?. Cortex, doi: 10.1016/j.cortex.2016.03.019 ARTICLE | FREE FULLTEXT PDF | RAW DATA

By Neuronicus, 14 July 2016

Fructose bad effects reversed by DHA, an omega-3 fatty acid

Despite alarm signals raised by various groups and organizations regarding the dangers of the presence of sugars – particularly fructose derived from corn syrup – in almost every food on the market, only in the past decade has there been some serious evidence against high consumption of fructose.

A bitter-sweet (sic!) paper comes from Meng et al. (2016) who, in addition to showing some bad things that fructose does to the brain and body, also show some rescue from its deleterious effects by DHA (docosahexaenoic acid), an omega-3 fatty acid.

The authors had 3 groups of rodents: one group got fructose in their water for 6 weeks, another group got fructose and DHA, and another group got their normal chow. The amount of fructose was calculated to be ecologically valid, meaning that they fed the animals the equivalent of a 1-litre soda bottle per day (130 g of sugar for a 60 kg human).
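The back-of-the-envelope behind "ecologically valid" is just body-weight scaling; this is only the arithmetic implied by the numbers above, and the paper's exact conversion may differ:

```python
human_sugar_g = 130          # roughly the sugar in a 1-litre soda bottle
human_weight_kg = 60
dose_g_per_kg_per_day = human_sugar_g / human_weight_kg
print(f"{dose_g_per_kg_per_day:.2f} g of sugar per kg of body weight per day")   # ≈ 2.17
```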

The rats that got fructose had worse learning and memory performance at a maze test compared to the other two groups.

The rats that got fructose had altered gene expression in two brain areas: hypothalamus (involved in metabolism) and hippocampus (involved in learning and memory) compared to the other two groups.

The rats that got fructose had bad metabolic changes that are precursors for Type 2 diabetes, obesity and other metabolic disorders (high blood glucose, triglycerides, insulin, and insulin resistance index) compared to the other two groups.


The genetic analyses that the researchers did (sequencing the RNA and analyzing the DNA methylation) revealed a whole slew of genes that had been affected by the fructose treatment. So, they did some computer work that involved Bayesian modeling and gene library searching and selected two genes (Bgn and Fmod), out of almost a thousand possible candidates, that seemed to be the drivers of these changes. Then, they engineered mice that lacked these genes. The resultant mice had the same metabolic changes as the rats that got fructose, but… their learning and memory was even better than that of the normals? I must have missed something here. EDIT: Well… yes and no. Please read the comment below from the Principal Investigator of the study.

It is an ok paper, done by the collaboration of 7 laboratories from 3 countries. But there are a few things that bother me, as a neuroscientist, about it. First is the behavior of the genetic knock-outs. Do they really learn faster? The behavioral results are not addressed in the discussion. Granted, a genetic knockout deletes that gene everywhere in the brain and in the body, whereas the genetic alterations induced by fructose are almost certainly location-specific.

Which brings me to the second bother: nowhere in the paper (including the supplemental materials, yes, I went through those) are there any brain pictures or diagrams or anything that can tell us which nuclei of the hypothalamus the samples came from. The hypothalamus is a relatively small structure with drastically different functional nuclei very close to one another. For example, the medial preoptic nucleus that deals with sexual hormones is just above the suprachiasmatic nucleus that deals with circadian rhythms, and near the preoptic is the anterior nucleus that deals mainly with thermoregulation. The nuclei that deal with hunger and satiety (the lateral and the ventromedial nucleus, respectively) are located in different parts of the hypothalamus. In short, it would matter very much where they got their samples from, because the transcriptome and methylome would differ substantially from nucleus to nucleus. The hippocampus is not as complicated as that, but it also has areas with specialized functions. So maybe they messed up the identification of the two genes Bgn and Fmod as drivers of the changes; after all, they found almost 1 000 genes altered by fructose. And that mess-up might have derived from their blind hypothalamic and hippocampal sampling. EDIT: They didn't mess up, per se. Turns out there were technical difficulties in extracting enough nucleic acids from specific parts of the hypothalamus for analyses. I told you them nuclei are small…

Anyway, the good news comes from the first experiment, where DHA reverses the bad effects of fructose. Yeay! As a side note, the fructose from corn syrup is metabolized differently than the fructose from fruits. So you are far better off consuming the equivalent amount of fructose from a litre of soda in fruits. And DHA comes either manufactured from algae or extracted from cold-water oceanic fish oils (but not farmed fish, apparently).

If anybody who read the paper has some info that can help clarify my "bothers", please do so in the Comments section below. The other media outlets covering this paper do not mention anything about the knockouts. Thanks! EDIT: The last author of the paper, Dr. Yang, was very kind and clarified a bit of my "bothers" in the Comments section. Thanks again!

Reference: Meng Q, Ying Z, Noble E, Zhao Y, Agrawal R, Mikhail A, Zhuang Y, Tyagi E, Zhang Q, Lee J-H, Morselli M, Orozco L, Guo W, Kilts TM, Zhu J, Zhang B, Pellegrini M, Xiao X, Young MF, Gomez-Pinilla F, Yang X (2016). Systems Nutrigenomics Reveals Brain Gene Networks Linking Metabolic and Brain Disorders. EBioMedicine, doi: 10.1016/j.ebiom.2016.04.008. Article | FREE fulltext PDF | Supplementals | Science Daily cover | NeuroscienceNews cover

By Neuronicus, 24 April 2016

Eating high-fat dairy may lower your risk of being overweight

Many people buy low-fat dairy, like 2% milk, in the hope that ingesting less fat means they will get less fat.

Contrary to this popular belief, a new study found that consumption of high-fat dairy lowers the risk of weight gain by 8% in middle-aged and elderly women.

Rautiainen et al. (2016) studied 18 438 women over 45 years old who did not have cancer, diabetes or cardiovascular diseases. They collected data on the women’s weight, eating habits, smoking, alcohol use, physical activity, medical history, hormone use, and vitamin intake for  8 to 17 years. “Total dairy product intake was calculated by summing intake of low-fat dairy products (skim and low-fat milk, sherbet, yogurt, and cottage and ricotta cheeses) and high-fat dairy products (whole milk, cream, sour cream, ice cream, cream cheese, other cheese, and butter)” (p. 980).

At the beginning of the study, all women included in the analyses were normal weight.

Over the course of the study, all women gained some weight, probably as a result of normal aging.

Women who ate more dairy gained less weight than women who didn’t. This finding is due to the high-fat dairy intake; in other words, women who ate high-fat dairy gained less weight compared to the women who consumed low-fat dairy. Skimmed milk seemed to be the worst for weight gain compared to low-fat yogurt.

I did not notice any speculation as to why this may be the case, so I’ll offer one: maybe the people who eat high-fat dairy get more calories from the same amount of food so maybe they eat less overall.


Reference: Rautiainen S, Wang L, Lee IM, Manson JE, Buring JE, & Sesso HD (Apr 2016, Epub 24 Feb 2016). Dairy consumption in association with weight change and risk of becoming overweight or obese in middle-aged and older women: a prospective cohort study. The American Journal of Clinical Nutrition, 103(4): 979-988. doi: 10.3945/ajcn.115.118406. Article | FREE FULLTEXT PDF | SuppData

By Neuronicus, 7 April 2016

Cats and uncontrollable bursts of rage in humans

 

That many domestic cats carry the parasite Toxoplasma gondii is no news. Nor is the fact that 30-50% of the global population is infected with it, mainly as a result of contact with cat feces.

The news is that individuals with toxoplasmosis are a lot more likely to have episodes of uncontrollable rage. It was previously known that toxoplasmosis is associated with some psychological disturbances, like personality changes or cognitive impairments. In this new longitudinal study (that means a study that spanned more than a decade) published three days ago, Coccaro et al. (2016) tested 358 adults with or without psychiatric disorders for toxoplasmosis. They also submitted the subjects to a battery of psychological tests for anxiety, impulsivity, aggression, depression, and suicidal behavior.

The results showed that all the subjects who were infected with T. gondii had higher scores on aggression, regardless of their mental status. Among the people with toxoplasmosis, the aggression scores were highest in the patients previously diagnosed with intermittent explosive disorder, a little lower in patients with non-aggressive psychiatric disorders, and finally lower (but still significantly higher than in non-infected people) in healthy people.

The authors are adamant in pointing out that this is a correlational study, therefore no direction of causality can be inferred. So don't kick out your felines just yet. However, as the CDC points out, a little more care when changing the cat litter or a little more vigorous washing of the kitchen counters would not hurt anybody and may protect against T. gondii infection.


Reference: Coccaro EF, Lee R, Groer MW, Can A, Coussons-Read M, & Postolache TT (23 March 2016). Toxoplasma gondii Infection: Relationship With Aggression in Psychiatric Subjects. The Journal of Clinical Psychiatry, 77(3): 334-341. doi: 10.4088/JCP.14m09621. Article Abstract | FREE Full Text | The Guardian cover

By Neuronicus, 26 March 2016

Intracranial recordings in human orbitofrontal cortex

How reward is processed in the brain has been of great interest to neuroscience because of the relevance of pleasure (or the lack of it) to a plethora of disorders, from addiction to depression. Among the cortical areas (that is, the surface of the brain), the structure most involved in reward processing is the orbitofrontal cortex (OFC). Most of the knowledge about the human OFC comes from patients with lesions or from imaging studies. Now, for the first time, we have insights about how and when the OFC processes reward from a group of scientists that studied it up close and personal, by recording directly from those neurons in the living, awake, behaving human.

Li et al. (2016) gained access to six patients who had electrodes implanted to monitor their brain activity before they went into surgery for epilepsy. All the patients' epilepsy foci were elsewhere in the brain, so the authors figured the overall function of the OFC was relatively intact.

While recording directly from the OFC, the patients performed a probabilistic monetary reward task: on a screen, 5 visually different slot machines appeared and each machine had a different probability of winning 20 Euros (0%, 25%, 50%, 75% and 100% chance), a fact that was not told to the patients. The patients were asked to press a button if a particular slot machine was more likely to give money. Then they would use the slot machine and the outcome (win 20 or 0 Euros) would appear on the screen. The patients quickly figured out which slot machine was which, meaning they 'guessed' correctly the probability of being rewarded or not after only 1 to 4 trials (generally, learning is defined in behavioral studies as > 80% correct responses). The researchers also timed the patients during every part of the task.

Not surprisingly, the subjects spent more time deciding whether or not the 50%-chance slot machine was a winner than they did for all the other 4 possibilities. In other words, the riskier the choice, the slower the reaction time to make that choice.
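A quick worked example shows why the 50% machine is the "riskiest" one: expected value grows linearly with the win probability, but the outcome variance (one common way to operationalise risk) peaks at p = 0.5. Toy numbers matching the task description, not the paper's analysis:

```python
payoff_eur = 20.0
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    expected_value = p * payoff_eur             # what the machine pays on average
    variance = payoff_eur**2 * p * (1 - p)      # uncertainty of the outcome, maximal at p = 0.5
    print(f"p = {p:4.2f}   EV = {expected_value:5.1f} EUR   variance = {variance:6.1f}")
```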

The design of the task allowed the researchers to observe 3 phases, each linked with a different signal in the OFC:

1) the expected value phase, where the subjects saw the slot machine and made their judgement. The corresponding signal showed an increase in the neurons' firing about 400 ms after the slot machine appeared on the screen, in both medial and lateral OFC.

2) the risk or uncertainty phase, when subjects were waiting for the slot machine to stop its spinners and show whether they won or not (1000-1500 ms). They called it the risk phase because both medial and lateral OFC had the highest responses when the riskiest probability, i.e. the 50% chance, was presented. Unexpectedly, the OFC did not distinguish between the winning and the non-winning outcomes at this phase.

3) the experienced value or outcome phase, when the subjects found out whether they won or not. Only the lateral OFC responded during this phase, that is, immediately upon finding out whether the action was rewarded or not.

For the professional interested in precise anatomy, the article provides a nicely detailed diagram with the locations of the electrodes in Fig. 6.

The paper is also covered for the neuroscientists' interest (that is, in full scientific jargon) by Kringelbach, a prominent neuroscientist mostly known for his work in affective neuroscience and the OFC, in the same journal. One of the reasons I also covered this paper is that both its full text and Kringelbach's commentary are behind a paywall, so I am giving you a preview of the paper in case you don't have access to it.


Reference: Li Y, Vanni-Mercier G, Isnard J, Mauguière F & Dreher J-C (1 Apr 2016, Epub 25 Jan 2016). The neural dynamics of reward value and risk coding in the human orbitofrontal cortex. Brain, 139(4):1295-1309. DOI: http://dx.doi.org/10.1093/brain/awv409. Article

By Neuronicus, 25 March 2016

Now, isn’t that sweet?


When I opened one of my social media pages today, I saw a message from a friend of mine urging people not to believe everything they read, particularly when it comes to issues like safety and health. Instead, one should go directly to the original research articles on a particular issue. In case the reader is not familiar with scientific jargon, the message was accompanied by one of the many very useful links to blogs that teach a non-scientist how to cleverly read a scientific paper without any specific science training.

Needless to say, I had to spread the message, as I believe in it wholeheartedly. All well and good, but what happens when you encounter two research papers with drastically opposite views on the same topic? What do you do then? Whom do you believe?

So I thought it pertinent to tell you about my short experience with one of these issues and see if we can find a way out of this conundrum. A few days ago, the British Chancellor of the Exchequer (the rough equivalent of a Secretary of the Treasury or Minister of Finance in other countries) announced the introduction of a new tax on sugary drinks: the more sugar a company puts in its drinks, the more taxes it would pay. In his speech announcing the law, Mr. George Osborne said that the reason for this law is that there is a positive association between sugar consumption and obesity, meaning the more sugar you eat, the fatter you get. Naturally, he did not cite any studies (he would be a really odd politician if he did).

Therefore, I started looking for these studies. As a scientist, but not a specialist in nutrition, the first thing I did was search for reviews on the association between sugar consumption and obesity in peer-reviewed databases (like the Nature journals, the US NIH Library of Medicine, and the Stanford Search Engine). My next step would have been skimming a handful of reviews, then looking at their references, selecting a few dozen research papers, and reading those. But I didn’t get that far, and here is why.

At first glance (that is, skimming about a hundred abstracts or so), it seems there are overwhelmingly more papers out there that say there is a positive correlation between sugar intake and obesity in both children and adults. But, when looking at reviews, there are plenty of reviews on both sides of the issue! Usually, the reviews tend to reflect the compounded data; that’s what they are for, and that’s why it is a good idea to start with a review on a subject if one knows nothing about it. So this dissociation between the research data and the reviews seemed suspicious. Among the reviews in question, the ones that seemed more systematic than others are this one and this one, with obviously opposite conclusions.

And then, instead of going for the original research and leaving the reviews alone, I did something I try like hell not to do: I looked up the authors and their affiliations. Those who follow my blog might have noticed that very rarely do I mention where the research has taken place and, except in the Reference section, I almost never mention the name of the journal where the research was published in the main body of the text. And I do this quite intentionally, as I am trying – and urge the readers to do the same – not to judge the book by its cover. That is, not to form a priori expectations based on the fame/prestige (or lack thereof) of the institution or journal in which the research was conducted or published. Judge the work by its value, not by its authors; this has paid off many times during my career, as I have seen crappy-crappity-crap papers published in Nature or Science, bloopers of cosmic proportions coming from NASA (see the arsenic-DNA incorporation), and really big names screwing up big time. On the other hand, I have seen some quite interesting work, admittedly rare, done in Thailand, Morocco, or other countries not known for their expensive research facilities.

But even in research the old dictum “follow the money” is, unfortunately, valid. A quick search showed that most of the nay-sayers (i.e. those claiming sugar does not cause weight gain) were 1) from the USA and 2) funded by the food and beverage industry. Luckily for everybody, enter the scene: Canada. Leave it to the Canadians to set things straight. In other words, a true rara avis poked its head amidst this controversy: a meta-review. Lo and behold – a review of reviews! Massougbodji et al. (2014) found all sorts of things, from the lack of consensus on the strength of the evidence on causality to the variable quality of these reviews. But the one finding that was most interesting to me was:

“reviews funded by the industry were less likely to conclude that there was a strong association between sugar-sweetened beverages consumption and obesity/weight gain” (p. 1103).

In conclusion, I would add a morsel of advice to my friend’s message: in addition to looking up the original research on a topic, also look at where the money funding that research is coming from. Money with no strings attached usually comes only from governments. Usually is the operative word; there may be exceptions, and I am sure I am not well-versed in the behind-the-scenes money politics. But if you see Marlboro paying for “research” that says smoking does not cause lung cancer, or the American Beverage Association funding studies to establish daily intake limits for high-fructose corn syrup, you should for sure cock an eyebrow before reading further.

Reference: Massougbodji J, Le Bodo Y, Fratu R, & De Wals P (2014). Reviews examining sugar-sweetened beverages and body weight: correlates of their quality and conclusions. The American Journal of Clinical Nutrition, 99:1096–1104. doi: 10.3945/ajcn.113.063776. Article | FREE PDF

By Neuronicus, 20 March 2016

Younger children in a grade are more likely to be diagnosed with ADHD

A few weeks ago I was drawing attention to the fact that some children diagnosed with ADHD do not have attention deficits. Instead, a natural propensity for seeking more stimulation may have led to overdiagnosing and overmedicating these kids.

Another reason for the dramatic increase in ADHD diagnoses over the past couple of decades may stem from the increasingly age-inappropriate demands that we place on children. Namely, children in the same grade can be as much as 1 year apart in chronological age, and at these young ages 1 year means quite a lot in terms of cognitive and behavioral development. So if we set a standard of expectations based on how the older children behave, then the younger children in the same grade will fall short of these standards simply because they are too immature to live up to them.

So what does the data say? Two studies, Morrow et al. (2012) and Chen et al. (2016), checked to see if the younger children in a given grade are more likely to be diagnosed with ADHD and/or medicated. The first study was conducted on almost 1 million Canadian children aged 6-12 years, and the second investigated almost 400,000 Taiwanese children aged 4-17 years.

In Canada, the cut-off for starting school is Dec. 31, which means that in the first grade a child born in January is almost a year older than a child born in December. Morrow et al. (2012) concluded that the children born in December were significantly more likely to receive a diagnosis of ADHD than those born in January (30% more likely for boys and 70% for girls). Moreover, the children born in December were more likely to be given an ADHD medication prescription (41% more likely for boys and 77% for girls).

In Taiwan, the cut-off date for starting school is August 31. Similar to the Canadian study, Chen et al. (2016) found that the children born in August were more likely to be diagnosed with ADHD and to receive ADHD medication than the children born in September.
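
To make the relative-age idea concrete, here is a small sketch of the arithmetic, under my own assumptions: the Dec. 31 and Aug. 31 cut-offs come from the two studies, but the helper function and its name are hypothetical, meant only to show who ends up youngest in a cohort.

```python
def months_older_than_youngest(birth_month: int, cutoff_month: int) -> int:
    """Roughly how many months older a child is than the youngest possible
    classmate, given a school-entry cut-off at the end of cutoff_month.
    Both studies' cut-offs (Dec. 31 and Aug. 31) fall at a month's end."""
    first_cohort_month = cutoff_month % 12 + 1            # the month right after the cut-off
    months_into_cohort = (birth_month - first_cohort_month) % 12
    return 11 - months_into_cohort

# Canada (cut-off Dec. 31): January children are the oldest, December the youngest.
print(months_older_than_youngest(1, 12))   # -> 11
print(months_older_than_youngest(12, 12))  # -> 0

# Taiwan (cut-off Aug. 31): September children are the oldest, August the youngest.
print(months_older_than_youngest(9, 8))    # -> 11
print(months_older_than_youngest(8, 8))    # -> 0
```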

Now let’s be clear on one thing: ADHD is no trivial matter. It is a real disorder, and an incredibly debilitating one for both children and their parents. Impulsivity, inattention, and hyperactivity are the hallmarks of almost every activity the child engages in, leading to very poor school performance (the majority cannot get a college degree) and a hard family life, plus a lifetime of stigma that brings its own “gifts” such as marginalization, loneliness, depression, anxiety, poor eating habits, etc.

The data presented above favor the “immaturity hypothesis”, which posits that the behaviors expected of some children are not performed because those children are simply too immature to perform them, not because something is wrong with them. That does not mean that every child diagnosed with ADHD will just grow out of it; the researchers simply point out that ignoring the chronological age of the child, coupled with prematurely entering a highly stressful and demanding system such as school, might lead to ADHD overdiagnosis.

Bottom line: ignoring the chronological age of the child might explain some of the increase in the prevalence of ADHD through overdiagnosis (in the US alone, the rise is from 6% of children diagnosed with ADHD in 2000 to 11-15% in 2015).

References:

  1. Morrow RL, Garland EJ, Wright JM, Maclure M, Taylor S, & Dormuth CR. (17 Apr 2012, Epub 5 Mar 2012). Influence of relative age on diagnosis and treatment of attention-deficit/hyperactivity disorder in children. Canadian Medical Association Journal, 184 (7), 755-762, doi: 10.1503/cmaj.111619. Article | FREE PDF 
  2. Chen M-H, Lan W-H, Bai Y-M, Huang K-L, Su T-P, Tsai S-J, Li C-T, Lin W-C, Chang W-H, Pan T-L, Chen T-J, & Hsu J-W. (10 Mar 2016). Influence of Relative Age on Diagnosis and Treatment of Attention-Deficit Hyperactivity Disorder in Taiwanese Children. The Journal of Pediatrics [Epub ahead of print]. DOI: http://dx.doi.org/10.1016/j.jpeds.2016.02.012. Article | FREE PDF

By Neuronicus, 14 March 2016

Not all children diagnosed with ADHD have attention deficits

Given the alarming increase in the diagnosis of attention deficit/hyperactivity disorder (ADHD) over the last 20 years, I thought it pertinent to feature an older paper today, from the year 2000.

Dopamine, one of the chemicals that neurons use to communicate, has been heavily implicated in ADHD. So heavily, in fact, that Ritalin, the main drug used for the treatment of ADHD, exerts its main effects by boosting the amount of dopamine in the brain.

Swanson et al. (2000) reasoned that people with a particular genetic abnormality that makes their dopamine receptors work less optimally may be more likely to have ADHD. The specialist reader may want to know that the genetic abnormality in question refers to a 7-repeat allele of a 48-bp variable number of tandem repeats in exon 3 of the dopamine receptor 4 gene located on chromosome 11, whose expression results in a weaker dopamine receptor. We’ll call it DRD4,7-present, as opposed to DRD4,7-absent (i.e. people without this genetic abnormality).

They had access to 96 children diagnosed with ADHD according to the diagnostic criteria of the DSM-IV and 48 matched controls (children of the same gender, age, school affiliation, socio-economic status, etc., but without ADHD). About half of the children diagnosed with ADHD were DRD4,7-present.

The authors tested the children on 3 tasks:

(i) a color-word task to probe the executive function network linked to anterior cingulate brain regions and to conflict resolution;
(ii) a cued-detection task to probe the orienting and alerting networks linked to posterior parietal and frontal brain regions and to shifting and maintenance of attention; and
(iii) a go-change task to probe the alerting network (and the ability to initiate a series of rapid response in a choice reaction time task), as well as the executive network (and the ability to inhibit a response and re-engage to make another response) (p. 4756).

Invalidating the authors’ hypothesis, the results showed that the controls and the DRD4,7-present children had similar performance on these tasks, in contrast to the DRD4,7-absent children, who showed “clear abnormalities in performance on these neuropsychological tests of attention” (p. 4757).

This means two things:
1) Half of the children diagnosed with ADHD did not have an attention deficit.
2) These same children had the DRD4,7-present genetic abnormality, which has been previously linked with novelty seeking and risky behaviors. So it may be just possible that these children do not suffer from ADHD, but “may be easily bored in the absence of highly stimulating conditions, may show delay aversion and choose to avoid waiting, may have a style difference that is adaptive in some situations, and may benefit from high activity levels during childhood” (p. 4758).

Great paper and highly influential. The last author of the article (meaning the chief of the laboratory) is none other than Michael I. Posner, whose attentional networks, models, and tests feature in every psychology and neuroscience textbook. If he doesn’t know about attention, then I don’t know who does.

One of the reasons I chose this paper is that it seems to me that a lot of teachers, nurses, social workers, and even pediatricians feel qualified to scare the living daylights out of parents by suggesting that their unruly child may have ADHD. In deference to most of the above-mentioned professions, the majority of people recognize their limits and tell the concerned parents to have the child tested by a qualified psychologist. And, unfortunately, even that may result in dosing your child with Ritalin needlessly, when the child’s propensity toward a sensation-seeking temperament and an extraverted personality may instead require a different approach to learning, with a higher level of stimulation (after all, the children from the above study had been diagnosed by qualified people using their latest diagnostic manual).

Bottom line: beware of any psychologist or psychiatrist who does not employ a battery of attention tests when diagnosing your child with ADHD.


Reference: Swanson J, Oosterlaan J, Murias M, Schuck S, Flodman P, Spence MA, Wasdell M, Ding Y, Chi HC, Smith M, Mann M, Carlson C, Kennedy JL, Sergeant JA, Leung P, Zhang YP, Sadeh A, Chen C, Whalen CK, Babb KA, Moyzis R, & Posner MI. (25 April 2000). Attention deficit/hyperactivity disorder children with a 7-repeat allele of the dopamine receptor D4 gene have extreme behavior but normal performance on critical neuropsychological tests of attention. Proceedings of the National Academy of Sciences of the United States of America, 97(9):4754-4759. doi: 10.1073/pnas.080070897. Article | FREE FULLTEXT PDF

P.S. If you think that “weeell, this research happened 16 years ago, surely something came out of it” then think again. The newer DSM-V’s criteria for diagnosis are likely to cause an increase in the prevalence of diagnosis of ADHD.

By Neuronicus, 26 February 2016

Autism cure by gene therapy


Nothing short of an autism cure is promised by this hot new research paper.

Among the many thousands of proteins that a neuron needs to make in order to function properly, there is one called SHANK3, made from the gene shank3. (Note the customary writing: by consensus, a gene’s name is written in lowercase and italicized, whereas the name of the protein that results from that gene’s expression is written in capitals.)

This protein is important for the correct assembly of synapses, and previous work has shown that if you delete its gene in mice, they show autistic-like behavior. Similarly, some people with autism, though by no means all, have a deletion on chromosome 22, where the protein’s gene is located.

The straightforward approach would be to restore the protein’s production in the adult autistic mouse and see what happens. Well, one problem with that is keeping the concentration of the protein at the optimum level, because if the mouse makes too much of it, then the mouse develops ADHD-like and bipolar-like symptoms.

So the researchers developed a really neat genetic model in which they managed to turn the shank3 gene on and off at will by giving the mouse a drug called tamoxifen (don’t take this drug for autism! Besides the fact that it is not going to work, because you’re not a genetically engineered mouse with a Cre-dependent genetic switch on your shank3, it is also very toxic and used only in some forms of cancer, when the benefits are believed to outweigh the horrible side effects).

In young adult mice, turning on the gene resulted in the normalization of synapses in the striatum, a brain region heavily involved in autistic behaviors. The synapses became comparable to normal synapses in some respects (from the looks, i.e. postsynaptic density scaffolding, to the works, i.e. electrophysiological properties) and even exceeded them in others (more dendritic spines than normal, meaning, presumably, more synapses). This molecular repair was mirrored by some behavioral rescue: although these mice still had more anxiety and more coordination problems than the control mice, their social aversion and repetitive behaviors disappeared. And the really, really cool part of all this is that this reversal of autistic behaviors was done in ADULT mice.

Now, when the researchers turned the gene on in 20-day-old mice (roughly the equivalent of entering the toddler stage in humans), all four behaviors were rescued: social aversion, repetitive behaviors, coordination problems, and anxiety. This tells us two things: first, the younger the intervention, the greater the improvement; and second, equally important, in adults, while some circuits seem to be irreversibly developed in a certain way, other neural pathways are still plastic enough to be amenable to change.

Awesome, awesome, awesome. Even though only a very small portion of people with autism have this particular genetic problem (about 1%), and even though autism spectrum disorders encompass such a variety of behavioral abnormalities, this research may spark hope for a whole range of targeted gene therapies.

Reference: Mei Y, Monteiro P, Zhou Y, Kim JA, Gao X, Fu Z, Feng G. (Epub 17 Feb 2016). Adult restoration of Shank3 expression rescues selective autistic-like phenotypes. Nature. doi: 10.1038/nature16971. Article | MIT press release

By Neuronicus, 19 February 2016


CCL11 found in aged but not young blood inhibits adult neurogenesis

Portion of Fig. 1 from Villeda et al. (2011, doi: 10.1038/nature10357) describing the parabiosis procedure. Basically, under complete anesthesia, the peritoneal membranes and the skins of the two mice were sutured together. The young mice were 3–4 months old (yellow) and the old mice were 18–20 months old (grey).

My last post was about parabiosis and its sparse revival as a technique in physiology experiments. Parabiosis is the surgical procedure that joins two living animals allowing them to share their circulatory systems. Here is an interesting paper that used the method to tackle blood’s contribution to neurogenesis.

Adult neurogenesis, that is, the birth of new neurons in the adult brain, declines with age. This neurogenesis has been observed in some, but not all, brain regions; those regions are called neurogenic niches.

Because these niches occur in blood-rich areas of the brain, Villeda et al. (2011) wondered if, in addition to the traditional factors that promote neurogenesis, like enrichment or running, blood factors may also have something to do with it. The authors made a young and an old mouse share their blood via parabiosis (see pic.).

Five weeks after the parabiosis procedure, the young mouse had decreased neurogenesis and the old mouse had increased neurogenesis compared to age-matched controls. To make sure the results were due to something in the blood, they injected plasma from an old mouse into a young mouse, and that also resulted in reduced neurogenesis. Moreover, the reduced neurogenesis was correlated with impaired learning, as shown by electrophysiological recordings from the hippocampus and by behavioral fear conditioning.

So what in the blood does it? The authors looked at 66 proteins found in the blood (I don’t know the blood make-up, so I can’t tell if 66 is a lot or not) and noticed that 6 of these had increased levels in the blood of ageing mice, whether linked by parabiosis or not. Out of these six, the authors focused on CCL11 (unclear to me why that one; my bet is that they tried the others too but didn’t have enough data). CCL11 is a small signaling protein involved in allergies. So the authors injected it into young mice and, lo and behold, there was decreased neurogenesis in their hippocampus. Maybe the vampires were onto something, whadda ya know? Just kidding… don’t go around sucking young people’s blood!
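
For the curious, the screening logic boils down to a simple filter: keep the plasma proteins that rise with age both in normally ageing mice and in young mice exposed to old blood. Here is a toy sketch of that logic with invented numbers and hypothetical protein values; it is my illustration of the idea, not the authors’ analysis code.

```python
# Hypothetical plasma levels (arbitrary units): young, old, and young mice
# joined to old partners (heterochronic parabiosis). All numbers are invented.
plasma_levels = {
    "CCL11":    (12.0, 30.0, 25.0),
    "CCL2":     (8.0, 15.0, 14.0),
    "ProteinX": (20.0, 21.0, 19.0),   # flat with age -> filtered out
}

def elevated_with_age(levels, fold=1.5):
    """True if the protein is at least `fold` times the young level both in
    old mice and in heterochronic young mice (the paper's key pattern)."""
    young, old, heterochronic = levels
    return old >= fold * young and heterochronic >= fold * young

candidates = [name for name, lv in plasma_levels.items() if elevated_with_age(lv)]
print(candidates)  # -> ['CCL11', 'CCL2'] with these made-up numbers
```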

This paper covers a lot of work and, correspondingly, has no fewer than 23 authors and almost 20 Mb of supplemental documents! The story it tells is very interesting and as complete as it gets, covering many aspects of the problems investigated and many techniques to address those problems. Good read.

Reference: Villeda SA, Luo J, Mosher KI, Zou B, Britschgi M, Bieri G, Stan TM, Fainberg N, Ding Z, Eggel A, Lucin KM, Czirr E, Park JS, Couillard-Després S, Aigner L, Li G, Peskind ER, Kaye JA, Quinn JF, Galasko DR, Xie XS, Rando TA, Wyss-Coray T. (31 Aug 2011). The ageing systemic milieu negatively regulates neurogenesis and cognitive function. Nature. 477(7362):90-94. doi: 10.1038/nature10357. Article | FREE Fulltext PDF

By Neuronicus, 6 January 2016

I am blind, but my other personality can see


This is a truly bizarre report.

A woman named BT suffered an accident when she was 20 years old and became blind. Thirteen years later she was referred to Bruno Waldvogel (one of the two authors of the paper) for psychotherapy by a psychiatry clinic that had diagnosed her with dissociative identity disorder, formerly known as multiple personality disorder.

The diagnosis of cortical blindness had been established after extensive ophthalmologic tests in which she appeared blind, but not because of damage to the eyes. So, by inference, it had to be damage to the brain. Remarkably (we shall see later why), she had no oculomotor reflexes in response to glare. Moreover, visual evoked potentials (the VEP is an EEG recorded over the occipital region) showed no activity in the primary visual area of the brain (V1).
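
Since the VEP carries much of the weight of the argument here, a quick sketch of what such a measurement boils down to: occipital EEG epochs time-locked to a visual stimulus are averaged, so that stimulus-driven activity survives while background noise cancels out. The code below simulates made-up data to show the principle; it is not the clinic’s recording pipeline, and all names and numbers are my own.

```python
import random

def visual_evoked_potential(epochs):
    """Average occipital EEG epochs time-locked to the visual stimulus.
    Stimulus-locked activity adds up; random background EEG averages toward zero."""
    n_epochs, n_samples = len(epochs), len(epochs[0])
    return [sum(epoch[t] for epoch in epochs) / n_epochs for t in range(n_samples)]

def simulated_epoch(v1_responds, n_samples=50, peak_at=25):
    """One simulated post-stimulus epoch: noise, plus a deflection around
    sample 25 (~100 ms post-stimulus) only if visual cortex actually responds."""
    epoch = [random.gauss(0.0, 5.0) for _ in range(n_samples)]
    if v1_responds:
        for t in range(peak_at - 3, peak_at + 3):
            epoch[t] += 10.0
    return epoch

# 'Sighted' state: a clear average deflection emerges; 'blind' state: a flat average,
# which is roughly the difference reported between BT's personalities.
sighted_vep = visual_evoked_potential([simulated_epoch(True) for _ in range(200)])
blind_vep = visual_evoked_potential([simulated_epoch(False) for _ in range(200)])
print(round(max(sighted_vep), 1), round(max(blind_vep), 1))
```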

During the four years of psychotherapy, BT showed more than 10 distinct personalities. One of them, a teenage male, started to see words on a magazine and pretty soon could see everything. With the help of hypnotherapeutic techniques, more and more personalities started to see.

“Sighted and blind states could alternate within seconds” (Strasburger & Waldvogel, 2015).

The VEP showed no or very little activity when the blind personality was “on” and showed normal activity when a sighted personality was “on”. This is extremely curious, because similar studies in people with psychogenic blindness, or under anesthesia, showed intact VEPs.

There are a couple of conclusions to draw from this: 1) BT was misdiagnosed, as it is unlikely that there was any brain damage, because some personalities could see, and 2) multiple personalities – or dissociative identities, as they are now called – are real in the sense that they can be separated at the biological level.

The visual pathway that mediates conscious visual perception. a) A side view of the human brain with the retinogeniculocortical pathway shown inside (blue). b) A horizontal section through the brain exposing the same pathway.

Fascinating! The next question is, obviously, what’s the mechanism behind this? The authors say that it is very likely the LGN (the lateral geniculate nucleus of the thalamus), which is the only relay between the retina and V1 (see pic). It could be. It is certainly possible. Unfortunately, so are other putative mechanisms, as 10% of the neurons in the retina also project to the superior colliculus, and some others go directly to the hypothalamus, completely bypassing the thalamus. Also, because it is impossible to time the switching between personalities precisely, even if you put the woman in an MRI it would be difficult to establish whether the switch to blindness is the result of bottom-up or top-down modulation (i.e. the visual information never reaches V1, it reaches V1 and is suppressed there, or some signal from other brain areas inhibits V1 completely, so it is unresponsive when the visual information arrives).

Despite the limitations, I would certainly try to get the woman into an fMRI. C’mon, people, this is an extraordinary subject and if she gave permission for the case study report, surely she would not object to the scanning.

Reference: Strasburger H & Waldvogel B (Epub 15 Oct 2015). Sight and blindness in the same person: Gating in the visual system. PsyCh Journal. doi: 10.1002/pchj.109.  Article | FULLTEXT PDF | Washington Post cover

By Neuronicus, 29 November 2015

Pesticides reduce pollination

Close-up of a bee with pollen flying by a flower. Credit: Jon Sullivan. License: PD

Bees are having a difficult time these days, what with that mysterious colony collapse disorder on top of various viral, bacterial, and parasitic diseases. Of course, the widespread use of pesticides has not helped the hives thrive, as many pesticides have deleterious effects on bees, from poor foraging and reduced reproduction to outright death.

The relatively new (’90s) class of insecticides – the neonicotinoids – has been met with great hope because it has low toxicity for birds and mammals, as opposed to the organophosphates, for example. Why that should be the case is a mystery to me, because neonicotinoids bind irreversibly to the nicotinic receptors present in both the peripheral and central nervous systems, which does not put them in a favorable light.

Now Stanley et al. (2015) have found that exposure to the neonicotinoid thiamethoxam reduces the pollination that bumblebees provide to apples. They checked this using 24 bumblebee colonies, with exposure at low levels over 13 days to mimic realistic in-field exposure. The apples visited by the insecticide-exposed bumblebees had a 36% reduction in seed number.

Almost 90% of flowering plants need pollination to reproduce, so any threat to pollination can cause serious problems. Over the past few years, virtually all US corn has been treated with neonicotinoids; the EU banned thiamethoxam use in 2013. And, to make matters worse, neonicotinoids are but one class of the many toxins affecting bees.

Related post: Golf & Grapes OR Grandkids (but not both!)

Reference: Stanley DA, Garratt MP, Wickens JB, Wickens VJ, Potts SG, & Raine NE. (Epub 18 Nov 2015). Neonicotinoid pesticide exposure impairs crop pollination services provided by bumblebees. Nature, doi: 10.1038/nature16167. Article

By Neuronicus, 21 November 2015

Will you trust a pigeon pathologist? That’s right, he’s a bird. Stop being such an avesophobe!

From Levenson et al. (2015), doi: 10.1371/journal.pone.0141357. License: CC BY 4.0

Pigeons have amazing visual skills. They can remember facial expressions, recall almost 2000 images, recognize all the letters of the alphabet (well, even I can do that), and even tell apart a Monet from a Picasso! (ok, birdie, you got me on that one).

Given their visual prowess, Levenson et al. (2015) figured that pigeons might be able to distinguish medically relevant images (a bit of a big step in reasoning there, but let’s go with it). They got a few dozen pigeons, starved them a bit so the birds would be motivated to work for food, and started training them to recognize malignant versus non-malignant breast tumor histology pictures. These are the exact same pictures your radiologist looks at after a mammogram and your pathologist after a breast biopsy; they were not retouched in any way for the pigeons’ benefit (except to make the task more difficult, see below). Every time a pigeon pecked on the correct image, it got a morsel of food (see picture). Training continued for a few weeks on over 100 images.

For biopsies, the birds performed overwhelmingly well, reaching 99% accuracy regardless of the magnification of the picture, and for mammograms they reached up to 80% accuracy, just like their human counterparts. Modifying the pictures’ attributes, like rotation, compression, or color, lowered their accuracy somewhat, but they still scored only marginally lower than humans and considerably better than any computer software. More importantly, after training the pigeons were able to generalize, correctly classifying previously unseen pictures.
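
The training and evaluation logic is classic operant conditioning followed by a held-out test, and a toy sketch of it looks something like the code below. The stand-in “pigeon” is just a number that creeps upward with each food reward, which is obviously not a model of pigeon cognition; every name and number here is my own invention, and only the overall procedure (reward correct pecks, then test on never-seen images) follows the paper.

```python
import random

def train(accuracy, n_trials, boost_per_reward=0.0005):
    """Operant training: each correct peck earns food and slightly strengthens
    the behavior (here, a tiny bump in the probability of a correct choice)."""
    for _ in range(n_trials):
        if random.random() < accuracy:          # correct peck -> food reward
            accuracy = min(0.99, accuracy + boost_per_reward)
    return accuracy

def generalization_accuracy(accuracy, n_novel_images=50):
    """Accuracy on images never seen during training - the key test in the paper."""
    correct = sum(random.random() < accuracy for _ in range(n_novel_images))
    return correct / n_novel_images

trained = train(accuracy=0.5, n_trials=5000)    # stands in for weeks of daily sessions
print(round(trained, 2), round(generalization_accuracy(trained), 2))
```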

Let’s be clear: I’m not talking about some fancy breed here, but your common beady-eyed, suspicious-sidling, feral-looking rock pigeon. Yes, the one and only pest that receives stones and bread in equal measure, the former usually accompanied by vicious swearing uttered by those who encountered its slushy “gifts” under their shoes, on the windshield, or in the coffee, and the latter offered by more kindly disposed and yet utterly naive individuals in the misguided hope of befriending nature. Columba livia by its scientific name, at the same time an exasperating pest and an excellent pathologist! Who knew?!

The authors even suggest using pigeons instead of training and paying clinicians. Hmmm… But whom do I sue if my mother’s breast cancer gets missed by the bird, in one of those 1% of cases? Because somehow making a pigeon face the guillotine does not seem like justice to me. Or is this yet another plot to get the clinicians off the hook for misdiagnoses? Leave the medical profession alone, birdies – it is morally sensitive as it is – and seek employment with the police or Google; they always need better performance in the ever-challenging task of face recognition in surveillance videos.

P.S. The reason you didn’t recognize the word “avesophobe” in the title is that I just invented it, to distinguish the hatred of birds from a more serious affliction, ornithophobia, the fear of birds.

Reference: Levenson RM, Krupinski EA, Navarro VM, & Wasserman EA (18 Nov 2015). Pigeons (Columba livia) as Trainable Observers of Pathology and Radiology Breast Cancer Images. PLoS One, 10(11):e0141357. doi: 10.1371/journal.pone.0141357.  Article | FREE FULLTEXT PDF

By Neuronicus, 19 November 2015