Play-based or academic-intensive?

The title of today’s post wouldn’t make any sense to anybody who isn’t a preschooler’s parent or teacher in the USA. You see, on the west side of the Atlantic there is a debate on whether a play-based curriculum for a preschool is more advantageous than a more academic-based one. Preschool age is 3 to 4 years; kindergarten starts at 5.

So what does academia even look like for someone who hasn’t yet mastered the skill of wiping their own behind? I’m glad you asked. Roughly, an academic preschool program is one that emphasizes math concepts and early literacy, whereas a play-based program focuses less, or not at all, on these activities; instead, the children are allowed to play together in big or small groups or separately. The first kind of program has been linked with stronger cognitive benefits, the latter with nurturing social development. The supporters of each program accuse the other of neglecting one aspect of the child’s development, namely cognitive or social.

The paper that I am covering today says that it “does not speak to the wider debate over learning-through-play or the direct instruction of young children. We do directly test whether greater classroom time spent on academic-oriented activities yield gains in both developmental domains” (Fuller et al., 2017, p. 2). I’ll let you be the judge.

Fuller et al. (2017) assessed the cognitive and social benefits of different programs in an impressive cohort of over 6,000 preschoolers. The authors looked at many variables:

  • children who attended any form of preschool and children who stayed home;
  • children who received more preschool education (high dosage, defined as >20 hours/week) and less (low dosage, defined as <20 hours/week);
  • children who attended academic-oriented preschools (time spent at least 3 – 4 times a week on each of the following tasks: letter names, writing, phonics and counting manipulatives) and children who attended non-academic preschools.

The authors employed a battery of tests to assess the children’s preliteracy skills, math skills and social-emotional status (i.e. the dependent variables). And then they conducted a lot of statistical analyses, in the true spirit of well-trained psychologists.

The main findings were:

1) “Preschool exposure [of any form] has a significant positive effect on children’s math and preliteracy scores” (p. 6).

2) The earlier the child entered preschool, the stronger the cognitive benefits.

3) Children attending high-dose academic-oriented preschools displayed greater cognitive proficiencies than all the other children (for the actual numbers, see Table 7, pg. 9).

4) “Academic-oriented preschool yields benefits that persist into the kindergarten year, and at notably higher magnitudes than previously detected” (p. 10).

5) Children attending academic-oriented preschools displayed no social development disadvantages compared with children that attended low- or non-academic preschool programs. Nor did the non-academic-oriented preschools show an improvement in social development (except for Latino children).

Now, do you think that Fuller et al. (2017) gave you any more information in the play vs. academic debate, given that their “findings show that greater time spent on academic content – focused on oral language, preliteracy skills, and math concepts – contributes to the early learning of the average child at magnitudes higher than previously estimated” (p. 10)? And remember that they did not find any significant social advantages or disadvantages for any type of preschool.

I realize (or hope, rather) that most pre-k teachers are not the Draconian thou-shall-not-play-do-worksheets type, nor the let-kids-play-for-three-hours-while-the-adults-gossip-in-a-corner type. Most are probably combining elements of learning-through-play and directed instruction in their programs. Nevertheless, there are (still) programs and pre-k teachers that clearly state that they employ play-based or academic-based programs, emphasizing the benefits of one while vilifying the other. But – surprise, surprise! – you can do both. And, it turns out, a little academia goes a long way.


So, next time you choose a preschool for your kid, go with the data, not with what your mommy/daddy gut instinct says. And certainly be very wary of preschool officials who, when you ask them for data to support their curriculum choice, tell you that that’s their ‘philosophy’ and they don’t need data. Because, boy oh boy, I know what philosophy means and it ain’t that.

By Neuronicus, 12 October 2017

Reference: Fuller B, Bein E, Bridges M, Kim Y, & Rabe-Hesketh S. (Sept. 2017). Do academic preschools yield stronger benefits? Cognitive emphasis, dosage, and early learning. Journal of Applied Developmental Psychology, 52: 1-11, doi: 10.1016/j.appdev.2017.05.001. ARTICLE | New York Times cover | Reading Rockets cover (offers a fulltext pdf) | Good cover and interview with the first author on qz.com


Aging and its 11 hippocampal genes

Aging is being quite extensively studied these days, and here is another advance in the field. Pardo et al. (2017) looked at what happens in the hippocampus of 2-month-old (young) and 28-month-old (old) female rats. The hippocampus is a seahorse-shaped structure no more than 7 cm in length and 4 g in weight, situated at the level of your temples, deep in the brain, and absolutely necessary for memory.

First the researchers tested the rats in a classical maze test (Barnes maze) designed to assess their spatial memory performance. Not surprisingly, the old performed worse than the young.

Then they dissected the hippocampi and looked at neurogenesis: the young rats had more newborn neurons than the old. Also, the old rats had more reactive microglia, a sign of inflammation. Microglia are small cells in the brain that are not neurons but serve very important functions.

After that, the researchers looked at the hippocampal transcriptome, meaning they looked at which genes are being expressed there (I know, transcription is not translation, but the general assumption of transcriptome studies is that the amount of protein X corresponds to the amount of RNA X). They found 210 genes that were differentially expressed in the old: 81 were upregulated and 129 were downregulated. Most of these genes, 170 to be exact, are found in humans too.
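For flavor, here is a toy sketch of what calling genes “up” or “down” from a transcriptome table can look like. This is not the paper’s pipeline; the gene names, counts, p-values, and thresholds below are all invented for illustration.

```python
import math

# Made-up expression table: mean normalized counts in young vs. old rats,
# plus a p-value from whatever test the pipeline ran (all numbers invented).
genes = [
    {"name": "GeneA", "young": 100.0, "old": 420.0, "p": 0.001},
    {"name": "GeneB", "young": 300.0, "old": 290.0, "p": 0.700},
    {"name": "GeneC", "young": 250.0, "old": 60.0,  "p": 0.004},
]

def call_de(genes, p_cutoff=0.05, min_abs_log2fc=1.0):
    """Label each gene up- or downregulated in the old group vs. the young."""
    up, down = [], []
    for g in genes:
        log2fc = math.log2(g["old"] / g["young"])  # old relative to young
        if g["p"] < p_cutoff and abs(log2fc) >= min_abs_log2fc:
            (up if log2fc > 0 else down).append(g["name"])
    return up, down

up, down = call_de(genes)
print(up, down)
```

Real analyses add multiple-comparison corrections and replicate-aware statistics, but the filtering logic is essentially this.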

But after looking at male versus female data and at human and mouse aging data, the authors came up with 11 genes that are deregulated (7 up- and 4 down-) in the aging hippocampus, regardless of species or gender. These genes are involved in the immune response to inflammation. In more detail, the immune system activates microglia, which stay activated, and this “prolonged microglial activation leads to the release of pro-inflammatory cytokines that exacerbate neuroinflammation, contributing to neuronal loss and impairment of cognitive function” (p. 17). Moreover, these 11 genes have been associated with neurodegenerative diseases and brain cancers.


These are the 11 genes: C3 (up), Cd74 (up), Cd4 (up), Gpr183 (up), Clec7a (up), Gpr34 (down), Gapt (down), Itgam (down), Itgb2 (up), Tyrobp (up), Pld4 (down). “Up” and “down” indicate the direction of deregulation: upregulation or downregulation.

I wish the above sentence had been as explicitly stated in the paper as I wrote it, so I didn’t have to comb through their supplemental Excel files to figure it out. Other than that, good paper, good work. It gets us closer to unraveling, and maybe undoing, some of the burdens of aging because, as the actress Bette Davis said, “growing old isn’t for the sissies”.

Reference: Pardo J, Abba MC, Lacunza E, Francelle L, Morel GR, Outeiro TF, Goya RG. (13 Jan 2017, Epub ahead of print). Identification of a conserved gene signature associated with an exacerbated inflammatory environment in the hippocampus of aging rats. Hippocampus, doi: 10.1002/hipo.22703. ARTICLE

By Neuronicus, 25 January 2017


Soccer and brain jiggling

It is no news or surprise that strong hits to the head produce transient or permanent brain damage. But how about mild hits produced by light objects like, say, a volleyball or a soccer ball?

During a game of soccer, a player is allowed to touch the ball with any part of his/her body minus the hands. Therefore, hitting the ball with the head, a.k.a. soccer heading, is a legal move, and goals scored through such a move are thought to be most spectacular by the refined connoisseur.

A year back, in 2015, the United States Soccer Federation forbade the heading of the ball by children 10 years old and younger, after a class-action lawsuit against them. There have been some data showing that soccer players display a loss of brain matter that is associated with cognitive impairment, but such studies were correlational in nature.

Now, Di Virgilio et al. (2016) conducted a study designed to explore the consequences of soccer heading in more detail. They recruited 19 young amateur soccer players, mostly male, who were instructed to perform 20 rotational headings as if responding to corner kicks in a game. The ball was delivered by a machine at a speed of approximately 38 kph. The mean force of impact for the group was 13.1 ± 1.9 g. Immediately after the heading session and at 24 h, 48 h and 2 weeks post-heading, the authors performed a series of tests, including a transcranial magnetic stimulation (TMS) recording, a cognitive function assessment (using the Cambridge Neuropsychological Test Automated Battery), and a postural control test.

Not being a TMS expert myself, I was wondering how you record with a stimulator. TMS stimulates, it doesn’t measure anything. Or so I thought. The authors delivered brief (1 ms) stimulating impulses to the brain area that controls the leg (the primary motor cortex). Then they placed an electrode over a leg muscle (the rectus femoris, part of the quadriceps) and recorded how the muscle responded. Pretty neat. Moreover, the authors believe that they can make inferences about the levels of inhibitory chemicals in the brain from the way the muscle responds. Namely, if the muscle is sluggish in responding to stimulation, then the brain released an inhibitory chemical, like GABA (gamma-aminobutyric acid), hence calling this process corticomotor inhibition. Personally, I find this GABA inference a bit of a leap of faith but, like I said, I am not fully versed in TMS studies, so it may be well documented. Whether or not GABA is responsible for the muscle sluggishness, one thing is well documented: this sluggishness is the most consistent finding in concussions.
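If the logic of the recording is still hazy, here is a rough sketch of how one might pull the muscle response (the motor evoked potential) out of an EMG trace following a TMS pulse. The sampling rate, analysis window, and threshold are placeholder numbers of my own; the paper’s actual analysis surely differs.

```python
def mep_metrics(emg, stim_index, fs=2000, window_ms=100, threshold=0.1):
    """Peak-to-peak amplitude (mV) and onset latency (ms) of the muscle
    response following a TMS pulse delivered at sample `stim_index`.
    emg: list of voltages in mV; fs: sampling rate in Hz."""
    n = int(fs * window_ms / 1000)
    segment = emg[stim_index:stim_index + n]
    amplitude = max(segment) - min(segment)      # peak-to-peak size of the response
    latency_ms = None
    for i, v in enumerate(segment):
        if abs(v) > threshold:                   # first deflection past threshold
            latency_ms = i * 1000.0 / fs
            break
    return amplitude, latency_ms

# Toy trace: flat baseline, then a biphasic response 20 ms after the pulse.
trace = [0.0] * 100
trace[40] = 1.0     # 40 samples = 20 ms at 2 kHz
trace[42] = -0.8
amp, lat = mep_metrics(trace, stim_index=0, fs=2000)
print(amp, lat)
```

A smaller or slower response after heading is then the kind of change the authors interpret as corticomotor inhibition.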

The subjects had impaired short-term and long-term memory function immediately after the ball heading, but not 24 h or more later. Also transient was the corticomotor inhibition. In other words, soccer ball heading results in measurable changes in brain function. Changes for the worse.

Even if these changes are transient, there is no knowing (as of yet) what prolonged ball heading might do. There is ample evidence that successive concussions have devastating effects on the brain. Granted, soccer heading does not produce concussions, at least in this paper’s setting, but I cannot imagine that even sub-concussion-intensity brain disruption can be good for you.

On a lighter note, although the title of the paper features the word “soccer”, the rest of the paper refers to the game as “football”. I’ll let you guess the authors’ nationality, or at least their continent of provenance ;).


Reference: Di Virgilio TG, Hunter A, Wilson L, Stewart W, Goodall S, Howatson G, Donaldson DI, & Ietswaart M. (Nov 2016, Epub 23 Oct 2016). Evidence for Acute Electrophysiological and Cognitive Changes Following Routine Soccer Heading. EBioMedicine, 13:66-71. PMID: 27789273, DOI: 10.1016/j.ebiom.2016.10.029. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 20 December 2016

Amusia and stroke

Although a complete musical anti-talent myself, that doesn’t prohibit me from fully enjoying the works of the masters of the art. When my family is out of earshot, I even bellow – because it cannot be called music – from the top of my lungs alongside the most famous tenors ever recorded. A couple of days ago I loaded one of my most eclectic playlists. While remembering my younger days as an Iron Maiden concert-goer (I never said I listen only to classical music :D) and screaming the “Fear of the Dark” chorus, I wondered what’s new on the front of music processing in the brain.

And I found an interesting recent paper about amusia. Amusia is, as those of you with ancient Greek proclivities might have surmised, a deficit in the perception of music, mainly of pitch but sometimes of rhythm and other aspects of music. A small percentage of the population is born with it, but a whopping 35 to 69% of stroke survivors exhibit the disorder.

So Sihvonen et al. (2016) decided to take a closer look at this phenomenon with the help of 77 stroke patients. These patients had an MRI scan within the first 3 weeks following stroke and another one 6 months poststroke. They also completed a behavioral test for amusia within the first 3 weeks following stroke and again 3 months later. For reasons undisclosed, and thus raising my eyebrows, the behavioral assessment was not performed at 6 months poststroke, nor an MRI at the 3-month follow-up. It would have been nice to have behavioral assessments and brain images taken at the same time, because a lot can happen in weeks, let alone months, after a stroke.

Nevertheless, the authors used a novel way to look at the brain pictures, called voxel-based lesion-symptom mapping (VLSM). Well, it’s not really novel; it’s been around for 15 years or so. Basically, to ascertain the function of a brain region, researchers either get people with a specific brain lesion and then look for a behavioral deficit, or get a symptom and then look for a brain lesion. Both approaches have distinct advantages but also disadvantages (see Bates et al., 2003). To overcome the disadvantages of these methods, enter the scene VLSM, which is a mathematical/statistical gimmick that allows you to explore the relationship between brain and function without forming preconceived ideas, i.e. without forcing dichotomous categories. They also looked at voxel-based morphometry (VBM), which is a fancy way of saying they looked to see if the grey and white matter differed over time in the brains of their subjects.
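For the curious, the core of VLSM is conceptually simple: at every voxel, split the patients into “lesioned here” vs. “intact here” and test whether their symptom scores differ. A bare-bones sketch with invented data (real VLSM adds corrections for multiple comparisons, lesion-volume covariates, permutation thresholds, and so on):

```python
from statistics import mean, variance

def voxel_t(scores_lesioned, scores_intact):
    """Welch's t statistic comparing symptom scores between the two groups."""
    m1, m2 = mean(scores_lesioned), mean(scores_intact)
    v1, v2 = variance(scores_lesioned), variance(scores_intact)
    n1, n2 = len(scores_lesioned), len(scores_intact)
    return (m1 - m2) / ((v1 / n1 + v2 / n2) ** 0.5)

def vlsm(lesion_maps, scores):
    """lesion_maps: per-patient list of 0/1 per voxel; scores: symptom per patient.
    Returns one t value per voxel (None where a group is too small to test)."""
    n_voxels = len(lesion_maps[0])
    t_map = []
    for v in range(n_voxels):
        lesioned = [s for m, s in zip(lesion_maps, scores) if m[v] == 1]
        intact = [s for m, s in zip(lesion_maps, scores) if m[v] == 0]
        t_map.append(voxel_t(lesioned, intact)
                     if len(lesioned) > 1 and len(intact) > 1 else None)
    return t_map

# Two voxels, six patients: lesions at voxel 0 track low music scores, voxel 1 doesn't.
maps =   [[1, 0], [1, 1], [1, 0], [0, 1], [0, 0], [0, 1]]
scores = [2.0,     1.0,    2.5,    8.0,    9.0,    7.5]
t_map = vlsm(maps, scores)
print(t_map)
```

Voxels with extreme t values are the ones whose damage reliably predicts the symptom, with no dichotomous groups decided in advance.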

After much analysis, Sihvonen et al. (2016) conclude that damage to the right hemisphere is more likely conducive to amusia, as opposed to aphasia, which is due mainly to damage to the left hemisphere. More specifically,

“damage to the right temporal areas, insula, and putamen forms the crucial neural substrate for acquired amusia after stroke. Persistent amusia is associated with further [grey matter] atrophy in the right superior temporal gyrus (STG) and middle temporal gyrus (MTG), locating more anteriorly for rhythm amusia and more posteriorly for pitch amusia.”

The more we know, the better chances we have to improve treatments for people.

(Unless you’re left-handed, then things are reversed.)

References:

1. Sihvonen AJ, Ripollés P, Leo V, Rodríguez-Fornells A, Soinila S, & Särkämö T. (24 Aug 2016). Neural Basis of Acquired Amusia and Its Recovery after Stroke. Journal of Neuroscience, 36(34):8872-8881. PMID: 27559169, DOI: 10.1523/JNEUROSCI.0709-16.2016. ARTICLE  | FULLTEXT PDF

2. Bates E, Wilson SM, Saygin AP, Dick F, Sereno MI, Knight RT, & Dronkers NF. (May 2003). Voxel-based lesion-symptom mapping. Nature Neuroscience, 6(5):448-50. PMID: 12704393, DOI: 10.1038/nn1050. ARTICLE

By Neuronicus, 9 November 2016


Video games and depression

There’s a lot of talk these days about the harm or benefit of playing video games, a lot of the time ignoring the issue of what kind of video games we’re talking about.

Merry et al. (2012) designed a game for helping adolescents with depression. The game is called SPARX (Smart, Positive, Active, Realistic, X-factor thoughts) and is based on the cognitive behavioral therapy (CBT) principles.

CBT has been proven to be more efficacious than other forms of therapy, like psychoanalysis, psychodynamic, transpersonal and so on, in treating (or at least alleviating) a variety of mental disorders, from depression to anxiety, from substance abuse to eating disorders. Its aim is to identify maladaptive thoughts (the ‘cognitive’ bit) and behaviors (the ‘behavioral’ bit), and to change those thoughts and behaviors in order to feel better. It is more active and more focused than other therapies, in the sense that during the course of a CBT session the patient and therapist discuss one problem and tackle it.

SPARX is a simple interactive fantasy game with 7 levels (Cave, Ice, Volcano, Mountain, Swamp, Bridgeland, Canyon) whose purpose is to fight the GNATs (Gloomy Negative Automatic Thoughts) by mastering several techniques, like breathing and progressive relaxation, and acquiring skills, like scheduling and problem solving. You can customize your avatar, and you get a guide throughout the game that also assesses your progress and gives you real-life quests, a.k.a. therapeutic homework. If the player does not show the expected improvements after each level, s/he is directed to seek help from a real-life therapist. Luckily, the researchers also employed the help of true game designers, so the game looks at least half-decent and engaging, not the lame-worst-graphics-ever-bleah sort of thing I was kind of expecting.

To see if their game helps with depression, Merry et al. (2012) enrolled 187 adolescents (aged 12 to 19 years) who sought help for depression in an intervention program; half of the subjects played the game for about 4 – 7 weeks, and the other half did traditional CBT with a qualified therapist for the same amount of time. The patients were assessed for depression at regular intervals before, during and after the therapy, up to 3 months post-therapy. The conclusion?

SPARX “was at least as good as treatment as usual in primary healthcare sites in New Zealand” (p. 8).

Not bad for an RPG! The remission rates were higher in the SPARX group than in the treatment-as-usual group. Also, the majority of participants liked the game and would recommend it. Additionally, SPARX was more effective than traditional CBT for the participants who were less depressed, as compared to the ones who scored higher on the depression scales.
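Since “non-inferiority trial” is jargon worth unpacking: SPARX wasn’t required to beat usual care, only to be not worse than it by more than a pre-set margin. A sketch of the arithmetic with made-up remission counts and margin (these are not the trial’s numbers):

```python
import math

def noninferior(x_new, n_new, x_ctrl, n_ctrl, margin=0.10, z=1.96):
    """True if the 95% CI for the (new - control) remission-rate difference
    stays above -margin, i.e. the new treatment is 'not worse by more than
    margin'. Uses a simple Wald interval for a difference of proportions."""
    p1, p2 = x_new / n_new, x_ctrl / n_ctrl
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n_new + p2 * (1 - p2) / n_ctrl)
    lower = diff - z * se
    return lower > -margin, diff, lower

# Invented counts: 41/94 remissions in the game arm, 35/93 with usual care.
ok, diff, lower = noninferior(x_new=41, n_new=94, x_ctrl=35, n_ctrl=93)
print(ok, round(diff, 3), round(lower, 3))
```

The published trial used its own margin and a more careful analysis, but this is the shape of the claim “at least as good as treatment as usual”.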

And now, coming back to my intro point: the fact that this game seems to be beneficial does not mean all of them are. There are studies showing that some games have deleterious effects on the developing brain. In the same vein, the fact that some shoddy company sells games that are supposed to boost your brain function (I always wondered which function…) doesn’t mean they are actually good for you. Without the research to back up the claims, anybody can say anything and it becomes a “Buyer Beware!” game. They may call it cognitive enhancement, memory boosting or some other brainy catchphrase, but without the research to back up the claims, it’s nothing but placebo in the best-case scenario. So it gives me hope – and great pleasure – that some real psychologists at a real university are developing a video game and then doing the necessary research to validate it as a helping tool before marketing it.


Oh, an afterthought: this paper is 4 years old, so I wondered what happened in the meantime. Is it on the market or what? On the research databases I couldn’t find much, except that it was tested this year on a Dutch population, with pretty much similar results. But Wikipedia tells us that it was released in 2013 and is free online for New Zealanders! The game’s website says it may become available to other countries as well.

Reference: Merry SN, Stasiak K, Shepherd M, Frampton C, Fleming T, & Lucassen MF. (18 Apr 2012). The effectiveness of SPARX, a computerised self help intervention for adolescents seeking help for depression: randomised controlled non-inferiority trial. The British Medical Journal, 344:e2598. doi: 10.1136/bmj.e2598. PMID: 22517917, PMCID: PMC3330131. ARTICLE | FREE FULLTEXT PDF  | Wikipedia page | Watch the authors talk about the game

By Neuronicus, 15 October 2016

Drink before sleep

Among the many humorous sayings, puns, and jokes that one inevitably encounters on any social medium account, one that was popular this year was about the similarity between putting a 2-year-old to bed and putting your drunk friend to bed. It went like this: they both sing to themselves, request water, mumble and blabber incoherently, do some weird yoga poses, cry, hiccup, and then they pass out. The joke manages to steal a smile only from someone who has been through both situations; otherwise it loses its appeal.

Having been exposed to both situations, I thought that while the water request from the drunk friend is a response to the dehydrating effects of alcohol, the water request from the toddler is probably nothing more than a delaying tactic to postpone bedtime. Whether or not there is some truth to my assumption in the case of the toddler, here is a paper showing that there is definitely more to the water request than meets the eye.

Generally, thirst is generated by the hypothalamus when its neurons, together with neurons from the organum vasculosum lamina terminalis (OVLT), a structure in the anterior wall of the third ventricle, sense that the blood volume is too low (hypovolaemia) or the blood is too salty (hyperosmolality), both phenomena indicating a need for water. Ingesting water brings these indices back to homeostatic values.

More than a decade ago, researchers observed that rodents take a good gulp of water just before going to sleep. This surge was not motivated by thirst: the mice were not feverish, were not hungry, and did not have too-viscous or too-salty blood. So why do it, then? If the rodents are restricted from drinking the water, they get dehydrated, so obviously the behavior has a function. But it is not motivated by thirst, at least not the way we know it. Huh… The authors call this “anticipatory thirst”, because it keeps the animal from becoming dehydrated later on.

Since the behavior occurs with regularity, maybe the neurons that control circadian rhythms have something to do with it. So Gizowski et al. (2016) took a closer look at the activity of clock neurons from the suprachiasmatic nucleus (SCN), a well-known hypothalamic nucleus heavily involved in circadian rhythms. The authors did a lot of work on SCN and OVLT neurons: fluorescent labeling, c-fos expression, anatomical tracing, optogenetics, genetic knockouts, pharmacological manipulations, electrophysiological recordings, and behavioral experiments. All these to come to this conclusion:

SCN neurons release vasopressin and that excites the OVLT neurons via V1a receptors. This is necessary and sufficient to make the animal drink the water, even if it’s not thirsty.

That’s a lot of techniques used in a lot of experiments for only three authors. Ten years ago, you needed only one, maybe two techniques to prove the same point. Either there have been a lot of students and technicians who did not get credit (there isn’t even an Acknowledgements section. EDIT: yes, there is, see the comments below or, if they’re missing, the P.S.) or these three authors are experts in all these techniques. In this day and age, I wouldn’t be surprised by either option. No wonder small universities have difficulty publishing in Big Name journals; they don’t have the resources to compete. And without publishing, no tenure… And without tenure, less research… And thus shall the gap widen.

Musings about workload aside, this is a great paper, shedding light on yet another mysterious behavior and elucidating the mechanism behind it. There’s still work to be done, though, like answering how accurately the SCN predicts bedtime in order to activate the drinking behavior. Does it take its cues from light only? Does ambient temperature play a role? And so on. This line of work can help people who work in shifts to prevent certain health problems. Their SCN is out of rhythm, and that can deleteriously influence the activity of a whole slew of organs.

Summary of the doi: 10.1038/nature19756 findings. 1) Light is a cue for the suprachiasmatic nucleus (SCN) that bedtime is near. 2) The SCN vasopressin neurons that project to the organum vasculosum lamina terminalis (OVLT) are activated. 3) The OVLT generates the anticipatory thirst. 4) The animal drinks fluids.

Reference: Gizowski C, Zaelzer C, Bourque CW. (28 Sep 2016). Clock-driven vasopressin neurotransmission mediates anticipatory thirst prior to sleep. Nature, 537(7622): 685-688. PMID: 27680940. DOI: 10.1038/nature19756. ARTICLE

By Neuronicus, 5 October 2016

EDIT (12 Oct 2016): P.S. The blog comments are automatically deleted after a period of time. In the case of this post that would be a pity, because I have been fortunate to receive comments from at least one of the authors of the paper: the PI, Dr. Charles Bourque, and, presumably under a pseudonym, though I don’t know that for sure, also the first author, Claire Gizowski. So I will include here, in a post scriptum, the main idea of their comments. Here is an excerpt from Dr. Bourque’s comment:

“Let me state for the record that Claire accomplished pretty much ALL of the work in this paper (there is a description of who did what at the end of the paper). More importantly, there were no “unthanked” undergraduates, volunteers or other parties that contributed to this work.”

My hat, Ms. Gizowski. It is tipped. To you. Congratulations! With such impressive work, I am sure I will hear about you again and that pretty soon I will blog about Dr. Gizowski.

Painful Pain Paper

There has been much hype over the new paper published in the latest Nature issue which claims to have discovered an opioid analgesic that doesn’t have most of the side effects of morphine. If the claim holds, the authors may have found the Holy Grail of pain research chased by too many for too long (besides being worth billions of dollars to its discoverers).

The drug, called PZM21, was discovered using structure-based drug design. This means that instead of taking a drug that works, say morphine, and then tweaking its molecular structure in various ways and see if the resultant drugs work, you take the target of the drug, say mu-opioid receptors, and design a drug that fits in that slot. The search and design are done initially with sophisticated software and there are many millions of virtual candidates. So it takes a lot of work and ingenuity to select but a few drugs that will be synthesized and tested in live animals.

Manglik et al. (2016) did just that and they came up with PZM21 which, compared to morphine, is:

1) selective for the mu-opioid receptors (i.e. it doesn’t bind to anything else)
2) produces no respiratory depression (maybe a touch on the opposite side)
3) doesn’t affect locomotion
4) produces less constipation
5) produces long-lasting affective analgesia
6) and has less addictive liability

The Holy Grail, right? Weeell, I have some serious issues with number 5 and, to some extent, number 6 on this list.

Normally, I wouldn’t dissect a paper so thoroughly because, if there is one thing I learned by the end of grad school and postdoc, it is that there is no perfect paper out there. Consequently, anyone with scientific training can find issues with absolutely anything published. I once challenged someone to bring me any loved and cherished paper and I would tear it apart; it’s much easier to criticize than to come up with solutions. Probably that’s why everybody hates Reviewer No. 2…

But, for extraordinary claims, you need extraordinary evidence. And the evidence simply does not support number 5, and maybe number 6, above.

Let’s start with pain. The authors used 3 tests: hotplate (drop a mouse on a hot plate for 10 sec and see what it does), tail-flick (give an electric shock to the tail and see how fast the mouse flicks its tail), and formalin (inject an inflammatory painful substance into the mouse’s paw and see what the animal does). They used 3 doses of PZM21 in the hotplate test (10, 20, and 40 mg/kg), 2 doses in the tail-flick test (10 and 20), and 1 dose in the formalin test (20). Why? If you start with a dose-response in one test and want to convince me the drug works in the other tests, then do a dose-response for those too, so I have something to compare. These tests have been extensively used in pain research, and the standard drug used is morphine. Therefore, the literature is clear on how different doses of morphine work in these tests. I need your dose-responses for your new drug to be able to see how it measures up, since you claim it is “more efficacious than morphine”. If you don’t want to convince me there is a dose-response effect, that’s fine too, I’ll frown a little, but it’s your choice. However, then choose a dose and stick with it! Otherwise I cannot compare the behaviors across tests, rendering one or the other test meaningless. If you’re wondering, they used only one dose of morphine in all the tests, except the hotplate, where they used two.

Another thing, also related to doses. The authors found something really odd: PZM21 works (meaning produces analgesia) in the hotplate test, but not in the tail-flick test. This is truly amazing, because no opiate I know of can make such a clear-cut distinction between those two tests. Buuuuut, and here is a big “BUT”, they did not test their highest dose (40 mg/kg) in the tail-flick test! Why? I’ll tell you why, because I am oh so familiar with this argument. It goes like this:

Reviewer: Why didn’t you use the same doses in all your 3 pain tests?

Author: The middle and highest doses have similar effects in the hotplate test, ok? So it doesn’t matter which one of them I use in the tail-flick test.

Reviewer: Yeah, right, but, you have no proof that the effects of the two doses are indistinguishable because you don’t report any stats on them! Besides, even so, that argument applies only when a) you have ceiling effects (not the case here, your morphine hit it, at any rate) and b) the drug has the expected effects on both tests and thus you have some logical rationale behind it. Which is not the case here, again: your point is that the drug DOESN’T produce analgesia in the tail-flick test and yet you don’t wanna try its HIGHEST dose… REJECT AND RESUBMIT! Awesome drug discovery, by the way!

So how come the paper passed the reviewers?! Perhaps the fact that two of the reviewers are long-term publishing co-authors from the same university had something to do with it; you know, same views predispose them to the same biases and so on… But can you do that? I mean, have reviewers for Nature from the same department for the same paper?

Alrighty then… let’s move on to the stats. Or rather not. Because there aren’t any for the hotplate or tail-flick tests! Now, I know all about the “freedom from the tyranny of p” movement (that is: report only the means, standard errors of the mean, and confidence intervals, and let the reader judge the data). I know that the average scientist today needs to know 100-fold more stats than his predecessors 20 years ago (although some biologists and chemists seem to be excused from this; things either turn color or not, either are there or not, etc.). I know that you cannot get away with only one experiment published these days, but need a lot of them, so you have to do a lot of corrections to your stats to avoid Type I errors. I know all about that, but, just like with the doses, choose one way or another and stick to it. Because there are ANOVAs run for the formalin test, the respiration, constipation, locomotion, and conditioned place preference tests, but none for the hotplate or tail-flick! I am also aware that to be published in Science or Nature you have to strip your work and wording to the bare minimum because of the insane word-count limits, but you have free rein in the Supplementals. And I combed through those, and there are no stats there either. Nor are there any power analyses… So, what’s going on here? Remember, the authors didn’t test the highest dose in the tail-flick test because – presumably – the highest and intermediary doses have indistinguishable effects, but where are the stats to prove it?
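To show what the missing analysis would even be: a one-way ANOVA across the three doses. The F statistic below is computed on hotplate latencies I fabricated on the spot, purely to show the shape of the test the authors skipped; these are not the paper’s data.

```python
from statistics import mean

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA: between-group variance divided by
    within-group variance, each scaled by its degrees of freedom."""
    all_vals = [v for g in groups for v in g]
    grand = mean(all_vals)
    k = len(groups)           # number of groups
    n = len(all_vals)         # total observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - mean(g)) ** 2 for g in groups for v in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Invented hotplate latencies (s) for 10, 20, and 40 mg/kg groups.
low  = [4.1, 5.0, 4.6, 4.4]
mid  = [6.8, 7.2, 6.5, 7.0]
high = [7.1, 6.9, 7.4, 7.3]
F = one_way_anova_F([low, mid, high])
print(round(F, 2))
```

A large F (checked against the F distribution with k-1 and n-k degrees of freedom), followed by a post-hoc comparison of the middle vs. highest dose, is exactly the evidence needed to back the claim that those two doses are indistinguishable.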

And now the thing that really, really bothered me: the claim that PZM21 takes away the affective dimension of pain but not the sensory one. Pain is a complex experience that, depending on your favourite pain researcher, has at least two dimensions: the sensory (also called ‘reflexive’ because it is the immediate response to the noxious stimulation that makes you retract by reflex the limb from whatever produces the tissue damage) and the affective (also called ‘motivational’ because it makes the pain unpleasant and motivates you to get away from whatever caused it and to seek alleviation and recovery). The first aspect of pain, the sensory, is relatively easy to measure, since you look at the limb withdrawal (or tail withdrawal, in tailed animals). By contrast, the affective aspect is very hard to measure. In humans, you can ask them how unpleasant it is (and even those reports are unreliable), but how do you do it with animals? Well, you go back to humans and see what they do. Humans scream “Ouch!” or swear when they get hurt (so you can measure vocalizations in animals), and humans avoid places in which they got hurt because they remember the unpleasant pain (so you run a test called Conditioned Place Avoidance in animals; although if you have a drug that shows positive results in this test, like morphine, you don’t know whether you blocked the memory of the unpleasantness or the feeling of unpleasantness itself, but that’s a different can of worms). The authors did not use any of these tests, yet they claim that PZM21 takes away the unpleasantness of pain, i.e. is an affective analgesic!

What they did was this: they looked at the behaviors the animals performed on the hotplate and divided them into two categories: reflexive (the lifting of the paw) and affective (the licking of the paw and the jumping). Now, there are several issues with this dichotomy, and I’m not even going to go there; I’ll just say that there are prominent pain researchers who will scream from the top of their lungs that the so-called affective behaviors from the hotplate test cannot be indexes of pain affect, because pain affect requires forebrain structures and yet these behaviors persist in the decerebrated rodent, including the jumping. Anyway, leaving aside the theoretical debate about what those measured behaviors really mean, there is still the problem of the jumpers: namely, the authors excluded the mice who tried to jump off the hotplate from the analysis when evaluating the potency of PZM21, but then left them in when comparing the two types of analgesia, because jumping is a sign of escape, an emotionally-valenced behavior! Isn’t this the same test?! Seriously? Why are you using two different groups of mice and leaving the impression that it is only one? And, oh yeah, they used only the middle dose for the affective evaluation, when they used all three doses for potency… And I’m not even gonna ask why they used the highest dose in the formalin test… but only for the normal mice; the knockouts in the same test got the middle dose! So we’re back to comparing apples with pears again!

Next (and last, I promise, this rant is way too long already), the non-addictive claim. The authors used the Conditioned Place Preference paradigm, an old and reliable method to test drug likeability. The idea is that you have a box with two chambers, X and Y. Give the animal saline in chamber X and let it stay there for some time. The next day, you give the animal the drug and confine it to chamber Y. Do this a few times, and on the test day you let the animal explore both chambers. If it stays longer in chamber Y, then it liked the drug, much like humans seek out a place in which they felt good and avoid places in which they felt bad. All well and good, except that it is standard practice in this test to counterbalance the days and the chambers! I don’t know about the chambers, because they don’t say, but the days were not counterbalanced. I know, it’s a petty little thing for me to bring up, but remember the saying about extraordinary claims… so I expect flawless methods. I would have also liked to see a far more convincing test of addiction liability, like self-administration, but that will be done later, if the drug holds up, I hope. Thankfully, unlike with the affective analgesia claims, the authors have been more restrained in their verbiage about addiction, much to their credit (and I have a nasty suspicion as to why).
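For the curious, counterbalancing is not rocket science: you just spread the drug chamber and the session order evenly across the cohort. A toy Python sketch with hypothetical labels (the chambers and orders below are illustrative, not taken from the paper’s methods):

```python
import itertools

def counterbalanced_assignments(mouse_ids):
    """Assign each mouse a (drug_chamber, session_order) condition so that
    chamber pairings and day orders are balanced across the cohort.

    Purely illustrative: 'X'/'Y' and the order labels are hypothetical,
    and real designs would also randomize within each condition.
    """
    conditions = list(itertools.product(["X", "Y"], ["drug-first", "saline-first"]))
    return {m: conditions[i % len(conditions)] for i, m in enumerate(mouse_ids)}

plan = counterbalanced_assignments(range(8))
# With 8 mice, each of the 4 chamber-by-order conditions is used exactly twice.
```

The point of the exercise: any preference the box or the schedule produces on its own washes out across groups, so what remains is attributable to the drug.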

I do sincerely think the drug shows decent promise as a painkiller. Kudos for discovering it! But, seriously, fellows, the behavioral portion of the paper could use some improvement.

Ok, rant over.

EDIT (Aug 25, 2016): I forgot to mention something, and that is the competing financial interests declared for this paper: some of its authors have already filed a provisional patent for PZM21 or are founders or consultants for Epiodyne (a company that wants to develop novel analgesics). Normally, that wouldn’t worry me unduly; people are allowed to make a buck from their discoveries (although it is billions in this case, and we could get into that age-old capitalism debate about whether it is moral to make billions on the suffering of other people, but that’s a different story). Anyway, combine the financial interests with the poor behavioral tests and you get a very shoddy thing indeed.

Reference: Manglik A, Lin H, Aryal DK, McCorvy JD, Dengler D, Corder G, Levit A, Kling RC, Bernat V, Hübner H, Huang XP, Sassano MF, Giguère PM, Löber S, Da Duan, Scherrer G, Kobilka BK, Gmeiner P, Roth BL, & Shoichet BK (Epub 17 Aug 2016). Structure-based discovery of opioid analgesics with reduced side effects. Nature, 1-6. PMID: 27533032, DOI: 10.1038/nature19112. ARTICLE 

By Neuronicus, 21 August 2016

The FIRSTS: Theory of Mind in non-humans (1978)

Although any farmer or pet owner throughout the ages would probably agree that animals can understand the intentions of their owners, not until 1978 was this knowledge scientifically proven.

Premack & Woodruff (1978) performed a very simple experiment in which they showed videos to a female adult chimpanzee named Sarah involving humans facing various problems, from simple (can’t reach a banana) to complex (can’t get out of the cage). Then, the chimp was shown pictures of the human with the tool that solved the problem (a stick to reach the banana, a key for the cage) along with pictures where the human was performing actions that were not conducive to solving his predicament. The experimenter left the room while the chimp made her choice. When she did, she rang a bell to summon the experimenter back into the room, who then examined the chimp’s choice and told her whether it was right or wrong. Regardless of the choice, the chimp was given her favorite food. The chimp’s choices were almost always correct when the actor was her favourite trainer, but not so much when the actor was a person she disliked.

Because “no single experiment can be all things to all objections, but the proper combination of results from [more] experiments could decide the issue nicely” (p. 518), the researchers did some more experiments which were variations of the first one designed to figure out what the chimp was thinking. The authors go on next to discuss their findings at length in the light of two dominant theories of the time, mentalism and behaviorism, ruling in favor of the former.

Of course, the paper has some methodological flaws that would not pass the rigors of today’s reviewers. That’s why it has been replicated multiple times in more refined ways. Nor is the distinction between behaviorism and cognitivism a valid one anymore, things having turned out to be, as usual, more complex and intertwined than that. Thirty years later, the consensus was that chimps do indeed have a theory of mind in that they understand the intentions of others, but they lack an understanding of false beliefs (Call & Tomasello, 2008).


References:

1. Premack D & Woodruff G (Dec. 1978). Does the chimpanzee have a theory of mind? The Behavioral and Brain Sciences, 1 (4): 515-526. DOI: 10.1017/S0140525X00076512. ARTICLE

2. Call J & Tomasello M (May 2008). Does the chimpanzee have a theory of mind? 30 years later. Trends in Cognitive Sciences, 12(5): 187-192. PMID: 18424224, DOI: 10.1016/j.tics.2008.02.010. ARTICLE | FULLTEXT PDF

By Neuronicus, 20 August 2016

Can you tickle yourself?

As I said before, with so many science outlets out there, it’s hard to find something new and interesting to cover that hasn’t been covered already. Admittedly, sometimes a new paper comes out that is so funny or interesting that I too fall in line with the rest of them and cover it. But, most of the time, I try to bring you something that you won’t find reported by other science journalists. So, I’m sacrificing novelty for originality by choosing something from my absolutely huge article folder (about 20,000 papers).

And here is the gem for today, titled enticingly “Why can’t you tickle yourself?”. Blakemore, Wolpert & Frith (2000) review several papers on the subject, including some of their own, and arrive at the conclusion that the reason you can’t tickle yourself is that you expect it. Let me explain: when you make a movement that results in a sensation, you have a pretty accurate expectation of how that’s going to feel. This expectation then dampens the sensation, a process probably evolved to let you focus on more relevant things in the environment than on what you’re doing to yourself (don’t let your mind go all dirty now, ok?).

Mechanistically speaking, it goes like this: when you move your arm to tickle your foot, a copy of the motor command you gave to the arm (the authors call this the “efference copy”) goes to a ‘predictor’ region of the brain (the authors believe this is the cerebellum) that generates an expectation (see Fig. 1). Once the movement has been completed, the actual sensation is compared to the expected one. If there is a discrepancy, you get tickled; if not, not so much. But, you might say, even when someone else is going to tickle me I have a pretty good idea what to expect, so where’s the discrepancy? Why do I still get tickled when I expect it? Because you can’t fool your brain that easily. The brain then says: “Alright, alright, we expect tickling. But do tell me this: where is that motor command? Hm? I didn’t get any!” So here is your discrepancy: when someone tickles you, there is the sensation but no motor command; signals 1 and 2 from the diagram are missing.
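If you like your models computational, the comparator story can be caricatured in a few lines of Python. The linear predictor and the numbers are my illustrative assumptions, not anything the authors fitted to data:

```python
def tickle_intensity(motor_command, actual_sensation, gain=1.0):
    """Toy comparator model of the efference-copy account.

    A copy of the motor command is turned into a predicted sensation by a
    'predictor' (here just a linear gain); perceived tickle scales with the
    leftover prediction error. Units are arbitrary.
    """
    predicted = gain * motor_command  # efference copy -> expected sensation
    return max(0.0, actual_sensation - predicted)

# Self-tickle: the motor command predicts the sensation, so nothing is left over.
self_tickle = tickle_intensity(motor_command=1.0, actual_sensation=1.0)
# Someone else tickles you: sensation arrives with no motor command at all.
other_tickle = tickle_intensity(motor_command=0.0, actual_sensation=1.0)
```

A gain somewhere between 0 and 1 would give the intermediate case described below, where someone moves your own hand for you and you get tickled just a little.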

Fig. 1. My take on the tickling mechanism after Blakemore, Wolpert & Frith (2000). Credits. Picture: Sobotta 1909, Diagram: Neuronicus 2016. Data: Blakemore, Wolpert & Frith (2000). Overall: Public Domain

Likewise, when someone tickles you with your own hand, there is an attenuation of the sensation, but it does not completely disappear, because the brain registers some movement of your own arm even if the motor command was not initiated by you. So you get tickled just a little bit. The brain is no fool: it is aware of who has done what and with whose hands (your dirty mind thought that, I didn’t say it!).

This mechanism of comparing sensation with movement of self and others appears to be impaired in schizophrenia. So when these patients say things like “I hear some voices and I can’t shut them up” or “My hand moved of its own accord, I had no control over it”, it may be that they are not aware of initiating those movements; the self-monitoring mechanism is all wacky. Supporting this hypothesis, the authors conducted an fMRI experiment (Reference 2) in which they showed that the somatosensory and the anterior cingulate cortices show reduced activation when attempting to self-tickle as opposed to being tickled by the experimenter (please, stop that line of thinking…). Correspondingly, the behavioral portion of the experiment showed that schizophrenics can tickle themselves. Go figure!


Reference 1: Blakemore SJ, Wolpert D, & Frith C (3 Aug 2000). Why can’t you tickle yourself? Neuroreport, 11(11):R11-6. PMID: 10943682. ARTICLE FULLTEXT

Reference 2: Blakemore SJ, Smith J, Steel R, Johnstone CE, & Frith CD (Sep 2000, Epub 17 October 2000). The perception of self-produced sensory stimuli in patients with auditory hallucinations and passivity experiences: evidence for a breakdown in self-monitoring. Psychological Medicine, 30(5):1131-1139. PMID: 12027049. ARTICLE

By Neuronicus, 7 August 2016

Transcranial direct current stimulation & cognitive enhancement

There’s so much research out there… So much that some time ago I learned that in science, as probably in other fields too, one has only to choose a side of an argument and then, provided that s/he has some good academic search-engine skills and institutional access to journals, get the articles that support that side. Granted, that works for relatively small questions restricted to narrow domains, like “is that brain structure involved in x” or something like that; I doubt you would be able to find any paper that invalidates theories like gravity or the central dogma of molecular biology (DNA to RNA to protein).

If you’re a scientist trying to answer a question, you’ll probably comb through some dozens of papers and form an opinion of your own after weeding out the papers with small sample sizes, the ones with shoddy methodology, or simply the bad ones (yes, they do exist; even scientists are people and hence prone to mistakes). And if you’re not a scientist, or the question you’re trying to answer is not from your field, then you’ll probably go for reviews or meta-analyses.

Meta-analyses are studies that look at several papers (dozens or hundreds), pool their data together and then apply some complicated statistics to see the overall results. One such meta-analysis concerns the benefits, if any, of transcranial direct current stimulation (tDCS) on working memory (WM) in healthy people.
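In case you’re wondering what “pooling the data” actually means, the simplest flavor is fixed-effect, inverse-variance weighting: each study’s effect size counts in proportion to its precision. A toy Python sketch with invented effect sizes (not Mancuso et al.’s data):

```python
from math import sqrt

def pooled_effect(effects, variances):
    """Fixed-effect, inverse-variance pooling of study effect sizes.

    `effects` are per-study standardized effects (e.g., Hedges' g) and
    `variances` their sampling variances. Real meta-analyses would add
    random-effects models, heterogeneity tests, and publication-bias checks.
    """
    weights = [1.0 / v for v in variances]            # precision = 1/variance
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = sqrt(1.0 / sum(weights))                     # SE of the pooled effect
    return pooled, se

# Three hypothetical tDCS-vs-sham working-memory studies:
g, se = pooled_effect([0.30, 0.10, -0.05], [0.04, 0.02, 0.05])
```

Notice how the most precise study (smallest variance) drags the pooled estimate toward itself, which is exactly why meta-analytic conclusions hinge on which studies get included.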

tDCS is a method of applying electrical current through some electrodes to your neurons to change how they work and thus change some brain functions. It is similar to repetitive transcranial magnetic stimulation (rTMS), only in the latter case the change in neuronal activity is due to the application of a magnetic field.

Some people look at these methods not only as possible treatments for a variety of disorders, but also as cognitive-enhancement tools. And not only researchers, but also various companies that sell the relatively inexpensive equipment to gamers and others. But does tDCS work in the first place?


Mancuso et al. (2016) say that there have been 3 recent meta-analyses done on this issue and they found that “the effects [of tDCS on working memory in healthy volunteers] are reliable though small (Hill et al., 2016), partial (Brunoni & Vanderhasselt, 2014), or nonexistent (Horvath et al., 2015)” (p. 2). But they say these studies are somewhat flawed and that’s why they conducted their own meta-analysis, which concludes that “the true enhancement potential of tDCS for WM remains somewhat uncertain” (p.19). Maybe it works a little bit if used during the training phase of a working memory task, like n-back, and even then that’s a maybe…

Boring, you may say. I’ll grant you that. So… all that work and it revealed virtually nothing new! I’ll grant you that too. But what this meta-analysis brings that is new, besides some interesting statistics (like controlling for publication bias), is a nice discussion of why they didn’t find much, exploring possible causes like the small sample and effect sizes that seem to plague many behavioral studies. Another explanation, which, to tell you the truth, the authors do not seem too enamored with, is that maybe, just maybe, tDCS simply doesn’t have any effect on working memory, period.

Besides, papers with seemingly boring findings do not catch the media eye, so I had to give it a little attention, didn’t I 😉 ?

Reference: Mancuso LE, Ilieva IP, Hamilton RH, & Farah MJ. (Epub 7 Apr 2016, Aug 2016) Does Transcranial Direct Current Stimulation Improve Healthy Working Memory?: A Meta-analytic Review. Journal of Cognitive Neuroscience, 28(8):1063-89. PMID: 27054400, DOI: 10.1162/jocn_a_00956. ARTICLE

 By Neuronicus, 2 August 2016