Video games and depression

There’s a lot of talk these days about the harm or benefit of playing video games, much of it ignoring the question of what kind of video games we’re talking about.

Merry et al. (2012) designed a game to help adolescents with depression. The game is called SPARX (Smart, Positive, Active, Realistic, X-factor thoughts) and is based on cognitive behavioral therapy (CBT) principles.

CBT has been proven to be more efficacious than other forms of therapy, like psychoanalysis, psychodynamic, transpersonal and so on, in treating (or at least alleviating) a variety of mental disorders, from depression to anxiety, from substance abuse to eating disorders. Its aim is to identify maladaptive thoughts (the ‘cognitive’ bit) and behaviors (the ‘behavioral’ bit) and to change those thoughts and behaviors in order to feel better. It is more active and more focused than other therapies, in the sense that during a CBT session the patient and therapist pick one problem and tackle it.

SPARX is a simple interactive fantasy game with 7 levels (Cave, Ice, Volcano, Mountain, Swamp, Bridgeland, Canyon) in which the purpose is to fight the GNATs (Gloomy Negative Automatic Thoughts) by mastering several techniques, like breathing and progressive relaxation, and acquiring skills, like scheduling and problem solving. You can customize your avatar, and you get a guide throughout the game who also assesses your progress and gives you real-life quests, a.k.a. therapeutic homework. If the player does not show the expected improvements after each level, s/he is directed to seek help from a real-life therapist. Luckily, the researchers also employed the help of true game designers, so the game looks at least half-decent and engaging, not the lame, worst-graphics-ever sort of thing I was kind of expecting.

To see if their game helps with depression, Merry et al. (2012) enrolled 187 adolescents (aged 12-19) who sought help for depression in an intervention program; half of the subjects played the game for about 4-7 weeks, and the other half did traditional CBT with a qualified therapist for the same amount of time. The patients were assessed for depression at regular intervals before, during, and after the therapy, up to 3 months post-therapy. The conclusion?

SPARX “was at least as good as treatment as usual in primary healthcare sites in New Zealand” (p. 8)

Not bad for an RPG! The remission rates were higher in the SPARX group than in the treatment-as-usual group. Also, the majority of participants liked the game and would recommend it. Additionally, SPARX was more effective than standard CBT for the less depressed participants, compared with those who scored higher on the depression scales.

And now, coming back to my intro point: the fact that this game seems to be beneficial does not mean all of them are. There are studies showing that some games have deleterious effects on the developing brain. In the same vein, the fact that some shoddy company sells games that are supposed to boost your brain function (I always wondered which function…) doesn’t mean they are actually good for you. They may call it cognitive enhancement, memory boosting, or some other brainy catchphrase, but without the research to back up the claims, anybody can say anything and it becomes a “Buyer Beware!” game; it’s nothing but placebo in the best-case scenario. So it gives me hope – and great pleasure – that some real psychologists at a real university developed a video game and then did the necessary research to validate it as a helping tool before marketing it.


Oh, an afterthought: this paper is 4 years old, so I wondered what happened in the meantime – is it on the market or what? On the research databases I couldn’t find much, except that it was tested this year on a Dutch population with pretty much similar results. But Wikipedia tells us that it was released in 2013 and is free online for New Zealanders! The game’s website says it may become available to other countries as well.

Reference: Merry SN, Stasiak K, Shepherd M, Frampton C, Fleming T, & Lucassen MF. (18 Apr 2012). The effectiveness of SPARX, a computerised self help intervention for adolescents seeking help for depression: randomised controlled non-inferiority trial. The British Medical Journal, 344:e2598. doi: 10.1136/bmj.e2598. PMID: 22517917, PMCID: PMC3330131. ARTICLE | FREE FULLTEXT PDF  | Wikipedia page | Watch the authors talk about the game

By Neuronicus, 15 October 2016


Painful Pain Paper

There has been much hype over the new paper published in the latest Nature issue which claims to have discovered an opioid analgesic that doesn’t have most of the side effects of morphine. If the claim holds, the authors may have found the Holy Grail of pain research chased by too many for too long (besides being worth billions of dollars to its discoverers).

The drug, called PZM21, was discovered using structure-based drug design. This means that instead of taking a drug that works, say morphine, tweaking its molecular structure in various ways, and seeing if the resultant drugs work, you take the target of the drug, say the mu-opioid receptor, and design a drug that fits in that slot. The search and design are done initially with sophisticated software, and there are many millions of virtual candidates. So it takes a lot of work and ingenuity to select but a few drugs to be synthesized and tested in live animals.
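The funnel described above – score an enormous virtual library against the target’s binding pocket, keep only the top handful for the wet lab – can be caricatured in a few lines. This is only a toy sketch: the compound names are made up, and the checksum-based scoring function is a deterministic stand-in for real docking software (e.g. DOCK or AutoDock), not an actual chemistry calculation.

```python
import zlib

# Toy virtual-screening funnel: score every candidate molecule against the
# target's binding pocket and keep only the best few for wet-lab synthesis.
# The scoring function below is a deterministic stand-in -- real projects
# use docking software, not a checksum.

def dock_score(candidate: str) -> int:
    """Pretend docking score: lower = better fit to the receptor pocket."""
    return zlib.crc32(candidate.encode()) % 1000

def screen(candidates, keep=3):
    """Rank the virtual library and return the `keep` top-scoring hits."""
    return sorted(candidates, key=dock_score)[:keep]

library = [f"virtual_compound_{i}" for i in range(100_000)]
hits = screen(library, keep=3)
print(hits)  # the handful of molecules that go on to be synthesized and tested
```

The point of the sketch is the shape of the process, not the scoring: millions of candidates in, a few synthesizable leads out.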

Manglik et al. (2016) did just that and they came up with PZM21 which, compared to morphine, is:

1) selective for the mu-opioid receptors (i.e. it doesn’t bind to anything else)
2) produces no respiratory depression (maybe a touch on the opposite side)
3) doesn’t affect locomotion
4) produces less constipation
5) produces long-lasting affective analgesia
6) and has less addictive liability

The Holy Grail, right? Weeell, I have some serious issues with number 5 and, to some extent, number 6 on this list.

Normally, I wouldn’t dissect a paper so thoroughly because, if there is one thing I learned by the end of grad school and postdoc, it is that there is no perfect paper out there. Consequently, anyone with scientific training can find issues with absolutely anything published. I once challenged someone to bring me any loved and cherished paper and I would tear it apart; it’s much easier to criticize than to come up with solutions. Probably that’s why everybody hates Reviewer No. 2…

But for extraordinary claims, you need extraordinary evidence. And the evidence simply does not support claims 5 and maybe 6 above.

Let’s start with pain. The authors used 3 tests: hotplate (drop a mouse on a hot plate for 10 sec and see what it does), tail-flick (give an electric shock to the tail and see how fast the mouse flicks its tail), and formalin (inject an inflammatory, painful substance into the mouse’s paw and see what the animal does). They used 3 doses of PZM21 in the hotplate test (10, 20, and 40 mg/kg), 2 doses in the tail-flick test (10 and 20), and 1 dose in the formalin test (20). Why? If you start with a dose-response in one test and want to convince me it works in the other tests, then do a dose-response for those too, so I have something to compare. These tests have been extensively used in pain research, and the standard drug used is morphine. Therefore, the literature is clear on how different doses of morphine perform in these tests. I need your dose-responses for your new drug to be able to see how it measures up, since you claim it is “more efficacious than morphine”. If you don’t want to convince me there is a dose-response effect, that’s fine too; I’ll frown a little, but it’s your choice. However, then choose a dose and stick with it! Otherwise I cannot compare the behaviors across tests, rendering one or the other test meaningless. If you’re wondering, they used only one dose of morphine in all the tests, except the hotplate, where they used two.
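The comparison the post is asking for is statistically simple: run the same doses in every test, then a one-way ANOVA per test across dose groups. Here is a minimal sketch with invented latency numbers (the data are hypothetical; only the design is the point), computing the F statistic by hand:

```python
# One-way ANOVA by hand: compare hotplate latencies (seconds) across doses.
# The latencies below are invented for illustration; in a real analysis each
# list would hold one latency per mouse at that dose of the drug.

def one_way_anova_F(groups):
    """Return the F statistic for a list of groups of measurements."""
    k = len(groups)                                   # number of dose groups
    n = sum(len(g) for g in groups)                   # total animals
    grand_mean = sum(sum(g) for g in groups) / n
    # between-group sum of squares (how much the dose means differ)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # within-group sum of squares (noise within each dose)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical latencies at 10, 20, and 40 mg/kg -- the same three doses
# would then be run identically in the tail-flick and formalin tests.
doses = {
    10: [6.1, 5.8, 6.4, 5.9],
    20: [8.2, 8.5, 7.9, 8.8],
    40: [9.1, 9.5, 8.9, 9.3],
}
F = one_way_anova_F(list(doses.values()))
print(f"F(2, 9) = {F:.2f}")  # a large F means dose matters; follow with post hocs
```

With the same dose groups in every test, the same one-line analysis would settle whether the middle and highest doses really are indistinguishable – the claim the paper leaves unsupported.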

Another thing, also related to doses. The authors found something really odd: PZM21 works (meaning produces analgesia) in the hotplate test, but not the tail-flick test. This is truly amazing, because no opiate I know of can make such a clear-cut distinction between those two tests. Buuuuut, and here is a big ‘BUT’, they did not test their highest dose (40 mg/kg) in the tail-flick test! Why? I’ll tell you why, because I am oh so familiar with this argument. It goes like this:

Reviewer: Why didn’t you use the same doses in all your 3 pain tests?

Author: The middle and highest doses have similar effects in the hotplate test, ok? So it doesn’t matter which one of these doses I’ll use in the tail-flick test.

Reviewer: Yeah, right, but, you have no proof that the effects of the two doses are indistinguishable because you don’t report any stats on them! Besides, even so, that argument applies only when a) you have ceiling effects (not the case here, your morphine hit it, at any rate) and b) the drug has the expected effects on both tests and thus you have some logical rationale behind it. Which is not the case here, again: your point is that the drug DOESN’T produce analgesia in the tail-flick test and yet you don’t wanna try its HIGHEST dose… REJECT AND RESUBMIT! Awesome drug discovery, by the way!

So how come the paper passed the reviewers?! Perhaps the fact that two of the reviewers are long-term publishing co-authors from the same university had something to do with it; you know, same views predispose them to the same biases and so on… But can you do that? I mean, have reviewers for Nature from the same department for the same paper?

Alrighty then… let’s move on to the stats. Or rather not. Because there aren’t any for the hotplate or tail-flick tests! Now, I know all about the “freedom from the tyranny of p” movement (that is: report only the means, standard errors of the mean, and confidence intervals, and let the reader judge the data), and about the fact that the average scientist today needs to know 100-fold more stats than his predecessors did 20 years ago (although some biologists and chemists seem to be excused from this; things either turn color or not, either are there or not, etc.), and about the fact that you cannot get away with only one published experiment these days but need a lot of them, so you have to do a lot of corrections to your stats to avoid Type 1 errors. I know all about that, but just like the case with the doses: choose one way or another and stick to it. Because there are ANOVAs run for the formalin, respiration, constipation, locomotion, and conditioned place preference tests, but none for the hotplate or tail-flick! I am also aware that to be published in Science or Nature you have to strip your work and wording to the bare minimum because of the insane word-count limits, but you have free rein in the Supplementals. And I combed through those, and there are no stats there either. Nor are there any power analyses… So, what’s going on here? Remember, the authors didn’t test the highest dose in the tail-flick test because – presumably – the highest and intermediary doses have indistinguishable effects, but where are the stats to prove it?

And now the thing that really, really bothered me: the claim that PZM21 takes away the affective dimension of pain but not the sensory one. Pain is a complex experience that, depending on your favorite pain researcher, has at least 2 dimensions: the sensory (also called ‘reflexive’ because it is the immediate response to the noxious stimulation that makes you retract by reflex the limb from whatever produces the tissue damage) and the affective (also called ‘motivational’ because it makes the pain unpleasant and motivates you to get away from whatever caused it and to seek alleviation and recovery). The first aspect of pain, the sensory, is relatively easy to measure, since you look at the limb withdrawal (or the tail withdrawal, in the case of rodents). By contrast, the affective aspect is very hard to measure. In humans, you can ask them how unpleasant it is (and even those reports are unreliable), but how do you do it with animals? Well, you go back to humans and see what they do. Humans scream “Ouch!” or swear when they get hurt (so you can measure vocalizations in animals), or humans avoid places in which they got hurt because they remember the unpleasant pain (so you run a test called conditioned place avoidance in animals, although if a drug shows positive results in this test, like morphine, you don’t know whether you blocked the memory of the unpleasantness or the feeling of unpleasantness itself, but that’s a different can of worms). The authors did not use any of these tests, yet they claim that PZM21 takes away the unpleasantness of pain, i.e. is an affective analgesic!

What they did was this: they looked at the behaviors the animals displayed on the hotplate and divided them into two categories: reflexive (the lifting of the paw) and affective (the licking of the paw and the jumping). Now, there are several issues with this dichotomy, and I’m not even going to go there; I’ll just say that there are prominent pain researchers who will scream from the top of their lungs that the so-called affective behaviors in the hotplate test cannot be indexes of pain affect, because pain affect requires forebrain structures and yet these behaviors persist in the decerebrated rodent, including the jumping. Anyway, leaving aside the theoretical debate about what those measured behaviors really mean, there is still the problem of the jumpers: namely, the authors excluded from the analysis the mice that tried to jump out of the apparatus when evaluating the potency of PZM21, but then left them in when comparing the two types of analgesia, because jumping is a sign of escape, an emotionally valenced behavior! Isn’t this the same test?! Seriously? Why are you using two different groups of mice and leaving the impression that it is only one? And oh yeah, they used only the middle dose for the affective evaluation, when they used all three doses for potency… And I’m not even gonna ask why they used the highest dose in the formalin test… but only for the normal mice; the knockouts in the same test got the middle dose! So we’re back to comparing pears with apples again!

Next (and last, I promise, this rant is way too long already), the non-addictive claim. The authors used the conditioned place preference (CPP) paradigm, an old and reliable method to test drug likeability. The idea is that you have a box with 2 chambers, X and Y. Give the animal saline in chamber X and let it stay there for some time. The next day, give the animal the drug and confine it in chamber Y. Do this a few times, and on the test day let the animal explore both chambers. If it stays longer in chamber Y, then it liked the drug, much like humans seek places in which they felt good and avoid places in which they felt bad. All well and good, except that it is standard practice in this test to counterbalance the days and the chambers! I don’t know about the chambers, because they don’t say, but the days were not counterbalanced. I know, it’s a petty little thing for me to bring up, but remember the saying about extraordinary claims… so I expect flawless methods. I would have also liked to see a much more convincing test of addictive liability, like self-administration, but that will be done later, if the drug holds up, I hope. Thankfully, unlike with the affective analgesia claims, the authors were more restrained in their verbiage about addiction, much to their credit (and I have a nasty suspicion as to why).
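The counterbalancing complained about above is trivial to implement: alternate which chamber is drug-paired, and which session type comes first, across animals, so that a built-in chamber or day preference cannot masquerade as a drug effect. A minimal sketch of such an assignment (group size and mouse labels are hypothetical, and real designs vary):

```python
# Counterbalanced assignment for a conditioned place preference (CPP) study:
# half the mice get the drug paired with chamber X and half with chamber Y,
# and within each half the session order (drug-first vs saline-first) alternates.

def assign_cpp(mouse_ids):
    """Return a dict: mouse -> (drug-paired chamber, which session comes first)."""
    plan = {}
    for i, mouse in enumerate(mouse_ids):
        drug_chamber = "X" if i % 2 == 0 else "Y"           # alternate chambers
        first_session = "drug" if (i // 2) % 2 == 0 else "saline"  # alternate order
        plan[mouse] = (drug_chamber, first_session)
    return plan

plan = assign_cpp([f"mouse{n}" for n in range(8)])
for mouse, (chamber, first) in plan.items():
    print(mouse, "-> drug paired with chamber", chamber, "|", first, "session first")
```

With this scheme, every combination of chamber and order is equally represented, so a preference for chamber Y on test day can only be attributed to the drug pairing itself.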

I do sincerely think the drug shows decent promise as a painkiller. Kudos for discovering it! But, seriously, fellows, the behavioral portion of the paper could use some improvements.

Ok, rant over.

EDIT (Aug 25, 2016): I forgot to mention something, and that is the competing financial interests declared for this paper: some of its authors have already filed a provisional patent for PZM21 or are founders or consultants for Epiodyne (a company that wants to develop novel analgesics). Normally, that wouldn’t worry me unduly; people are allowed to make a buck from their discoveries (although it is billions in this case, and we could get into that capitalism-old debate about whether it is moral to make billions on the suffering of other people, but that’s a different story). Anyway, combine the financial interests with the poor behavioral tests and you get a very shoddy thing indeed.

Reference: Manglik A, Lin H, Aryal DK, McCorvy JD, Dengler D, Corder G, Levit A, Kling RC, Bernat V, Hübner H, Huang XP, Sassano MF, Giguère PM, Löber S, Da Duan, Scherrer G, Kobilka BK, Gmeiner P, Roth BL, & Shoichet BK (Epub 17 Aug 2016). Structure-based discovery of opioid analgesics with reduced side effects. Nature, 1-6. PMID: 27533032, DOI: 10.1038/nature19112. ARTICLE 

By Neuronicus, 21 August 2016

Intracranial recordings in human orbitofrontal cortex

How reward is processed in the brain has been of great interest to neuroscience because of the relevance of pleasure (or the lack of it) to a plethora of disorders, from addiction to depression. Among the cortical areas (that is, the surface of the brain), the structure most involved in reward processing is the orbitofrontal cortex (OFC). Most of the knowledge about the human OFC comes from patients with lesions or from imaging studies. Now, for the first time, we have insights into how and when the OFC processes reward from a group of scientists who studied it up close and personal, by recording directly from those neurons in the living, awake, behaving human.

Li et al. (2016) gained access to six patients who had electrodes implanted to monitor their brain activity before they went into surgery for epilepsy. All the patients’ epilepsy foci were elsewhere in the brain, so the authors figured the overall function of the OFC was relatively intact.

While being recorded directly from the OFC, the patients performed a probabilistic monetary reward task: on a screen, 5 visually different slot machines appeared, and each machine had a different probability of winning 20 Euros (0%, 25%, 50%, 75%, and 100% chances), a fact that was not told to the patients. The patients were asked to press a button if a particular slot machine was likely to give money. Then they would use the slot machine and the outcome (winning 20 or 0 Euros) would appear on the screen. The patients figured out quickly which slot machine was which, meaning they ‘guessed’ correctly the probability of being rewarded or not after only 1 to 4 trials (generally, learning is defined in behavioral studies as > 80% correct responses). The researchers also timed the patients during every part of the task.
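To make the task concrete, here is a toy simulation of the five slot machines and the > 80% correct learning criterion mentioned above. The win probabilities come from the task description; the simulated “subject” and its judgements are stand-ins for illustration, not the authors’ analysis.

```python
import random

# The five machines and their (hidden) win probabilities from the task.
MACHINES = {"A": 0.0, "B": 0.25, "C": 0.5, "D": 0.75, "E": 1.0}
REWARD = 20  # Euros

def spin(machine, rng):
    """Play one slot machine; return 20 Euros on a win, 0 otherwise."""
    return REWARD if rng.random() < MACHINES[machine] else 0

def accuracy(judgements, truth):
    """Fraction of 'likely to pay' judgements matching the true p > 0.5."""
    correct = sum(j == (truth[m] > 0.5) for m, j in judgements.items())
    return correct / len(judgements)

rng = random.Random(0)
# A subject who has figured the machines out judges only D and E as likely winners.
judgements = {"A": False, "B": False, "C": False, "D": True, "E": True}
acc = accuracy(judgements, MACHINES)
print(f"accuracy = {acc:.0%}")  # meets the > 80% learning criterion
assert spin("E", rng) == 20 and spin("A", rng) == 0  # sure win vs sure loss
```

The 0% and 100% machines are learnable in a single trial, which is presumably why the patients reached criterion after only 1 to 4 trials.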

Not surprisingly, the subjects spent more time deciding whether or not the 50%-chance slot machine was a winner than in all 4 other cases. In other words, the riskier the choice, the slower the reaction time to make that choice.

The design of the task allowed the researchers to observe three phases, each linked with a different signal in the OFC:

1) the expected value phase, when the subjects saw the slot machine and made their judgement. The corresponding signal showed an increase in the neurons’ firing about 400 ms after the slot machine appeared on the screen, in both medial and lateral OFC.

2) the risk or uncertainty phase, when the subjects were waiting for the slot machine to stop its spinners and show whether they won or not (1000-1500 ms). They called it the risk phase because both medial and lateral OFC showed the highest responses when the riskiest probability, i.e. the 50% chance, was presented. Unexpectedly, the OFC did not distinguish between the winning and the non-winning outcomes at this phase.

3) the experienced value or outcome phase, when the subjects found out whether they won or not. Only the lateral OFC responded during this phase, that is, immediately upon finding out whether the action was rewarded.

For the professional interested in precise anatomy, the article provides a nicely detailed diagram with the locations of the electrodes in Fig. 6.

The paper is also covered, for the neuroscientist’s interest (that is, in full scientific jargon), by Kringelbach in the same journal – a prominent neuroscientist mostly known for his work in affective neuroscience and the OFC. One of the reasons I covered this paper is that both its full text and Kringelbach’s commentary are behind a paywall, so I am giving you a preview of the paper in case you don’t have access to it.

Reference: Li Y, Vanni-Mercier G, Isnard J, Mauguière F & Dreher J-C (1 Apr 2016, Epub 25 Jan 2016). The neural dynamics of reward value and risk coding in the human orbitofrontal cortex. Brain, 139(4):1295-1309. DOI: http://dx.doi.org/10.1093/brain/awv409. Article

By Neuronicus, 25 March 2016

Stressed out? Blame your amygdalae

Clipart: Royalty free from http://www.cliparthut.com. Text: Neuronicus.

Sooner or later, everyone is exposed to high amounts of stress, whether in the form of losing someone dear, financial insecurity, health problems, and so on. Most of us manage to bounce right back and continue with our lives, but there is a considerable segment of the population who do not, and they develop all sorts of problems, from autoimmune disorders to severe depression and anxiety. What makes those people more susceptible to stress? And, more importantly, can we do something about it (yeah, besides making the world a less stressful place)?

Swartz et al. (2015) scanned the brains of 753 healthy young adults (18-22 years old) while they performed a widely used paradigm that elicits amygdalar activation (the amygdala being a brain structure, see pic): the subjects had to match a face appearing in the upper part of the screen with one of the faces in the lower part of the screen. The faces looked fearful, angry, surprised, or neutral, and the amygdalae are robustly activated when matching the fearful faces. Then the authors had the participants fill out questionnaires regarding their life events and perceived stress level every 3 months over a period of 2 years (they say 4 years everywhere else in the paper except the Methods & Results, which are the sections that count if one wants to replicate; maybe this is only half of the study and they intend to follow up to 4 years?).

The higher your baseline amygdalar activation, the higher the risk of developing anxiety disorders later on if exposed to life stressors. Yellow = amygdala. Photo credit: https://www.youtube.com/watch?v=JD44PbAOTy8, presumably copyrighted to Duke University.

The finding of the study is this: baseline amygdalar activation can predict who will develop anxiety later on. In other words, if your natural, healthy, non-stressed self has an overactive amygdala, you will develop some anxiety disorder later on if exposed to stressors (and who isn’t?). The good news is that, knowing this, the owner of the super-sensitive amygdalae, even if s/he may not be able to avoid stressors, can at least engage in some preventative therapy or counseling to be better equipped with adaptive coping mechanisms when the bad things come. Probably we could all benefit from being “better equipped with adaptive coping mechanisms”, feisty amygdalae or not. Oh, well…

Reference: Swartz, J.R., Knodt, A.R., Radtke, S.R., & Hariri, A.R. (2015). A neural biomarker of psychological vulnerability to future life stress. Neuron, 85, 505-511. doi: 10.1016/j.neuron.2014.12.055. Article | PDF | Video

By Neuronicus, 12 October 2015