Another puzzle piece in the autism mystery

Just like in the case of schizophrenia, hundreds of genes have been associated with autistic spectrum disorders (ASDs). Here is another candidate.


Féron et al. (2016) reasoned that most of the info we have about the genes that behave badly in ASDs comes from studies that used adult cells. Because ASDs are present before or very shortly after birth, they figured that looking for genetic abnormalities in cells at the very earliest stage of ontogenesis might prove enlightening. Those cells are stem cells. Of the pluripotent kind. FYI, based on what they can become (a.k.a. how potent they are), stem cells are divided into totipotent (a.k.a. omnipotent), pluripotent, multipotent, oligopotent, and unipotent. So the pluripotents are very ‘potent’ indeed, having the potential of producing a perfect person.

Tongue-twisters aside, the authors’ approach is sensible, albeit not hypothesis-driven. Which means they didn’t have anything specific in mind when they started looking for differences in gene expression between olfactory nasal cells obtained from 11 adults with ASD and 11 age-matched normal controls. Luckily for them, as transcriptome studies have a tendency to be difficult to replicate, they found anomalies in the expression of genes that have already been associated with ASD. But they also found a new one, the MOCOS (MOlybdenum COfactor Sulfurase) gene, which was poorly expressed in ASDs (downregulated, in genetic speak). The gene’s enzyme is also called MOCOS (am I the only one who thinks that MOCOS isolated from nasal cells sounds too similar to mucus? is the acronym actually a backronym?).

The enzyme was not known to play any role in the nervous system, so the researchers looked to see where the gene is expressed. Its enzyme could be found all over the brain in both mouse and human, and also in the intestine, kidneys, and liver. So not much help there.

Next, the authors deleted this gene in a worm, Caenorhabditis elegans, and found that the worm’s cells had trouble dealing with oxidative stress (e.g., the toxic effects of free radicals). In addition, its neurons had abnormal synaptic transmission due to problems with vesicular packaging.

Then they managed – with great difficulty – to produce human induced pluripotent stem cells (iPSCs) in a Petri dish in which the MOCOS gene was partially knocked down. ‘Partially’, because the ‘totally’ did not survive. Which tells us that MOCOS is necessary for the survival of iPSCs. The mutant cells had fewer synaptic boutons than the normal cells, meaning they formed fewer synapses.

The study, besides identifying a new candidate for diagnosis and treatment, offers some potential explanations for some beguiling data that other studies have brought forth, like the fact that all sorts of neurotransmitter systems and all sorts of brain regions seem to be impaired in ASDs, making it very hard to grab the tiger by the tail if the tiger sprouts a new tail every time you look at it, just like the Hydra’s heads. But discovering a molecule that is involved in a ubiquitous process like synapse formation may provide a way to leave the tiger’s tail(s) alone and focus on the teeth. In the authors’ words:

“As a molecule involved in the formation of dense core vesicles and, further down, neurotransmitter secretion, MOCOS seems to act on the container rather than the content, on the vehicle rather than one of the transported components” (p. 1123).

The knowledge uncovered by this paper makes a very good piece of the ASDs puzzle. Maybe not a corner, but a good edge. Alright, even if it’s not an edge, at least it’s a crucial piece full of details, not one of those sky pieces.

Reference: Féron F, Gepner B, Lacassagne E, Stephan D, Mesnage B, Blanchard MP, Boulanger N, Tardif C, Devèze A, Rousseau S, Suzuki K, Izpisua Belmonte JC, Khrestchatisky M, Nivet E, & Erard-Garcia M (Sep 2016, Epub 4 Aug 2016). Olfactory stem cells reveal MOCOS as a new player in autism spectrum disorders. Molecular Psychiatry, 21(9):1215-1224. PMID: 26239292, DOI: 10.1038/mp.2015.106. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 31 August 2016

Can you tickle yourself?

As I said before, with so many science outlets out there, it’s hard to find something new and interesting to cover that hasn’t been covered already. Admittedly, sometimes a new paper comes out that is so funny or interesting that I too fall in line with the rest of them and cover it. But most of the time I try to bring you something that you won’t find reported by other science journalists. So I’m sacrificing novelty for originality by choosing something from my absolutely huge article folder (about 20,000 papers).

And here is the gem for today, enticingly titled “Why can’t you tickle yourself?”. Blakemore, Wolpert & Frith (2000) review several papers on the subject, including some of their own, and arrive at the conclusion that the reason you can’t tickle yourself is because you expect it. Let me explain: when you make a movement that results in a sensation, you have a pretty accurate expectation of how it’s going to feel. This expectation then dampens the sensation, a process that probably evolved to let you focus on more relevant things in the environment than on what you’re doing to yourself (don’t let your mind go all dirty now, ok?).

Mechanistically speaking, it goes like this: when you move your arm to tickle your foot, a copy of the motor command you gave to the arm (the authors call this the “efference copy”) goes to a ‘predictor’ region of the brain (the authors believe this is the cerebellum), which generates an expectation (see Fig. 1). Once the movement has been completed, the actual sensation is compared to the expected one. If there is a discrepancy, you get tickled; if not, not so much. But, you might say, even when someone else is going to tickle me I have a pretty good idea what to expect, so where’s the discrepancy? Why do I still get tickled when I expect it? Because you can’t fool your brain that easily. The brain then says: “Alright, alright, we expect tickling. But do tell me this: where is that motor command? Hm? I didn’t get any!” So here is your discrepancy: when someone tickles you, there is the sensation but no motor command; signals 1 and 2 from the diagram are missing.
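For the programmatically minded, the comparator mechanism above can be caricatured in a few lines of code. This is purely my own toy sketch of the idea, not anything from the paper; all the numbers and the function name are made up for illustration.

```python
# Toy sketch of the efference-copy comparator model: felt tickle is the
# discrepancy between the predicted sensation and the actual sensation.
# Values are arbitrary, on a 0-1 scale.

def tickle_intensity(actual_sensation, efference_copy_sent, predicted_sensation=0.0):
    """Return the felt tickle as the prediction/sensation mismatch."""
    if efference_copy_sent:
        # A motor command was issued, so the 'predictor' (the cerebellum,
        # per the authors) generates an expectation that dampens the sensation.
        return max(0.0, actual_sensation - predicted_sensation)
    # No motor command, no efference copy, no prediction: full sensation.
    return actual_sensation

# Tickling yourself: the prediction matches the sensation, so nothing is felt.
self_tickle = tickle_intensity(1.0, efference_copy_sent=True, predicted_sensation=1.0)

# Someone else tickles you: sensation arrives with no motor command at all.
other_tickle = tickle_intensity(1.0, efference_copy_sent=False)

# Someone tickles you with your own hand: partial registration of the movement.
assisted_tickle = tickle_intensity(1.0, efference_copy_sent=True, predicted_sensation=0.7)
```

Running this gives full tickle for the other-person case, zero for the self case, and a small residual for the own-hand case, which is the pattern the review describes.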

Fig. 1. My take on the tickling mechanism after Blakemore, Wolpert & Frith (2000). Credits. Picture: Sobotta 1909, Diagram: Neuronicus 2016. Data: Blakemore, Wolpert & Frith (2000). Overall: Public Domain

Likewise, when someone tickles you with your own hand, there is an attenuation of the sensation, but it does not completely disappear, because there is some registration in the brain of the movement of your own arm, even if the motor command was not initiated by you. So you get tickled just a little bit. The brain is no fool: it is aware of who has done what and with whose hands (your dirty mind thought that, I didn’t say it!).

This mechanism of comparing sensation with movement of self and others appears to be impaired in schizophrenia. So when these patients say “I hear some voices and I can’t shut them up” or “My hand moved of its own accord, I had no control over it”, it may be that they are not aware of initiating those movements; the self-monitoring mechanism is all wacky. Supporting this hypothesis, the authors conducted an fMRI experiment (Reference 2) in which they showed that the somatosensory and anterior cingulate cortices show reduced activation when attempting to self-tickle as opposed to being tickled by the experimenter (please, stop that line of thinking…). Correspondingly, the behavioral portion of the experiment showed that schizophrenics can tickle themselves. Go figure!

Reference 1: Blakemore SJ, Wolpert D, & Frith C (3 Aug 2000). Why can’t you tickle yourself? Neuroreport, 11(11):R11-6. PMID: 10943682. ARTICLE FULLTEXT

Reference 2: Blakemore SJ, Smith J, Steel R, Johnstone CE, & Frith CD (Sep 2000, Epub 17 October 2000). The perception of self-produced sensory stimuli in patients with auditory hallucinations and passivity experiences: evidence for a breakdown in self-monitoring. Psychological Medicine, 30(5):1131-1139. PMID: 12027049. ARTICLE

By Neuronicus, 7 August 2016

Transcranial direct current stimulation & cognitive enhancement

There’s so much research out there… So much that some time ago I learned that in science, as probably in other fields too, one has only to choose a side of an argument and then, provided that s/he has some good academic search-engine skills and institutional access to journals, get the articles that support that side. Granted, that works for relatively small questions restricted to narrow domains, like “is that brain structure involved in x” or something like that; I doubt you would be able to find any paper that invalidates theories like gravity or the central dogma of molecular biology (DNA to RNA to protein).

If you’re a scientist trying to answer a question, you’ll probably comb through some dozens of papers and form an opinion of your own after weeding out the papers with small sample sizes, the ones with shoddy methodology, or simply the bad ones (yes, they do exist; even scientists are people and hence prone to mistakes). And if you’re not a scientist, or the question you’re trying to answer is not from your field, then you’ll probably go for reviews or meta-analyses.

Meta-analyses are studies that look at several papers (dozens or hundreds), pool their data together and then apply some complicated statistics to see the overall results. One such meta-analysis concerns the benefits, if any, of transcranial direct current stimulation (tDCS) on working memory (WM) in healthy people.
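To give a flavor of what “pool their data together and apply some complicated statistics” means in its simplest form, here is a toy sketch of fixed-effect (inverse-variance) pooling, the most basic meta-analytic recipe. The three study effect sizes below are entirely made up for illustration and have nothing to do with the tDCS literature.

```python
import math

# Toy fixed-effect meta-analysis: each study contributes its effect size,
# weighted by its precision (1 / variance), so big precise studies count
# more than small noisy ones. (effect size, standard error) pairs are invented.
studies = [(0.30, 0.15), (0.10, 0.20), (-0.05, 0.10)]

weights = [1.0 / se ** 2 for _, se in studies]          # precision weights
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))               # SE of pooled effect

print(f"pooled effect = {pooled:.3f} +/- {pooled_se:.3f}")
```

Note how the large, precise third study drags the pooled estimate toward zero despite two positive studies; real meta-analyses add refinements (random effects, publication-bias corrections) on top of this skeleton.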

tDCS is a method of applying electrical current through some electrodes to your neurons to change how they work and thus change some brain functions. It is similar to repetitive transcranial magnetic stimulation (rTMS), only in the latter case the change in neuronal activity is due to the application of a magnetic field.

Some people look at these methods not only as possible treatments for a variety of disorders, but also as cognitive enhancement tools; and not only researchers, but also various companies that sell the relatively inexpensive equipment to gamers and others. But does tDCS work in the first place?


Mancuso et al. (2016) say that there have been 3 recent meta-analyses done on this issue and they found that “the effects [of tDCS on working memory in healthy volunteers] are reliable though small (Hill et al., 2016), partial (Brunoni & Vanderhasselt, 2014), or nonexistent (Horvath et al., 2015)” (p. 2). But they say these studies are somewhat flawed and that’s why they conducted their own meta-analysis, which concludes that “the true enhancement potential of tDCS for WM remains somewhat uncertain” (p.19). Maybe it works a little bit if used during the training phase of a working memory task, like n-back, and even then that’s a maybe…

Boring, you may say. I’ll grant you that. So… all that work and it revealed virtually nothing new! I’ll grant you that too. But what this meta-analysis brings that is new, besides some interesting statistics, like controlling for publication bias, is a nice discussion of why they didn’t find much, exploring possible causes, like the small sample and effect sizes that seem to plague many behavioral studies. Another explanation, which, to tell you the truth, the authors do not seem to be too enamored with, is that maybe, just maybe, tDCS simply doesn’t have any effect on working memory, period.

Besides, papers with seemingly boring findings do not catch the media eye, so I had to give it a little attention, didn’t I 😉 ?

Reference: Mancuso LE, Ilieva IP, Hamilton RH, & Farah MJ. (Epub 7 Apr 2016, Aug 2016) Does Transcranial Direct Current Stimulation Improve Healthy Working Memory?: A Meta-analytic Review. Journal of Cognitive Neuroscience, 28(8):1063-89. PMID: 27054400, DOI: 10.1162/jocn_a_00956. ARTICLE

By Neuronicus, 2 August 2016

One parent’s gene better than the other’s

Not all people with the same bad genetic makeup that predisposes them to a particular disease go and develop that disease or, at any rate, not with the same severity and prognosis. The question is why? After all, they have the same genes…

Here comes a study that answers that very important question. Eloy et al. (2016) looked at the most common pediatric eye cancer (1 in 15,000) called retinoblastoma (Rb). In the hereditary form of this cancer, the disease occurs if the child carries mutant (i.e. bad) copies of the RB1 tumour suppressor gene located on chromosome 13 (13q14). These copies, called alleles, are inherited by the child from the mother or from the father. But some children with this genetic disadvantage do not develop Rb. They should, so why not?

The authors studied 57 families with Rb history. They took blood and tumour samples from the participants and then did a bunch of genetic tests: DNA, RNA, and methylation analyses.

They found out that when the mutant RB1 allele is inherited from the mother, the child has only a 9.7% chance of developing Rb, but when it is inherited from the father the child has a 67.5% chance of developing Rb.

The mechanism behind these different outcomes may reside in the differential methylation of the gene. Methylation is a chemical process that suppresses the expression of a gene, meaning that less protein is produced from that gene. The maternal gene had less methylation, meaning that more protein was produced, which was able to offer some protection against the cancer. It seems counter-intuitive (you’d think less bad protein is a good thing), but there is a long and complicated explanation for that, which, in a very simplified form, posits that other events influence the function of the resultant protein.

Again, epigenetics seems to offer explanations for pesky genetic inheritance questions. Epigenetic processes, like DNA methylation, are modalities through which traits that are not coded in the DNA sequence itself can be inherited.


Reference: Eloy P, Dehainault C, Sefta M, Aerts I, Doz F, Cassoux N, Lumbroso le Rouic L, Stoppa-Lyonnet D, Radvanyi F, Millot GA, Gauthier-Villars M, & Houdayer C (29 Feb 2016). A Parent-of-Origin Effect Impacts the Phenotype in Low Penetrance Retinoblastoma Families Segregating the c.1981C>T/p.Arg661Trp Mutation of RB1. PLoS Genetics, 12(2):e1005888. eCollection 2016. PMID: 26925970, PMCID: PMC4771840, DOI: 10.1371/journal.pgen.1005888. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 24 July 2016

Intracranial recordings in human orbitofrontal cortex

How reward is processed in the brain has been of great interest to neuroscience because of the relevance of pleasure (or the lack of it) to a plethora of disorders, from addiction to depression. Among the cortical areas (that is, the surface of the brain), the structure most involved in reward processing is the orbitofrontal cortex (OFC). Most of the knowledge about the human OFC comes from patients with lesions or from imaging studies. Now, for the first time, we have insights about how and when the OFC processes reward from a group of scientists that studied it up close and personal, by recording directly from those neurons in the living, awake, behaving human.

Li et al. (2016) gained access to six patients who had implanted electrodes to monitor their brain activity before they went into surgery for epilepsy. All patients’ epilepsy foci were elsewhere in the brain, so the authors figured the overall function of OFC is relatively intact.

While being recorded directly from the OFC, the patients performed a probabilistic monetary reward task: on a screen, 5 visually different slot machines appeared, and each machine had a different probability of winning 20 Euros (0% chance, 25%, 50%, 75%, and 100%), a fact that was not told to the patients. The patients were asked to press a button if a particular slot machine was more likely to give money. Then they would use the slot machine and the outcome (win 20 or 0 Euros) would appear on the screen. The patients figured out quickly which slot machine was which, meaning they ‘guessed’ correctly the probability of being rewarded after only 1 to 4 trials (generally, learning is defined in behavioral studies as > 80% correct responses). The researchers also timed the patients during every part of the task.

Not surprisingly, the subjects spent more time deciding whether the 50%-chance slot machine was a winner than for all the other 4 possibilities. In other words, the riskier the choice, the slower the reaction time to make that choice.
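A quick way to see why the 50% machine is the “riskiest” is to note that the variance of a win/lose (Bernoulli) outcome, p × (1 − p), peaks exactly at p = 0.5. This little snippet is my own illustration of that arithmetic, not anything from the paper:

```python
# Uncertainty of each slot machine, measured as the variance of a
# Bernoulli outcome: p * (1 - p). It is zero for the sure machines
# (0% and 100%) and maximal for the 50% machine.
probabilities = [0.0, 0.25, 0.50, 0.75, 1.0]
risk = {p: p * (1 - p) for p in probabilities}

riskiest = max(risk, key=risk.get)
print(risk)        # 0% and 100% carry no risk; 50% carries the most
print(riskiest)    # the machine with the slowest decisions in the task
```

So the reaction-time pattern tracks outcome uncertainty: the sure bets (0% and 100%) are decided fastest, and the coin-flip machine slowest.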

The design of the task allowed the researchers to observe three phases, each linked with a different signal in the OFC:

1) the expected value phase, when the subjects saw the slot machine and made their judgement. The corresponding signal showed an increase in the neurons’ firing about 400 ms after the slot machine appeared on the screen, in both medial and lateral OFC.

2) the risk or uncertainty phase, when subjects were waiting for the slot machine to stop its spinners and show whether they won or not (1000-1500 ms). They called this the risk phase because both medial and lateral OFC had the highest responses when the riskiest probability, i.e. the 50% chance, was presented. Unexpectedly, the OFC did not distinguish between winning and non-winning outcomes at this phase.

3) the experienced value or outcome phase, when the subjects found out whether they won or not. Only the lateral OFC responded during this phase, that is, immediately upon finding out whether the action was rewarded or not.

For the professional interested in precise anatomy, the article provides a nicely detailed diagram with the locations of the electrodes in Fig. 6.

The paper is also covered, for the neuroscientist’s interest (that is, full of scientific jargon), in the same journal by Kringelbach, a prominent neuroscientist mostly known for his work in affective neuroscience and the OFC. One of the reasons I covered this paper is that both its full text and Kringelbach’s commentary are behind a paywall, so I am giving you a preview of the paper in case you don’t have access to it.

Reference: Li Y, Vanni-Mercier G, Isnard J, Mauguière F & Dreher J-C (1 Apr 2016, Epub 25 Jan 2016). The neural dynamics of reward value and risk coding in the human orbitofrontal cortex. Brain, 139(4):1295-1309. DOI: http://dx.doi.org/10.1093/brain/awv409. Article

By Neuronicus, 25 March 2016

Younger children in a grade are more likely to be diagnosed with ADHD

A few weeks ago I was drawing attention to the fact that some children diagnosed with ADHD do not have attention deficits. Instead, a natural propensity for seeking more stimulation may have led to overdiagnosing and overmedicating these kids.

Another reason for the dramatic increase in ADHD diagnoses over the past couple of decades may stem from the increasingly age-inappropriate demands that we place on children. Namely, children in the same grade can be as much as 1 year apart in chronological age, and at these young ages 1 year means quite a lot in terms of cognitive and behavioral development. So if we set a standard of expectations based on how the older children behave, then the younger children in the same grade will fall short of these standards simply because they are too immature to live up to them.

So what does the data say? Two studies, Morrow et al. (2012) and Chen et al. (2016) checked to see if the younger children in a given grade are more likely to be diagnosed with ADHD and/or medicated. The first study was conducted in almost 1 million Canadian children, aged 6-12 years and the second investigated almost 400,000 Taiwanese children, aged 4-17 years.

In Canada, the cut-off date for starting school is Dec. 31, which means that in the first grade a child born in January is almost a year older than a child born in December. Morrow et al. (2012) concluded that the children born in December were significantly more likely to receive a diagnosis of ADHD than those born in January (30% more likely for boys and 70% for girls). Moreover, the children born in December were more likely to be given an ADHD medication prescription (41% more likely for boys and 77% for girls).

In Taiwan, the cut-off date for starting school is August 31. Similar to the Canadian study, Chen et al. (2016) found that the children born in August were more likely to be diagnosed with ADHD and to receive ADHD medication than the children born in September.

Now let’s be clear on one thing: ADHD is no trivial matter. It is a real disorder. It’s an incredibly debilitating disease for both children and their parents. Impulsivity, inattention and hyperactivity are the hallmarks of almost every activity the child engages in, leading to very poor school performance (the majority cannot get a college degree) and hard family life, plus a lifetime of stigma that brings its own “gifts” such as marginalization, loneliness, depression, anxiety, poor eating habits, etc.

The data presented above favor the “immaturity hypothesis”, which posits that the behaviors expected of some children cannot be performed not because something is wrong with them, but because they are simply too immature to be able to perform those behaviors. That does not mean that every child diagnosed with ADHD will just grow out of it; the researchers just point to the fact that ignoring the chronological age of the child, coupled with prematurely entering a highly stressful and demanding system such as school, might lead to ADHD overdiagnosis.

Bottom line: ignoring the chronological age of the child might explain some of the increase in the prevalence of ADHD through overdiagnosis (in the US alone, the rise is from 6% of children diagnosed with ADHD in 2000 to 11-15% in 2015).

References:

  1. Morrow RL, Garland EJ, Wright JM, Maclure M, Taylor S, & Dormuth CR. (17 Apr 2012, Epub 5 Mar 2012). Influence of relative age on diagnosis and treatment of attention-deficit/hyperactivity disorder in children. Canadian Medical Association Journal, 184(7):755-762. DOI: 10.1503/cmaj.111619. Article | FREE PDF
  2. Chen M-H, Lan W-H, Bai Y-M, Huang K-L, Su T-P, Tsai S-J, Li C-T, Lin W-C, Chang W-H, Pan T-L, Chen T-J, & Hsu J-W. (10 Mar 2016). Influence of Relative Age on Diagnosis and Treatment of Attention-Deficit Hyperactivity Disorder in Taiwanese Children. The Journal of Pediatrics [Epub ahead of print]. DOI: http://dx.doi.org/10.1016/j.jpeds.2016.02.012. Article | FREE PDF

By Neuronicus, 14 March 2016

Not all children diagnosed with ADHD have attention deficits

Given the alarming increase in the diagnosis of attention deficit/hyperactivity disorder (ADHD) over the last 20 years, I thought pertinent to feature today an older paper, from the year 2000.

Dopamine, one of the chemicals that the neurons use to communicate, has been heavily implicated in ADHD. So heavily in fact that Ritalin, the main drug used for the treatment of ADHD, has its main effects by boosting the amount of dopamine in the brain.

Swanson et al. (2000) reasoned that people with a particular genetic abnormality that makes their dopamine receptors work less optimally may have more chances to have ADHD. The specialist reader may want to know that the genetic abnormality in question refers to a 7-repeat allele of a 48-bp variable number of tandem repeats in exon 3 of the dopamine receptor number 4 located on chromosome 11, whose expression results in a weaker dopamine receptor. We’ll call it DRD4,7-present as opposed to DRD4,7-absent (i.e. people without this genetic abnormality).

They had access to 96 children diagnosed with ADHD according to the diagnostic criteria of the DSM-IV and 48 matched controls (children of the same gender, age, school affiliation, socio-economic status, etc., but without ADHD). About half of the children diagnosed with ADHD were DRD4,7-present.

The authors tested the children on 3 tasks:

(i) a color-word task to probe the executive function network linked to anterior cingulate brain regions and to conflict resolution;
(ii) a cued-detection task to probe the orienting and alerting networks linked to posterior parietal and frontal brain regions and to shifting and maintenance of attention; and
(iii) a go-change task to probe the alerting network (and the ability to initiate a series of rapid response in a choice reaction time task), as well as the executive network (and the ability to inhibit a response and re-engage to make another response) (p. 4756).

Invalidating the authors’ hypothesis, the results showed that the controls and the DRD4,7-present had similar performance at these tasks, in contrast to the DRD4,7-absent who showed “clear abnormalities in performance on these neuropsychological tests of attention” (p. 4757).

This means two things:
1) Half of the children diagnosed with ADHD did not have an attention deficit.
2) These same children had the DRD4,7-present genetic abnormality, which has been previously linked with novelty seeking and risky behaviors. So it may be just possible that these children do not suffer from ADHD, but “may be easily bored in the absence of highly stimulating conditions, may show delay aversion and choose to avoid waiting, may have a style difference that is adaptive in some situations, and may benefit from high activity levels during childhood” (p. 4758).

Great paper and highly influential. The last author of the article (meaning the chief of the laboratory) is none other than Michael I. Posner, whose attentional networks, models, and tests feature in every psychology and neuroscience textbook. If he doesn’t know about attention, then I don’t know who does.

One of the reasons I chose this paper is that it seems to me that a lot of teachers, nurses, social workers, or even pediatricians feel qualified to scare the living life out of parents by suggesting that their unruly child may have ADHD. In deference to most of the above-mentioned professions, the majority of people recognize their limits and tell the concerned parents to have the child tested by a qualified psychologist. And, unfortunately, even that may result in dosing your child with Ritalin needlessly, when the child’s propensity toward a sensation-seeking temperament and extravert personality may instead require a different approach to learning, with a higher level of stimulation (after all, the children from the above study had been diagnosed by qualified people using their latest diagnostic manual).

Bottom line: beware of any psychologist or psychiatrist who does not employ a battery of attention tests when diagnosing your child with ADHD.

Reference: Swanson J, Oosterlaan J, Murias M, Schuck S, Flodman P, Spence MA, Wasdell M, Ding Y, Chi HC, Smith M, Mann M, Carlson C, Kennedy JL, Sergeant JA, Leung P, Zhang YP, Sadeh A, Chen C, Whalen CK, Babb KA, Moyzis R, & Posner MI. (25 April 2000). Attention deficit/hyperactivity disorder children with a 7-repeat allele of the dopamine receptor D4 gene have extreme behavior but normal performance on critical neuropsychological tests of attention. Proceedings of the National Academy of Sciences of the United States of America, 97(9):4754-4759. doi: 10.1073/pnas.080070897. Article | FREE PDF

P.S. If you think that “weeell, this research happened 16 years ago, surely something came out of it” then think again. The newer DSM-V’s criteria for diagnosis are likely to cause an increase in the prevalence of diagnosis of ADHD.

By Neuronicus, 26 February 2016

I am blind, but my other personality can see

This is a truly bizarre report.

A woman named BT suffered an accident when she was 20 years old and became blind. Thirteen years later she was referred for psychotherapy to Bruno Waldvogel (one of the two authors of the paper) by a psychiatry clinic that diagnosed her with dissociative identity disorder, formerly known as multiple personality disorder.

The cortical blindness diagnosis had been established after extensive ophthalmologic tests, in which she appeared blind, but not because of damage to the eyes. So, by inference, it had to be damage to the brain. Remarkably (we shall see later why), she had no oculomotor reflexes in response to glare. Moreover, visual evoked potentials (VEP, an EEG recorded over the occipital region) showed no activity in the primary visual area of the brain (V1).

During the four years of psychotherapy, BT showed more than 10 distinct personalities. One of them, a teenage male, started to see words on a magazine and pretty soon could see everything. With the help of hypnotherapeutic techniques, more and more personalities started to see.

“Sighted and blind states could alternate within seconds” (Strasburger & Waldvogel, 2015).

The VEPs showed no or very little activity when the blind personality was “on” and normal activity when the sighted personality was “on”. Which is extremely curious, because similar studies in people with psychogenic blindness, or in anesthetized people, showed intact VEPs.

There are a couple of conclusions from this: 1) BT was misdiagnosed, as it is unlikely there is any brain damage, because some personalities could see, and 2) multiple personalities – or dissociative identities, as they are now called – are real in the sense that they can be separated in a biological way.

The visual pathway that mediates conscious visual perception. a) A side view of the human brain with the retinogeniculocortical pathway shown inside (blue). b) A horizontal section through the brain exposing the same pathway.

Fascinating! The next question is, obviously, what’s the mechanism behind this? The authors say that it’s very likely the LGN (the lateral geniculate nucleus of the thalamus), which is the only relay between the retina and V1 (see pic). It can be. It surely is possible. Unfortunately, so are other putative mechanisms, as 10% of the neurons in the retina also project to the superior colliculus, and some others go directly to the hypothalamus, completely bypassing the thalamus. Also, because it is impossible to time precisely the switching between personalities, even if you MRI the woman it would be difficult to establish whether the switch to blindness mode is the result of bottom-up or top-down modulation (i.e., whether the visual information never reaches V1, reaches V1 and is suppressed there, or some signal from other brain areas inhibits V1 completely, so it is unresponsive when the visual information arrives).

Despite the limitations, I would certainly try to get the woman into an fMRI. C’mon, people, this is an extraordinary subject and if she gave permission for the case study report, surely she would not object to the scanning.

Reference: Strasburger H & Waldvogel B (Epub 15 Oct 2015). Sight and blindness in the same person: Gating in the visual system. PsyCh Journal. doi: 10.1002/pchj.109.  Article | FULLTEXT PDF | Washington Post cover

By Neuronicus, 29 November 2015

The werewolf and his low fibroblast growth factor 13 levels

Petrus Gonsalvus, anonymous painting of the first recorded case of hypertrichosis in 1642. License: PD

Although they are very rare, werewolves do exist. And now the qualifier: werewolves as in people with excessive hair growth all over the body, and not the more familiar kind that changes into a wolf every time there is a full moon. The condition is called hypertrichosis, and its various forms have been associated with distinct genetic abnormalities.

In a previous report, DeStefano et al. (2013) identified the genetic locus of X-linked congenital generalized hypertrichosis (CGH): a 19-Mb region on Xq24-27 that spans about 82 genes and results mainly from insertions from chromosomes 4 and 5. Now they wanted to see what mechanism is responsible for the disease. First, they looked at the hair follicles of a man afflicted with CGH, who has hair almost all over his body, and noticed some structural abnormalities. Then they analyzed the expression of several genes from the affected region of the chromosome in this man and others with CGH and observed that only the levels of Fibroblast Growth Factor 13 (FGF13), a protein found in hair follicles, were much lower in CGH. Then they did some more experiments to establish the crucial role of FGF13 in regulating follicle growth.

An interesting finding of the study is that, at least in the case of hypertrichosis, it is not the content of the genomic sequences added to chromosome X that matters, but their presence, which affects a gene located 1.2 Mb away from the insertion.

Reference: DeStefano GM, Fantauzzo KA, Petukhova L, Kurban M, Tadin-Strapps M, Levy B, Warburton D, Cirulli ET, Han Y, Sun X, Shen Y, Shirazi M, Jobanputra V, Cepeda-Valdes R, Cesar Salas-Alanis J, & Christiano AM ( 7 May 2013, Epub 19 Apr 2013). Position effect on FGF13 associated with X-linked congenital generalized hypertrichosis. Proceedings of the National Academy of Sciences of the U.S.A., 110(19):7790-5. doi: 10.1073/pnas.1216412110. Article | FREE FULLTEXT PDF

By Neuronicus, 17 November 2015

TMS decreases religiosity and ethnocentrism

Medieval knight dressed in an outfit with the Cross of St. James of Compostela. Image from Galicianflag.

Rituals are anxiolytic; we developed them because they decrease anxiety. So it makes sense that when we feel the most stressed we turn to soothing ritualistic behaviors. Likewise, in times of threat, be it anything from war to financial depression, people show a sharp increase in adherence to political or religious ideologies.

Holbrook et al. (2015) used TMS (transcranial magnetic stimulation) to locally downregulate the activity of the posterior medial frontal cortex (which includes the dorsal anterior cingulate cortex and the dorsomedial prefrontal cortex), a portion of the brain the authors have reasons to believe is involved in augmenting the adherence to ideological convictions in times of threat.

They selected 38 U.S. undergraduates who scored similarly on political views (moderate or extremely conservative; the extremely liberal were excluded). Curiously, they did not measure religiosity prior to testing. Then they submitted the subjects to a group prejudice test designed to increase ethnocentrism (reading a critique of the USA written by an immigrant) and a high-level conflict designed to increase religiosity (a reminder of death), while half of them received TMS and the other half received sham stimulation.

Under these conditions, the TMS decreased belief in God and also the negative evaluations of the critical immigrant, compared to the people who received sham TMS.

The paper is, without doubt, interesting, despite the many possible methodological confounds. The authors themselves acknowledged some of the drawbacks in the discussion section, so regard the article as a pilot investigation. It doesn’t even have a picture with the TMS coordinates. Nevertheless, reducing someone’s religiosity and extremism by inactivating a portion of the brain… Sometimes I get afraid of my discipline.

Reference: Holbrook C, Izuma K, Deblieck C, Fessler DM, & Iacoboni M (Epub 4 Sep 2015). Neuromodulation of group prejudice and religious belief. Social Cognitive and Affective Neuroscience. DOI: 10.1093/scan/nsv107. Article | Research Gate full text PDF

By Neuronicus, 3 November 2015