Social groups are not random

I re-blog other people’s posts extremely rarely. But this one is worth it. It’s about how groups form based on the amount of information given and, crucially, how the amount of information can change individual behavior and group splits. It relates to political polarization and echo chambers. Read it.

After you read it, you will understand my following question: I wonder by how much the k would increase in a non-binary environment, say, if the participants were given 3 colors instead of 2. The authors argue that there is a k threshold after which the amount of information makes no difference any more. But that is because the groups have already completed the binary task, so more information is useless due to the ceiling effect. Basically, my question is: at which point does more information stop making a difference in behavior if there were more choices? Is the relationship logarithmic, linear, exponential? Good paper, good coverage by CNRS, at scienceblog.

By Neuronicus, 24 February 2021

P.S. I haven’t written in a while for many reasons, one of which is that pesky WordPress changed the post editor and frankly I don’t have the time or patience to figure it out right now. But I’ll be back :).

Polite versus compassionate

After reading these two words, my first thought was that you can have a whole range of people, some compassionate but not polite (ahem, here, I hope), polite but not compassionate (we all know somebody like that, usually a family member or coworker), or compassionate and polite (I wish I was one of those) or neither (some Twitter and Facebook comments and profiles come to mind…).

It turns out that this is not the case. As in: usually, people are either one or the other. Of course there are exceptions, but the majority of people who score high on one trait tend to score low on the other.

Hirsh et al. (2010) gave a few questionnaires to over 600 mostly White Canadians of varying ages. The questionnaires measured personality, morality, and political preferences.

After regression analyses followed by factor analyses, which are statistical tools fancier than your run-of-the-mill correlation, the authors found that polite people tend to be politically conservative, affirming support for the Canadian or U.S. conservative parties, whereas compassionate people more readily identified as liberals, i.e. Democrats.
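If you are curious what that kind of analysis looks like in practice, here is a minimal sketch on made-up data (the sample, items, and loadings are all my inventions, not the authors' pipeline): simulate two Agreeableness facets, recover them with factor analysis, then correlate the factor scores with a conservatism measure.

```python
# Hypothetical illustration of 'factor analysis, then correlate with ideology'.
# Nothing here reproduces Hirsh et al.'s actual items, loadings, or results.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 600                                    # roughly the paper's sample size
politeness = rng.normal(size=n)            # simulated latent traits
compassion = rng.normal(size=n)

# 10 questionnaire items: the first 5 load on Politeness, the last 5 on Compassion
loadings = np.zeros((10, 2))
loadings[:5, 0], loadings[5:, 1] = 0.8, 0.8
items = np.column_stack([politeness, compassion]) @ loadings.T \
        + rng.normal(scale=0.5, size=(n, 10))

fa = FactorAnalysis(n_components=2, rotation="varimax")
scores = fa.fit_transform(items)           # per-subject factor scores

# Made-up conservatism score that tracks politeness and anti-tracks compassion
conservatism = politeness - compassion + rng.normal(scale=0.5, size=n)
for i in range(2):
    r = np.corrcoef(scores[:, i], conservatism)[0, 1]
    print(f"factor {i + 1} vs conservatism: r = {r:+.2f}")
```

One factor comes out correlating positively with conservatism and the other negatively, which is the qualitative pattern the paper reports.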

Previous research has shown that political conservatives value order and traditionalism, in-group loyalty, and purity, are resistant to change, and readily accept inequality. In contrast, political liberals value fairness, equality, compassion, and justice, and are open to change. The findings of this study fit well with the previous research because compassion relies on the perception of others’ distress, for which we have a better term: empathy. “Politeness, by contrast, appears to reflect the components of Agreeableness that are more closely linked to norm compliance and traditionalism” (p. 656). So it makes sense that people who are Polite value norm compliance and traditionalism and as such end up being conservatives, whereas people who are Compassionate value empathy and equality more than conformity, so they end up being liberals. Importantly, empathy is a strong predictor of prosocial behavior (see Damon W. & Eisenberg N. (Eds.) (2006). Prosocial development, in Handbook of Child Psychology: Social, Emotional, and Personality Development, New York, NY, Wiley Pub.).

I want to stress that this paper was published in 2010, so the research was probably conducted a year or two prior to publication date, just in case you were wondering.


REFERENCE: Hirsh JB, DeYoung CG, Xu X, & Peterson JB. (May 2010, Epub 6 Apr 2010). Compassionate liberals and polite conservatives: associations of agreeableness with political ideology and moral values. Personality & Social Psychology Bulletin, 36(5):655-64. PMID: 20371797, DOI: 10.1177/0146167210366854. ABSTRACT

By Neuronicus, 24 February 2020

Pic of the day: African dogs sneeze to vote

Excerpt from Walker et al. (2017), p. 5:

“We also find an interaction between total sneezes and initiator POA in rallies (table 1) indicating that the number of sneezes required to initiate a collective movement differed according to the dominance of individuals involved in the rally. Specifically, we found that the likelihood of rally success increases with the dominance of the initiator (i.e. for lower POA categories) with lower-ranking initiators requiring more sneezes in the rally for it to be successful (figure 2d). In fact, our raw data and the resultant model showed that rallies never failed when a dominant (POA1) individual initiated and there were at least three sneezes, whereas rallies initiated by lower ranking individuals required a minimum of 10 sneezes to achieve the same level of success. Together these data suggest that wild dogs use a specific vocalization (the sneeze) along with a variable quorum response mechanism in the decision-making process. […]. We found that sneezes, a previously undocumented unvoiced sound in the species, are positively correlated with the likelihood of rally success preceding group movements and may function as a voting mechanism to establish group consensus in an otherwise despotically driven social system.”
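Translating the quoted rule into code makes it concrete. The two endpoint thresholds (3 sneezes for a POA1 initiator, 10 for the lowest-ranking) come straight from the excerpt; the intermediate values are my interpolation, not the paper’s fitted model.

```python
# Toy version of the variable quorum rule described by Walker et al. (2017).
# Only the POA1 (>= 3 sneezes) and lowest-rank (>= 10 sneezes) thresholds are
# from the paper; ranks 2-3 are illustrative guesses.
def rally_succeeds(initiator_poa: int, sneezes: int) -> bool:
    """True if the rally reaches quorum under the sketched rule."""
    thresholds = {1: 3, 2: 5, 3: 7, 4: 10}    # POA rank -> sneezes needed
    return sneezes >= thresholds.get(initiator_poa, 10)

print(rally_succeeds(1, 3))    # dominant initiator, 3 sneezes    -> True
print(rally_succeeds(4, 9))    # low-ranking initiator, 9 sneezes -> False
```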

REFERENCE: Walker RH, King AJ, McNutt JW, & Jordan NR (6 Sept 2017). Sneeze to leave: African wild dogs (Lycaon pictus) use variable quorum thresholds facilitated by sneezes in collective decisions. Proceedings of the Royal Society B: Biological Sciences, 284(1862): 20170347. PMID: 28878054, PMCID: PMC5597819, DOI: 10.1098/rspb.2017.0347. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 1 August 2019

Is piracy the same as stealing?

Exactly 317 years ago, Captain William Kidd was tried and executed for piracy. Whether or not he was a pirate is debatable, but what is not under dispute is that people do like to pirate. Throughout human history, whenever there was opportunity, there was also theft. Wait…, is theft the same as piracy?

If we talk about Captain “Arrr… me mateys” sailing the high seas under the “Jolly Roger” flag, there is no legal or ethical dispute that piracy is equivalent to theft. But what about today’s digital piracy? Despite what the aggrieved parties may vociferously advocate, digital piracy is not theft because what is being stolen is a copy of the goodie, not the goodie itself; therefore it is an infringement and not an actual theft. That’s from a legal standpoint. Ethically though…

For Eres et al. (2016), theft is theft, whether the object of thievery is tangible or not. So why are people who have no problem pirating information from the internet squeamish when it comes to shoplifting the same item?

First, is it true that people are more likely to steal intangible things than physical objects? A questionnaire involving 127 young adults revealed that yes, people of both genders are more likely to steal intangible items, regardless of whether the items are cheap or expensive or whether the company that owns them is big or small. Older people were less likely to pirate, and those who had already pirated were more likely to do so in the future.
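As an aside, here is one plausible way such claims get tested statistically; this is a sketch on fabricated data with hypothetical variable names, not the analysis Eres et al. actually ran.

```python
# Hedged sketch: logistic regression of willingness-to-steal on item and person
# characteristics. All data and coefficients below are fabricated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 127                                     # the questionnaire's sample size
df = pd.DataFrame({
    "tangible": rng.integers(0, 2, n),      # 1 = physical item, 0 = digital copy
    "expensive": rng.integers(0, 2, n),
    "big_company": rng.integers(0, 2, n),
    "age": rng.uniform(18, 35, n),
    "pirated_before": rng.integers(0, 2, n),
})
# Simulated outcome mimicking the reported pattern: tangibility and age lower
# the odds of stealing; past piracy raises them.
logit_p = 1.0 - 1.5 * df.tangible - 0.05 * df.age + 1.2 * df.pirated_before
df["would_steal"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit(
    "would_steal ~ tangible + expensive + big_company + age + pirated_before",
    data=df,
).fit(disp=0)
print(model.params.round(2))
```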


In a different experiment, Eres et al. (2016) stuck 35 people in the fMRI and asked them to imagine tangible (e.g., CD, book) or intangible (e.g., .pdf, .avi) versions of some items (e.g., book, music, movie, software). Then they asked the participants how they would feel after stealing or purchasing these items.

People were inclined to feel more guilty if the item was illegally obtained, particularly if the object was tangible, suggesting that, at least from an emotional point of view, stealing and infringement are two different things. An increase in the activation of the left lateral orbitofrontal cortex (OFC) was seen when the illegally obtained item was tangible. The lateral OFC is a brain area known for its involvement in evaluating the nature of punishment and displeasurable information. The more sensitive to punishment a person is, the more likely they are to be morally sensitive as well.

Or, as the authors put it, it is more difficult to imagine intangible things vs. physical objects and that “difficulty in representing intangible items leads to less moral sensitivity when stealing these items” (p. 374). Physical items are, well…, more physical, hence, possibly, demanding a more immediate attention, at least evolutionarily speaking.

(Divergent thought. Some studies found that religious people are less socially moral than non-religious. Could that be because for the religious the punishment for a social transgression is non-existent if they repent enough whereas for the non-religious the punishment is immediate and factual?)


Like most social neuroscience imaging studies, this one lacks ecological validity (i.e., people imagined stealing, they did not actually steal), a lacuna that the authors are gracious enough to admit. Another drawback of imaging studies is the small sample size, which is to blame, the authors believe, for failing to see a correlation between the guilt score and brain activation, which other studies apparently have shown.

A simple, interesting paper providing food for thought not only for psychologists, but for lawmakers and philosophers as well. I do not believe that stealing and infringement are the same. Legally they are not, and now we know that emotionally they are not either, so shouldn’t they also be separated morally?

And if so, should we punish people more or less for stealing intangible things? Intuitively, because I too have a left OFC that’s less active when talking about transgressing social norms involving intangible things, I think that punishment for copyright infringement should be less than that for stealing physical objects of equivalent value.

But value…, well, that’s where it gets complicated, isn’t it? Because just as intangible as an .mp3 is the dignity of a fellow human, for example. What price should we put on that? What punishment should we deliver to those robbing human dignity with impunity?

Ah, intangibility… it gets you coming and going.

I got on this thieving intangibles dilemma because I’m re-re-re-re-re-reading Feet of Clay, a Discworld novel by Terry Pratchett and this quote from it stuck in my mind:

“Vimes reached behind the desk and picked up a faded copy of Twurp’s Peerage or, as he personally thought of it, the guide to the criminal classes. You wouldn’t find slum dwellers in these pages, but you would find their landlords. And, while it was regarded as pretty good evidence of criminality to be living in a slum, for some reason owning a whole street of them merely got you invited to the very best social occasions.”

REFERENCE: Eres R, Louis WR, & Molenberghs P (Epub 8 May 2016, Pub Aug 2017). Why do people pirate? A neuroimaging investigation. Social Neuroscience, 12(4):366-378. PMID: 27156807, DOI: 10.1080/17470919.2016.1179671. ARTICLE 

By Neuronicus, 23 May 2018

The FIRSTS: the Dunning–Kruger effect (1999) or the unskilled-and-unaware phenomenon

Much talked about these days in the media, the unskilled-and-unaware phenomenon has been mused upon since, as they say, time immemorial, but not actually seriously investigated until the ’80s. The phenomenon refers to the observation that incompetents overestimate their competence, whereas the competent tend to underestimate their skill (see Bertrand Russell’s brilliant summary of it).


Although the phenomenon has gained popularity under the name of the “Dunning–Kruger effect”, it is my understanding that whereas the phenomenon refers to the above-mentioned observation, the effect refers to its cause, namely that the very skills required to make one proficient in a domain are the same skills that allow one to judge proficiency. In the words of Kruger & Dunning (1999),

“those with limited knowledge in a domain suffer a dual burden: Not only do they reach mistaken conclusions and make regrettable errors, but their incompetence robs them of the ability to realize it” (p. 1132).

Today’s paper on the Dunning–Kruger effect is the third in the cognitive biases series (the first was on depressive realism and the second on the superiority illusion).

Kruger & Dunning (1999) took a look at incompetence with the eyes of well-trained psychologists. As usual, let’s start by defining the terms so we are on the same page. The authors tell us, albeit in a footnote on p. 1122, that:

1) incompetence is a “matter of degree and not one of absolutes. There is no categorical bright line that separates ‘competent’ individuals from ‘incompetent’ ones. Thus, when we speak of ‘incompetent’ individuals we mean people who are less competent than their peers”.

and 2) The study is on domain-specific incompetents. “We make no claim that they would be incompetent in any other domains, although many a colleague has pulled us aside to tell us a tale of a person they know who is ‘domain-general’ incompetent. Those people may exist, but they are not the focus of this research”.

That being clarified, the authors chose 3 domains where they believe “knowledge, wisdom, or savvy was crucial: humor, logical reasoning, and English grammar” (p.1122). I know that you, just like me, can hardly wait to see how they assessed humor. Hold your horses, we’ll get there.

The subjects were psychology students, the ubiquitous guinea pigs of most psychology studies since the discipline started to be taught in the universities. Some people in the field even declaim with more or less pathos that most psychological findings do not necessarily apply to the general population; instead, they are restricted to the self-selected group of undergrad psych majors. Just as the biologists know far more about the mouse genome and its maladies than about humans’, so do the psychologists know more about the inner workings of the psychology undergrad’s mind than, say, the average stay-at-home mom. But I digress, as usual.

The humor was assessed thusly: students were asked to rate on a scale from 1 to 11 the funniness of 30 jokes. Said jokes had been previously rated by 8 professional comedians, and that provided the reference scale. “Afterward, participants compared their ‘ability to recognize what’s funny’ with that of the average Cornell student by providing a percentile ranking. In this and in all subsequent studies, we explained that percentile rankings could range from 0 (I’m at the very bottom) to 50 (I’m exactly average) to 99 (I’m at the very top)” (p. 1123). Since the social ability to identify humor may be less rigorously amenable to quantification (despite the comedians’ input, which did not achieve a high interrater reliability anyway), the authors chose a task that requires more intellectual muscle: logical reasoning, whose test consisted of 20 logical problems taken from a Law School Admission Test. Afterward, the students estimated their general logical ability compared to their classmates, as well as their test performance. Finally, another batch of students answered 20 grammar questions taken from the National Teacher Examination preparation guide.

In all three tasks,

  • Everybody thought they were above average, showing the superiority illusion.
  • But the people in the bottom quartile (the lowest 25%), dubbed incompetent (or unskilled), overestimated their abilities the most, by approx. 50 percentile points. They were also unaware that, in fact, they scored the lowest.
  • In contrast, people in the top quartile underestimated their competence, though by a smaller margin of about 10-15 percentile points (see Fig. 1).


I wish the paper showed scatter-plots with a fitted regression line instead of quartile graphs without error bars, so I could judge the data for myself. I mean, everybody thought they were above average? Not a single one out of more than three hundred students thought they were kinda… meh? The authors did not find any gender differences in any experiments.
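For illustration, here is roughly what such a figure would look like, built from simulated numbers that only mimic the reported pattern (the data are invented; nothing below is from the paper):

```python
# Simulated calibration plot: actual vs. self-estimated percentile, one dot per
# student, with a fitted regression line and the perfect-calibration diagonal.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
actual = rng.uniform(0, 99, 300)                  # actual test percentile
estimate = (55 + 0.25 * actual                    # compressed self-estimates
            + rng.normal(0, 12, 300)).clip(0, 99)

slope, intercept = np.polyfit(actual, estimate, 1)
plt.scatter(actual, estimate, s=10, alpha=0.5)
plt.plot([0, 99], [intercept, intercept + slope * 99], color="red",
         label="fitted regression")
plt.plot([0, 99], [0, 99], "--", color="gray", label="perfect calibration")
plt.xlabel("Actual percentile"); plt.ylabel("Self-estimated percentile")
plt.legend(); plt.show()
```

The flat red line sitting far above the diagonal at the low end and slightly below it at the high end is the Dunning–Kruger pattern; a scatter-plot would also show at a glance whether anyone rated themselves kinda… meh.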

Next, the authors tested the hypothesis about the unskilled that “the same incompetence that leads them to make wrong choices also deprives them of the savvy necessary to recognize competence, be it their own or anyone else’s” (p. 1126). And they did that by having both the competents and the incompetents see the answers that their peers gave on the tests. Indeed, the incompetents not only failed to recognize competence, but continued to believe they had performed very well in the face of contrary evidence. In contrast, the competents adjusted their ratings after seeing their peers’ performance, so they no longer underestimated themselves. In other words, the competents learned from seeing others’ answers, but the incompetents did not.

Based on these data, Kruger & Dunning (1999) argue that the incompetents are so because they lack the skills to recognize competence and error in themselves or others (jargon: lack of metacognitive skills). The competents, on the other hand, underestimate themselves because they assume everybody else does just as well as they did, but when shown evidence that other people performed poorly, they become accurate in their self-evaluations (jargon: the false consensus effect, a.k.a. the social-projection error).

So, the obvious implication is: if incompetents learn to recognize competence, does that also translate into them becoming more competent? The last experiment in the paper attempted to answer just that. The authors got 70 students to complete a short (10 min) logical-reasoning training session while another 70 students did something unrelated for 10 min. The data showed that the trained students not only improved their self-assessments (still showing the superiority illusion, though), but also improved their performance. Yays all around, all is not lost, there is hope left in the world!

This is an extremely easy read. I totally recommend it to non-specialists. Compare Kruger & Dunning (1999) with Pennycook et al. (2017): they talk about the same subject and their authors are redoubtable personages in their fields. But while the former is a pleasant, leisurely read, the latter lacks mundane operationalizations and requires serious familiarization with the literature and its jargon.

Since Kruger & Dunning (1999) is under the paywall of the infamous APA website (infamous because they don’t even let you see the abstract, and even with institutional access it is difficult to extract the papers out of them, as if they own the darn things!), write to me at scientiaportal@gmail.com specifying that you need it for educational purposes and promising not to distribute it for financial gain, and thou shalt have its .pdf. As always. Do not, under any circumstance, use a sci-hub server to obtain this paper illegally! Actually, follow me on Twitter @Neuronicus to find out exactly which servers to avoid.

REFERENCES:

1) Kruger J, & Dunning D. (Dec. 1999). Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. Journal of Personality and Social Psychology, 77(6):1121-1134. PMID: 10626367. ARTICLE

2) Russell, B. (1931-1935). “The Triumph of Stupidity” (10 May 1933), p. 28, in Mortals and Others: American Essays, vol. 2, published in 1998 by Routledge, London and New York, ISBN 0415178665. FREE FULLTEXT by GoogleBooks | FREE FULLTEXT of “The Triumph of Stupidity”

P.S. I personally liked this example from the paper for illustrating what lack of metacognitive skills means:

“The skills that enable one to construct a grammatical sentence are the same skills necessary to recognize a grammatical sentence, and thus are the same skills necessary to determine if a grammatical mistake has been made. In short, the same knowledge that underlies the ability to produce correct judgment is also the knowledge that underlies the ability to recognize correct judgment. To lack the former is to be deficient in the latter” (p. 1121-1122).

By Neuronicus, 10 January 2018

The superiority illusion

Following up on my promise to cover a few papers about self-deception, the second in the series is about the superiority illusion, another cognitive bias (the first was about depressive realism).

Yamada et al. (2013) sought to uncover the origins of the ubiquitous belief that oneself is “superior to average people along various dimensions, such as intelligence, cognitive ability, and possession of desirable traits” (p. 4363). The sad statistical truth is that MOST people are average; that’s the whole definition of ‘average’, really… But most people think they are superior to others, a.k.a. the ‘above-average effect’.

Twenty-four young males underwent resting-state fMRI and PET scanning. The first scanner is of the magnetic resonance type and tracks where you have most of the blood going in the brain at any particular moment. More blood flow to a region is interpreted as that region being active at that moment.

The word ‘functional’ means that the subject is performing a task while in the scanner and the resultant brain image corresponds to what the brain is doing at that particular moment in time. On the other hand, ‘resting-state’ means that the individual did not do any task in the scanner; s/he just sat nice and still on the warm pads listening to the various clicks, clacks, bangs & beeps of the scanner. The subjects were instructed to rest with their eyes open. Good instruction, given that many subjects fall asleep in resting-state MRI studies, even in the terrible racket that the coils make, which sometimes can reach 125 dB. Let me explain: an MRI machine generates a huge static magnetic field (60,000 times stronger than Earth’s!) with its main magnet, and on top of that it shoots rapid pulses of electricity through a coiled wire, called a gradient coil. These pulses of electricity or, in other words, the rapid on-off switching of the electrical current, make the gradient coil vibrate very loudly.
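A quick back-of-envelope check on that number, assuming Earth’s magnetic field is roughly 50 µT (my figure, not one from the paper):

$$B \approx 60{,}000 \times 50\,\mu\text{T} = 3\,\text{T},$$

that is, the field strength of a standard 3-tesla clinical scanner.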

A PET scanner functions on a different principle. The subject receives a shot of a radioactive substance (called tracer) and the machine tracks its movement through the subject’s body. In this experiment’s case, the tracer was raclopride, a D2 dopamine receptor antagonist.

The behavioral data (meaning the answers to the questionnaires) showed that, curiously, the superiority illusion belief was not correlated with anxiety or self-esteem scores, but, not curiously, it was negatively correlated with helplessness, a measure of depression. Makes sense, especially from the view of depressive realism.

The imaging data suggest that dopamine binding to its striatal D2 receptors attenuates the functional connectivity between the left sensorimotor striatum (SMST, a.k.a. postcommissural putamen) and the dorsal anterior cingulate cortex (dACC). And this state of affairs gives rise to the superiority illusion (see Fig. 1).

Fig. 1. The superiority illusion arises from the suppression of the dorsal anterior cingulate cortex (dACC) – putamen functional connection by the dopamine coming from the substantia nigra/ventral tegmental area complex (SN/VTA) and binding to its D2 striatal receptors. Credits: brain diagram: Wikipedia; other brain structures and connections: Neuronicus; data: Yamada et al. (2013, doi: 10.1073/pnas.1221681110). Overall: Public Domain

This was a frustrating paper. I cannot tell if it has methodological issues or is just poorly written. For instance, I have to assume that the dACC they’re talking about is bilateral and not ipsilateral to their SMST, meaning left. As a non-native English speaker myself, I guess I should cut the authors a break for consistently misspelling ‘commissure’ or for other grammatical errors, for fear of being accused of hypocrisy, but here you have it: it bugged me. Besides, mine is a blog and theirs is a published peer-reviewed paper. (Full disclosure: I do get editorial help from native English speakers when I publish for real and, except for a few personal style quirks, I fully incorporate their suggestions.) So a little editorial help would have gone a long way toward making the reading more pleasant. What else? Ah, the results are not clearly explained anywhere; it looks like the authors rely on obviousness, a bad move if you want to be understood by people slightly outside your field. From the first figure it looks like only 22 subjects out of 24 showed the superiority illusion, but the authors included 24 in the imaging analyses, or so it seems. The subjects were 23.5 ± 4.4 years old, meaning that not all of them had the frontal regions of the brain fully developed: there are clear anatomical and functional differences between a 19-year-old and a 27-year-old.

I’m not saying it is a bad paper because I have covered bad papers; I’m saying it was frustrating to read it and it took me a while to figure out some things. Honestly, I shouldn’t even have covered it, but I spent some precious time going through it and its supplementals, what with me not being an imaging dude, so I said the hell with it, I’ll finish it; so here you have it :).

By Neuronicus, 13 December 2017

REFERENCE: Yamada M, Uddin LQ, Takahashi H, Kimura Y, Takahata K, Kousa R, Ikoma Y, Eguchi Y, Takano H, Ito H, Higuchi M, Suhara T (12 Mar 2013). Superiority illusion arises from resting-state brain networks modulated by dopamine. Proceedings of the National Academy of Sciences of the United States of America, 110(11):4363-4367. doi: 10.1073/pnas.1221681110. ARTICLE | FREE FULLTEXT PDF 

Another puzzle piece in the autism mystery

Just like in the case of schizophrenia, hundreds of genes have been associated with autistic spectrum disorders (ASDs). Here is another candidate.


Féron et al. (2016) reasoned that most of the info we have about the genes that behave badly in ASDs comes from studies that used adult cells. Because ASDs are present before or very shortly after birth, they figured that looking for genetic abnormalities in cells that are at a very early stage of ontogenesis might prove to be enlightening. Those cells are stem cells. Of the pluripotent kind. FYI, based on what they can become (a.k.a. how potent they are), stem cells are divided into omnipotent (a.k.a. totipotent), pluripotent, multipotent, oligopotent, and unipotent. So the pluripotents are very ‘potent’ indeed, having the potential of producing a perfect person.

Tongue-twisters aside, the authors’ approach is sensible, albeit non-hypothesis-driven. Which means they didn’t have anything specific in mind when they started looking for differences in gene expression between olfactory nasal cells obtained from 11 adult ASD sufferers and 11 age-matched normal controls. Luckily for them, as transcriptome studies have a tendency to be difficult to replicate, they found anomalies in the expression of genes that had already been associated with ASD. But they also found a new one, the MOCOS (MOlybdenum COfactor Sulfurase) gene, which was poorly expressed in ASDs (downregulated, in genetic speak). The enzyme is MOCOS (am I the only one who thinks that MOCOS isolated from nasal cells sounds too similar to mucus? is the acronym actually a backronym?).
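The paper’s exact bioinformatics pipeline is beyond this post, but the generic shape of such a transcriptome comparison is simple enough to sketch (simulated data; gene 42 standing in for MOCOS):

```python
# Generic differential-expression sketch: per-gene t-tests between 11 ASD and
# 11 control samples, corrected for thousands of tests with Benjamini-Hochberg.
# All numbers are simulated; this is not Féron et al.'s pipeline.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)
n_genes = 5000
asd = rng.normal(size=(n_genes, 11))      # expression matrix, one row per gene
ctrl = rng.normal(size=(n_genes, 11))
asd[42] -= 3.0                            # plant one downregulated gene

t, p = stats.ttest_ind(asd, ctrl, axis=1)
reject, p_adj, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")
print("candidate genes:", np.flatnonzero(reject))   # typically recovers gene 42
```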

The enzyme is not known to play any role in the nervous system. Therefore, the researchers looked to see where the gene is expressed. Its enzyme could be found all over the brain of both mouse and human. Also, in the intestine, kidneys, and liver. So not much help there.

Next, the authors deleted this gene in a worm, Caenorhabditis elegans, and found that the worm’s cells had issues dealing with oxidative stress (e.g., the toxic effects of free radicals). In addition, the worm’s neurons had abnormal synaptic transmission due to problems with vesicular packaging.

Then they managed – with great difficulty – to produce human induced pluripotent stem cells (iPSCs) in a Petri dish in which the MOCOS gene was partially knocked down. ‘Partially’, because the ‘totally’ did not survive. Which tells us that MOCOS is necessary for the survival of iPSCs. The mutant cells had fewer synaptic boutons than the normal cells, meaning they formed fewer synapses.

The study, besides identifying a new candidate for diagnosis and treatment, offers some potential explanations for some beguiling data that other studies have brought forth, like the fact that all sorts of neurotransmitter systems and all sorts of brain regions seem to be impaired in ASDs, making it very hard to grab the tiger by the tail if the tiger sprouts a new tail whenever you look at it, just like the Hydra’s heads. But discovering a molecule that is involved in a ubiquitous process like synapse formation may provide a way to leave the tiger’s tail(s) alone and focus on the teeth. In the authors’ words:

“As a molecule involved in the formation of dense core vesicles and, further down, neurotransmitter secretion, MOCOS seems to act on the container rather than the content, on the vehicle rather than one of the transported components” (p. 1123).

The knowledge uncovered by this paper makes a very good piece of the ASDs puzzle. Maybe not a corner, but a good edge. Alright, even if it’s not an edge, at least it’s a crucial piece full of details, not one of those sky pieces.

Reference: Féron F, Gepner B, Lacassagne E, Stephan D, Mesnage B, Blanchard MP, Boulanger N, Tardif C, Devèze A, Rousseau S, Suzuki K, Izpisua Belmonte JC, Khrestchatisky M, Nivet E, & Erard-Garcia M (Sep 2016, Epub 4 Aug 2016). Olfactory stem cells reveal MOCOS as a new player in autism spectrum disorders. Molecular Psychiatry, 21(9):1215-1224. PMID: 26239292, DOI: 10.1038/mp.2015.106. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 31 August 2016

The FIRSTS: Theory of Mind in non-humans (1978)

Although any farmer or pet owner throughout the ages would probably agree that animals can understand the intentions of their owners, not until 1978 was this knowledge scientifically proven.

Premack & Woodruff (1978) performed a very simple experiment in which they showed videos to a female adult chimpanzee named Sarah involving humans facing various problems, from simple (can’t reach a banana) to complex (can’t get out of the cage). Then, the chimp was shown pictures of the human with the tool that solved the problem (a stick to reach the banana, a key for the cage), along with pictures where the human was performing actions that were not conducive to solving his predicament. The experimenter left the room while the chimp made her choice. When she did, she rang a bell to summon the experimenter back into the room, who then examined the chimp’s choice and told the chimp whether her choice was right or wrong. Regardless of the choice, the chimp was awarded her favorite food. The chimp’s choices were almost always correct when the actor was her favorite trainer, but not so much when the actor was a person she disliked.

Because “no single experiment can be all things to all objections, but the proper combination of results from [more] experiments could decide the issue nicely” (p. 518), the researchers did some more experiments which were variations of the first one designed to figure out what the chimp was thinking. The authors go on next to discuss their findings at length in the light of two dominant theories of the time, mentalism and behaviorism, ruling in favor of the former.

Of course, the paper has some methodological flaws that would not pass the rigors of today’s reviewers. That’s why it has been replicated multiple times in more refined ways. Nor is the distinction between behaviorism and cognitivism a valid one anymore, things being found out to be, as usual, more complex and intertwined than that. Thirty years later, the consensus was that chimps do indeed have a theory of mind in that they understand intentions of others, but they lack understanding of false beliefs (Call & Tomasello, 2008).


References:

1. Premack D & Woodruff G (Dec. 1978). Does the chimpanzee have a theory of mind? The Behavioral and Brain Sciences, 1 (4): 515-526. DOI: 10.1017/S0140525X00076512. ARTICLE

2. Call J & Tomasello M (May 2008). Does the chimpanzee have a theory of mind? 30 years later. Trends in Cognitive Sciences, 12(5): 187-192. PMID: 18424224 DOI: 10.1016/j.tics.2008.02.010. ARTICLE  | FULLTEXT PDF

By Neuronicus, 20 August 2016

The FIRSTS: The Mirror Neurons (1988)

There are some neurons in the human brain that fire both when the person is doing some behavior and when watching that behavior performed by someone else. These cells are called mirror neurons and were first discovered in 1988 (see NOTE) by a group of researchers from the University of Parma, Italy, led by Giacomo Rizzolatti.

The discovery was made by accident. The researchers were investigating the activity of neurons in the rostral part of the inferior premotor cortex (riPM) of macaque monkeys with electrophysiological recordings. They placed a box in front of the monkey which had various objects in it. When the monkey pressed a switch, the content of the box was illuminated, then a door would open and the monkey reached for an object. Under each object was hidden a small piece of food. Several neurons were discharging when the animal was grasping the object. But the researchers noticed that some of these neurons ALSO fired when the monkey was motionless and watching the researcher grasp the objects!

The authors then performed more movements to see exactly when these neurons were firing, whether the firing was related to the food or to threatening gestures and so on. And then they recorded from some 182 more neurons while the monkey or the experimenter performed hand actions with different objects. Importantly, they also recorded an electromyogram (EMG) and saw that while these neurons were firing during the monkey’s observation of actions, the muscles did not move at all.

They found that some neurons responded both when doing and when seeing the actions, whereas some other neurons responded only when doing or only when seeing the actions. The neurons that are active in both cases are now called mirror neurons. In 1996 they were also identified in humans, with the help of positron emission tomography (PET).

In yellow, the frontal region; in red, the parietal region. Credits: Brain diagram by Korbinian Brodmann under PD license; Tracing by Neuronicus under PD license; Area identification and color coding after Rizzolatti & Fabbri-Destro (2010) © Springer-Verlag 2009.

It is tragicomic that the authors first submitted their findings to the most prestigious scientific journal, Nature, believing that their discovery was worth it, and rightfully so. But Nature rejected their paper because of its “lack of general interest” (Rizzolatti & Fabbri-Destro, 2010)! Luckily for us, the editor of Experimental Brain Research, Otto Creutzfeldt, did not share Nature‘s opinion.

Thousands of experiments followed the tremendous discovery of mirror neurons, even trying to manipulate their activity. Many researchers believe that the activity of the mirror neurons is fundamental for understanding the intentions of others, the development of theory of mind, empathy, the process of socialization, language development and even human self-awareness.

NOTE: Whenever possible, I try to report both the date of the discovery and the date of publication. Sometimes, the two dates can differ quite a bit. In this case, the discovery was done in 1988 and the publishing in 1992.

References:

  1. di Pellegrino G, Fadiga L, Fogassi L, Gallese V, & Rizzolatti G (October 1992). Understanding motor events: a neurophysiological study. Experimental Brain Research, 91(1):176-180. DOI: 10.1007/BF00230027. ARTICLE  | Research Gate FULLTEXT PDF
  2. Rizzolatti G & Fabbri-Destro M (Epub 18 Sept 2009; January 2010). Mirror neurons: From discovery to autism. Experimental Brain Research, 200(3): 223-237. DOI: 10.1007/s00221-009-2002-3. ARTICLE  | Research Gate FULLTEXT PDF 

By Neuronicus, 15 July 2016

Mu suppression and the mirror neurons

A few decades ago, Italian researchers from the University of Parma discovered some neurons in the monkey that were active not only when the monkey was performing an action, but also when it was watching the same action performed by someone else. This kind of neuron, or rather this particular neuronal behavior, has subsequently been identified in humans, scattered mainly within the frontal and parietal cortices (front and top of your head), and called the mirror neuron system (MNS). Its role is to understand the intentions of others and thus facilitate learning. Mind you, there are, as there should be in any healthy, vigorous scientific endeavor, those who challenge this role and even the existence of the MNS.

Hobson & Bishop (2016) do not question the existence of the mirror neurons or their roles, but something else. You see, proper understanding of the intentions, actions, and emotions of others is severely impaired in autism or some schizophrenias. Correspondingly, there have been reports saying that MNS function is abnormal in these disorders. So if we can manipulate the neurons that help us understand others, then we may be able to study them better, and – who knows? – maybe even ‘switch them on’ and ‘off’ when needed (Ha! That’s a scary thought!).

Human EEG waves (from Wikipedia, under CC BY-SA 3.0 license)

Anyway, previous work holds that recording a weak Mu rhythm over the brain regions with mirror neurons shows that these neurons are active. This rhythm (between 8-13 Hz) is recorded through electroencephalography (EEG). The assumption is as follows: when resting, neurons fire synchronously; when busy, they fire each to its own, so they desynchronize, which leads to a reduction in Mu intensity.

All well and good, but there is a problem. There is another frequency band that overlaps with the Mu band, and that is the Alpha band. Alpha activity is highest when a person is awake with eyes closed, but it diminishes when the person is drowsy or, importantly, when making a mental effort, like paying close attention to something. So, if I see a weak Mu/Alpha signal when the subject is watching someone grab a pencil, is that because the mirror neurons are active or because he’s sleepy? There are a few gimmicks to disentangle the two, from setting up the experiment in such a way that it requires the same attentional demand across tasks to carefully localizing the origin of the two waves (Mu is said to arise from sensorimotor regions, whereas Alpha comes from more posterior regions).
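To make the measurement concrete, here is a minimal sketch of a mu-suppression index (a generic pipeline on simulated signals, not Hobson & Bishop’s actual code): band power at 8-13 Hz over a sensorimotor electrode, expressed as a log-ratio of task to baseline.

```python
# Mu-suppression index sketch: negative values indicate suppression (i.e.,
# putative mirror neuron activity). Signals here are simulated white noise; a
# real pipeline would use C3/C4 recordings and attention-matched baselines.
import numpy as np
from scipy.signal import welch

fs = 250                                   # sampling rate in Hz (assumed)
rng = np.random.default_rng(4)
baseline = rng.normal(size=fs * 30)        # 30 s of 'EEG' at a central electrode
task = rng.normal(size=fs * 30)            # 30 s while watching a grasp

def band_power(x, fs, lo=8.0, hi=13.0):
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    return pxx[(f >= lo) & (f <= hi)].mean()

mu_index = np.log(band_power(task, fs) / band_power(baseline, fs))
print(f"mu suppression index: {mu_index:+.3f}")
```

And here is the catch the paper dwells on: run the same computation on a posterior electrode and you get an alpha index that looks exactly the same, which is why the attention-matched baselines matter so much.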

But Hobson & Bishop (2016) argue that this disentangling is more difficult than previously thought, by carrying out a series of experiments in which they varied the baseline, in such a way that some baselines were more attentionally demanding than others. After carefully analyzing various EEG waves and electrode positions in these conditions, they conclude that “mu suppression can be used to index the human MNS, but the effect is weak and unreliable and easily confounded with alpha suppression”.


What makes this paper interesting to me, besides its empirical findings, is the way the experiment was conducted and published. This is a true hypothesis-driven study, following the scientific method step by step, a credit to us all scientists. In other words, a rare gem. A lot of other papers try to make a pretty story out of crappy data or weave a story around the results as if that’s what they went for all along, when in fact they did a bunch of stuff and chose what looked good on paper.

Let me explain. As a consequence of the incredible pressure put on researchers to publish or perish (which, believe me, is more than just a metaphor, your livelihood and career depend on it), there is an alarming increase in bad papers, which means

  • papers with inappropriate statistical analyses (p threshold curse, lack of multiple comparisons corrections, like the one brilliantly exposed here; see also the sketch after this list),
  • papers with huge databases in which some correlations are bound to appear by chance alone and are presented as meaningful (p-hacking or data fishing),
  • papers without enough data to make a meaningful conclusion (lack of statistical power),
  • papers that report only good-looking results (only positive results required by journals),
  • papers that seek only to provide data to reinforce previously held beliefs (confirmation bias)
  • and so on.
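The sketch promised above: the multiple-comparisons problem takes only a few lines to demonstrate. Run 100 t-tests on pure noise and count how many come out ‘significant’:

```python
# Multiple-comparisons demo: with no real effect anywhere, about 5% of tests
# still cross p < 0.05 by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
false_positives = sum(
    stats.ttest_ind(rng.normal(size=20), rng.normal(size=20)).pvalue < 0.05
    for _ in range(100)
)
print(f"{false_positives} 'significant' results out of 100 null tests")
```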

For these reasons (and more), there is a high rate of rejection of papers submitted to journals (about 90%), which means more than just a lack of publication in a good journal; it means wasted time, money, and resources, shattered career prospects for the grad students who did the experiments, and threatened job security for everybody involved, not to mention a promotion of distrust of science and a disservice to the scientific endeavor in general. So some journals, like Cortex, are moving toward a system called Registered Reports, which asks for the rationale and the plan of the experiment before it is conducted, which should protect against many of the above-mentioned plagues. If the plan is approved, the chances of getting the results published in that journal are 90%.

This is one of those Registered Report papers. Good for you, Hobson & Bishop!

REFERENCE: Hobson HM & Bishop DVM (Epub April 2016). Mu suppression – A good measure of the human mirror neuron system?. Cortex, doi: 10.1016/j.cortex.2016.03.019 ARTICLE | FREE FULLTEXT PDF | RAW DATA

By Neuronicus, 14 July 2016

Autism cure by gene therapy


Nothing short of an autism cure is promised by this hot new research paper.

Among the many thousands of proteins that a neuron needs to make in order to function properly there is one called SHANK3, made from the gene shank3. (Note the customary writing: by consensus, a gene’s name is written in lowercase italics, whereas the name of the protein that results from that gene’s expression is written in caps.)

This protein is important for the correct assembly of synapses and previous work has shown that if you delete its gene in mice they show autistic-like behavior. Similarly, some people with autism, but by far not all, have a deletion on Chromosome 22, where the protein’s gene is located.

The straightforward approach would be to restore the protein production in the adult autistic mouse and see what happens. Well, one problem with that is keeping the concentration of the protein at the optimum level, because if the mouse makes too much of it, then the mouse develops ADHD-like and bipolar-like symptoms.

So the researchers developed a really neat genetic model in which they managed to turn the shank3 gene on and off at will by giving the mouse a drug called tamoxifen (don’t take this drug for autism! Besides the fact that it is not going to work because you’re not a genetically engineered mouse with a Cre-dependent genetic switch on your shank3, it is also very toxic and used only in some forms of cancer, when it is believed that the benefits outweigh the horrible side effects).

In young adult mice, turning on the gene resulted in the normalization of synapses in the striatum, a brain region heavily involved in autistic behaviors. The synapses were comparable to normal synapses in some aspects (from the looks, i.e. postsynaptic density scaffolding, to the works, i.e. electrophysiological properties) and even surpassed them in others (more dendritic spines than normal, meaning more synapses, presumably). This molecular repair was mirrored by some behavioral rescue: although these mice still had more anxiety and more coordination problems than the control mice, their social aversion and repetitive behaviors disappeared. And the really, really cool part of all this is that this reversal of autistic behaviors was done in ADULT mice.

Now, when the researchers turned the gene on in 20-day-old mice (which is, roughly, the equivalent of entering the toddler stage in humans), all four behaviors were rescued: social aversion, repetitive behaviors, coordination, and anxiety. Which tells us two things: first, the younger you intervene, the more improvement you get and, second and equally important, while in the adult some circuits seem to be irreversibly developed in a certain way, other neural pathways are still plastic enough to be amenable to change.

Awesome, awesome, awesome. Even if only a very small portion of people with autism have this genetic problem (about 1%), even if autism spectrum disorders encompass such a variety of behavioral abnormalities, this research may spark hope for a whole range of targeted gene therapies.

Reference: Mei Y, Monteiro P, Zhou Y, Kim JA, Gao X, Fu Z, Feng G. (Epub 17 Feb 2016). Adult restoration of Shank3 expression rescues selective autistic-like phenotypes. Nature. doi: 10.1038/nature16971. Article | MIT press release

By Neuronicus, 19 February 2016


Is religion turning perfectly normal children into selfish, punitive misanthropes? Seems like it.

Screenshot from “Children of the Corn” (Director: Fritz Kiersch, 1984)

The main argument that religious people have against atheism or agnosticism is that without a guiding deity and a set of behaving rules, how can one trust a non-religious person to behave morally? In other words, there is no incentive for the non-religious to behave in a societally accepted manner. Or so it seemed. Past tense. There has been some evidence showing that, contrary to expectations, non-religious people are less prone to violence and deliver more lenient punishments compared to religious people. Also, the non-religious show charitable behaviors equal to those of the religious folks, despite the latter self-reporting participation in more charitable acts. But these studies were done with adults, usually with non-ecological tests. Now, a truly first-of-its-kind study finds something even more interesting, which calls into question the fundamental basis of Christianity’s and Islam’s moral justifications.

Decety et al. (2015) administered a test of altruism and a test of moral sensitivity to 1170 children, aged 5-12, from the USA, Canada, Jordan, Turkey, and South Africa. Based on parents’ reports about their household practices, the children were divided into 280 Christian, 510 Muslim, and 323 Not Religious (the remaining 57 children belonged to other religions but were not included in the analyses due to lack of statistical power). The altruism test consisted of letting children choose their favorite 10 out of 30 stickers to be theirs to keep, but, because there weren’t enough stickers for everybody, the child could give some of her/his stickers to another child not so fortunate as to play the sticker game (the researcher would give the child privacy while choosing). Altruism was calculated as the number of stickers given to the fictive child. In the moral sensitivity task, children watched 10 videos of a child pushing, shoving, etc. another child, either intentionally or accidentally, and then the children were asked to rate the meanness of the action and to judge the amount of punishment deserved for each action.
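For the curious, the backbone of such a group comparison is a one-way ANOVA on the sticker counts; the group sizes below are the paper’s, but the scores are entirely made up:

```python
# Hedged sketch of the altruism comparison (simulated scores, real group sizes).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
christian = rng.poisson(3.3, 280).clip(0, 10)       # stickers shared, 0-10
muslim = rng.poisson(3.2, 510).clip(0, 10)
non_religious = rng.poisson(4.1, 323).clip(0, 10)   # simulated as most generous

f, p = stats.f_oneway(christian, muslim, non_religious)
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.4f}")
```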

And… the highlighted results are:

  1. “Family religious identification decreases children’s altruistic behaviors.
  2. Religiousness predicts parent-reported child sensitivity to injustices and empathy.
  3. Children from religious households are harsher in their punitive tendencies.”

From Current Biology (DOI: 10.1016/j.cub.2015.09.056). Copyright © 2015 Elsevier Ltd. NOTE: ns. means non-significant difference.

Parents’ educational level did not predict children’s behavior, but the level of religiosity did: the more religious the household, the less altruistic, the more judgmental, and the harsher in punishments the children were. Also, in stark contrast with the actual results, the religious parents viewed their children as more empathetic and sensitive to injustices compared to the non-religious parents. This was a linear relationship: the more religious the parents, the higher the self-reports of socially desirable behavior, but the lower the child’s objective empathy and altruism scores.

Childhood is an extraordinarily sensitive period for learning desirable social behavior. So… is religion really turning perfectly normal children into selfish, vengeful misanthropes? What anybody does at home is their business, but maybe we could make a secular schooling paradigm mandatory to level the field (i.e. forbid religious teachings in school)? I’d love to read your comments on this.

Reference: Decety J, Cowell JM, Lee K, Mahasneh R, Malcolm-Smith S, Selcuk B, & Zhou X. (16 Nov 2015, Epub 5 Nov 2015). The Negative Association between Religiousness and Children’s Altruism across the World. Current Biology. DOI: 10.1016/j.cub.2015.09.056. Article | FREE PDF | Science Cover

By Neuronicus, 5 November 2015

TMS decreases religiosity and ethnocentrism

Medieval knight dressed in an outfit with the Cross of St. James of Compostela. Image from Galicianflag.

Rituals are anxiolytic; we developed them because they decrease anxiety. So it makes sense that when we feel the most stressed we turn to soothing ritualistic behaviors. Likewise, in times of threat, be it anywhere from war to financial depression, people show a sharp increase in adherence to political or religious ideologies.

Holbrook et al. (2015) used TMS (transcranial magnetic stimulation) to locally downregulate the activity of the posterior medial frontal cortex (which includes the dorsal anterior cingulate cortex and the dorsomedial prefrontal cortex), a portion of the brain the authors have reason to believe is involved in augmenting the adherence to ideological convictions in times of threat.

They selected 38 U.S. undergraduates who scored similarly on political views (moderate or extremely conservative; the extreme liberals were excluded). Curiously, they did not measure religiosity prior to testing. Then, they submitted the subjects to a group-prejudice test designed to increase ethnocentrism (reading a critique of the USA written by an immigrant) and a high-level conflict designed to increase religiosity (a reminder of death), while half of them received TMS and the other half received sham stimulation.

Under these conditions, the TMS decreased the belief in God and also the negative evaluations of the critical immigrant, compared to the people who received sham TMS.

The paper is, without doubt, interesting, despite the many possible methodological confounds. The authors themselves acknowledged some of the drawbacks in the discussion section, so regard the article as a pilot investigation. It doesn’t even have a picture with the TMS coordinates. Nevertheless, reducing someone’s religiosity and extremism by inactivating a portion of the brain… Sometimes I get afraid of my discipline.

Reference: Holbrook C, Izuma K, Deblieck C, Fessler DM, & Iacoboni M (Epub 4 Sep 2015). Neuromodulation of group prejudice and religious belief. Social Cognitive and Affective Neuroscience. DOI: 10.1093/scan/nsv107. Article | Research Gate full text PDF

By Neuronicus, 3 November 2015

How grateful would you feel after watching a Holocaust documentary? (Before you comment, READ the post first)

From Fox et al. (2015)

How would you feel if one of your favourite scientists published a paper that is, to put it in mild terms, not to their very best? Disappointed? Or perhaps secretly gleeful that even “the big ones” are not always producing pearl after pearl?

This is what happened to me after reading the latest paper from the Damasio group. Fox et al. (2015) decided to look for the neural correlates of gratitude. That is, stick people in the fMRI, make them feel grateful, and see what lights up. All well and good, except they decided to go with a second-hand approach, meaning that instead of making people feel grateful (I don’t know how, maybe giving them something?), they made the participants watch stories in which gratitude may have been felt by other people (still not too, too bad, maybe watching somebody helping the elderly). But the researchers made an in-house documentary about the Holocaust and then had several actual Holocaust survivors tell their stories (taken from the USC Shoah Foundation Institute’s Visual History Archive), focusing on the part where their lives were saved or they were helped by others with survival necessities. Then, the subjects were asked to immerse themselves in the story and say how grateful they would have felt had they been the gift recipients.

I don’t know about you, but I don’t think that after watching a documentary about the Holocaust (done with powerfully evocative images and professional actor voice-overs, mind you!) and seeing people tell the horrors they’ve been through and then receiving some food or shelter from a Good Samaritan, gratitude would have been my first feeling. Anger, perhaps? That such an abominable thing as the Holocaust was perpetrated by my fellow humans? Sorrow? Sadness? Sick to my stomach? Compassion for the survivors? Maybe I am blatantly off-Gauss here, but I don’t think Damasio and co. measured what they thought they were measuring.

Anyway, for what it’s worth, the task produced significant activity in the medial prefrontal cortex (which is involved in so many behaviors that it is not even worth listing them), along with the usual suspects in a task as ambiguous as this, like various portions of the anterior cingulate and orbitofrontal cortices.

Reference: Fox GR, Kaplan J, Damasio H, & Damasio A (30 September 2015). Neural correlates of gratitude. Frontiers in Psycholology, 6:1491. doi: 10.3389/fpsyg.2015.01491. Article | FREE FULLTEXT PDF

By Neuronicus, 27 October 2015


It’s what I like or what you like? I don’t know anymore…

The plasticity in medial prefrontal cortex (mPFC) underlies the changes in self preferences to match another’s, through learning. Modified from Fig. 2B from Garvert et al. (2015), which is an open access article under the CC BY license.

One obvious consequence of being a social mammal is that each individual wants to be accepted. Nobody likes rejection, be it from a family member, a friend or colleague, a job application, or even a stranger. So we try to mould our beliefs and behaviors to fit the social norms, a process called social conformity. But how does that happen?

Garvert et al. (2015) shed some light on the mechanism(s) underlying the malleability of personal preferences in response to information about other people’s preferences. Twenty-seven people had 48 chances to choose between gaining a small amount of money now or more money later, with “later” meaning from 1 day to 3 months later. Then the subjects were taught another partner’s choices, no strings attached, just so they knew. Then they were made to choose again. Then they got into the fMRI, and there things got complicated, as the subjects had to choose as they themselves would choose, as their partner would choose, or as an unknown person would choose. I skipped a few steps; the procedure is complicated and the paper is full of cumbersome verbiage (e.g. “We designed a contrast that measured the change in repetition suppression between self and novel other from block 1 to block 3, controlled for by the change in repetition suppression between self and familiar other over the same blocks”, p. 422).
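To see what “preferences” means here in practice: intertemporal choices like these are commonly summarized with a hyperbolic discount parameter k (my generic formulation; I am not claiming this is the paper’s exact model). A shift toward the partner then just means your k moving toward theirs.

```python
# Hyperbolic discounting sketch: V = A / (1 + k * D). Larger k = steeper
# devaluation of delayed rewards. All parameter values are illustrative.
def subjective_value(amount: float, delay_days: float, k: float) -> float:
    """Discounted value of `amount` received after `delay_days`."""
    return amount / (1 + k * delay_days)

k_self, k_partner = 0.005, 0.10      # patient subject, impatient partner
now, later, delay = 10.0, 15.0, 30   # $10 today vs $15 in a month

for label, k in [("before learning", k_self),
                 ("after shifting halfway", 0.5 * (k_self + k_partner))]:
    choice = "later" if subjective_value(later, delay, k) > now else "now"
    print(f"{label}: k = {k:.4f} -> chooses {choice}")
```

In this toy run the simulated subject flips from “later” to “now” after moving halfway toward the partner’s k, which is the kind of behavioral shift the paper reports.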

Anyway, long story short, the behavioral results showed that the subjects tended to alter their preferences to match their partner’s (although they were not told to do so, it had no impact on their own money gain, there were no time constraints, and sometimes they were told that the “partner” was a computer).

These behavioral changes were matched by changes in the activation pattern of the medial prefrontal cortex (mPFC), in the sense that learning the preferences of another, which you can imagine as a specific neural pattern in your brain, changes the way your own preferences are encoded in that same neural pattern.

Reference: Garvert MM, Moutoussis M, Kurth-Nelson Z, Behrens TE, & Dolan RJ (21 January 2015). Learning-induced plasticity in medial prefrontal cortex predicts preference malleability. Neuron, 85(2):418-28. doi: 10.1016/j.neuron.2014.12.033. Article + FREE PDF

By Neuronicus, 11 October 2015

Zap the brain to get more lenient judges

Reference: Buckholtz et al. (2015). Credit: Neuronicus

Humans manage to live successfully in large societies mainly because we are able to cooperate. Cooperation rests on commonly agreed rules and, equally important, on the punishment bestowed upon their violators. Researchers call this norm enforcement, while the rest of us call it simply justice, whether it is delivered in its formal way (through the courts of law) or in a more personal manner (shouting at the litterer, honking at the person who cut into your lane, etc.). It is a complicated process to investigate, but scientists managed to break it into simpler operations: moral permissibility (what is the rule), causal responsibility (did John break the rule), moral responsibility (did John intend to break the rule, also called blameworthiness or culpability), harm assessment (how much harm resulted from John breaking the rule) and sanction (give the appropriate punishment to John). Different brain parts deal with different aspects of norm enforcement.

The approximate area where the stimulation took place. Note that the picture depicts the left hemisphere, whereas the low punishment judgement occurred when the stimulation was mostly on the right hemisphere.

Using functional magnetic resonance imaging (fMRI), Buckholtz et al. found out that the dorsolateral prefrontal cortex (DLPFC) gets activated when 60 young subjects decided what punishment fits a crime. Then, they used repetitive transcranial magnetic stimulation (rTMS), which is a non-invasive way to disrupt the activity of the neurons, to see what happens if you inhibit the DLPFC. The subjects made the same judgments when it came to assigning blame or assessing the harm done, but delivered lower punishments.

Reference: Buckholtz, J. W., Martin, J. W., Treadway, M. T., Jan, K., Zald, D.H., Jones, O., & Marois, R. (23 September 2015). From Blame to Punishment: Disrupting Prefrontal Cortex Activity Reveals Norm Enforcement Mechanisms. Neuron, 87: 1–12, http://dx.doi.org/10.1016/j.neuron.2015.08.023. Article + FREE PDF

By Neuronicus, 22 September 2015