Play-based or academic-intensive?

The title of today’s post wouldn’t make any sense to anybody who isn’t a preschooler’s parent or teacher in the USA. You see, on the west side of the Atlantic there is a debate on whether a play-based curriculum for preschool is more advantageous than a more academic-based one. Preschool age is 3 to 4 years; kindergarten starts at 5.

So what does academia even look like for someone who hasn’t yet mastered the skill of wiping their own behind? I’m glad you asked. Roughly, an academic preschool program is one that emphasizes math concepts and early literacy, whereas a play-based program focuses less or not at all on these activities; instead, the children are allowed to play together in big or small groups or separately. The first kind of program has been linked with stronger cognitive benefits, the latter with nurturing social development. Supporters of each kind of program accuse the other of neglecting one or the other aspect of the child’s development, namely cognitive or social.

The paper that I am covering today says that it “does not speak to the wider debate over learning-through-play or the direct instruction of young children. We do directly test whether greater classroom time spent on academic-oriented activities yield gains in both developmental domains” (Fuller et al., 2017, p. 2). I’ll let you be the judge.

Fuller et al. (2017) assessed the cognitive and social benefits of different programs in an impressive cohort of over 6,000 preschoolers. The authors looked at many variables:

  • children who attended any form of preschool and children who stayed home;
  • children who received more preschool education (high dosage, defined as >20 hours/week) and less (low dosage, defined as <20 hours/week);
  • children who attended academic-oriented preschools (i.e., spent time on each of the following tasks at least 3–4 times a week: letter names, writing, phonics, and counting manipulatives) and children who attended non-academic preschools.

The authors employed a battery of tests to assess the children’s preliteracy skills, math skills, and social-emotional status (i.e., the dependent variables). And then they conducted a lot of statistical analyses, in the true spirit of well-trained psychologists.

The main findings were:

1) “Preschool exposure [of any form] has a significant positive effect on children’s math and preliteracy scores” (p. 6).

2) The earlier the child entered preschool, the stronger the cognitive benefits.

3) Children attending high-dose academic-oriented preschools displayed greater cognitive proficiencies than all the other children (for the actual numbers, see Table 7, p. 9).

4) “Academic-oriented preschool yields benefits that persist into the kindergarten year, and at notably higher magnitudes than previously detected” (p. 10).

5) Children attending academic-oriented preschools displayed no social-development disadvantages compared with children who attended low-academic or non-academic preschool programs. Nor did the non-academic-oriented preschools show an improvement in social development (except for Latino children).

Now, do you think that Fuller et al. (2017) gave you any more information in the play vs. academic debate, given that their “findings show that greater time spent on academic content – focused on oral language, preliteracy skills, and math concepts – contributes to the early learning of the average child at magnitudes higher than previously estimated” (p. 10)? And remember that they did not find any significant social advantages or disadvantages for any type of preschool.

I realize (or hope, rather) that most pre-k teachers are not the Draconian thou-shall-not-play-do-worksheets type, nor the let-kids-play-for-three-hours-while-the-adults-gossip-in-a-corner type. Most are probably combining elements of learning-through-play and directed instruction in their programs. Nevertheless, there are (still) programs and pre-k teachers that clearly state that they employ play-based or academic-based programs, emphasizing the benefits of one while vilifying the other. But – surprise, surprise! – you can do both. And, it turns out, a little academia goes a long way.

So, next time you choose a preschool for your kid, go with the data, not with what your mommy/daddy gut instinct says, and certainly be very wary of preschool officials who, when you ask them for data to support their curriculum choice, tell you that that’s their ‘philosophy’ and they don’t need data. Because, boy oh boy, I know what philosophy means and it ain’t that.

By Neuronicus, 12 October 2017

Reference: Fuller B, Bein E, Bridges M, Kim Y, & Rabe-Hesketh S (Sept. 2017). Do academic preschools yield stronger benefits? Cognitive emphasis, dosage, and early learning. Journal of Applied Developmental Psychology, 52: 1-11. doi: 10.1016/j.appdev.2017.05.001. ARTICLE | New York Times cover | Reading Rockets cover (offers a fulltext pdf) | Good cover and interview with the first author on qz.com

Scientists don’t know the risks & benefits of science

If you want to find out how bleach works, or what keeps airplanes in the air, or why the rainbow is always the same sequence of colors, or whether it’s dangerous to let your kid play with snails, would you ask a scientist or your local priest?

The answer is very straightforward for most people. It’s just that what is straightforwardness for one portion of the people is viewed by the other portion as corkscrewedness. Or rather as just plain dumb.

About five years ago, Cacciatore et al. (2016) asked 2,806 American adults how much they trust the information provided by religious organizations, university scientists, industry scientists, and science/technology museums. They also asked them about their age, gender, race, socioeconomic status, and income, as well as about Facebook use, religiosity, ideology, and attention to science-y content.

Almost 40% of the sample described themselves as Evangelical Christians, one of the largest religious groups in the USA. These people said they trust their religious organizations more than scientists (regardless of who employs those scientists) to tell the truth about the risks and benefits of technologies and their applications.

The data yielded more information, like the fact that younger, richer, liberal, and white people tended to trust scientists more than their counterparts did. Finally, Republicans were more likely to report a religious affiliation than Democrats.

I would have thought that everybody would prefer to take advice about science from a scientist. Wow, what am I saying, I just realized what I typed… Of course people take health advice from homeopaths all the time, from politicians rather than environmental scientists, from alternative-medicine quacks rather than from doctors, from the non-college-educated rather than from geneticists. From this perspective, then, the results of this study are not surprising, just very, very sad… I just didn’t think that gullible people could also be grouped by political affiliation. I thought the affliction attacked both sides of the ideological aisle in a democratic manner.

Of course, this is a survey study, so a lot more work is needed to properly generalize these results, from expanding the survey sections (beyond the meager 1 or 2 questions per section) to validation and replication. Possibly even addressing different aspects of science, because, for instance, climate change is a much touchier subject than, say, apoptosis. And replace or get rid of the “Scientists know best what is good for the public” item; seriously, I don’t know any scientist, including me, who would answer yes to that one. Nevertheless, the trend is, like I said, sad.

Reference:  Cacciatore MA, Browning N, Scheufele DA, Brossard D, Xenos MA, & Corley EA. (Epub ahead of print 25 Jul 2016). Opposing ends of the spectrum: Exploring trust in scientific and religious authorities. Public Understanding of Science. PMID: 27458117, DOI: 10.1177/0963662516661090. ARTICLE | NPR cover

By Neuronicus, 7 December 2016

Earliest memories

I found a rather old-ish paper which attempts to settle a curiosity regarding human memory: how far back can we remember?

MacDonald et al. (2000) got 96 participants to fill out a 15-minute questionnaire about their demographics and their earliest memories. The New Zealand subjects were in their early twenties, a third of Maori descent, a third of European descent, and the last third of Asian descent.

The Maori had the earliest memories, some from before they turned 1 year old, though the mean was 2 years and 8 months. Next came the Europeans, with a mean of 3 years and a half, followed by the Asians, with a mean of 4 years and 9 months. Overall, most earliest memories seemed to date from between 3 and 4 years of age. There was no gender difference except in the Asian group, where the females reported much later memories, around 6 years.

The subjects were also required to indicate the source of the memory as being personal recollection, family story, or photographs. About 86% reported it as personal recollection. The authors argue that even without the remaining 14% the results look the same. I personally would have left those 14% out if they really didn’t make a difference; it would have made the results much neater.

There are a few caveats that one must keep in mind with this kind of study, the questionnaire study. One of them is the inherent veracity problem: you rely on human honesty because there is no way to check the data for truth. Whether a memory is true or false would not matter for this study, but whether it is a personal recollection or a family story would matter. So take the results at face value. Besides, human memory is extremely easy to manipulate, so some participants may genuinely believe that they ‘remember’ an event when in fact it was learned much later from relatives. I also have very early memories, and while one of them, I believe, was told ad nauseam by family members at every family gathering so many times that I incorporated it as an actual recollection, there are a couple that I couldn’t tell you for the life of me whether I remember them truly or whether they too have been subjected to family re-reminiscing.

Another issue might be the very small sample sizes of some sub-groups. The authors divided their participants into so many subgroups (whether they spoke English first, whether they were raised mainly by the mother, etc.) that some subgroups ended up having only 2 or 3 members, which is not enough to support a statistical judgement. Which also leads me to multiple-comparison adjustments, which should have been more visible.
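
(A back-of-the-envelope illustration, with numbers that are mine and not the paper’s: if one ran, say, 20 separate subgroup comparisons at the usual α = 0.05, a simple Bonferroni correction would require each individual test to reach p < 0.05/20 = 0.0025 before being called significant.)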

So, not exactly the best paper ever written. Nevertheless, it’s an interesting paper in that even if it doesn’t really establish (in my opinion) when most people have their earliest true memories, it does point to cultural differences in individuals’ earliest recollections. The authors speculate that this may be due to the emphasis that some cultures (here, Maori) put on detailed stories about personal experiences told by the mother in the early years, versus a lack of these stories in other cultures (here, Asian).

Reference: MacDonald S, Uesiliana K, & Hayne H (Nov 2000). Cross-cultural and gender differences in childhood amnesia. Memory, 8(6): 365-376. PMID: 11145068, DOI: 10.1080/09658210050156822. ARTICLE | FULLTEXT PDF

By Neuronicus, 28 November 2016

Video games and depression

There’s a lot of talk these days about the harm or benefit of playing video games, much of it ignoring the question of what kind of video games we’re talking about.

Merry et al. (2012) designed a game for helping adolescents with depression. The game is called SPARX (Smart, Positive, Active, Realistic, X-factor thoughts) and is based on the cognitive behavioral therapy (CBT) principles.

CBT has been proven to be more efficacious than other forms of therapy, like psychoanalysis, psychodynamic, transpersonal and so on, in treating (or at least alleviating) a variety of mental disorders, from depression to anxiety, from substance abuse to eating disorders. Its aim is to identify maladaptive thoughts (the ‘cognitive’ bit) and behaviors (the ‘behavioral’ bit) and to change those thoughts and behaviors in order to feel better. It is more active and more focused than other therapies, in the sense that during the course of a CBT session the patient and therapist discuss one problem and tackle it.

SPARX is a simple interactive fantasy game with 7 levels (Cave, Ice, Volcano, Mountain, Swamp, Bridgeland, Canyon), and the purpose is to fight the GNATs (Gloomy Negative Automatic Thoughts) by mastering several techniques, like breathing and progressive relaxation, and acquiring skills, like scheduling and problem solving. You can customize your avatar, and you get a guide throughout the game who also assesses your progress and gives you real-life quests, a.k.a. therapeutic homework. If the player does not show the expected improvements after each level, s/he is directed to seek help from a real-life therapist. Luckily, the researchers also employed the help of true game designers, so the game looks at least half-decent and engaging, not the lame-worst-graphics-ever-bleah sort of thing I was kind of expecting.

To see if their game helps with depression, Merry et al. (2012) enrolled 187 adolescents (aged 12-19 years) who sought help for depression in an intervention program; half of the subjects played the game for about 4-7 weeks, and the other half did traditional CBT with a qualified therapist for the same amount of time. The patients were assessed for depression at regular intervals before, during, and after the therapy, up to 3 months post-therapy. The conclusion?

SPARX “was at least as good as treatment as usual in primary healthcare sites in New Zealand” (p. 8).

Not bad for an RPG! The remission rates were higher in the SPARX group than in the treatment-as-usual group. Also, the majority of participants liked the game and would recommend it. Additionally, SPARX was more effective than traditional CBT for the participants who were less depressed than for the ones who scored higher on the depression scales.

And now, coming back to my intro point: the fact that this game seems to be beneficial does not mean all of them are. There are studies showing that some games have deleterious effects on the developing brain. In the same vein, the fact that some shoddy company sells games that are supposed to boost your brain function (I always wondered which function…) doesn’t mean they are actually good for you. Without the research to back up the claims, anybody can say anything and it becomes a “Buyer Beware!” game. They may call it cognitive enhancement, memory boosting, or some other brainy catchphrase, but without that research it’s nothing but a placebo in the best-case scenario. So it gives me hope – and great pleasure – that some real psychologists at a real university developed a video game and then did the necessary research to validate it as a helping tool before marketing it.

Oh, an afterthought: this paper is 4 years old, so I wondered what has happened in the meantime. Is it on the market or what? On the research databases I couldn’t find much, except that it was tested this year on a Dutch population with pretty much similar results. But Wikipedia tells us that it was released in 2013 and is free online for New Zealanders! The game’s website says it may become available to other countries as well.

Reference: Merry SN, Stasiak K, Shepherd M, Frampton C, Fleming T, & Lucassen MF. (18 Apr 2012). The effectiveness of SPARX, a computerised self help intervention for adolescents seeking help for depression: randomised controlled non-inferiority trial. The British Medical Journal, 344:e2598. doi: 10.1136/bmj.e2598. PMID: 22517917, PMCID: PMC3330131. ARTICLE | FREE FULLTEXT PDF  | Wikipedia page | Watch the authors talk about the game

By Neuronicus, 15 October 2016

The FIRSTS: Theory of Mind in non-humans (1978)

Although any farmer or pet owner throughout the ages would probably agree that animals can understand the intentions of their owners, not until 1978 was this knowledge scientifically proven.

Premack & Woodruff (1978) performed a very simple experiment in which they showed an adult female chimpanzee named Sarah videos of humans facing various problems, from simple (can’t reach a banana) to complex (can’t get out of the cage). Then the chimp was shown pictures of the human with the tool that solved the problem (a stick to reach the banana, a key for the cage) along with pictures in which the human was performing actions that were not conducive to solving his predicament. The experimenter left the room while the chimp made her choice. When she did, she rang a bell to summon the experimenter back into the room, who then examined the chimp’s choice and told her whether it was right or wrong. Regardless of the choice, the chimp was given her favorite food. The chimp’s choices were almost always correct when the actor was her favorite trainer, but not so much when the actor was a person she disliked.

Because “no single experiment can be all things to all objections, but the proper combination of results from [more] experiments could decide the issue nicely” (p. 518), the researchers did some more experiments which were variations of the first one designed to figure out what the chimp was thinking. The authors go on next to discuss their findings at length in the light of two dominant theories of the time, mentalism and behaviorism, ruling in favor of the former.

Of course, the paper has some methodological flaws that would not pass the rigors of today’s reviewers. That’s why it has been replicated multiple times in more refined ways. Nor is the distinction between behaviorism and cognitivism a valid one anymore, things having been found to be, as usual, more complex and intertwined than that. Thirty years later, the consensus was that chimps do indeed have a theory of mind in that they understand the intentions of others, but they lack an understanding of false beliefs (Call & Tomasello, 2008).

References:

1. Premack D & Woodruff G (Dec. 1978). Does the chimpanzee have a theory of mind? The Behavioral and Brain Sciences, 1 (4): 515-526. DOI: 10.1017/S0140525X00076512. ARTICLE

2. Call J & Tomasello M (May 2008). Does the chimpanzee have a theory of mind? 30 years later. Trends in Cognitive Sciences, 12(5): 187-192. PMID: 18424224 DOI: 10.1016/j.tics.2008.02.010. ARTICLE  | FULLTEXT PDF

By Neuronicus, 20 August 2016

Cats and uncontrollable bursts of rage in humans

That many domestic cats carry the parasite Toxoplasma gondii is no news. Nor is the fact that 30-50% of the global population is infected with it, mainly as a result of contact with cat feces.

The news is that individuals with toxoplasmosis are a lot more likely to have episodes of uncontrollable rage. It was previously known that toxoplasmosis is associated with some psychological disturbances, like personality changes or cognitive impairments. In this new longitudinal study (meaning a study that spanned more than a decade), published three days ago, Coccaro et al. (2016) tested 358 adults, with or without psychiatric disorders, for toxoplasmosis. They also submitted the subjects to a battery of psychological tests for anxiety, impulsivity, aggression, depression, and suicidal behavior.

The results showed that all the subjects who were infected with T. gondii had higher aggression scores, regardless of their mental status. Among the people with toxoplasmosis, the aggression scores were highest in the patients previously diagnosed with intermittent explosive disorder, a little lower in patients with non-aggressive psychiatric disorders, and lower still in healthy people (though still significantly higher than in non-infected people).

The authors are adamant in pointing out that this is a correlational study, therefore no causal direction can be inferred. So don’t kick out your felines just yet. However, as the CDC points out, a little more care when changing the cat litter or a little more vigorous washing of the kitchen counters would not hurt anybody and may protect against T. gondii infection.

Reference: Coccaro EF, Lee R, Groer MW, Can A, Coussons-Read M, & Postolache TT (23 March 2016). Toxoplasma gondii Infection: Relationship With Aggression in Psychiatric Subjects. The Journal of Clinical Psychiatry, 77(3): 334-341. doi: 10.4088/JCP.14m09621. Article Abstract | FREE Full Text | The Guardian cover

By Neuronicus, 26 March 2016

Younger children in a grade are more likely to be diagnosed with ADHD

A few weeks ago I was drawing attention to the fact that some children diagnosed with ADHD do not have attention deficits. Instead, a natural propensity for seeking more stimulation may have led to overdiagnosing and overmedicating these kids.

Another reason for the dramatic increase in ADHD diagnoses over the past couple of decades may stem from the increasingly age-inappropriate demands that we place on children. Namely, children in the same grade can be as much as 1 year apart in chronological age, and at these young ages 1 year means quite a lot in terms of cognitive and behavioral development. So if we set a standard of expectations based on how the older children behave, then the younger children in the same grade will fall short of these standards simply because they are too immature to live up to them.

So what does the data say? Two studies, Morrow et al. (2012) and Chen et al. (2016), checked to see if the younger children in a given grade are more likely to be diagnosed with ADHD and/or medicated. The first study was conducted on almost 1 million Canadian children aged 6-12 years, and the second investigated almost 400,000 Taiwanese children aged 4-17 years.

In Canada, the cut-off date for starting school is Dec. 31. Which means that in the first grade, a child born in January is almost a year older than a child born in December. Morrow et al. (2012) concluded that the children born in December were significantly more likely to receive a diagnosis of ADHD than those born in January (30% more likely for boys and 70% for girls). Moreover, the children born in December were more likely to be given an ADHD medication prescription (41% more likely for boys and 77% for girls).

In Taiwan, the cut-off date for starting school is August 31. Similar to the Canadian study, Chen et al. (2016) found that the children born in August were more likely to be diagnosed with ADHD and to receive ADHD medication than the children born in September.

Now let’s be clear on one thing: ADHD is no trivial matter. It is a real disorder. It’s an incredibly debilitating condition for both children and their parents. Impulsivity, inattention, and hyperactivity are the hallmarks of almost every activity the child engages in, leading to very poor school performance (the majority cannot get a college degree) and a hard family life, plus a lifetime of stigma that brings its own “gifts” such as marginalization, loneliness, depression, anxiety, poor eating habits, etc.

The data presented above favor the “immaturity hypothesis,” which posits that the behaviors expected of some children cannot be performed not because something is wrong with the children, but because they are simply too immature to be able to perform those behaviors. That does not mean that every child diagnosed with ADHD will just grow out of it; the researchers simply point out that ignoring the chronological age of the child, coupled with prematurely entering a highly stressful and demanding system such as school, might lead to ADHD overdiagnosis.

Bottom line: ignoring the chronological age of the child might explain some of the increase in the prevalence of ADHD through overdiagnosis (in the US alone, the rise is from 6% of children diagnosed with ADHD in 2000 to 11-15% in 2015).

References:

  1. Morrow RL, Garland EJ, Wright JM, Maclure M, Taylor S, & Dormuth CR (17 Apr 2012, Epub 5 Mar 2012). Influence of relative age on diagnosis and treatment of attention-deficit/hyperactivity disorder in children. Canadian Medical Association Journal, 184(7): 755-762. doi: 10.1503/cmaj.111619. Article | FREE PDF
  2. Chen M-H, Lan W-H, Bai Y-M, Huang K-L, Su T-P, Tsai S-J, Li C-T, Lin W-C, Chang W-H, Pan T-L, Chen T-J, & Hsu J-W (10 Mar 2016). Influence of Relative Age on Diagnosis and Treatment of Attention-Deficit Hyperactivity Disorder in Taiwanese Children. The Journal of Pediatrics [Epub ahead of print]. DOI: 10.1016/j.jpeds.2016.02.012. Article | FREE PDF

By Neuronicus, 14 March 2016

Learning chess can improve math skills

Twenty-two years ago to the day, on January 30, 1994, Peter Leko became the world’s youngest chess grandmaster, at the age of 14.

Proficiency in chess is often linked with higher intelligence; that is, the more intelligent you are, the more likely you are to be good at chess. This assumption probably has its roots in the observation that chess does not allow for random chance or depend on physical attributes, as most games do. So it follows that if you are good at it, it must be… intelligence, although there are at least as many studies, if not more, showing that practice has a greater impact on your chess ability than your native IQ score.

Personally, as one who always looks askance whenever there is talk about intelligence quotients and intelligence tests, I have serious doubts that any of these papers measured what they claimed to measure. And that is because I find the construct “intelligence” poorly defined and, as a direct consequence, hard to measure.

That being said, Sala et al. (2015) wanted to see if chess practice can enhance mathematical problem-solving abilities in young students. The authors divided 560 pupils (8 to 11 years old) into two groups: one group received chess training for 10-15 hours (1 or 2 hours per week) and an option to use a chess program, while the other group did not participate in any chess activities. The experiment took 3 months.

Both groups were tested before and after training with a mathematical problem-solving test battery and a chess ability test.

“Results show a strong correlation between chess and math scores, and a higher improvement in math in the experimental group compared with the control group. These results foster the hypothesis that even a short-time practice of chess in children can be a useful tool to enhance their mathematical abilities.” (Sala et al., 2015, Abstract)

This is all nice and well, were it not for the fact that the experimental group had significantly more pupils who already knew how to play chess (193 out of 309, or 62%) than the control group (72 out of 251, or 29%). To give credit to the authors, they acknowledge this limitation of the study, but, surprisingly, they did not re-run their stats without the “I-already-know-chess” subjects…
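
For what it’s worth, that re-analysis would have been straightforward. Below is a minimal sketch, in Python, of how one might compare the math-score gains of the chess novices only; the data file and column names are my own invention (I don’t have the authors’ dataset), so treat it as an illustration of the idea rather than of their actual analysis.

```python
# Hypothetical re-analysis sketch (not the authors' code): compare math-score
# gains between the chess-trained and control groups using only the pupils who
# did not already know chess. File name and column names are made up.
import pandas as pd
from scipy import stats

df = pd.read_csv("sala2015_scores.csv")          # hypothetical dataset
novices = df[~df["knew_chess_before"]]           # drop prior chess players

gain_chess = novices.loc[novices["group"] == "chess", "math_gain"]
gain_control = novices.loc[novices["group"] == "control", "math_gain"]

# Welch's t-test (no equal-variance assumption) on pre-to-post math gains
t, p = stats.ttest_ind(gain_chess, gain_control, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```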

Nevertheless, even if the robustness and the arguments are a little on the shoddy side, the paper points to a possibly fruitful line of research: that of additional tools to improve school performance by incorporating games and playtime into instructors’ and parents’ teaching arsenal.

Reference: Sala G, Gorini A, & Pravettoni G (23 July 2015). Mathematical Problem-Solving Abilities and Chess. An Experimental Study on Young Pupils. SAGE Open, 1-9. DOI: 10.1177/2158244015596050. Article | FREE PDF

By Neuronicus, 30 January 2016

I am blind, but my other personality can see

This is a truly bizarre report.

A woman named BT suffered an accident when she was 20 years old and became blind. Thirteen years later she was referred to Bruno Waldvogel (one of the two authors of the paper) for psychotherapy by a psychiatric clinic that diagnosed her with dissociative identity disorder, formerly known as multiple personality disorder.

The cortical blindness diagnosis had been established after extensive ophthalmologic tests in which she appeared blind, but not because of damage to the eyes. So, by inference, it had to be damage to the brain. Remarkably (we shall see later why), she had no oculomotor reflexes in response to glare. Moreover, visual evoked potentials (VEPs, an EEG measure recorded over the occipital region) showed no activity in the primary visual area of the brain (V1).

During the four years of psychotherapy, BT showed more than 10 distinct personalities. One of them, a teenage male, started to see words on a magazine and pretty soon could see everything. With the help of hypnotherapeutic techniques, more and more personalities started to see.

“Sighted and blind states could alternate within seconds” (Strasburger & Waldvogel, 2015).

The VEPs showed no or very little activity when the blind personality was “on” and showed normal activity when the sighted personality was “on”. Which is extremely curious, because similar studies in people with psychogenic blindness, or in people under anesthesia, showed intact VEPs.

There are a couple of conclusions to draw from this: 1) BT was misdiagnosed, as it is unlikely that there was any brain damage, because some personalities could see; and 2) multiple personalities – or dissociative identities, as they are now called – are real in the sense that they can be separated in a biological way.

The visual pathway that mediates conscious visual perception. a) A side view of the human brain with the retinogeniculocortical pathway shown inside (blue). b) A horizontal section through the brain exposing the same pathway.

Fascinating! The next question is, obviously, what’s the mechanism behind this? The authors say that it’s very likely the LGN (the lateral geniculate nucleus of the thalamus), which is the only relay between the retina and V1 (see pic). It could be. It surely is possible. Unfortunately, so are other putative mechanisms, as 10% of the neurons in the retina also go to the superior colliculus, and some others go directly to the hypothalamus, completely bypassing the thalamus. Also, because it is impossible to time precisely the switching between personalities, even if you MRI the woman it would be difficult to establish whether the switch to blindness mode is the result of bottom-up or top-down modulation (i.e., the visual information never reaches V1, it reaches V1 and is suppressed there, or some signal from other brain areas inhibits V1 completely, so it is unresponsive when the visual information arrives).

Despite the limitations, I would certainly try to get the woman into an fMRI. C’mon, people, this is an extraordinary subject and if she gave permission for the case study report, surely she would not object to the scanning.

Reference: Strasburger H & Waldvogel B (Epub 15 Oct 2015). Sight and blindness in the same person: Gating in the visual system. PsyCh Journal. doi: 10.1002/pchj.109.  Article | FULLTEXT PDF | Washington Post cover

By Neuronicus, 29 November 2015

Is religion turning perfectly normal children into selfish, punitive misanthropes? Seems like it.

Screenshot from “Children of the Corn” (Director: Fritz Kiersch, 1984)

The main argument that religious people have against atheism or agnosticism is this: without a guiding deity and a set of rules for behaving, how can one trust a non-religious person to behave morally? In other words, there is no incentive for the non-religious to behave in a societally accepted manner. Or so it seemed. Past tense. There has been some evidence showing that, contrary to expectations, non-religious people are less prone to violence and deliver more lenient punishments compared to religious people. Also, the non-religious show charitable behaviors equal to those of the religious folks, despite the latter’s self-reports of participating in more charitable acts. But these studies were done with adults, usually with non-ecological tests. Now, a truly first-of-its-kind study finds something even more interesting, something that calls into question the fundamental basis of Christianity’s and Islam’s moral justifications.

Decety et al. (2015) administered a test of altruism and a test of moral sensitivity to 1,170 children, aged 5-12, from the USA, Canada, Jordan, Turkey, and South Africa. Based on parents’ reports about their household practices, the children had been divided into 280 Christian, 510 Muslim, and 323 Not Religious (the remaining 57 children belonged to other religions but were not included in the analyses due to lack of statistical power). The altruism test consisted of letting children choose their favorite 10 out of 30 stickers to keep; but, because there weren’t enough stickers for everybody, each child could give some of her/his stickers to another child not fortunate enough to play the sticker game (the researcher would give the child privacy while choosing). Altruism was calculated as the number of stickers given to the fictive child. In the moral sensitivity task, children watched 10 videos of a child pushing, shoving, etc. another child, either intentionally or accidentally, and then were asked to rate the meanness of the action and to judge the amount of punishment deserved for each action.

And… the highlighted results are:

  1. “Family religious identification decreases children’s altruistic behaviors.
  2. Religiousness predicts parent-reported child sensitivity to injustices and empathy.
  3. Children from religious households are harsher in their punitive tendencies.”
From Current Biology (DOI: 10.1016/j.cub.2015.09.056). Copyright © 2015 Elsevier Ltd. NOTE: ns. means non-significant difference.

Parents’ educational level did not predict children’s behavior, but the level of religiosity did: the more religious the household, the less altruistic, the more judgmental, and the harsher in their punishments the children were. Also, in stark contrast with the actual results, the religious parents viewed their children as more empathic and sensitive to injustices than the non-religious parents did theirs. This was a linear relationship: the more religious the parents, the higher their self-reports of the child’s socially desirable behavior, but the lower the child’s objective empathy and altruism scores.

Childhood is an extraordinarily sensitive period for learning desirable social behavior. So… is religion really turning perfectly normal children into selfish, vengeful misanthropes? What anybody does at home is their business, but maybe we could make a secular schooling paradigm mandatory to level the field (i.e., forbid religious teachings in school)? I’d love to read your comments on this.

Reference: Decety J, Cowell JM, Lee K, Mahasneh R, Malcolm-Smith S, Selcuk B, & Zhou X. (16 Nov 2015, Epub 5 Nov 2015). The Negative Association between Religiousness and Children’s Altruism across the World. Current Biology. DOI: 10.1016/j.cub.2015.09.056. Article | FREE PDF | Science Cover

By Neuronicus, 5 November 2015