Only the climate change scientists are interested in evidence. The rest is politics

Satellite image of clouds created by the exhaust of ship smokestacks (2005). Credit: NASA. License: PD.

Medimorec & Pennycook (2015) analyzed the language used in two prominent reports regarding climate change. Climate change is no longer a subject of scientific debate, but of political discourse. Nevertheless, a few scientists remain skeptical about climate change. As part of a conservative think tank, they formed the Nongovernmental International Panel on Climate Change (NIPCC) as an alternative to the Intergovernmental Panel on Climate Change (IPCC). “In 2013, the NIPCC authored Climate Change Reconsidered II: Physical Science (hereafter referred to as ‘NIPCC’; Idso et al. 2013), a scientific report that is a direct response to IPCC’s Working Group 1: The Physical Science Basis (hereafter referred to as ‘IPCC’; Stocker et al. 2013), also published in 2013” (Medimorec & Pennycook, 2015).

The authors are not climate scientists, but psychologists armed with nothing but three text analysis tools: the Coh-Metrix text analyzer, Linguistic Inquiry and Word Count, and the AntConc 3.3.5 concordancer analysis toolkit. They do not even fully understand the two very lengthy and highly technical reports; as they put it,

“it is very unlikely that non-experts (present authors included) would have the requisite knowledge to be able to distinguish the NIPCC and IPCC reports based on the validity of their scientific arguments”.

So they proceeded to counting nouns, verbs, adverbs, and the like. The results: the IPCC used more formal language, more nouns, more abstract words, more infrequent words, more complex syntax, and a lot more tentative language (‘possible’, ‘probable’, ‘might’) than the NIPCC. Which is ironic, since the climate change proponents are the ones accused of alarmism and trumpeting catastrophes. On the contrary, their language was much more restrained, perhaps out of fear of controversy or, just as likely, because they are scientists and very afraid to put their reputations at stake by risking type 1 errors.
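The tentative-language measure can be illustrated with a toy version of what a tool like Linguistic Inquiry and Word Count does: count how often hedging words occur. The word list and the two sample sentences below are my own illustrations, not the authors’ actual lexicon or data.

```python
import re

# A tiny, illustrative list of tentative (hedging) words.
TENTATIVE = {"possible", "probable", "might", "may", "perhaps", "likely", "uncertain"}

def tentative_rate(text: str) -> float:
    """Fraction of words in `text` that belong to the tentative-language list."""
    words = re.findall(r"[a-z]+", text.lower())
    if not words:
        return 0.0
    return sum(w in TENTATIVE for w in words) / len(words)

# Invented examples mimicking the two registers described in the post.
cautious = "It is likely that human influence is the dominant cause, and some changes may be irreversible."
assertive = "The claim is wrong. Natural variability explains the record and the models fail."

print(tentative_rate(cautious) > tentative_rate(assertive))
```

On these invented sentences the cautious text scores higher, which is the direction of the IPCC–NIPCC difference the paper reports.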

In the authors’ words (I know, I am citing them 3 times in 4 paragraphs, but I really enjoyed their eloquence),

“the IPCC authors used more conservative (i.e., more cautious, less explicit) language to present their claims compared to the authors of the NIPCC report […]. The language style used by climate change skeptics suggests that the arguments put forth by these groups warrant skepticism in that they are relatively less focused upon the propagation of evidence and more intent on discrediting the opposing perspective”.

And this comes just from text analysis…

Reference: Medimorec, S. & Pennycook, G. (Epub 30 August 2015). The language of denial: text analysis reveals differences in language use between climate change proponents and skeptics. Climatic Change, doi:10.1007/s10584-015-1475-2. Article | Research Gate full text PDF

By Neuronicus, 4 November 2015

Are you in love with an animal?

Sugar Candy Hearts by Petr Kratochvil, taken from publicdomainpictures. License: PD

Ren et al. (2015) gave a sweet drink (Fanta), sweet food (Oreos), salty–vinegar food (Lay’s chips), or water to 422 people and then asked them about their romantic relationship; or, if they didn’t have one, about a hypothetical relationship. For hitched people, the foods or drinks had no effect on the evaluation of their relationship. In contrast, the singles who received sweets were more eager to initiate a relationship with a potential partner and evaluated a hypothetical relationship more favorably (how do you do that? I mean, if it’s hypothetical… why wouldn’t you evaluate it favorably from your singleton perspective?). Anyway, the singles who got sweets tended to see things a little more on the rosy side, as opposed to the taken ones.

The rationale for doing this experiment is that metaphors alter our perceptions (fair enough). Given that many terms of endearment include reference to the taste of sweet, like “Honey”, “Sugar” or “Sweetie”, maybe this is not accidental or just a metaphor and, if we manipulate the taste, we manipulate the perception. Wait, what? Now re-read the finding above.

The authors take their results as supporting the view that “metaphorical thinking is one fundamental way of perceiving the world; metaphors facilitate social cognition by applying concrete concepts (e.g., sweet taste) to understand abstract concepts (e.g., love)” (p. 916).

So… I am left with many questions, the first being: if the sweet appellatives in a romantic relationship stem from an extrapolation of the concrete taste of sweet to an abstract concept like love, then, I wonder, what kind of concrete concept underlies the prevalence of “baby” as a term of endearment? Do I dare speculate what the metaphor stands for? Should people who are referred to as “baby” by their partners alert the authorities for a possible pedophile ideation? And what do we do about the non-English cultures (apparently non-Germanic or non-Mandarin too) in which the lovey-dovey terms tend to cluster around various small objects (e.g. tassels), vegetables (e.g. pumpkin), cute onomatopoeics (I am at a loss for transcription here), or baby animals (e.g. chick, kitten, puppy)? Believe me, such cultures do exist and are numerous. “Excuse me, officer, I suspect my partner is in love with an animal. Oh, wait, that didn’t come out right…”

OK, maybe I missed something with this paper, as half-way through I failed to maintain proper focus due to an intruding – and disturbing! – image of a man, a chicken, and a tassel. So take the authors’ word for it when they say that their study “not only contributes to the literature on metaphorical thinking but also sheds light on an understudied factor that influences relationship initiation, that of taste” (p. 918). Oh, metaphors, how sweetly misleading you are…

Please use the “Comments” section below to share the strangest metaphor used as term of endearment you have ever heard in a romantic relationship.

Reference: Ren D, Tan K, Arriaga XB, & Chan KQ (Nov 2015). Sweet love: The effects of sweet taste experience on romantic perceptions. Journal of Social and Personal Relationships, 32(7): 905 – 921. DOI: 10.1177/0265407514554512. Article | FREE FULLTEXT PDF

By Neuronicus, 21 October 2015

Really? That’s your argument?!

Photo by Collection. Released under FSP Standard License

I don’t believe there is a single human being who, during an argument, has not thought or exclaimed “Really? That’s your argument?” or something along those lines. The saying/attitude is meant to convey the emotional response (often contemptuous) to the identification of the opponent’s argument as weak and unworthy of debate. We seem to be very critical of other people’s reasoning when it does not match our own. On the other hand, we also seem to be a little more indulgent with the strength of our own arguments. This phenomenon has been dubbed “selective laziness”, as one is not so diligent in applying the stringent rules of rational thinking to one’s own line of argumentation.

But what happens when the argument that one so easily dismisses as invalid is one’s own? Trouche et al. (2015) managed to fool 47% of participants (115 individuals) into believing that the arguments for a reasoning choice were their own, when, in point of fact, they were not (see Fig. 1). When asked to evaluate the “other” argument (which was their own), 56% (65 people, 27% of the whole sample) “rejected their own argument, choosing instead to stick to the answer that had been attributed to them. Moreover, these participants (Non-Detectors) were more likely to accept their own argument for the valid than for an invalid answer. These results shows that people are more critical of their own arguments when they think they are someone else’s, since they rejected over half of their own arguments when they thought that they were someone else’s” (p. 8). I had to do this math on a Post-it, as the authors were a little bit… lazy in reporting anything but percentages, with no graphs.
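The Post-it math can be reconstructed. The total sample size is not stated in the post; the 245 below is an assumption back-solved from “47% (115 individuals)”, so treat it as an estimate rather than a figure from the paper.

```python
# Back-of-the-Post-it check of the percentages in Trouche et al. (2015).
fooled = 115        # participants who believed a swapped argument was their own
rejected_own = 65   # of those, how many rejected what was in fact their own argument

total = round(fooled / 0.47)             # implied whole sample (assumption)
share_rejecting = rejected_own / fooled  # fraction of the fooled who rejected their own argument
share_of_sample = rejected_own / total   # fraction of everyone tested

print(total, round(share_rejecting, 2), round(share_of_sample, 2))
```

The three printed numbers line up with the reported 56% (to rounding) and 27%, which is reassuring given how sparsely the figures are reported.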

Fig. 1 from Trouche et al. (2015). © 2015 Cognitive Science Society, Inc.

The authors replicated their findings to address some limitations of the first experiment, with similar results. And they provide some speculation about the adaptive value of ‘selective laziness’, which, frankly, I think is baloney. Nevertheless, the paper quantifies and provides a way to study this reasoning bias we are all familiar with.

Reference: Trouche E, Johansson P, Hall L, & Mercier H. (9 October 2015). The Selective Laziness of Reasoning. Cognitive Science, 1-15. doi: 10.1111/cogs.12303. [Epub ahead of print]. Article | PDF

By Neuronicus, 15 October 2015

64% of psychology studies from 2008 could not be replicated

Free clipart from

It’s not every day that you are told – nay, proven! – that you cannot trust more than half of the published peer-reviewed work in your field. For nitpickers, I am using the word “proven” in its scientific sense, and not the philosophical “well, nothing can be technically really proven, etc.”

In an astonishing feat of collaboration, 270 psychologists from all over the world replicated 100 of the most prominent studies in their field, as published in 2008 in three leading journals: Psychological Science (leading journal in all psychology), Journal of Personality and Social Psychology (leading journal in social psychology), and Journal of Experimental Psychology: Learning, Memory, and Cognition (leading journal in cognitive psychology). All this without any formal funding! That’s right, no pay, no money, no grant (there was some philanthropy involved; after all, things cost). Moreover, they invited the original authors to take part in the replication process. Replication is possibly the most important step in any scientific endeavor; without it, you may have an interesting observation, but not a scientific fact. (Yes, I know, the investigation of some weird things that happen only once is still science. But a psychology study does not a Comet Shoemaker–Levy 9 make.)

Results: 64% of the studies failed the replication test. Namely, 74% of social psychology studies and 50% of cognitive psychology studies failed to show significant results as originally published.
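The overall 64% should be the weighted average of the two subfield rates. The split between social and cognitive studies is not given in the post, so the sketch below back-solves it; the result is an implied estimate, not a figure taken from the paper.

```python
# overall = s * social_fail + (1 - s) * cognitive_fail, solved for s,
# the share of social psychology studies in the sample of 100.
social_fail, cognitive_fail, overall_fail = 0.74, 0.50, 0.64

s = (overall_fail - cognitive_fail) / (social_fail - cognitive_fail)
print(round(s * 100))  # implied percentage of social psychology studies
```

The implied split (roughly 58 social vs. 42 cognitive studies) is consistent with two of the three journals being social-psychology outlets.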

What does it mean? That the researchers intentionally faked their results? Not at all. Most likely the effects were very subtle and were inflated by reporting biases, fueled by academic pressure and the journals’ policy of publishing only positive results. Is this a plague that affects only psychology? Again, not at all; be on the lookout for a similar endeavor in cancer research, and rumor has it that the preliminary results are equally scary.

There would be more to say, but I will leave you in the eloquent words of the authors themselves (p. aac4716-7):

“Humans desire certainty, and science infrequently provides it. […]. Accumulating evidence is the scientific community’s method of self-correction and is the best available option for achieving that ultimate goal: truth.”

Reference: Open Science Collaboration (28 August 2015). PSYCHOLOGY. Estimating the reproducibility of psychological science. Science, 349(6251):aac4716. doi: 10.1126/science.aac4716. Article | PDF | Science Cover | The Guardian cover | IFLS cover | Decision Science cover

By Neuronicus, 13 October 2015

Stricter gun control laws lower suicide rate

Logos of the National Rifle Association and the Brady foundation, respectively, who have opposite views regarding gun legislation.

In the U.S.A., more than 50% of suicides were committed with firearms in 2010 (source: Centers for Disease Control – CDC), suicide being the 10th leading cause of death that year. Intuitively, you would think that if people who wish to commit suicide do not have access to their desired method of offing themselves, they will find alternatives, right? Wrong.

Anestis et al. (2015) wondered whether passing stricter gun legislation (such as requirements to have a permit to purchase a handgun, a registration of handguns, or/and a license to own a handgun) has any impact on suicide rates. These three laws were chosen because they are the only ones tracked by the National Rifle Association (NRA) Institute for Legislative Action, and the authors did not want to be accused of being “biased toward the regulation of handguns” (p. e2). They looked at publicly available databases regarding suicide rates and demographics (e.g. CDC) and legislation (state publications) for 2010. Then they SPSS-ed the hell out of the data, i.e. conducted a lot of statistics.

In a nutshell, the results show that the states with any of these three laws in place had lower suicide rates. The authors would have looked at more laws, like the waiting time required to purchase a gun (which the authors believe would also influence suicide rates) but, as they said, they analyzed only what the NRA tracks so they could not be accused of bias.

Reference: Anestis, M. D., Khazem, L. R., Law, K. C., Houtsma, C., LeTard, R., Moberg, F., Martin, R. (October 2015, Epub 16 Apr 2015). The Association Between State Laws Regulating Handgun Ownership and Statewide Suicide Rates. American Journal of Public Health, 105(10): 2059-2067. doi: 10.2105/AJPH.2014.302465.  Article | Full text PDF via Research Gate

By Neuronicus, 3 October 2015

Choose: God or reason

Photo Credit: Anton Darcy

There are two different approaches to problem-solving and decision-making: the intuitive style (fast, requires less cognitive resources and effort, relies heavily on implicit assumptions) and the analytic style (involves effortful reasoning, is more time-consuming, and tends to assess more aspects of a problem).

Pennycook et al. (2012) wanted to find out if the propensity for a particular type of reasoning can be used to predict one’s religiosity. They tested 223 subjects on their cognitive style and religiosity (religious engagement, religious belief, and theistic belief). The tests were in the form of questionnaires.

They found that the more people were willing to do analytic reasoning, the less likely they were to believe in God and other supernatural phenomena (witchcraft, ghosts, etc.). And that is because, the authors argue, people who engage in analytic reasoning do not accept ideas easily without putting effort into scrutinizing them; if the notions submitted to analysis are found to violate natural laws, they are rejected. On the other hand, intuitive reasoning is based, partly, on stereotypical assumptions that hinder the application of logical thinking, and therefore the intuitive mind is more likely to accept supernatural explanations of the natural world. For example, here is one of the problems used to assess analytical thinking versus stereotypical thinking:

In a study 1000 people were tested. Among the participants there were 995 nurses and 5 doctors.
Jake is a randomly chosen participant of this study. Jake is 34 years old. He lives in a beautiful home in a posh suburb. He is well spoken and very interested in politics. He invests a lot of time in his career. What is most likely?
(a) Jake is a nurse.
(b) Jake is a doctor.

Fig. 1 from Pennycook et al. (2012) depicting the relationship between the analytical thinking score (horizontal) and percentage of people that express a type of theistic belief (vertical). E.g. 55% of people that believe in a personal God scored 0 out of 3 at the analytical thinking test (first bar), whereas atheists were significantly more likely to answer all 3 questions correctly (last bar)

The first thing that comes to mind, based on stereotypical beliefs about these professions, is that Jake is a doctor, but a simple calculation tells you that there is a 99.5% chance that Jake is a nurse. Answer (a) denotes analytical thinking, answer (b) denotes stereotypical thinking.
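The “simple calculation” is just the base rate: with no information in the vignette that actually distinguishes nurses from doctors, the probability follows the group sizes alone. A minimal sketch:

```python
# Base-rate calculation behind the nurse/doctor item.
nurses, doctors = 995, 5
total = nurses + doctors

p_nurse = nurses / total
print(f"P(Jake is a nurse) = {p_nurse:.1%}")  # 99.5%
```

The stereotypical details (posh suburb, career-driven, well spoken) are diagnostic of neither profession in the problem as stated, which is exactly what makes the item a test of whether one overrides the stereotype with the base rate.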

And yet that is not the most striking thing about the results; it is that the perception of God changes with the score on analytical thinking (see Fig. 1): the better you scored on analytical thinking, the less conformist and the more abstract your view of God. The authors replicated their results on 267 additional participants. The findings were still robust and independent of demographic data.

Reference: Pennycook, G., Cheyne, J. A., Seli, P., Koehler, D. J., & Fugelsang, J. A. (June 2012, Epub 4 Apr 2012.). Analytic cognitive style predicts religious and paranormal belief. Cognition, 123(3): 335-46. doi: 10.1016/j.cognition.2012.03.003.  Article | PPT | full text PDF via Research Gate

by Neuronicus, 1 October 2015