The FIRSTS: The roots of depressive realism (1979)

There is a rumor that depressed people see the world more realistically and that the rest of us are, to put it bluntly, deluded optimists. A friend of mine asked me whether this is true. It took me a while to find the origin of the claim, but once I found it and learned that the literature has a term for the phenomenon ('depressive realism'), I realized that there is a whole body of studies on the subject. So the next few posts will be centered, more or less, on the idea of self-deception.

It was 1979 when Alloy & Abramson published a paper whose title contained the phrase 'Sadder but Wiser', even if it was followed by a question mark. The experiments they conducted are simple, but the theoretical implications are large.

The authors divided several dozen male and female undergraduate students into a depressed group and a non-depressed group based on their Beck Depression Inventory scores (a widely used and validated questionnaire for self-assessing depression). Each subject "made one of two possible responses (pressing a button or not pressing a button) and received one of two possible outcomes (a green light or no green light)" (p. 447). The various conditions gave the subjects different degrees of actual control over the green light, from 0 to 100%. After the experiments, the subjects were asked to estimate their control over the green light, how many times the light came on regardless of their behavior, the percentage of trials on which the green light came on when they pressed or did not press the button, respectively, and how they felt. In some experiments, the subjects won or lost money when the green light came on.
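In tasks like this, the objective 'degree of contingency' between responses and outcomes boils down to the difference between how often the light comes on when the button is pressed and how often it comes on when it is not. Here is a minimal Python sketch of that computation; the trial data are invented for illustration and are not taken from the paper.

```python
def delta_p(trials):
    """Objective control: P(light | press) - P(light | no press).
    trials is a list of (pressed, light_on) boolean pairs."""
    on_press = [light for pressed, light in trials if pressed]
    on_no_press = [light for pressed, light in trials if not pressed]
    return sum(on_press) / len(on_press) - sum(on_no_press) / len(on_no_press)

# Hypothetical 40-trial session with zero actual control: the light comes on
# on 75% of trials whether or not the button is pressed (20 presses, 20 non-presses).
no_control = [(True, i % 4 != 0) for i in range(20)] + [(False, i % 4 != 0) for i in range(20)]
print(delta_p(no_control))   # 0.0 -> no contingency, yet desirable outcomes are frequent

# Hypothetical session with partial control: light on 75% of press trials,
# 25% of no-press trials.
partial = [(True, i % 4 != 0) for i in range(20)] + [(False, i % 4 == 0) for i in range(20)]
print(delta_p(partial))      # 0.5 -> 50% control
```

The first hypothetical session is the kind of condition in which nondepressed subjects showed an illusion of control: no actual contingency, but frequent, desirable outcomes.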

Verbatim, the findings were that:

“Depressed students’ judgments of contingency were surprisingly accurate in all four experiments. Nondepressed students, on the other hand, overestimated the degree of contingency between their responses and outcomes when noncontingent outcomes were frequent and/or desired and underestimated the degree of contingency when contingent outcomes were undesired” (p. 441).

In plain English: if you are not depressed, then when you have some control but bad things are happening, you believe you have no control; and when you have no control but good things are happening, you believe you have control. If you are depressed, none of that matters: you judge your level of control accurately, regardless of the valence of the outcome.

Such an illusion of control is a defensive mechanism that surely must have adaptive value, for example by allowing the non-depressed to bypass a sense of guilt when things don't work out and to boost their self-esteem when they do. This is fascinating, particularly since it is corroborated by findings that people receiving gambling wins or life successes like landing a good job, rewards that at least in one case are demonstrably attributable to chance, nonetheless believe that they are due to some personal attributes that make them special and deserving of such rewards. (I don't remember the reference for this one, so don't quote me on it. If I find it, I'll post it; it's something about self-entitlement, I think.) That is not to say that life successes are not largely attributable to the individual; they are. But, statistically speaking, some must be due to chance alone, and yet most people feel like they are the direct agents of their changes in luck.

Another interesting point is that Alloy & Abramson also tried to figure out exactly how their subjects reasoned when they estimated their level of control, through some clever post-experiment questionnaires. Long story short (the paper is 45 pages long), the illusion of control shown by nondepressed subjects in the no-control condition was the result of incorrect logic, that is, faulty reasoning.

In summary, the distilled-down version of depressive realism, that non-depressed people see the world through rose-colored glasses, is slightly incorrect, because the illusion of control applies only in particular conditions: overestimation of control only when good things are happening and underestimation of control only when bad things are happening.

Of course, it has been over 40 years since the publication of this paper, and of course it has its flaws. Many replications, replications with caveats, meta-analyses, reviews, opinions, and alternative hypotheses have been confirmed and disconfirmed and then confirmed again with alterations, so there is still a debate out there about the causes, functions, ubiquity, and circumstantiality of the depressive realism effect. One thing seems to be constant though: the effect exists.

I will leave you with the ponderings of Alloy & Abramson (1979):

“A crucial question is whether depression itself leads people to be “realistic” or whether realistic people are more vulnerable to depression than other people” (p. 480).


Reference: Alloy LB & Abramson LY (Dec. 1979). Judgment of contingency in depressed and nondepressed students: Sadder but wiser? Journal of Experimental Psychology: General, 108(4): 441-485. PMID: 528910. http://dx.doi.org/10.1037/0096-3445.108.4.441

By Neuronicus, 30 November 2017


64% of psychology studies from 2008 could not be replicated

Free clipart from www.cliparthut.com

It's not every day that you are told (nay, proven!) that you cannot trust more than half of the published peer-reviewed work in your field. For nitpickers, I am using the word "proven" in its scientific sense, and not the philosophical "well, nothing can be technically really proven, etc."

In an astonishing feat of collaboration, 270 psychologists from all over the world replicated 100 of the most prominent studies in their field, as published in 2008 in 3 leading journals: Psychological Science (a leading journal across all of psychology), Journal of Personality and Social Psychology (a leading journal in social psychology), and Journal of Experimental Psychology: Learning, Memory, and Cognition (a leading journal in cognitive psychology). All this without any formal funding! That's right: no pay, no money, no grant (there was some philanthropy involved; after all, things cost). Moreover, they invited the original authors to take part in the replication process. Replication is possibly the most important step in any scientific endeavor; without it, you may have an interesting observation, but not a scientific fact. (Yes, I know, the investigation of some weird things that happen only once is still science. But a psychology study does not a Comet Shoemaker–Levy 9 make.)

Results: 64% of the studies failed the replication test. Namely, 74% of social psychology studies and 50% of cognitive psychology studies failed to show significant results as originally published.
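As a quick arithmetic check, the overall figure is just the weighted average of the two subfield failure rates. The split of the 100 studies between subfields used below is my own assumption for illustration; it is not given above.

```python
# Assumed split of the 100 replicated studies between subfields; these counts
# are a guess for illustration, not figures from the post or the paper.
n_social, n_cognitive = 57, 43
fail_social, fail_cognitive = 0.74, 0.50   # failure rates quoted above

overall = (n_social * fail_social + n_cognitive * fail_cognitive) / (n_social + n_cognitive)
print(f"Overall failure rate: {overall:.0%}")   # ~64%
```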

What does it mean? That the researchers intentionally faked their results? Not at all. Most likely the effects were very subtle and were inflated by reporting biases, fueled by academic pressure and the journals' policy of publishing only positive results. Is this a plague that affects only psychology? Again, not at all; be on the lookout for a similar endeavor in cancer research, and rumor has it that the preliminary results are equally scary.

There would be more to say, but I will leave you with the eloquent words of the authors themselves (p. aac4716-7):

“Humans desire certainty, and science infrequently provides it. […]. Accumulating evidence is the scientific community’s method of self-correction and is the best available option for achieving that ultimate goal: truth.”

Reference: Open Science Collaboration (28 August 2015). Estimating the reproducibility of psychological science. Science, 349(6251): aac4716. doi: 10.1126/science.aac4716

By Neuronicus, 13 October 2015

Choose: God or reason

Photo Credit: Anton Darcy

There are two different approaches to problem-solving and decision-making: the intuitive style (fast, requires fewer cognitive resources and less effort, relies heavily on implicit assumptions) and the analytic style (involves effortful reasoning, is more time-consuming, and tends to assess more aspects of a problem).

Pennycook et al. (2012) wanted to find out if the propensity for a particular type of reasoning can be used to predict one’s religiosity. They tested 223 subjects on their cognitive style and religiosity (religious engagement, religious belief, and theistic belief). The tests were in the form of questionnaires.

They found that the more willing people were to engage in analytic reasoning, the less likely they were to believe in God and other supernatural phenomena (witchcraft, ghosts, etc.). That is because, the authors argue, people who engage in analytic reasoning do not accept ideas easily without putting effort into scrutinizing them; if the notions submitted to such analysis are found to violate natural laws, they are rejected. On the other hand, intuitive reasoning is based, in part, on stereotypical assumptions that hinder the application of logical thinking, and therefore the intuitive mind is more likely to accept supernatural explanations of the natural world. For example, here is one of the problems used to assess analytical thinking versus stereotypical thinking:

In a study 1000 people were tested. Among the participants there were 995 nurses and 5 doctors.
Jake is a randomly chosen participant of this study. Jake is 34 years old. He lives in a beautiful home in a posh suburb. He is well spoken and very interested in politics. He invests a lot of time in his career. What is most likely?
(a) Jake is a nurse.
(b) Jake is a doctor.

Fig. 1 from Pennycook et al. (2012) depicting the relationship between the analytical thinking score (horizontal) and the percentage of people that express a type of theistic belief (vertical). E.g. 55% of people that believe in a personal God scored 0 out of 3 on the analytical thinking test (first bar), whereas atheists were significantly more likely to answer all 3 questions correctly (last bar).

The first thing that comes to mind, based on stereotypical beliefs about these professions, is that Jake is a doctor, but a simple calculation tells you that there is a 99.5% chance that Jake is a nurse. Answer (a) denotes analytical thinking; answer (b) denotes stereotypical thinking.
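To spell out the arithmetic (this sketch is mine, not part of the study's materials): the base rate alone makes Jake overwhelmingly likely to be a nurse, and even a strong stereotype cannot overcome a 199-to-1 base rate under Bayes' rule. The 10-to-1 likelihood ratio below is a made-up number, purely for illustration.

```python
# Base rate alone: 995 nurses out of 1000 participants.
p_nurse, p_doctor = 995 / 1000, 5 / 1000
print(f"P(nurse) = {p_nurse:.1%}")   # 99.5%

# Even if Jake's description were, say, 10 times more likely to fit a doctor
# than a nurse (an assumed likelihood ratio, chosen only for illustration),
# Bayes' rule says the base rate still dominates.
lr_doctor, lr_nurse = 10.0, 1.0
posterior_doctor = lr_doctor * p_doctor / (lr_doctor * p_doctor + lr_nurse * p_nurse)
print(f"P(doctor | description) = {posterior_doctor:.1%}")   # ~4.8%
```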

And yet that is not the most striking thing about the results. More striking is that the perception of God changes with the score on analytical thinking (see Fig. 1): the better you scored on analytical thinking, the less conformist and more abstract a view of God you tended to hold. The authors replicated their results on 267 additional participants. The findings were still robust and independent of demographic data.

Reference: Pennycook G, Cheyne JA, Seli P, Koehler DJ, & Fugelsang JA (June 2012, Epub 4 Apr 2012). Analytic cognitive style predicts religious and paranormal belief. Cognition, 123(3): 335-346. doi: 10.1016/j.cognition.2012.03.003

by Neuronicus, 1 October 2015