Education raises intelligence

Intelligence is a dubious concept in psychology and biology because it is difficult to define. In any science, something has a workable definition when it is described by unique, testable operations or observations. But “intelligence” has eluded that workable definition, having gone through multiple transformations in the past hundred years or so, perhaps more than any other psychological construct (except “mind”). Despite Binet’s claim more than a century ago that there is such a thing as IQ and that he had a way to test for it, many psychologists and, to a lesser extent, neuroscientists are still trying to figure out what it is. Neuroscientists to a lesser extent because, once the field as a whole could not agree upon a good definition, it moved on to things it could agree upon, i.e. executive functions.

Of course, I generalize trends to entire disciplines and I shouldn’t; not all psychology has a problem with operationalizations and replicability, just as not all neuroscientists are paragons of clarity and good science. In fact, intelligence research seems to be rather vibrant, judging by the number of publications. Who knows, maybe the psychologists have reached a consensus about what the thing is. I haven’t truly kept up with the IQ research, partly because I think the tests used for assessing it are flawed (therefore you don’t know what exactly you are measuring) and tailored for a small segment of the population (Western society, culturally embedded, English-language conceptualizations, etc.) and partly because of the circularity of the definitions (e.g., How do I know you are highly intelligent? You scored well on IQ tests. What is IQ? What the IQ tests measure).

But the final nail in the coffin of intelligence research for me was a very popular definition by Legg & Hutter (2007): intelligence is “the ability to achieve goals”. So the poor, sick, and unlucky are just dumb? I find this definition incredibly insulting to the sheer diversity within the human species. It is also blatantly discriminatory, particularly towards the poor, whose lack of options, access to good education, or even a plain healthy meal puts a serious brake on goal achievement. Conversely, there are people who want for nothing, having been born into opulence and fame, but whose intellectual prowess seems to be lacking, to put it mildly, and who owe their “goal achievement” to an accident of birth or circumstance. The fact that this definition is so widely accepted in human research soured me on the entire field. But I’m hopeful that researchers will abandon this definition, more suited for computer programs than for human beings; after all, paradigmatic shifts happen all the time.

In contrast, executive functions are more clearly defined. The one I like the most is that given by Banich (2009): “the set of abilities required to effortfully guide behavior toward a goal”. Not to achieve a goal, but to work toward a goal. With effort. Big difference.

So what are those abilities? As I said in the previous post, there are three core executive functions: inhibition/control (both behavioral and cognitive), working memory (the ability to temporarily hold information active), and cognitive flexibility (the ability to hold two different concepts in mind simultaneously and to switch between them). From these three core executive functions, higher-order executive functions are built, such as reasoning (critical thinking), problem-solving (decision-making), and planning.

Now I might have left you with the impression that intelligence = executive functioning, and that wouldn’t be true. There is a clear correspondence between executive functioning and intelligence, but it is not a perfect correspondence, and many a paper (and a book or two) has been written to parse out which is which. For me, the most compelling argument that executive functions and whatever it is that the IQ tests measure are at least partly distinct is that brain lesions that affect one may not affect the other. It is beyond the scope of this blogpost to analyze the differences and similarities between intelligence and executive functions. But to clear up just a bit of the confusion, I will make this broad statement: executive functions are the foundation of intelligence.

There is another qualm I have with the psychological research into intelligence: a large number of psychologists believe intelligence is a fixed quantity. In other words, you are born with a certain amount of it and that’s it. It may vary a bit, depending on your life experiences, either increasing or decreasing the IQ, but by and large you stay in the same ballpark. In contrast, most neuroscientists believe all executive functions can be drastically improved with training. All of them.

After this much semi-coherent rambling, here is the actual crux of the post: intelligence can be trained too. Or, I should say, IQ can be raised with training. Ritchie & Tucker-Drob (2018) performed a meta-analysis looking at over 600,000 healthy participants’ IQ and their education. They confirmed a previously known observation that people who score higher on IQ tests complete more years of education. But why? Is it because highly intelligent people like to learn, or because longer education increases IQ? After carefully and statistically analyzing 42 studies on the subject, the authors conclude that the more educated you are, the more intelligent you become. How much more? About 1 to 5 IQ points per additional year of education. Moreover, this effect persists for a lifetime; the gain in intelligence does not diminish with the passage of time or after leaving school.

This is a good paper; its conclusions are statistically robust and consistent. Anybody can check it out, as it is an open-access paper, meaning that not only the text but also the raw data, methods, everything about it is freely available to everybody.

For me, the conclusion is inescapable: if you think that we, as a society, or you, as an individual, would benefit from having more intelligent people around you, then you should support free access to good education. Not exactly where you thought I was going with this, eh ;)?

REFERENCE: Ritchie SJ & Tucker-Drob EM. (Aug, 2018, Epub 18 Jun 2018). How Much Does Education Improve Intelligence? A Meta-Analysis. Psychological Science, 29(8):1358-1369. PMID: 29911926, PMCID: PMC6088505, DOI: 10.1177/0956797618774253. ARTICLE | FREE FULLTEXT PDF | SUPPLEMENTAL DATA  | Data, codebooks, scripts (Mplus and R), outputs

Nota bene: I’d been asked what that “1 additional year” of education means. Is it that with every year of education you gain up to 5 IQ points? No, not quite. Assuming I started with a normal IQ, then I’d be… 26 years of education (not counting postdoc) multiplied by, let’s say, 3 IQ points makes me 178. Not bad, not bad at all. :))). No, what the authors mean is that they had access to, among other datasets, a huge cohort dataset from Norway from the moment when the country increased compulsory education by 2 years. So the researchers could look at the IQ tests of people before and after the policy change, tests which were administered to all males at the same age, when they entered compulsory military service. They saw an increase of 1 to 5 IQ points per extra year of education.
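
To make the logic of that natural experiment concrete, here is a back-of-the-envelope illustration with invented numbers (not figures from the paper): if the pre-reform cohorts averaged, say, 100 IQ points at conscription and the post-reform cohorts, who got 2 extra compulsory years of schooling, averaged 104, the estimated effect would be (104 − 100) / 2 = 2 IQ points per additional year of education, which sits within the 1 to 5 points range reported in the meta-analysis.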

By Neuronicus, 14 July 2019

Apparently, scientists don’t know the risks & benefits of science

If you want to find out how bleach works, or what keeps airplanes in the air, or why the rainbow is always the same sequence of colors, or whether it’s dangerous to let your kid play with snails, would you ask a scientist or your local priest?

The answer is very straightforward for most people. It’s just that what one portion of the people considers straightforward, the other portion views as corkscrewed. Or rather just plain dumb.

About five years ago, Cacciatore et al. (2016) asked 2,806 American adults how much they trust the information provided by religious organizations, university scientists, industry scientists, and science/technology museums. They also asked them about their age, gender, race, socioeconomic status, and income, as well as about Facebook use, religiosity, ideology, and attention to science-y content.

Almost 40% of the sample described themselves as Evangelical Christians, one of the largest religious groups in the USA. These people said they trust their religious organizations more than scientists (regardless of who employs those scientists) to tell the truth about the risks and benefits of technologies and their applications.

The data yielded more information, like the fact that younger, richer, liberal, and white people tended to trust scientists more than their counterparts. Finally, Republicans were more likely to report a religious affiliation than Democrats.

I would have thought that everybody would prefer to take advice about science from a scientist. Wow, what am I saying, I just realized what I typed… Of course people are taking health advice from homeopaths all the time, from politicians rather than environmental scientists, from alternative-medicine quacks rather than doctors, from the non-college-educated rather than geneticists. From this perspective, then, the results of this study are not surprising, just very, very sad… I just didn’t think that the gullible could also be grouped by political affiliation. I thought the affliction attacked both sides of the ideological aisle in a democratic manner.

Of course, this is a survey study, therefore a lot more work is needed to properly generalize these results, from expanding the survey sections (beyond the meager 1 or 2 questions per section) to validation and replication. Possibly even addressing different aspects of science, because, for instance, climate change is a much touchier subject than, say, apoptosis. And replace or get rid of the “Scientists know best what is good for the public” item; seriously, I don’t know any scientist, including me, who would answer yes to that question. Nevertheless, the trend is, like I said, sad.

Reference:  Cacciatore MA, Browning N, Scheufele DA, Brossard D, Xenos MA, & Corley EA. (Epub ahead of print 25 Jul 2016). Opposing ends of the spectrum: Exploring trust in scientific and religious authorities. Public Understanding of Science. PMID: 27458117, DOI: 10.1177/0963662516661090. ARTICLE | NPR cover

By Neuronicus, 7 December 2016

The oldest known anatomically modern humans in Europe

A couple of days ago, on December 1st, was the National Day of Romania, a small country in the South-East of Europe. In its honor, I dug out a paper that shows that some of the earliest known modern humans in Europe were also… dug out there.

Trinkaus et al. (2003) investigated the mandible of an individual found in 2002 by a Romanian speleological expedition in Peștera cu Oase (the Cave with Bones), one of the caves in the southwest of the country, not far from where the Danube meets the Carpathians.

First, the authors did a lot of very fine measurements of various aspects of the jaw, including the five teeth, and then compared them with those found in other early humans and Neanderthals. The morphological features place the Oase 1 individual as an early modern human with some Neanderthal features. The accelerator mass spectrometry radiocarbon (14C) direct dating made him the oldest early modern human discovered in Europe up to that date: he’s 34,000–36,000 years old. I’m assuming it’s a he for no particular reason; the paper doesn’t specify anywhere whether they know the jaw owner’s gender or age. A later paper (Fu et al., 2015) says Oase 1 is even older: 37,000–42,000 years old.

After this paper, there seemed to be a race to see which country could boast the oldest human remains on its territory. Italy and the UK successfully reassessed their own previous findings thusly: the UK has a human maxilla that was incorrectly dated in 1989, but new dating makes it 44,200–39,000 years old, the authors carefully titling their paper “The earliest evidence for anatomically modern humans in northwestern Europe” (Higham et al., 2011), while remains in Italy thought for decades to be Neanderthal turned out to be 45,000–43,000-year-old humans, making “the Cavallo human remains […] the oldest known European anatomically modern humans” (Benazzi et al., 2011).

I wonder what prompted the sudden rush to reassess old, untouched-for-decades fossils… Probably good old-fashioned national pride. Fair enough. Surely it cannot have anything to do with the disdain publicly expressed by some in Western Europe towards Eastern Europe, can it? Surely scientists are more open-minded than some petty xenophobes, right?

Well, the above thought wouldn’t have even crossed my mind, nor would I have noticed that the Romanians’ discovery had been published in PNAS and the others in Nature, had it not been for the Fu et al. (2015) paper, also published in Nature. This paper does a genetic analysis of the Oase 1 individual and, through some statistical inferences that I will not pretend to fully understand, arrives at two conclusions. First, Oase 1 had a “Neanderthal ancestor as recently as four to six generations back”. OK. Proof of interbreeding, nothing new here. But the second conclusion I will quote in full: “However, the Oase individual does not share more alleles with later Europeans than with East Asians, suggesting that the Oase population did not contribute substantially to later humans in Europe.”

Now you don’t need to know much about statistics, or about basic logic either, to know that from 1 (one) instance alone you cannot generalize to a whole population. That particular individual from the Oase population didn’t contribute to later humans in Europe, NOT the entire population. Of course it is possible that that is the case, but you cannot scientifically draw that conclusion from one instance alone! This is in the abstract, so everybody can see it, but I got access to the whole paper, which I have read in the hope against hope that maybe I was missing something. Nope. The authors did not investigate any additional DNA and they reiterate that the Oase population did not contribute to modern-day Europeans. So it’s not a typo. From the many questions crowding to get out, like ‘How did it get past reviewers?’ and ‘Why was it published in Nature (interesting paper, but not that interesting; we knew about interbreeding, so what makes it so new and exciting)?’, the one that begs to be asked the most is: ‘Why would they say this, when stating the same thing about the Oase 1 individual instead of about the Oase population wouldn’t have diminished their paper in any way?’

I must admit that I am getting a little paranoid in my older age. But with all the hate that seems to come out and about these days EVERYWHERE towards everything that is “not like me” and “I don’t want it to be like me”, one cannot but wonder… Who knows, maybe it is really just as simple as an overlooked mistake or some harmless national pride so all is good and life goes on, especially since the authors of all four papers discussed above are from various countries and institutions all across the Globe. Should that be the case, I offer my general apologies for suspecting darker motives behind these papers, but I’m not holding my breath.

References:

1) Trinkaus E, Moldovan O, Milota S, Bîlgăr A, Sarcina L, Athreya S, Bailey SE, Rodrigo R, Mircea G, Higham T, Ramsey CB, & van der Plicht J. (30 Sep 2003, Epub 22 Sep 2003). An early modern human from the Peştera cu Oase, Romania. Proceedings of the National Academy of Sciences U S A,  100(20):11231-11236. PMID: 14504393, PMCID: PMC208740, DOI: 10.1073/pnas.2035108100. ARTICLE  | FREE FULLTEXT PDF

 2) Higham T, Compton T, Stringer C, Jacobi R, Shapiro B, Trinkaus E, Chandler B, Gröning F, Collins C, Hillson S, O’Higgins P, FitzGerald C, & Fagan M. (2 Nov 2011). The earliest evidence for anatomically modern humans in northwestern Europe. Nature. 479(7374):521-4. PMID: 22048314, DOI: 10.1038/nature10484. ARTICLE | FULLTEXT PDF via ResearchGate

3) Benazzi S, Douka K, Fornai C, Bauer CC, Kullmer O, Svoboda J, Pap I, Mallegni F, Bayle P, Coquerelle M, Condemi S, Ronchitelli A, Harvati K, & Weber GW. (2 Nov 2011). Early dispersal of modern humans in Europe and implications for Neanderthal behaviour. Nature, 479(7374):525-8. PMID: 22048311, DOI: 10.1038/nature10617. ARTICLE | FULLTEXT PDF via ResearchGate

4) Fu Q, Hajdinjak M, Moldovan OT, Constantin S, Mallick S, Skoglund P, Patterson N, Rohland N, Lazaridis I, Nickel B, Viola B, Prüfer K, Meyer M, Kelso J, Reich D, & Pääbo S. (13 Aug 2015, Epub 22 Jun 2015). An early modern human from Romania with a recent Neanderthal ancestor. Nature. 524(7564):216-9. PMID: 26098372, PMCID: PMC4537386, DOI:10.1038/nature14558. ARTICLE | FREE FULLTEXT PDF

By Neuronicus, 3 December 2016

Pic of the Day: Russell on stupid

Reference: Russell, B. (10 May 1933). “The Triumph of Stupidity”. In: H. Ruja (Ed.), Mortals and Others: Bertrand Russell’s American Essays, Volume 2, 1931–1935.

The history of the quote and variations of it by others can be found on the Quote Investigator.

By Neuronicus, 6 November 2016

Who invented optogenetics?

Wayne State University. Ever heard of it? Probably not. How about Zhuo-Hua Pan? No? No bell ringing? Let’s try a different approach: ever heard of Stanford University? Why, yes, it’s one of the most prestigious and famous universities in the world. And now the last question: do you know who Karl Deisseroth is? If you’re not a neuroscientist, probably not. But if you are, then you would know him as the father of optogenetics.

Optogenetics is the newest tool in the biology kit that allows you to control the way a cell behaves by shining a light on it (that’s the opto part). Prior to that, the cell in question must be made to express a protein that is sensitive to light (e.g. a rhodopsin), either by injecting a virus or by breeding genetically modified animals that express that protein (that’s the genetics part).

If you’re watching the Nobel Prizes for Medicine, then you would also be familiar with Deisseroth’s name as he may be awarded the Nobel soon for inventing optogenetics. Only that, strictly speaking, he did not. Or, to be fair and precise at the same time, he did, but he was not the first one. Dr. Pan from Wayne State University was. And he got scooped.

The story is imparted to us at length by Anna Vlasits in STAT and republished in Scientific American. In short, Dr. Pan, an obscure name at an obscure university in an ill-famed city (Detroit), does research for years in the unglamorous field of retina and blindness. He figured, quite reasonably, that restoring the proteins which sense light in the human eye (i.e. photoreceptor proteins) could restore vision in the congenitally blind. The problem is that human photoreceptor proteins are very complicated, and efforts to introduce them into the retinas of blind people have proven unsuccessful. But, in 2003, a paper was published showing how an algal protein that senses light, called channelrhodopsin (ChR), can be expressed in mammalian cells without loss of function.

So, in 2004, Pan got a colleague from Salus University (if Wayne State University is a medium-sized research university, then Salus is a really tiny, tiny little place) to engineer a ChR into a virus, which Pan then injected into rodent retinal neurons, in vivo. After 3–4 weeks he obtained expression of the protein, and the expression was stable for at least 1 year, showing that the virus works nicely. Then his group did a bunch of electrophysiological recordings (whole-cell patch clamp and voltage clamp) to see if shining light on those neurons makes them fire. It did. Then, they wanted to see if ChR was for sure responsible for this firing and not some other protein, so they increased the intensity of the blue light that their ChR is known to sense and observed that the cells responded with increased firing. Now that they saw the ChR works in normal rodents, they next expressed the ChR by virally infecting mice that were congenitally blind and repeated their experiments. The electrophysiological experiments showed that it worked. But you see with your brain, not with your retina, so the researchers looked to see if these cells that express ChR project from the retina to the brain, and they found their axons in the lateral geniculate nucleus and the superior colliculus, two major brain areas important for vision. Then, they recorded from these areas and the brain responded when blue light, but not yellow or other colors, was shone on the retina. The brain of congenitally blind mice without ChR did not respond, regardless of the type of light shone on their retinas. But does that mean the mice were able to see? That remains to be seen (har har) in future experiments. But the Pan group did demonstrate – without question or doubt – that they can control neurons with light.

All in all, a groundbreaking paper. So the Pan group was not off the mark when they submitted it to Nature on November 25, 2004. As Anna Vlasits reports in the Exclusive, Nature told Pan to submit to a more specialized journal, like Nature Neuroscience, which then rejected it. Pan then submitted to the Journal of Neuroscience, which also rejected it. He then submitted it to Neuron on November 29, 2005, which finally accepted it on February 23, 2006. It got published on April 6, 2006. Deisseroth’s paper was submitted to Nature Neuroscience on May 12, 2005, accepted in July, and published on August 14, 2005… His group infected rat hippocampal neurons cultured in a Petri dish with a virus carrying the ChR and then did some electrophysiological recordings on those neurons while shining lights of different wavelengths on them, showing that these cells can be controlled by light.

There’s more to the saga, with patent filings and a conference where Pan showed the ChR data in May 2005 and so on; you can read all about it in Scientific American. The magazine is just hinting at what I will say outright, loud and clear: Pan didn’t get published because of his and his institution’s lack of fame. Deisseroth did because of the opposite. That’s all. This is not about squabbles over whose work is more elegant, who presented his work as a scientific discovery versus a technical report, whose title is more catchy, whose language is more boisterous or more native-English-speaking, or luck, or anything like that. It is about bias and, why not, let’s call a spade a spade, discrimination. This is not the first time Nature and the Journal of Neuroscience have been caught doing this. Not by a long shot. The problem is that they are still doing it, that is: discriminating against scientific work presented to them based on the names of the authors and their institutions.

Personally, so I don’t get comments along the lines of the fox and the grapes, I have worked at both high profile and low profile institutions. And I have seen the difference not in the work, but in the reception.

That’s my piece for today.

Source:  STAT, Scientific American.

References:

1) Bi A, Cui J, Ma YP, Olshevskaya E, Pu M, Dizhoor AM, & Pan ZH (6 April 2006). Ectopic expression of a microbial-type rhodopsin restores visual responses in mice with photoreceptor degeneration. Neuron, 50(1): 23-33. PMID: 16600853. PMCID: PMC1459045. DOI: 10.1016/j.neuron.2006.02.026. ARTICLE | FREE FULLTEXT PDF

2) Boyden ES, Zhang F, Bamberg E, Nagel G, & Deisseroth K. (Sep 2005, Epub 2005 Aug 14). Millisecond-timescale, genetically targeted optical control of neural activity. Nature Neuroscience, 8(9):1263-1268. PMID: 16116447. DOI: 10.1038/nn1525. ARTICLE

By Neuronicus, 11 September 2016

Now, isn’t that sweet?

When I opened one of my social media pages today, I saw a message from a friend of mine urging people not to believe everything they read, particularly when it comes to issues like safety and health. Instead, one should go directly to the original research articles on a particular issue. In case the reader is not familiar with the scientific jargon, the message was accompanied by one of the many very useful links to blogs that teach a non-scientist how to cleverly read a scientific paper without any specific science training.

Needless to say, I had to spread the message, as I believe in it wholeheartedly. All well and good, but what happens when you encounter two research papers with drastically opposite views on the same topic? What do you do then? Who do you believe?

So I thought it pertinent to tell you about my short experience with one of these issues and see if we can find a way out of this conundrum. A few days ago, the British Chancellor of the Exchequer (the rough equivalent of a Secretary of the Treasury or Minister of Finance in other countries) announced the introduction of a new tax on sugary drinks: the more sugar a company puts in its drinks, the more tax it would pay. In his speech announcing the law, Mr. George Osborne said that the reason for this law is the positive association between sugar consumption and obesity, meaning the more sugar you eat, the fatter you get. Naturally, he did not cite any studies (he would be a really odd politician if he did).

Therefore, I started looking for these studies. As a scientist, but not a specialist in nutrition, the first thing I did was search for reviews on the association between sugar consumption and obesity in peer-reviewed databases (like the Nature journals, the US NIH Library of Medicine, and the Stanford Search Engine). My next step would have been skimming a handful of reviews, then looking at their references, selecting a few dozen research papers, and reading those. But I didn’t get that far, and here is why.

At first glance (that is, skimming about a hundred abstracts or so), it seems there are overwhelmingly more papers out there saying there is a positive correlation between sugar intake and obesity in both children and adults. But, when looking at reviews, there are plenty of reviews on both sides of the issue! Usually, reviews tend to reflect the compounded data; that’s what they are for, and that’s why it is a good idea to start with a review on a subject if one knows nothing about it. So this dissociation between research data and reviews seemed suspicious. Among the reviews in question, the ones that seemed more systematic than others are this one and this one, with obviously opposite conclusions.

And then, instead of going for the original research and leaving the reviews alone, I did something I am trying like hell not to do: I looked the authors and their affiliations up. Those who follow my blog might have noticed that I very rarely mention where the research took place and, except in the Reference section, I almost never mention in the main body of the text the name of the journal where the research was published. And I do this quite intentionally, as I am trying – and urge the readers to do the same thing – not to judge a book by its cover. That is, not forming a priori expectations based on the fame/prestige (or lack thereof) of the institution or journal in which the research was conducted and published, respectively. Judge the work by its value, not by its authors; this has paid off many times during my career, as I have seen crappy-crappity-crap papers published in Nature or Science, bloopers of cosmic proportions coming from NASA (see arsenic-DNA incorporation), and really big names screwing up big time. On the other hand, I have seen some quite interesting work, admittedly rarely, done in Thailand, Morocco, or other countries not known for their expensive research facilities.

But even in research the old dictum “follow the money” is, unfortunately, valid. Because a quick search showed that most of the nay-sayers (i.e., sugar does not cause weight gain) were 1) from the USA and 2) funded by the food and beverage industry. Luckily for everybody, enter the scene: Canada. Leave it to the Canadians to set things straight. In other words, a true rara avis poked its head out amidst this controversy: a meta-review. Lo and behold – a review of reviews! Massougbodji et al. (2014) found all sorts of things, from the lack of consensus on the strength of the evidence on causality to the variable quality of these reviews. But the one finding that was interesting to me was:

“reviews funded by the industry were less likely to conclude that there was a strong association between sugar-sweetened beverages consumption and obesity/weight gain” (p. 1103).

In conclusion, I would add a morsel of advice to my friend’s message: in addition to looking up the original research on a topic, also look at where the money funding that research comes from. Money with no strings attached usually comes only from governments. Usually is the operative word; there may be exceptions, and I am sure I am not well versed in the behind-the-scenes money politics. But if you see Marlboro paying for “research” that says smoking does not cause lung cancer, or the American Beverage Association funding studies to establish daily intake limits for high-fructose corn syrup, you should for sure cock an eyebrow before reading further.

Reference: Massougbodji J, Le Bodo Y, Fratu R, & De Wals P (2014). Reviews examining sugar-sweetened beverages and body weight: correlates of their quality and conclusions. The American Journal of Clinical Nutrition, 99:1096–1104. doi: 10.3945/ajcn.113.063776. Article | FREE PDF

By Neuronicus, 20 March 2016

Yeast can make morphine

Opiates like morphine and heroin can be made at home by anybody with a home beer-brewing kit and the right strain of yeast. In 2015, two published papers and a Ph.D. dissertation described a relatively easy way to convince yeast to make morphine from sugar (the links are provided in the reference paper). That is the bad news.

The good news is that scientists have been policing themselves (well, most of them, anyway) long before regulations are put in place to deal with technological advancements: for example, by limiting access to the laboratory, keeping things under lock and key, publishing incomplete data, and generally being very careful with what they’re doing.

Complementing this behavior, an article published by Oye et al. (2015) outlines other measures that can be put in place so that this new piece of knowledge doesn’t increase the accessibility of opiates, and thereby the number of addicts, estimated at more than 16 million people worldwide. For example, researchers can make the morphine-producing yeast dependent on unusual nutrients, engineer the existing strain to produce less-marketable varieties of opiates, restrict access to made-to-order DNA sequences for this type of yeast, and so on.

You may very well ask, “Why did the scientists make this kind of yeast anyway?”. Because some medicines are either very expensive or laborious for pharmaceutical companies to produce, researchers have sought methods to make these drugs more easily and cheaply by engineering bacteria, fungi, or plants to produce them for us. Insulin is a good example of an expensive and hard-to-come-by drug that we have managed to engineer yeast strains to produce for us. And opiates are still the best analgesics out there.

Reference: Oye KA, Lawson JC, & Bubela T (21 May 2015). Drugs: Regulate ‘home-brew’ opiates. Nature, 521(7552):281-3. doi: 10.1038/521281a. Article | FREE Fulltext PDF

By Neuronicus, 2 January 2016

Dead salmon engaged in human perspective-taking, uncorrected fMRI study reports

“Subject. One mature Atlantic Salmon (Salmo salar) participated in the fMRI study. The salmon was approximately 18 inches long, weighed 3.8 lbs, and was not alive at the time of scanning.

Task. The task administered to the salmon involved completing an open-ended mentalizing task. The salmon was shown a series of photographs depicting human individuals in social situations with a specified emotional valence. The salmon was asked to determine what emotion the individual in the photo must have been experiencing.”

Before explaining why you read what you just read and if it’s true (it is!), let me tell you that for many people, me included, the imaging studies seem very straightforward compared to, say, immunohistochemistry protocols. I mean, what do you have to do? You stick a human in a big scanner (fMRI, PET, or what-have-you), you start the image acquisition software and then some magic happens and you get pretty pictures of the human brain on your computer associated with some arbitrary numbers. Then you tell the humans to do something and you acquire more images which come with a different set of numbers. Finally, you compare the two sets of numbers and voila!: the neural correlates of whatever. Easy-peasy.

Well, it turns out it’s not so easy-peasy. Those numbers correspond to voxels, something like a pixel, only 3D; a voxel is a small cube of brain (with a side of, say, 2 or 3 mm) comprising hundreds of thousands to millions of brain cells. After this division, depending on your voxel size, you end up with a whopping 40,000 to 130,000 voxels or thereabouts for one brain. So a lot of numbers to compare.
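
Just to see where those numbers come from, here is a back-of-the-envelope calculation (assuming, for illustration, that the scan covers roughly a liter of brain tissue): a 2 mm voxel has a volume of 2 × 2 × 2 = 8 mm³ and a 3 mm voxel has 3 × 3 × 3 = 27 mm³, so 1,000,000 mm³ ÷ 8 mm³ ≈ 125,000 voxels and 1,000,000 mm³ ÷ 27 mm³ ≈ 37,000 voxels, which is about the 40,000–130,000 range above.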

When you do so many comparisons, by chance alone you will find some that are significant. This is nature’s perverse way of showing relationships where there are none and of screwing up a PhD. Those relationships are called false positives, and the more comparisons you do, the more likely you are to find something statistically significant. So, in the ’90s, when the problem became very pervasive with the staggering amount of data generated by an fMRI scan, researchers came up with mathematical ways to dodge the problem, called multiple comparisons corrections (like the application of Gaussian random field theory). Unfortunately, even 20 years later one could still find imaging studies with uncorrected results.
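
If you want to see the problem for yourself, here is a minimal simulation sketch in Python (using NumPy and SciPy; the voxel and scan counts are my own made-up toy numbers, and the plain per-voxel t-test is a stand-in, not the actual fMRI analysis pipeline): it generates pure noise for tens of thousands of “voxels”, then counts how many pass an uncorrected threshold versus a Bonferroni-corrected one.

    # Pure-noise "fMRI": no voxel carries any real signal.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_voxels = 60_000   # made-up voxel count, within the 40,000-130,000 range above
    n_scans = 20        # made-up number of scans per condition
    alpha = 0.001       # the uncorrected threshold used in the salmon study

    cond_a = rng.normal(size=(n_voxels, n_scans))   # "task" scans
    cond_b = rng.normal(size=(n_voxels, n_scans))   # "rest" scans

    _, p = stats.ttest_ind(cond_a, cond_b, axis=1)  # one t-test per voxel

    print("Uncorrected 'active' voxels:", np.sum(p < alpha))            # roughly n_voxels * alpha
    print("Bonferroni 'active' voxels:", np.sum(p < alpha / n_voxels))  # almost always 0

With these made-up numbers you get on the order of 60 “active” voxels out of 60,000 that are pure noise; after the Bonferroni correction, essentially none survive. The corrections used in real fMRI (Gaussian random field theory, false discovery rate, etc.) are less blunt than Bonferroni, but the principle is the same.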

To show how important it is to perform that statistical correction, Bennett et al. (2010) did an fMRI study on perspective taking on one subject: a salmon. The subject was dead at the time of scanning. Now you can re-read the above excerpt from the Methods section.

Scroll down a bit to the Results section: “Out of a search volume of 8064 voxels a total of 16 voxels were significant”, p(uncorrected) < 0.001, showing that the salmon was engaging in active perspective-taking.
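
A rough sanity check on those numbers (treating the voxels, for simplicity, as independent tests): at an uncorrected threshold of p < 0.001, you would expect about 8064 × 0.001 ≈ 8 false positives from chance alone, so finding 16 “significant” voxels in a dead fish is exactly the kind of thing uncorrected statistics will hand you.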

After the multiple comparisons correction, no voxel lit up, meaning that the salmon was not really imagining what the humans were feeling. Bummer…

The study has been extensively covered by the media and I jumped on that bandwagon too – even if a bit late – because I never tire of this study, as it’s absolutely funny and timeless. The authors even received the 2012 Ig Nobel Prize for Neuroscience, justly deserved. I refrained from fish puns because there are plenty in the links I provided after the Reference. Feel free to come up with your own. Enjoy!

Reference: Bennett, CM, Baird AA, Miller MB & Wolford GL (2010). Neural correlates of interspecies perspective taking in the post-mortem Atlantic Salmon: An argument for multiple comparisons correction. Journal of Serendipitous Unexpected Results, 1, 1–5, presented as poster at the 2009 Human Brain Mapping conference. PDF | Nature cover | Neuroskeptic cover | Scientific American full story

By Neuronicus, 23 November 2015

Is religion turning perfectly normal children into selfish, punitive misanthropes? Seems like it.

Screenshot from “Children of the Corn” (Director: Fritz Kiersch, 1984)

The main argument that religious people have against atheism or agnosticism is that, without a guiding deity and a set of rules for behavior, how can one trust a non-religious person to behave morally? In other words, there is no incentive for the non-religious to behave in a societally accepted manner. Or so it seemed. Past tense. There has been some evidence showing that, contrary to expectations, non-religious people are less prone to violence and deliver more lenient punishments compared to religious people. Also, the non-religious show charitable behavior equal to that of religious folks, despite the latter self-reporting that they participate in more charitable acts. But these studies were done with adults, usually with non-ecological tests. Now, a truly first-of-its-kind study finds something even more interesting, which calls into question the fundamental basis of Christianity’s and Islam’s moral justifications.

Decety et al. (2015) administered a test of altruism and a test of moral sensitivity to 1170 children, aged 5–12, from the USA, Canada, Jordan, Turkey, and South Africa. Based on parents’ reports about their household practices, the children had been divided into 280 Christian, 510 Muslim, and 323 Not Religious (the remaining 57 children belonged to other religions but were not included in the analyses due to lack of statistical power). The altruism test consisted of letting children choose their favorite 10 out of 30 stickers to be theirs to keep, but, because there weren’t enough stickers for everybody, each child could give some of her/his stickers to another child not fortunate enough to play the sticker game (the researcher would give the child privacy while choosing). Altruism was calculated as the number of stickers given to the fictive child. In the moral sensitivity task, children watched 10 videos of a child pushing, shoving, etc., another child, either intentionally or accidentally, and then the children were asked to rate the meanness of the action and to judge the amount of punishment deserved for each action.

And… the highlighted results are:

  1. “Family religious identification decreases children’s altruistic behaviors.
  2. Religiousness predicts parent-reported child sensitivity to injustices and empathy.
  3. Children from religious households are harsher in their punitive tendencies.”
From Current Biology (DOI: 10.1016/j.cub.2015.09.056). Copyright © 2015 Elsevier Ltd. NOTE: ns. means non-significant difference.

Parents’ educational level did not predict children’s behavior, but the level of religiosity did: the more religious the household, the less altruistic, the more judgmental, and the harsher in their punishments the children were. Also, in stark contrast with the actual results, the religious parents viewed their children as more empathetic and more sensitive to injustices compared to the non-religious parents. This was a linear relationship: the more religious the parents, the higher the self-reports of socially desirable behavior, but the lower the child’s objective empathy and altruism scores.

Childhood is an extraordinarily sensitive period for learning desirable social behavior. So… is religion really turning perfectly normal children into selfish, vengeful misanthropes? What anybody does at home is their business, but maybe we could make a secular schooling paradigm mandatory to level the field (i.e. forbid religious teachings in school)? I’d love to read your comments on this.

Reference: Decety J, Cowell JM, Lee K, Mahasneh R, Malcolm-Smith S, Selcuk B, & Zhou X. (16 Nov 2015, Epub 5 Nov 2015). The Negative Association between Religiousness and Children’s Altruism across the World. Current Biology. DOI: 10.1016/j.cub.2015.09.056. Article | FREE PDF | Science Cover

By Neuronicus, 5 November 2015

Choose: God or reason

Photo Credit: Anton Darcy

There are two different approaches to problem-solving and decision-making: the intuitive style (fast, requires fewer cognitive resources and less effort, relies heavily on implicit assumptions) and the analytic style (involves effortful reasoning, is more time-consuming, and tends to assess more aspects of a problem).

Pennycook et al. (2012) wanted to find out if the propensity for a particular type of reasoning can be used to predict one’s religiosity. They tested 223 subjects on their cognitive style and religiosity (religious engagement, religious belief, and theistic belief). The tests were in the form of questionnaires.

They found that the more willing people were to do analytic reasoning, the less likely they were to believe in God and other supernatural phenomena (witchcraft, ghosts, etc.). And that is because, the authors argue, people who engage in analytic reasoning do not accept ideas easily without putting effort into scrutinizing them; if the notions submitted to analysis are found to violate natural laws, then they are rejected. On the other hand, intuitive reasoning is based, partly, on stereotypical assumptions that hinder the application of logical thinking, and therefore the intuitive mind is more likely to accept supernatural explanations of the natural world. For example, here is one of the problems used to assess analytical thinking versus stereotypical thinking:

In a study 1000 people were tested. Among the participants there were 995 nurses and 5 doctors.
Jake is a randomly chosen participant of this study. Jake is 34 years old. He lives in a beautiful home in a posh suburb. He is well spoken and very interested in politics. He invests a lot of time in his career. What is most likely?
(a) Jake is a nurse.
(b) Jake is a doctor.

Fig. 1 from Pennycook et al. (2012) depicting the relationship between the analytical thinking score (horizontal) and percentage of people that express a type of theistic belief (vertical). E.g. 55% of people that believe in a personal God scored 0 out of 3 at the analytical thinking test (first bar), whereas atheists were significantly more likely to answer all 3 questions correctly (last bar)

The first thing that comes to mind, based on stereotypical beliefs about these professions, is that Jake is a doctor, but a simple calculation tells you that there is a 99.5% chance that Jake is a nurse. Answer (a) denotes analytical thinking, answer (b) denotes stereotypical thinking.
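
Spelled out, the base-rate calculation behind that answer is simply: P(Jake is a nurse) = 995 / 1000 = 0.995, i.e. 99.5%, no matter how well-spoken he is or how posh his suburb looks.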

And yet that is not the most striking thing about the results; it is that the perception of God changes with the score on analytical thinking (see Fig. 1): the better you scored on analytical thinking, the less conformist and more abstract a view of God you would have. The authors replicated their results on 267 additional people. The findings were still robust and independent of demographic data.

Reference: Pennycook, G., Cheyne, J. A., Seli, P., Koehler, D. J., & Fugelsang, J. A. (June 2012, Epub 4 Apr 2012.). Analytic cognitive style predicts religious and paranormal belief. Cognition, 123(3): 335-46. doi: 10.1016/j.cognition.2012.03.003.  Article | PPT | full text PDF via Research Gate

by Neuronicus, 1 October 2015

Zap the brain to get more lenient judges

Reference: Buckholtz et al. (2015). Credit: Neuronicus

Humans manage to live successfully in large societies mainly because we are able to cooperate. Cooperation rests on commonly agreed-upon rules and, equally important, on the punishment bestowed upon their violators. Researchers call this norm enforcement, while the rest of us call it simply justice, whether it is delivered in its formal way (through the courts of law) or in a more personal manner (shout at the litterer, honk at the person who cut into your lane, etc.). It is a complicated process to investigate, but scientists have managed to break it into simpler operations: moral permissibility (what is the rule), causal responsibility (did John break the rule), moral responsibility (did John intend to break the rule, also called blameworthiness or culpability), harm assessment (how much harm resulted from John breaking the rule), and sanction (give the appropriate punishment to John). Different brain parts deal with different aspects of norm enforcement.

The approximate area where the stimulation took place. Note that the picture depicts the left hemisphere, whereas the low punishment judgement occurred when the stimulation was mostly on the right hemisphere.

Using functional magnetic resonance imaging (fMRI), Buckholtz et al. found that the dorsolateral prefrontal cortex (DLPFC) gets activated when 60 young subjects decided what punishment fits a crime. Then, they used repetitive transcranial magnetic stimulation (rTMS), a non-invasive way to disrupt the activity of neurons, to see what happens if you inhibit the DLPFC. The subjects made the same judgments when it came to assigning blame or assessing the harm done, but delivered lower punishments.

Reference: Buckholtz, J. W., Martin, J. W., Treadway, M. T., Jan, K., Zald, D.H., Jones, O., & Marois, R. (23 September 2015). From Blame to Punishment: Disrupting Prefrontal Cortex Activity Reveals Norm Enforcement Mechanisms. Neuron, 87: 1–12, http://dx.doi.org/10.1016/j.neuron.2015.08.023. Article + FREE PDF

by Neuronicus, 22 September 2015