This is about naming things in science. You have been warned!
DNA is made of four nucleobases: adenine (A), thymine (T), cytosine (C) and guanine (G) – the “letters” of the code. Each was named after the source from which the scientists who first identified and/or isolated it originally obtained it.
Adenine was named thus because it was extracted by the Nobel laureate Albrecht Kossel in 1885 from the pancreas of an ox, the pancreas (not the ox) being a gland, which is aden in Greek.
Thymine comes from thymic acid, which was extracted from the thymus gland of calves by the same Albrecht Kossel and Albert Neumann in 1893.
A year later, the duo named cytosine, another base obtained from the same thymus tissue. Cyto- pertains to cells in Greek.
Fifty years before that, Julius Bodo Unger, a German chemist, extracted guanine from the guano of sea birds. Why was he looking at bird poop, curious minds inquire? Because he was studying its uses as a fertilizer. The year of discovery was 1844 and the year of the naming was 1846.
And now you know…
REFERENCE: Unger, JB (1846). Bemerkungen zu obiger Notiz (Comments on the above notice). Annalen der Chemie und Pharmacie, 58: 18–20. From page 20: “… desshalb möchte ich den Namen Guanin vorschlagen, welcher an seine Herkunft erinnert.” (“… therefore I would like to suggest the name guanine, which is reminiscent of its origin.”) (Wikipedia translation).
Valentine’s Day is the day when we celebrate romantic love (well, some of us tend to), and we have been doing so since long before the famous greeting card company Hallmark was established. Fittingly, I found the perfect paper to cover for this occasion.
In the past couple of decades it has become clear to scientists that there is no such thing as a mental experience without corresponding physical changes. Why should falling in love be any different? Several groups have already found that the levels of some chemicals (oxytocin, cortisol, testosterone, nerve growth factor, etc.) change when we fall in love. There might be other changes as well. So Murray et al. (2019) decided to dive right in and check how the immune system responds to love, if at all.
For two years, the researchers looked at certain markers in the immune system of 47 women aged 20 or so. They drew blood when the women reported being “not in love (but in a new romantic relationship), newly in love, and out-of-love” (p. 6). Then they sent the samples to their university’s Core to toil over microarrays. Microarray techniques can be quickly summarized thusly: get a bunch of molecules of interest, in this case bits of single-stranded DNA, and stick them on a silicon plate or a glass slide in a specific order. Then you run your sample over it and what sticks, sticks; what doesn’t, doesn’t. Remember that DNA loves to be double-stranded, so any single strand will stick to its counterpart, called the complementary strand. You put some fluorescent dye on your genes of interest and voilà, here you have an array of the genes expressed in a certain type of tissue in a certain condition.
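Since hybridization is just base pairing, the “what sticks, sticks” rule can be sketched in a few lines of toy Python (my own illustration, not anything from the paper):

```python
# A probe printed on the slide will hybridize to its reverse complement:
# A pairs with T, C pairs with G, and the strands run antiparallel.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the strand that would stick to `strand` on the array."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

probe = "ATTGCC"                     # probe spotted on the slide
print(reverse_complement(probe))     # -> GGCAAT, the sequence it captures
```

If a labeled molecule in the sample carries that complementary sequence, it sticks to the spot and the spot lights up.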
Talking about microarrays sent me down memory lane a bit. When fMRI started to be a “must” in neuroscience, there followed a period when the science “market” was flooded by “salad” papers. We called them that because there were so many parts of the brain reported as “lit up” in a certain task that they made a veritable “salad of brain parts”, out of which it was very difficult to figure out what was going on. Now that the fMRI field has matured a bit and learned how to correct for multiple comparisons, as well as to use some other fancy stats, I swear the place of honor in the vegetable-mix analogy has been relinquished to the ‘-omics’ studies. In other words, a big portion of whole-genome or transcriptome studies became “salad” studies: too many things show up as statistically significant to make head or tail of it.
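For the curious, the standard fix for the multiple-comparisons problem in ‘-omics’ work is false discovery rate control. Here is a minimal sketch of the Benjamini–Hochberg step-up procedure (my own toy illustration, not code from any of the papers discussed):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return indices of hypotheses rejected at FDR level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha ...
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    # ... and reject the k smallest p-values.
    return sorted(order[:k])

# Four genes tested at once: three survive FDR control, one does not.
print(benjamini_hochberg([0.01, 0.02, 0.03, 0.5]))  # -> [0, 1, 2]
```

The point is that with thousands of transcripts tested at once, a raw p < 0.05 cutoff guarantees a salad; FDR control keeps the expected fraction of false positives among your “significant” genes at alpha.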
However, Murray et al. (2019) made a valiant – and successful – effort to figure out what those 61 up- or down-regulated gene transcripts in the immune system cells of 17 women falling in love actually mean. There’s quite a bit I am leaving out but, in a nutshell, love upregulated (that is, “increased”) the expression of genes involved in the innate immunity to viruses, presumably to facilitate sexual reproduction, the authors say.
The paper is well written and the authors graciously remind us that there are some limitations to the study. Nevertheless, this is another fine addition to the unbelievably fast-growing body of knowledge regarding the human body and behavior.
Shame that this research was done only with women. I would have loved to see how men’s immune systems respond to falling in love.
FYI: PMC6333523 [Available on 2020-02-01] means that the fulltext will be available for free to the public one year after publication on the US governmental website PubMed (https://www.ncbi.nlm.nih.gov/pubmed/), no matter how much Elsevier charges for it. Always, always check the PMC library (https://www.ncbi.nlm.nih.gov/pmc/) on PubMed to see if a paper you saw in Nature or Elsevier is free there, because it is more often than you’d think.
PubMed = the U.S. National Institutes of Health’s National Library of Medicine (NIH/NLM), comprising “more than 29 million citations for biomedical literature from MEDLINE, life science journals, and online books. Citations may include links to full-text content from PubMed Central and publisher web sites”.
PMC = “PubMed Central® (PMC) is a free fulltext archive of biomedical and life sciences journal literature at the U.S. National Institutes of Health’s National Library of Medicine (NIH/NLM)”, with a whopping fulltext library of over 5 million papers and growing rapidly. Love PubMed!
Even the astronauts themselves said their DNA is different and they are no longer twins:
Alas, dear Scott & Mark Kelly, rest assured that despite these titles and their attendant stories, you two share the same DNA, still & forever. You will remain identical twins until one of you changes species. Because that is what a 7% alteration of human DNA would mean: you’re not human anymore.
So what gives?
Here is the root of all this misunderstanding:
“Another interesting finding concerned what some call the “space gene”, which was alluded to in 2017. Researchers now know that 93% of Scott’s genes returned to normal after landing. However, the remaining 7% point to possible longer term changes in genes related to his immune system, DNA repair, bone formation networks, hypoxia, and hypercapnia” (excerpt from NASA’s press release on the Twin Study on Jan 31, 2018, see reference).
If I didn’t know any better, I too would think that yes, it was the genes that changed, such is NASA’s verbiage. As a matter of actual fact, it is the gene expression which changed. Remember that DNA makes RNA and RNA makes protein? That’s the central dogma of molecular biology. A sequence of DNA that codes for a protein is called a gene. Those sequences do not change. But when to make a protein, how much protein, in what way, where to make this protein, which subtly different kinds of protein to make (alternative splicing), when not to make that protein, etc. – that is called the expression of that gene. And all of these aspects of gene expression are controlled or influenced by a whole variety of factors, some of them environmental, ranging from as drastic as going to space to as insignificant as going to bed.
Now, I’d love, LOVE, I tell you, to jump at the throat of the media on this one so I can smugly show how superior my meager blog is when it comes to accuracy. But, I have to admit, this time it is NASA’s fault. Although it is not NASA’s job to teach the central dogma of molecular biology to the media, they are, nonetheless, responsible for their own press releases. In this case, Monica Edwards and Laurie Abadie from NASA Human Research Strategic Communications did a booboo, in the words of the sitcom character Sheldon Cooper. Luckily for these two employees, the editor Timothy Gushanas published this little treat yesterday, right at the top of the press release:
“Editor’s note: NASA issued the following statement updating this article on March 15, 2018:
Mark and Scott Kelly are still identical twins; Scott’s DNA did not fundamentally change. What researchers did observe are changes in gene expression, which is how your body reacts to your environment. This likely is within the range for humans under stress, such as mountain climbing or SCUBA diving.
The change related to only 7 percent of the gene expression that changed during spaceflight that had not returned to preflight after six months on Earth. This change of gene expression is very minimal. We are at the beginning of our understanding of how spaceflight affects the molecular level of the human body. NASA and the other researchers collaborating on these studies expect to announce more comprehensive results on the twins studies this summer.”
But, seriously, NASA, what’s up with you guys repeatedly screwing up molecular biology stuff?! Remember the arsenic-loving bacteria debacle? That paper is still not retracted and that press release is still up on your website! Ntz, ntz, for shame… NASA, you need a better understanding of basic science and/or better #Scicomm in your press releases. Hiring? I’m offering!
P.S. Sometimes it is a pain to be obsessed with accuracy (cue the smallest violins). For example, I cannot stop myself from adding something just to be scrupulously correct. From the day they are conceived, identical twins’ DNAs start to diverge. There are all sorts of things that do change the actual sequence of DNA. DNA can be damaged by radiation (of which you can get a lot in space) or by exposure to some chemicals. Other changes are simply due to random mutations. So no twins are exactly identical, but the changes are so minuscule – nowhere near 1%, let alone 7% – that it is safe to say their DNA is identical.
P.P.S. With all this hullabaloo about the 7% DNA change, everybody glossed over – and even I forgot to mention – the one finding that is truly weird: the elongation of the telomeres of Scott, the one who was in space. Telomeres are interesting things: they are repetitive sequences of DNA (TTAGGG/AATCCC) at the ends of the chromosomes, repeated thousands of times. The telomere’s job is to protect the ends of the chromosomes. You see, every time a cell divides, the DNA copying machinery cannot copy the last bits of the chromosome (blame it on physics or chemistry, one of them things) and so some of it is lost. So evolution came up with a solution: telomeres, bits of unusable DNA that can be safely ignored and left behind. Or so we think at the moment. The length of telomeres has been implicated in some curious things, like cancer and life span (immortality thoughts, anyone?). The most common finding is the shortening of telomeres associated with stress, but Scott’s were elongated, so that’s the first weird thing. I didn’t even know telomeres could get elongated in living humans. But wait, there is more: NASA said that “the majority of those telomeres shortened within two days of Scott’s return to Earth”. Now that is the second oddest thing! If I were NASA, that’s where I would put my money, not on the gene expression patterns.
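As an aside, telomere length is, at bottom, just a count of how many copies of the repeat cap the chromosome end. A toy sketch of that counting (mine, purely illustrative, on a made-up sequence):

```python
def terminal_repeats(chromosome_end: str, unit: str = "TTAGGG") -> int:
    """Count how many copies of `unit` cap the end of the sequence."""
    count = 0
    while chromosome_end.endswith(unit):
        chromosome_end = chromosome_end[: -len(unit)]
        count += 1
    return count

# A made-up chromosome end capped by three telomeric repeats:
print(terminal_repeats("ACGT" + "TTAGGG" * 3))  # -> 3
```

Real telomeres carry thousands of such repeats, and it is that count shrinking with each cell division (or with stress) that the Twin Study was tracking.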
EDIT 1 [Jan 17, 2018]: I promised four days ago that I would post this while it was still hot, but my Internet was down, thanks to the only behemoth provider in the USA. Rated the worst company in the nation, too. You definitely know by now whom I’m talking about. Grrrr… Anyway, here is the paper:
As promised, today’s paper talks about mRNA transfer between neurons.
Pastuzyn et al. (2018) looked at the gene Arc in neurons because they thought its Gag sequence looks suspiciously similar to some retroviruses. Could it be possible that it also behaves like a virus?
Arc is heavily involved in synaptic function, is essential for the formation of long-term memories, and is implicated in all sorts of diseases, like schizophrenia and Alzheimer’s, among other things.
Pastuzyn et al. (2018) is a relatively long and dense paper, albeit well written. So I thought that this time, instead of giving you a summary of their research, it would be better to give you the authors’ story directly, in their own words, as written in the subtitles of the Results section (bold letters – the authors’ words; normal font – mine). Warning: this is a much more jargon-dense blog post than my previous one on the same topic and, because there is so much material, I will not explain every term.
Fly and Tetrapod (us) Arc Genes Independently Originated from Distinct Lineages of Ty3/gypsy Retrotransposons, the phylogenomic analyses tell us, meaning the authors have done a lot of computer-assisted comparisons of similar forms of the gene in hundreds of species.
Arc Proteins Self-Assemble into Virus-like Capsids. Arc likes to oligomerize spontaneously (dimers and trimers). The oligomers resemble virus-like capsids, similar to HIV.
Arc Binds and Encapsulates RNA. Although it loves its own RNA about 10 times more than other RNAs, it’s a promiscuous protein (it doesn’t care which RNA, as long as it follows the rules of stoichiometry). Arc capsids encapsulate the Arc protein (maybe other proteins too?), its mRNA, and whatever mRNA happened to be in the vicinity at the time of encapsulation. Arc capsids are able to protect the mRNA from RNases.
Arc Capsid Assembly Requires RNA. If there is no RNA around, the capsids are few and poorly formed.
Arc Protein and Arc mRNA Are Released by Neurons in Extracellular Vesicles. The Arc capsid packages Arc protein & Arc mRNA into extracellular vesicles (EVs). The size of these EVs is < 100 nm, putting them in the exosome category. This exosome, which the authors gave the unfortunate name of ACBAR (Arc Capsid Bearing Any RNA), is expelled from cortical neurons in an activity-dependent manner. In other words, when neurons are stimulated, they release ACBARs.
Arc Mediates Intercellular Transfer of mRNA in Extracellular Vesicles. ACBARs dock to the host cell and then undergo clathrin-dependent endocytosis, meaning they are taken up by the host cell, where they release their cargo. The levels of Arc protein and Arc mRNA peak in a host hippocampal cell within four hours of incubation. The ACBARs tend to congregate around the donor cell’s dendrites.
Transferred Arc mRNA Can Undergo Activity-Dependent Translation. Activating the group 1 metabotropic glutamate receptor (mGluR1/5) by application of the agonist DHPG induces a significant increase of the amount of Arc protein in the host neurons.
This is a veritable tour de force of a paper. The Results section has 7 sub-sections, each with multiple experiments to dot every i and cross every t. I’m eyeballing about 40 experiments. It is true that there are 13 authors on the paper, from different institutions – yeay for collaboration! – but c’mon! Is this what you need to get into Cell these days? Apparently so. Don’t get me wrong, this is an outstanding paper. But in the end it is still only one paper, which means only one first author. The rest are there for the ride, because for a tenure-track application nobody cares about your papers in CNS (Cell, Nature, Science = the Central Nervous System of the scientific community, har, har) if you’re not the first author. It looks like the increasing amount of work you need to get published in top-tier journals these days is becoming a pet peeve of mine, as I keep mentioning it (for example, here).
My pet peeves aside, Pastuzyn et al. (2018) is an excellent paper that opens interesting practical (drug delivery) and theoretical (biological repurpose of ancient invaders) gates. Kudos!
P.S. I said that ACBAR is an unfortunate acronym because, I don’t know about you, but I for one wouldn’t want my discovery to be linked either with a religion or with terrorist cries, even if that link is made only by a small fraction of the population. Although I can totally see the naming-by-committee going: “ACBAR! Our exosome is the greatest! Yeay!” or “Arc Acbar! Our Arc is the greatest. Double yeay!”. On second thought, it’s kinda nerdy-geeky neat. I still wouldn’t have done it, though…
By Neuronicus, 14 January 2018
EDIT 2 [Jan 22, 2018]: There is another paper that discovered that Arc forms capsids that encapsulate RNA and then shuttle it across the neuromuscular junction in Drosophila (the fly). To their credit, Cell published these two papers back-to-back so neither group got scooped of their discovery. From what I can see, the discovery really did happen simultaneously, so I modified my infopic to reflect that (both papers were submitted in January 2017, received in revised form on August 15, 2017 and published in the same issue on January 11, 2018). Here is the reference to the other article:
EDIT 3 [Jan 29, 2018]: Dr. Shepherd, the last author of the paper I featured, was kind enough to answer a few of my questions about the implications of his and his team’s findings, answers which you will find here.
I’m interrupting the series on cognitive biases (unskilled-and-unaware, superiority illusion, and depressive realism) to tell you that I admit it, I’m old. -Ish. Well, ok, I’m not that old. But this following paper made me feel that old. Because it invalidates some stuff I thought I knew about molecular cell biology. Mind totally blown.
It all started with a paper freshly published two days ago, which I’ll cover tomorrow. It’s about what the title says: mRNA can travel between cells, packaged nicely in vesicles, and once in a target cell it can be made into protein there. I’ll explain – briefly! – why this is such a mind-blowing thing.
We’ll start with the central dogma of molecular biology (specialists, please bear with me): the DNA is transcribed into RNA and the RNA is translated into protein (see Fig. 1). It is an oversimplification of the complexity of information flow in a biological system, but it’ll do for our purposes.
DNA needs to be transcribed into RNA because RNA is a much more flexible molecule and thus can do many things. So RNA is the traveling mule between the DNA and the place where its information becomes protein, i.e. the ribosome. Hence the name mRNA. Just kidding; m stands for messenger RNA (not that I will ever be able to call it that again: muleRNA is stuck in my brain now).
There are many kinds of RNA: some don’t even get out of the nucleus, some are chopped and re-glued (alternative splicing), some decide which bits of DNA (genes) are to be expressed, some are busy housekeepers and so on. Once an RNA has finished its business it is degraded in many inventive ways. It cannot leave the cell because it cannot cross the cell membrane. And that was that. Or so I’ve been taught.
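For the programmers among you, the DNA → RNA → protein flow above can be caricatured in a few lines of Python. This is a toy of my own making, with a deliberately truncated codon table (the real one has 64 entries):

```python
# Toy central dogma: transcription, then translation.
# Only four codons are included here, for illustration.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(dna: str) -> str:
    """DNA coding strand -> mRNA: thymine (T) becomes uracil (U)."""
    return dna.replace("T", "U")

def translate(mrna: str) -> list:
    """Read codons three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

mrna = transcribe("ATGTTTGGCTAA")   # -> "AUGUUUGGCUAA"
print(translate(mrna))              # -> ['Met', 'Phe', 'Gly']
```

The real machinery is vastly more complicated (splicing, regulation, folding), but the information flow is exactly this pipeline.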
The exceptions to the above are viruses, whose ways of going from cell to cell are very clever. A virus is a stretch of nucleic acids (DNA and/or RNA) plus some proteins, encapsulated in a blob (capsid). Not a cell!
In the ’90s, several groups were looking at some blobs (yes, most stuff in biology can be defined by the all-encompassing and enlightening term ‘blob’) that cells spew out every now and then. These were termed extracellular vesicles (EVs) for obvious reasons. It turned out that many kinds of cells were doing it, and on a much more regular basis than previously thought. The contents of these EVs varied quite a bit, based on the type of cells studied: proteins, mostly, and maybe some cytoplasmic debris. In the ’80s it was thought that this was one way for a cell to get rid of trash. But in 1982, Stegmayr & Ronquist showed that prostate cells release some EVs that increase sperm cell motility (Raposo & Stoorvogel, 2013), so, clearly, the EVs were more than trash. Soon it became evident that EVs were another way of cell-to-cell communication. (Note to self: the first time intercellular communication by EVs was demonstrated was in 1982, by Stegmayr & Ronquist. Maybe I’ll dig out the paper to cover it sometime.)
So. Baj-Krzyworzeka et al. (2006) looked at some human cancer cells to see what they spew out and for what purpose. They saw that the cancer cells were transferring some of the tumor proteins, packaged in EVs, to monocytes. For devious purposes, probably. And then they made what looks to me like a serious leap in reasoning: since the EVs contain tumor proteins, why wouldn’t they also contain the mRNA for those proteins? My first answer to that would have been: “because it would be rapidly degraded”. And I would have been wrong. To my credit, if the experiment didn’t take up too many resources I still would have done it, especially if I had some random primers lying around the lab. Luckily for the world, I was not in charge of this particular experiment, and Baj-Krzyworzeka et al. (2006) proceeded with a real-time PCR (polymerase chain reaction), which showed them that the EVs released by the tumor cells also contained mRNA.
Now the million-dollar, stare-you-in-the-face question was: is this mRNA functional? Meaning, once delivered to the host cell, would it be translated into protein?
Six months later the group answered it. Ratajczak et al. (2006) used embryonic stem cells as the donor cells and hematopoietic progenitor cells as the host cells. First, they found that if you let the donors spit EVs at the hosts, the hosts fare much better (better survival, upregulated good genes, phosphorylated MAPK to induce proliferation, etc.). Next, they looked at the contents of the EVs and found that they contained proteins and mRNAs that promote those good things (Wnt-3 protein, mRNAs for transcription factors, etc.). Next, to make sure that the host cells don’t show this enrichment all of a sudden out of the goodness of their little pluripotent hearts, but rather due to the mRNA from the donor cells, the authors looked at the expression of one of the transcription factors (Oct-4) in the hosts. They used as hosts a cell line (SKL) that does not express the pluripotency marker Oct-4. So if the hosts expressed this protein, it must have come only from outside. Lo and behold, they did. This means that the mRNA carried by the EVs is functional (Fig. 2).
What bugs me is that these papers came out in a period when I was doing some heavy reading. How did I miss this?! Probably because they were published in cancer journals, not my field. But this is big enough that you’d think others would mention it. (If you’re a recurrent reader of my blog, by now you should be familiar with my stream-of-consciousness writing and my admittedly sometimes annoying in-parenthesis meta-cognitions :D). So how did I miss this? How many more great discoveries have I missed? Am I the only one to discover such fundamental gaps in my knowledge? And thus the imposter syndrome takes root.
Just kidding, I don’t have the imposter syndrome. If anything, I got a superiority illusion complex. And I am absolutely sure that many, many scientists read things they consider fundamental to their way of thinking about the world all the time and wonder what other truly great discoveries are out there already that they missed.
Frankly, I should probably be grateful to this blog – and my friend GT who made me do it – because without nosing outside my field in search of material for it, I would have probably remained ignorant of this awesome discovery. So, even if this is a decade-old discovery for you, for me it is one day old and I am a bit giddy about it.
This is a big deal because of the theoretical implications: a cell’s transcriptome (all the mRNA expressed in a cell) varies not only with its needs, activity, and experiences, but also with its neighbors’! A cell is, more or less, its transcriptome. Soooo… if we can change that at will, does that mean we can change the type or function of the cell too? There are so many questions that such a discovery raises! And possibilities.
This is also a big deal because it opens up not a new therapy, or a new therapy direction, or a new drug class, but a new DELIVERY METHOD, the Holy Grail of Pharmacopeia. You just put your drug in one of these vesicles and let nature take its course. Of course, there are all sorts of roadblocks to overcome, like specificity, toxicity, etc. Looks like some are already conquered as there are several clinical trials out there that take advantage of this mechanism and I bet there will be more.
Stop by tomorrow for a freshly published paper on this mechanism in neurons.
Nathan Lo is an evolutionary biologist interested in creepy crawlies, i.e. arthropods. Well, he’s Australian, so I guess that comes with the territory (see what I did there?). While postdoc’ing, he and his colleagues published a paper (Sassera et al., 2006) that would seem boring to anybody without an interest in taxonomy, a truly under-appreciated field.
The paper describes a bacterium that is a parasite of the mitochondria of a tick species called Ixodes ricinus, the nasty bugger responsible for transmitting Lyme disease. The authors obtained a female tick from Berlin, Germany, and let it feed on a hamster until it laid eggs. By using genetic sequencing (you can use kits these days to extract the DNA and do PCR, gels and cloning, pretty much everything), electron microscopy (really powerful microscopes) and phylogenetic analysis (using computer software to see how closely related some species are), the authors came to the conclusion that the parasite they were working on was a new species. So they named it. And below is the full account of the naming, from the horse’s mouth, as it were:
“In accordance with the guidelines of the International Committee of Systematic Bacteriology, unculturable bacteria should be classified as Candidatus (Murray & Stackebrandt, 1995). Thus we propose the name ‘Candidatus Midichloria mitochondrii’ for the novel bacterium. The genus name Midichloria (mi.di.chlo′ria. N.L. fem. n.) is derived from the midichlorians, organisms within the fictional Star Wars universe. Midichlorians are microscopic symbionts that reside within the cells of living things and ‘‘communicate with the Force’’. Star Wars creator George Lucas stated that the idea of the midichlorians is based on endosymbiotic theory. The word ‘midichlorian’ appears to be a blend of the words mitochondrion and chloroplast. The specific epithet, mitochondrii (mi.to′chon.drii. N.L. n. mitochondrium -i a mitochondrion; N.L. gen. n. mitochondrii of a mitochondrion), refers to the unique intramitochondrial lifestyle of this bacterium. ‘Candidatus M. mitochondrii’ belongs to the phylum Proteobacteria, to the class Alphaproteobacteria and to the order Rickettsiales. ‘Candidatus M. mitochondrii’ is assigned on the basis of the 16S rRNA (AJ566640) and gyrB gene sequences (AM159536)” (p. 2539).
George Lucas gave his blessing to the Christening (of course he did).
Acknowledgements: Thanks go to Ms. BBD who prevented me from making a fool of myself (this time!) on the social media by pointing out to me that midichloria are real and that they are a mitochondrial parasite.
Aging is being quite extensively studied these days and here is another advance in the field. Pardo et al. (2017) looked at what happens in the hippocampus of 2-month-old (young) and 28-month-old (old) female rats. The hippocampus is a seahorse-shaped structure no more than 7 cm in length and 4 g in weight, situated at the level of your temples, deep in the brain, and absolutely necessary for memory.
First the researchers tested the rats in a classical maze test (Barnes maze) designed to assess their spatial memory performance. Not surprisingly, the old performed worse than the young.
Then they dissected the hippocampi, looked at neurogenesis, and saw that the young rats had more newborn neurons than the old. Also, the old rats had more reactive microglia, a sign of inflammation. Microglia are small cells in the brain that are not neurons but serve very important functions.
After that, the researchers looked at the hippocampal transcriptome, meaning they looked at which gene transcripts are being expressed there (I know, transcription is not translation, but the general assumption of transcriptome studies is that the amount of protein X corresponds to the amount of RNA X). They found 210 genes that were differentially expressed in the old rats: 81 were upregulated and 129 were downregulated. Most of these genes are found in humans too, 170 to be exact.
But after looking at male versus female data, and at human and mouse aging data, the authors came up with 11 genes that are deregulated (7 up- and 4 down-) in the aging hippocampus, regardless of species or gender. These genes are involved in the immune response to inflammation. In more detail, the immune system activates microglia, which stay activated, and this “prolonged microglial activation leads to the release of pro-inflammatory cytokines that exacerbate neuroinflammation, contributing to neuronal loss and impairment of cognitive function” (p. 17). Moreover, these 11 genes have been associated with neurodegenerative diseases and brain cancers.
These are the 11 genes: C3 (up), Cd74 (up), Cd4 (up), Gpr183 (up), Clec7a (up), Gpr34 (down), Gapt (down), Itgam (down), Itgb2 (up), Tyrobp (up), Pld4 (down). “Up” and “down” indicate the direction of deregulation: upregulation or downregulation.
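If you’re wondering how such up/down calls are made in practice, a common approach is thresholding the log2 fold change between conditions. A toy sketch (three gene names are from the paper’s list, but the expression values are entirely made up for illustration):

```python
import math

def classify(young: dict, old: dict, threshold: float = 1.0) -> dict:
    """Call each gene up, down, or unchanged by log2 fold change (old vs young)."""
    calls = {}
    for gene in young:
        log2fc = math.log2(old[gene] / young[gene])
        if log2fc >= threshold:
            calls[gene] = "up"
        elif log2fc <= -threshold:
            calls[gene] = "down"
        else:
            calls[gene] = "unchanged"
    return calls

# Invented expression values; only the direction matches the paper's list.
young = {"C3": 10.0, "Gpr34": 40.0, "Itgb2": 12.0}
old = {"C3": 50.0, "Gpr34": 8.0, "Itgb2": 60.0}
print(classify(young, old))  # C3 up, Gpr34 down, Itgb2 up
```

Real pipelines add a statistical test and multiple-comparisons correction on top of the fold-change cutoff, but the direction of deregulation comes from exactly this kind of ratio.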
I wish the above sentence had been stated as explicitly in the paper as I wrote it here, so I wouldn’t have had to comb through their supplemental Excel files to figure it out. Other than that, good paper, good work. It gets us closer to unraveling and maybe undoing some of the burdens of aging because, as the actress Bette Davis said, “growing old isn’t for the sissies”.
A couple of days ago, on December 1st, was the National Day of Romania, a small country in the South-East of Europe. In its honor, I dug out a paper that shows that some of the earliest known modern humans in Europe were also… dug out there.
Trinkaus et al. (2003) investigated the mandible of an individual found in 2002 by a Romanian speleological expedition in Peștera cu Oase (the Cave with Bones), one of the caves in the southwest of the country, not far from where the Danube meets the Carpathians.
First the authors did a lot of very fine measurements of various aspects of the jaw, including the five teeth, and then compared them with those of other early humans and Neanderthals. The morphological features place the Oase 1 individual as an early modern human with some Neanderthal features. The accelerator mass spectrometry radiocarbon (14C) direct dating made him the oldest early modern human discovered in Europe up to that date: he’s 34,000–36,000 years old. I’m assuming it’s a he for no particular reason; the paper doesn’t specify anywhere whether they know the jaw owner’s gender or age. A later paper (Fu et al., 2015) says Oase 1 is even older: 37,000–42,000 years old.
After this paper, it seemed to become a race to see which country could boast the oldest human remains on its territory. Italy and the UK successfully reassessed their own previous findings thusly: the UK has a human maxilla that was incorrectly dated in 1989, but new dating makes it 44,200–39,000 years old, the authors carefully titling their paper “The earliest evidence for anatomically modern humans in northwestern Europe” (Higham et al., 2011), while Italy’s remains, thought for decades to be Neanderthal, turned out to be 45,000–43,000-year-old humans, making “the Cavallo human remains […] the oldest known European anatomically modern humans” (Benazzi et al., 2011).
I wonder what prompted the sudden rush to reassess old, untouched-for-decades fossils… Probably good old-fashioned national pride. Fair enough. Surely it cannot have anything to do with the disdain publicly expressed by some in Western Europe towards Eastern Europe, can it? Surely scientists are more open-minded than some petty xenophobes, right?
Well, the above thought wouldn’t even have crossed my mind, nor would I have noticed that the Romanians’ discovery was published in PNAS and the others in Nature, had it not been for the Fu et al. (2015) paper, also published in Nature. This paper does a genetic analysis of the Oase 1 individual and, through some statistical inferences that I will not pretend to fully understand, arrives at two conclusions. First, Oase 1 had a “Neanderthal ancestor as recently as four to six generations back”. OK. Proof of interbreeding, nothing new here. But the second conclusion I will quote in full: “However, the Oase individual does not share more alleles with later Europeans than with East Asians, suggesting that the Oase population did not contribute substantially to later humans in Europe.”
Now you don’t need to know much about statistics, or about basic logic either, to know that you cannot generalize from 1 (one) instance to a whole population. That particular individual from the Oase population hasn’t contributed to later humans in Europe, NOT the entire population. Of course it is possible that that is the case, but you cannot scientifically draw that conclusion from one instance alone! This is in the abstract, so everybody can see it, but I got access to the whole paper, which I read hoping against hope that maybe I was missing something. Nope. The authors did not investigate any additional DNA, and they reiterate that the Oase population did not contribute to modern-day Europeans. So it’s not a typo. Of the many questions crowding to get out, like ‘How did it get past reviewers?’ and ‘Why was it published in Nature (interesting paper, but not that interesting; we knew about interbreeding, so what makes it so new and exciting)?’, the one that begs to be asked the most is: ‘Why would they say this, when stating the same thing about the Oase 1 individual instead of the Oase population wouldn’t have diminished their paper in any way?’
I must admit that I am getting a little paranoid in my old age. But with all the hate that seems to be out and about these days, EVERYWHERE, towards everything that is “not like me” and “I don’t want it to be like me”, one cannot but wonder… Who knows, maybe it really is just as simple as an overlooked mistake or some harmless national pride, so all is good and life goes on, especially since the authors of all four papers discussed above are from various countries and institutions all across the globe. Should that be the case, I offer my general apologies for suspecting darker motives behind these papers, but I’m not holding my breath.
Memory processes like formation, maintenance, and consolidation have been the subjects of extensive research and, as a result, we know quite a bit about them. And just when we thought we were getting a pretty clear picture of the memory tableau, and all that was left was a little dusting around the edges and getting rid of the pink elephant in the middle of the room, here comes a new player that muddies the waters again.
DNA methylation. The attaching of a methyl group (CH3) to the DNA’s cytosine by a DNA methyltransferase (Dnmt) was, until very recently, considered a process reserved for immature cells, helping them meet their final fate. In other words, DNA methylation plays a role in cell differentiation by suppressing gene expression. It has other roles in X-chromosome inactivation and cancer, but it was not suspected to play a role in memory until this decade.
Oliveira (2016) gives us a nice review of the role(s) of DNA methylation in memory formation and maintenance. First, we encounter the pharmacological studies, which found that injecting Dnmt inhibitors into various parts of the brain in various species disrupted memory formation or maintenance. Next, we see the genetic studies, where Dnmt knock-down and knock-out mice also show impaired memory formation and maintenance. Finally, knowing which genes’ transcription is essential for memory, the researcher takes us through several papers that examine the de novo DNA methylation and demethylation of these genes in response to learning events, and its role in alternative splicing.
Based on the available data, the author proposes that activity-induced DNA methylation serves two roles in memory: to “on the one hand, generate a primed and more permissive epigenome state that could facilitate future transcriptional responses and on the other hand, directly regulate the expression of genes that set the strength of the neuronal network connectivity, this way altering the probability of reactivation of the same network” (p. 590).
Here you go; another morsel of actual science brought to your fingertips by yours truly.
Just like in the case of schizophrenia, hundreds of genes have been associated with autistic spectrum disorders (ASDs). Here is another candidate.
Féron et al. (2016) reasoned that most of the info we have about the genes that behave badly in ASDs comes from studies that used adult cells. Because ASDs are present before or very shortly after birth, they figured that looking for genetic abnormalities in cells at the very early stages of ontogenesis might prove enlightening. Those cells are stem cells. Of the pluripotent kind. FYI, based on what they can become (a.k.a. how potent they are), stem cells are divided into totipotent (a.k.a. omnipotent), pluripotent, multipotent, oligopotent, and unipotent. So the pluripotents are very ‘potent’ indeed, having the potential of producing a perfect person.
Tongue-twisters aside, the authors’ approach is sensible, albeit non-hypothesis-driven. Which means they didn’t have anything specific in mind when they started looking for differences in gene expression between olfactory nasal cells obtained from 11 adults with ASD and 11 age-matched normal controls. Luckily for them, as transcriptome studies have a tendency to be difficult to replicate, they found anomalies in the expression of genes that had already been associated with ASD. But they also found a new one, the MOCOS (MOlybdenum COfactor Sulfurase) gene, which was poorly expressed in ASDs (downregulated, in genetic speak). Its enzyme is also called MOCOS (am I the only one who thinks that MOCOS isolated from nasal cells sounds too similar to mucus? Is the acronym actually a backronym?).
The enzyme is not known to play any role in the nervous system. Therefore, the researchers looked to see where the gene is expressed. Its enzyme could be found all over the brain of both mouse and human. Also, in the intestine, kidneys, and liver. So not much help there.
Next, the authors deleted this gene in a worm, Caenorhabditis elegans, and they found out that the worm’s cells have issues in dealing with oxidative stress (e.g. the toxic effects of free radicals). In addition, their neurons had abnormal synaptic transmission due to problems with vesicular packaging.
Then they managed – with great difficulty – to produce human induced pluripotent stem cells (iPSCs) in a Petri dish in which the MOCOS gene was partially knocked down. ‘Partially’, because the ‘totally’ did not survive. Which tells us that MOCOS is necessary for the survival of iPSCs. The mutant cells had fewer synaptic boutons than the normal cells, meaning they formed fewer synapses.
The study, besides identifying a new candidate for diagnosis and treatment, offers some potential explanations for some beguiling data that other studies have brought forth, like the fact that all sorts of neurotransmitter systems and all sorts of brain regions seem to be impaired in ASDs, making it very hard to grab the tiger by the tail when the tiger sprouts a new tail every time you look at it, just like the Hydra’s heads. But discovering a molecule involved in a ubiquitous process like synapse formation may provide a way to leave the tiger’s tail(s) alone and focus on the teeth. In the authors’ words:
“As a molecule involved in the formation of dense core vesicles and, further down, neurotransmitter secretion, MOCOS seems to act on the container rather than the content, on the vehicle rather than one of the transported components” (p. 1123).
The knowledge uncovered by this paper makes a very good piece of the ASDs puzzle. Maybe not a corner, but a good edge. Alright, even if it’s not an edge, at least it’s a crucial piece full of details, not one of those sky pieces.
Not all people with the same bad genetic makeup that predisposes them to a particular disease go on to develop that disease or, at any rate, not with the same severity and prognosis. The question is: why? After all, they have the same genes…
Here comes a study that answers that very important question. Eloy et al. (2016) looked at the most common pediatric eye cancer (1 in 15,000), called retinoblastoma (Rb). In the hereditary form of this cancer, the disease occurs if the child carries mutant (i.e. bad) copies of the RB1 tumour suppressor gene located on chromosome 13 (13q14). These copies, called alleles, are inherited by the child from the mother or the father. But some children with this genetic disadvantage do not develop Rb. They should, so why don’t they?
The authors studied 57 families with Rb history. They took blood and tumour samples from the participants and then did a bunch of genetic tests: DNA, RNA, and methylation analyses.
They found out that when the mutant RB1 gene is inherited from the mother, the child has only a 9.7% chance of developing Rb, but when the gene is inherited from the father, the chance jumps to 67.5%.
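The gap between the two penetrance figures is easier to appreciate as a risk ratio; a back-of-the-envelope sketch using the paper’s two numbers:

```python
# Penetrance figures reported by Eloy et al. (2016) for the
# c.1981C>T/p.Arg661Trp RB1 mutation, by parent of origin:
maternal_penetrance = 0.097   # 9.7% of maternal-allele carriers develop Rb
paternal_penetrance = 0.675   # 67.5% of paternal-allele carriers develop Rb

relative_risk = paternal_penetrance / maternal_penetrance
print(f"Paternal inheritance carries ~{relative_risk:.1f}x the risk")
# → Paternal inheritance carries ~7.0x the risk
```

In other words, inheriting the same mutant allele from dad rather than mom raises the risk roughly sevenfold.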
The mechanism for these different outcomes may reside in the differential methylation of the gene. Methylation is a chemical process that suppresses the expression of a gene, meaning that less protein is produced from that gene. The maternal gene had less methylation, meaning that more protein was produced, which was able to offer some protection against the cancer. It seems counter-intuitive (you’d think less bad protein is a good thing), but there is a long and complicated explanation for that, which, in a very simplified form, posits that other events influence the function of the resultant protein.
Again, epigenetics seems to offer explanations for pesky genetic inheritance questions. Epigenetic processes, like DNA methylation, are modalities through which traits that are not coded in the DNA sequence itself can be inherited.
Reference: Eloy P, Dehainault C, Sefta M, Aerts I, Doz F, Cassoux N, Lumbroso le Rouic L, Stoppa-Lyonnet D, Radvanyi F, Millot GA, Gauthier-Villars M, & Houdayer C (29 Feb 2016). A Parent-of-Origin Effect Impacts the Phenotype in Low Penetrance Retinoblastoma Families Segregating the c.1981C>T/p.Arg661Trp Mutation of RB1. PLoS Genetics, 12(2):e1005888. eCollection 2016. PMID: 26925970, PMCID: PMC4771840, DOI: 10.1371/journal.pgen.1005888. ARTICLE | FREE FULLTEXT PDF
Few people know that Pokemon refers not only to a game, but also to a gene. An oncogene, to be precise, with a rather strange story.
An oncogene is a gene that promotes cancer (hence the name, from oncology). Conventionally, a gene name is written in lowercase italicized letters (pokemon), whereas the protein the gene makes is not italicized (POKEMON, Pokemon, or pokemon, depending on the species). Maeda et al. (2005) first established in a Petri dish that Pokemon is required for the growth of malignant tumors. Then, through a series of classic molecular biology experiments, the scientists found out exactly how Pokemon accomplishes this (by suppressing the expression of anti-cancer genes). Next, they engineered mice that overexpress pokemon and saw that the mice with a lot of Pokemon “developed aggressive tumours” (p. 282). Then the authors checked how this gene behaves in human cancers and found out that “Pokemon is expressed at very high levels in a subset of human lymphomas” (p. 284).
And here is how the gene got its name, according to Pier Paolo Pandolfi, the leader of the research group. Bear with me because it’s complicated. [*Takes deep breath*]: PO in POK stands for POZ domain (poxvirus and zinc finger) and K in POK stands for Krüppel (zinc finger transcription factor) whereas EMON stands for erythroid myeloid ontogenic factor. POK-EMON. Simple, eh? Phew…
Truth be told, Pandolfi first named the gene pokemon at a conference in 2001 (Simonite, 2005). The name was then used by researchers at various scientific meetings and poster presentations.
But when the Maeda et al. paper that uncovered the mechanism through which the gene promotes cancer was published in Nature in 2005, a lot of people, scientists and journalists alike, in an attempt at humour, flooded the internet with eye-catching titles along the lines of “Pokemon causes cancer”, “Pokemon kills you”, and the like. I mean, even the researchers themselves state in the abstract of the paper: “Pokemon is aberrantly overexpressed in human cancers”. In response, The Pokémon Company threatened to sue for trademark infringement because they didn’t want the game to be associated with cancer, like the gene is, even if the researchers said the name is an acronym (maybe they meant backronym?). In the end, the researchers changed the name of the pokemon gene to the far less enticing zbtb7.
As the question mark in the title of the post suggests, the pokemon gene may not be entirely dead yet, because there are stubborn scientists who still use the name pokemon and not zbtb7. I hope they have the cash to take on Nintendo if it decides to sue after all.
Too bad the zbtb7 (a.k.a. pokemon) gene was not a beneficial gene… Because another group of researchers named their newfound gene pikachurin in 2008 and, so far, Nintendo has not made any waves… That is, probably, because Pikachurin is a protein in the eye retina that is required for proper vision, speeding up the electrical signals. Zip zip zip Pikachurin goes…
Maeda T, Hobbs RM, Merghoub T, Guernah I, Zelent A, Cordon-Cardo C, Teruya-Feldstein J, & Pandolfi PP (20 Jan 2005). Role of the proto-oncogene Pokemon in cellular transformation and ARF repression. Nature, 433(7023):278-85. PMID: 15662416, DOI: 10.1038/nature03203. ARTICLE | FULLTEXT PDF at Univ. Barcelona
Simonite T (15 Dec 2005). Pokémon blocks gene name. Nature, 438(7070):897. PMID: 16355177, DOI: 10.1038/438897a. ARTICLE
Nothing short of an autism cure is promised by this hot new research paper.
Among the many thousands of proteins that a neuron needs to make in order to function properly there is one called SHANK3, made from the gene shank3. (Note the customary writing: by convention, a gene’s name is written in lowercase italics, whereas the name of the protein resulting from that gene’s expression is written in capitals.)
This protein is important for the correct assembly of synapses, and previous work has shown that if you delete its gene in mice, they show autistic-like behavior. Similarly, some people with autism, though by no means all, have a deletion on chromosome 22, where the protein’s gene is located.
The straightforward approach would be to restore the protein production in the adult autistic mouse and see what happens. Well, one problem with that is keeping the concentration of the protein at the optimum level, because if the mouse makes too much of it, then it develops ADHD-like and bipolar-like behaviors.
So the researchers developed a really neat genetic model in which they managed to turn the shank3 gene on and off at will by giving the mouse a drug called tamoxifen. (Don’t take this drug for autism! Besides the fact that it is not going to work, because you’re not a genetically engineered mouse with a Cre-dependent genetic switch on your shank3, it is also very toxic and used only in some forms of cancer, when it is believed that the benefits outweigh the horrible side effects.)
In young adult mice, turning on the gene resulted in the normalization of synapses in the striatum, a brain region heavily involved in autistic behaviors. The synapses were comparable to normal synapses in some respects (from the looks, i.e. postsynaptic density scaffolding, to the works, i.e. electrophysiological properties) and even exceeded them in others (more dendritic spines than normal, meaning more synapses, presumably). This molecular repair was mirrored by some behavioral rescue: although these mice still had more anxiety and more coordination problems than the control mice, their social aversion and repetitive behaviors disappeared. And the really, really cool part of all this is that this reversal of autistic behaviors was done in ADULT mice.
Now, when the researchers turned the gene on in 20-day-old mice (which is, roughly, the equivalent of entering the toddler stage in humans), all four behaviors were rescued: social aversion, repetitive behaviors, coordination, and anxiety. Which tells us two things: first, the younger you intervene, the more improvement you get and, second and equally important, while in adults some circuits seem to be irreversibly developed in a certain way, other neural pathways are still plastic enough to be amenable to change.
Awesome, awesome, awesome. Even if only a very small portion of people with autism have this genetic problem (about 1%), even if autism spectrum disorders encompass such a variety of behavioral abnormalities, this research may spark hope for a whole range of targeted gene therapies.
Reference: Mei Y, Monteiro P, Zhou Y, Kim JA, Gao X, Fu Z, Feng G. (Epub 17 Feb 2016). Adult restoration of Shank3 expression rescues selective autistic-like phenotypes. Nature. doi: 10.1038/nature16971. Article | MIT press release
Opiates like morphine and heroin can be made at home by anybody with a home beer-brewing kit and the right strain of yeast. In 2015, two published papers and a Ph.D. dissertation described the relatively easy way to convince yeast to make morphine from sugar (the links are provided in the Reference paper). That is the bad news.
The good news is that scientists have been policing themselves (well, most of them, anyway) long before regulations were put in place to deal with technological advancements, by, for example, limiting access to the laboratory, keeping things under lock and key, publishing incomplete data, and generally being very careful with what they’re doing.
Complementing this behavior, an article published by Oye et al. (2015) outlines other measures that can be put in place so that this new piece of knowledge doesn’t increase the accessibility of opiates, thereby increasing the number of addicts, which is estimated at more than 16 million people worldwide. For example, researchers can make the morphine-producing yeast dependent on unusual nutrients, or engineer the existing strain to produce less-marketable varieties of opiates, or prohibit access to made-to-order DNA sequences for this type of yeast, and so on.
You may very well ask, “Why did the scientists make this kind of yeast anyway?”. Because some medicines are either very expensive or laborious for the pharmaceutical companies to produce, researchers have sought methods to make these drugs more easily and cheaply by engineering bacteria, fungi, or plants to produce them for us. Insulin is a good example of an expensive and hard-to-come-by drug for which we managed to engineer yeast strains that produce it for us. And opiates are still the best analgesics out there.
Reference: Oye KA, Lawson JC, & Bubela T (21 May 2015). Drugs: Regulate ‘home-brew’ opiates. Nature, 521(7552):281-3. doi: 10.1038/521281a. Article | FREE Fulltext PDF
Over 250 years ago today, on 31 December 1759, Arthur Guinness started brewing one of the most loved adult drinks today, the Guinness beer.
As with all food and drink products, beer can also suffer spoilage due to various bacteria. The genomes of two of these culprits – Megasphaera cerevisiae PAT 1T and Lactobacillus brevis BSO 464 – were sequenced in 2015 by two different groups.
Funny thing, though: the papers announcing the completion of the genome sequencing (see References below) do not talk about the significance of their discovery. The usual template for a biology paper (or, as a matter of fact, any science paper) is:
Introduction: x is important because y,
Methods and Results: here is what we did to understand x,
Conclusion: now we can better tackle y.
Not these papers, which basically say, in less than a page: “This bacterium spoils beer; here is its genome. You’re welcome!”
Well played, geneticists, well played… And we are, indeed, grateful. Oh, yes, we are…
1. Kutumbaka KK, Pasmowitz J, Mategko J, Reyes D, Friedrich A, Han S, Martens-Habbena W, Neal-McKinney J, Janagama HK, & Nadala C, Samadpour M (10 Sep 2015). Draft Genome Sequence of the Beer Spoilage Bacterium Megasphaera cerevisiae Strain PAT 1T. Genome Announcements, 3(5). pii: e01045-15. doi: 10.1128/genomeA.01045-15. Article | FREE Fulltext PDF | FREE GENOME
2. Bergsveinson J, Pittet V, Ewen E, Baecker N, & Ziola B (3 Dec 2015). Genome Sequence of Rapid Beer-Spoiling Isolate Lactobacillus brevis BSO 464. Genome Announcements, 3(6). pii: e01411-15. doi: 10.1128/genomeA.01411-15. Article | FREE Fulltext PDF | FREE GENOME
Although they are very rare, werewolves do exist. And now the qualifier: werewolves as in people with excessive hair growth all over the body, not the more familiar kind that changes into a wolf every time there is a full moon. The condition is called hypertrichosis, and its various forms have been associated with distinct genetic abnormalities.
In a previous report, DeStefano et al. (2013) identified the genetic locus of X-linked congenital generalized hypertrichosis (CGH): a 19-Mb region on Xq24-27 that spans about 82 genes, resulting mainly from insertions from chromosomes 4 and 5. Now, they wanted to see what mechanism is responsible for the disease. First, they looked at the hair follicles of a man afflicted with CGH, who has hair almost all over his body, and noticed some structural abnormalities. Then, they analyzed the expression of several genes from the affected region of the chromosome in this man and others with CGH, and observed that only the levels of Fibroblast Growth Factor 13 (FGF13), a protein found in hair follicles, were much lower in CGH. Then they did some more experiments to establish the crucial role of FGF13 in regulating follicle growth.
An interesting find of the study is that, at least in the case of hypertrichosis, it is not the content of the genomic sequences added to chromosome X that matters, but their mere presence, affecting a gene located 1.2 Mb away from the insertion.
Reference: DeStefano GM, Fantauzzo KA, Petukhova L, Kurban M, Tadin-Strapps M, Levy B, Warburton D, Cirulli ET, Han Y, Sun X, Shen Y, Shirazi M, Jobanputra V, Cepeda-Valdes R, Cesar Salas-Alanis J, & Christiano AM (7 May 2013, Epub 19 Apr 2013). Position effect on FGF13 associated with X-linked congenital generalized hypertrichosis. Proceedings of the National Academy of Sciences of the U.S.A., 110(19):7790-5. doi: 10.1073/pnas.1216412110. Article | FREE FULLTEXT PDF
A telomere is a genetic sequence (TTAGGG in vertebrates) that is repeated at the end of the chromosomes many thousands of times and serves as a protective cap that keeps the chromosome stable and protected from degradation. Every time a cell divides, the telomere shortens. This shortening has been linked to aging or, in other words, the shorter the telomere, the shorter the lifespan. But in some cells, like germ cells, stem cells, or malignant cells, there is an enzyme that adds the telomere sequence back onto the chromosome after the cell has divided.
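To make the “repeated many thousands of times” concrete, here is a toy sketch (with a made-up sequence, not real data) that counts how many TTAGGG repeats cap the end of a chromosome-like string:

```python
def count_telomere_repeats(chromosome_end, repeat="TTAGGG"):
    """Count consecutive telomeric repeats at the end of a sequence.
    Toy example; real telomeres run thousands of repeats long."""
    count = 0
    while chromosome_end.endswith(repeat):
        chromosome_end = chromosome_end[:-len(repeat)]
        count += 1
    return count

# A toy "chromosome" capped with three telomeric repeats:
print(count_telomere_repeats("GATTACA" + "TTAGGG" * 3))  # → 3
```

Each cell division would shave some repeats off the cap; telomerase is the enzyme that builds them back.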
That enzyme, telomerase, was discovered in 1984 by Carol W. Greider and Elizabeth Blackburn in Tetrahymena, a protozoan (i.e. a unicellular eukaryotic organism) commonly found in puddles and ponds. I wanted to give a synopsis of their experiments, but who better to explain the work than the authors themselves? Here is a video of Dr. Blackburn herself explaining, step by step and in 20 minutes, the rationale and the findings of the experiments for which she and Carol W. Greider received the Nobel Prize in Physiology or Medicine in 2009. If 20 minutes of genetics just whets your appetite, perhaps you will want to watch the extended 3-hour lecture (Part 1, Part 2, Part 3).
Reference: Greider, C.W. & Blackburn, E.H. (December 1985). Identification of a specific telomere terminal transferase activity in Tetrahymena extracts. Cell, 43(2), Part 1:405-413. DOI: 10.1016/0092-8674(85)90170-9. Article | FREE FULLTEXT PDF
Pain insensitivity has been introduced to the larger public via TV shows of the medical drama genre (House, ER, Grey’s Anatomy, and the like). It seems fascinating to explore the consequences of a life without pain. But these shows do not feature, quite understandably, the gruesome aspects of this rare and incredibly life-threatening disorder. For example, did you know that sometimes the baby teeth of these people are extracted before they reach 1 year of age, so they stop biting their fingers and tongues off? Or that a good portion of the people born with pain insensitivity die before reaching adulthood?
Nahorski et al. (2015) discovered a new disorder that includes pain insensitivity, along with touch insensitivity, cognitive delay, and other severe disabilities. They investigated a family in which the husband and wife are also double first cousins, and the authors had access to the children’s DNA. Extensive analysis revealed a mutation in the gene CLTCL1, which encodes the protein CHC22. This protein is required for the normal development of the cells that feel pain and touch, among other things.
Other genetic studies of various syndromes of painlessness have produced data that formed the basis for new analgesics. The hope with this study, therefore, is that CHC22 may become a target for future painkiller discovery.
But, on a side note, what made me feature this paper, more than the potential for new analgesics, is the last paragraph of the paper: “rodents have lost CLTCL1 and thus must have alternative pathway(s) to compensate for this. Thus, some pain research results generated in these animals may not be applicable to man” (p. 2159).
The overwhelming majority of pain research and painkiller search is done in rodents. So… how much of what we know from rodents and translate to humans doesn’t really apply? Worse yet, how many false negatives have we discarded already? What if the panaceum universalis has already been tried in mice and nobody knows what it is because it didn’t work? It’s not like there is a database of negative results published somewhere, where we can all ferret around and, in the light of these new discoveries, give those loser chemicals another try… Food for thought and yet ANOTHER reason why all research should be published, not just the positive results.
Reference: Nahorski MS, Al-Gazali L, Hertecant J, Owen DJ, Borner GH, Chen YC, Benn CL, Carvalho OP, Shaikh SS, Phelan A, Robinson MS, Royle SJ, & Woods CG. (August 2015, Epub 11 Jun 2015). A novel disorder reveals clathrin heavy chain-22 is essential for human pain and touch development. Brain, 138(Pt 8):2147-2160. doi: 10.1093/brain/awv149. Article | FREE FULLTEXT PDF
Having a brain disease means having different scores on emotion, cognition, and behavior inventories than the population mean. Also different from the population mean is the ability of an artist to create evocative things. Whether it is a piece of music or a painting (or, in my case, a simple straight line), and whether we like it or not, most of us agree that we couldn’t have done it. Also, artists show a decrease in practical reasoning, just like people with schizophrenia.
Power et al. (2015) sought to find out if there is a link between being creative and having schizophrenia or bipolar disorder. Luckily for them, the northern European countries keep detailed medical and genetic databases of their populations: they had access to 5 databases from Iceland, Sweden, and the Netherlands, featuring tens to hundreds of thousands of people.
The authors analyzed hundreds of thousands of individual genetic differences (i.e. SNPs, single nucleotide polymorphisms) that had been previously linked with schizophrenia or bipolar disorder. As a side note, some of these data were obtained by inviting citizens to voluntarily fill out a detailed medical questionnaire and donate blood for DNA analysis. A staggering number of people agreed. I wonder how many would have done so in the U.S.A.…
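The core idea behind this kind of analysis (made explicit in the paper’s title, “polygenic risk scores”) is just a weighted sum: each risk allele a person carries contributes its estimated effect size. A minimal sketch with made-up SNP names and weights (the real scores aggregate thousands of SNPs, weighted by genome-wide association results):

```python
# Hypothetical effect sizes (per risk allele) for a handful of SNPs.
# Illustrative only; real scores use thousands of SNPs with weights
# estimated from genome-wide association studies.
effect_sizes = {"rs0001": 0.10, "rs0002": -0.05, "rs0003": 0.20}

def polygenic_score(genotype):
    """genotype maps SNP id -> number of risk alleles carried (0, 1, or 2)."""
    return sum(effect_sizes[snp] * alleles for snp, alleles in genotype.items())

score = polygenic_score({"rs0001": 2, "rs0002": 1, "rs0003": 0})
print(round(score, 2))  # → 0.15
```

The study then asked whether people with higher scores of this kind were more likely to hold creative professions.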
Anyway, the authors defined creative individuals (artists) as “those having (or ever having had) positions in the fields of dance, film, music, theater, visual arts or writing” (online supplemental methods), including those teaching these subjects. And they found out that the same genetic makeup that increases the risk of developing schizophrenia or bipolar disorder also underlies creativity. This link was not explained by education, age, sex, or shared environment.
The study also knocked down an evolutionary explanation for the persistence of schizophrenia and bipolar disorder in the gene pool. The hypothesis posits that we still have these devastating brain disorders because they come with the side effect of creativity, which offsets their negative fitness; but that does not hold, as the artists in this study had fewer children than the average population. The authors did not offer an alternative speculation.
Reference: Power, R. A., Steinberg, S., Bjornsdottir, G., Rietveld, C. A., Abdellaoui, A., Nivard, M. M., Johannesson, M., Galesloot, T.E., Hottenga, J. J., Willemsen, G., Cesarini, D., Benjamin, D. J., Magnusson, P. K., Ullén, F., Tiemeier, H., Hofman, A., van Rooij, F. J., Walters, G. B., Sigurdsson, E., Thorgeirsson, T. E., Ingason, A., Helgason, A., Kong, A., Kiemeney, L. A., Koellinger, P., Boomsma, D. I., Gudbjartsson, D., Stefansson, H., & Stefansson K. (July 2015, Epub 8 June 2015). Polygenic risk scores for schizophrenia and bipolar disorder predict creativity. Nature Neuroscience, 8(7):953-5. doi: 10.1038/nn.4040. Article+Nature comment