Have Yourself a Merry Literature Christmas

By Gesa Junge, PhD

 

Now that Halloween and Thanksgiving are over, it seems that the world is moving full-speed towards Christmas. And while TV has Christmas adverts and Christmas specials, and the frequency of Christmas songs on the radio has been steadily increasing, what does Christmas look like in the world of scientific publishing? Interestingly, a PubMed search for “Christmas” returns over a thousand results with the word in the title.

Some of these papers focus on holiday-related injuries, such as burns or falls. For example, one study analysed burn injuries caused by fires associated with Christmas decorations, and while these are fairly rare, the majority of them actually occur after the holiday, presumably because trees and wreaths dry out and become more flammable. Researchers in Calgary observed that several trauma patients had been injured while installing Christmas lights, and this, along with statistics showing an increased risk of falls during winter months, prompted them to study the correlation. Most people in this study fell off ladders or roofs, and most patients were male and middle-aged. The study also found that several patients sustained serious injuries: 20% required admission to the ICU, and the median hospital stay was just over two weeks (15.6 days, range 2-165).

Another study looked into blood alcohol content after consumption of commercially available (notably not homemade) Christmas pudding for lunch, measuring the ethanol content of the pudding and then calculating what the blood alcohol content would be immediately after pudding consumption and 30 minutes later. The maximum blood alcohol content did not exceed 0.05 g/dL, and the authors conclude that “[h]ospital staff should feel confident that the enthusiastic consumption of Christmas pudding at work in the festive season is unlikely to affect their work performance […]”, as long as they ate less than 1kg of it.
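The back-of-the-envelope arithmetic behind such an estimate can be sketched with the standard Widmark formula. The figures below are purely illustrative assumptions, not the study's actual measurements:

```python
def widmark_bac(ethanol_g, body_weight_kg, r=0.68):
    """Estimate peak blood alcohol content (g/dL) via the Widmark formula.

    ethanol_g: grams of ethanol consumed
    body_weight_kg: the diner's body weight in kilograms
    r: Widmark distribution ratio (roughly 0.68 for men, 0.55 for women)
    """
    # ethanol (g) / (body water-adjusted mass in g) gives g/mL; x100 -> g/dL
    return ethanol_g / (body_weight_kg * 1000 * r) * 100

# Hypothetical numbers: suppose a serving of pudding carries ~2 g of ethanol
# and the diner weighs 70 kg.
bac = widmark_bac(ethanol_g=2.0, body_weight_kg=70)
print(f"{bac:.4f} g/dL")  # comfortably below the 0.05 g/dL mark
```

Even a generous helping would have to pack an implausible amount of residual ethanol to push the estimate anywhere near 0.05 g/dL, which is consistent with the authors' reassuring conclusion.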

There is also an interesting paper which addresses the question of how to win the Christmas cracker pull. This is a UK-based tradition, in which two people pull on opposite ends of a Christmas cracker until it splits into two uneven pieces, and the person who ends up holding the larger piece wins the usually completely useless plastic toy inside the cracker. The study distinguishes among three techniques: the QinetiQ strategy (two-handed pull, slightly downwards), the passive-aggressive strategy (two-handed grip, but letting the other person pull) and the control strategy (both sides pull approximately parallel to the floor). It turns out the passive-aggressive strategy is the one most likely to lead to a win (92% probability, 95% CI 0.76-1), at least with regard to Christmas crackers.

The results of the Christmas cracker and Christmas pudding studies are published in the same issue of the Medical Journal of Australia alongside a few other brilliant Christmas-related papers, one of which offers a diagnosis of “patient R”, suffering from a shiny lesion on his nose that severely affected his quality of life. The paper suggests lupus pernio may be the unifying diagnosis.

Finally, a group of researchers in Denmark set out to show that there is indeed such a thing as “the Christmas spirit”. This is not a well-defined condition, but rather a general feeling of joy brought about by decorations, food and smells associated with Christmas. The researchers showed people images with a Christmas theme (e.g. a street in the dark decorated with lights, or a plate of Christmas cookies decorated with a Santa figure and Christmas baubles) and similar images with nothing Christmas-associated (e.g. a regular street, or a plate of cookies on a kitchen counter with no decoration) while monitoring brain activity in a functional MRI scanner. They studied ten people who had celebrated Christmas from a young age (the Christmas group) and ten people who did not celebrate Christmas (the non-Christmas group). Both groups showed increased activity in the primary visual cortex when shown Christmas-themed images compared to everyday images, but the Christmas group also showed greater activity in several brain regions that was absent in the non-Christmas group, including the primary motor and premotor cortex, the right inferior/superior parietal lobule, and the bilateral primary somatosensory cortex. This suggests that people who have a strong association with Christmas traditions and celebrations respond differently to Christmas-themed images than people who have no association with Christmas. However, how exactly those brain areas bring about the mysterious Christmas spirit is not clear.

So in conclusion, please be safe when installing holiday lights and keep an eye on the candles, but do feel free to eat Christmas pudding while passive-aggressively pulling Christmas crackers, and if you still can’t seem to find the Christmas spirit, go get a functional MRI scan. Merry Christmas and Happy Holidays!

STAP cells: too good to be true?

 

By Sally Burn

A Japanese version of this article is available here.

Two weeks ago we reported on the groundbreaking creation of STAP (stimulus-triggered acquisition of pluripotency) cells by scientists at the RIKEN Center for Developmental Biology in Japan. STAP cells are generated by subjecting specialized cells to physical stress, causing them to reprogram back to a pluripotent state from which they can differentiate into both embryonic and extra-embryonic tissues. The pair of Nature papers [1,2] reporting these findings were received with both excitement and surprise by the scientific community. Excitement because the implications were so fantastic (patient-specific regenerative medicine) and surprise because the system was so amazingly simple. The main stressor used in the experiments was acid; the mature starter cells were subjected to acidic conditions and this was sufficient to send them on the road to reprogramming. At the more positive end of the response spectrum, commentators simply asked: “can it really be that simple?” Responses at the other end were pricklier, with a number of eminent scientists dousing the findings with skepticism.

A number of events have now occurred since publication: a formal investigation by RIKEN into the authenticity of the papers, crowdsourced attempts to repeat the experiments, and successful requests for open access to the papers. The bare bones of the case are these: RIKEN is reacting to concerns raised on blogs by other researchers about the papers plus a 2011 paper by the same author, Dr. Haruko Obokata. These concerns are twofold: first, that images have been reused in different figures to show different things; second, that the results cannot be reproduced by other labs. The first concern is what RIKEN is actually investigating and is potentially explicable (human error). The second issue is not under formal investigation, but it is this aspect that has made the news, with reports generally stating that the experiments are flawed and that this is why RIKEN is investigating.

The positive aspects of these events are actually quite exciting. Firstly, freely accessible blog-based discussions between scientists attempting to replicate the studies and sharing their data have sprung up. Secondly, Nature has acquiesced to requests to make the original papers open access. The negative aspect of the fallout, however, leaves a particularly bad taste in my mouth. Perhaps it is just the coverage I’ve seen or the conversations I’ve had, but there is a distinct air of a witch-hunt about the whole affair. The deterioration in opinion from excitement to “I told you so” sanctimony has been rapid. A number of the comments on the STAP reproducibility blogs are pure nit-picking. One of the blogs even holds weekly public opinion polls, which serve no scientific purpose, on whether we think the data is real or not (64% of respondents were on the “not” side of the fence at the time of writing).

I will hold my hands up at this point and say that I am biased in hoping that Dr. Obokata will be exonerated. I was enthralled when I read about her: 30 years old and running her own lab at RIKEN?! I read interviews in which she described the pain of working on her research to the exclusion of all else and being met with constant criticism and skepticism. No one seemed to believe her system worked, so she repeatedly went back and tried again, adding more controls, building extra layers of evidence. It apparently took five years from the first time Nature saw an earlier incarnation of the paper to them accepting it. But finally a team of peer reviewers accepted the manuscripts, and at last, for a short while, her work was validated. Overnight Obokata became a global science sensation; STAP cells were everywhere. She became my new role model: an exceptionally successful young independent female scientist. The important part of that accolade is “independent female”. Achieving scientific independence is exceptionally difficult in 2014. The pool of PhD students and postdoctoral researchers in the biological sciences is drastically larger than it was 20-30 years ago. The number of permanent Principal Investigator (PI) positions, however, is not. Moreover, males greatly outnumber females at the PI level (for a great discussion of this see this). There are therefore fewer visible role models for girls getting into STEM. So I am absolutely, unashamedly biased in hoping that Obokata will be exonerated.

If Obokata’s studies are, however, found to be flawed, it will result in the loss of a promising role model for women in STEM. It may also damage the public’s perception of the scientific process. The investigation has been reported by many major media outlets. The knee-jerk response by the public may be to view research as unregulated and to conclude that this kind of affair is commonplace – after all, Nature is one of the most esteemed and well known journals. Similar responses by those involved in science funding would be an even worse consequence, which is why we need to take measures to reduce the incidence of not just scientific fraud but also the potential for perceived fraud.

Nature is a top-ranked journal and submitted manuscripts undergo strict peer review. Obokata had faced years of skepticism and accordingly had added more and more experiments to strengthen her case. So what could be done differently? One answer lies with the authors themselves, as image duplication can occur as a result of human error. It really shouldn’t, particularly not when you’re submitting your career-defining, dogma-shaking paper to Nature, but it does – even more so when the stakes are that high. Despite what the public might think, science isn’t performed in a vacuum by moon-suited automatons; it’s usually performed by tired, overworked, underpaid graduate students and postdocs. They are humans, and they are likely humans who are running on a diet of black coffee and seminar pizza. They have homes and lives, but they’ve probably not seen those things in the last 16 hours. They mess up. I’ve done it. You, dear reader, have done it. And messing up is even more likely when under pressure (conference poster due in an hour; carrying out novel experiments at midnight to avoid being scooped). What can be done to prevent such mistakes? No single thing will fix the problem. Indeed, it may even be the personality of many scientists which increases the risk of such errors: working too hard, too obsessively on a problem, to the extent that they do not function optimally. However, improving working conditions would certainly help, as would less emphasis on “publish or perish”. The pressure to publish also often underlies actual falsification of data. If you don’t get the data you don’t get the paper, and you can kiss goodbye to tenure.

If the image duplications were genuinely due to human error, this brings us to the subject of transparency. Should RIKEN have announced their investigation? By doing so the press has had a field day informing the public that the investigation happened because the science is bogus. Even if Obokata is cleared of any wrongdoing she will always carry the stigma of being accused of fraudulence. Despite this, my answer is: yes. Investigations into scientific malpractice should absolutely be made public. Open access publishing has made great headway in the last decade with bringing science to a wider audience, and we need to do all we can to promote transparency in the scientific process.

In fact, this investigation could end up being a key event in the drive to make science more openly accessible. A very interesting thing happened in the days after the STAP articles were published. Blog forums started in which scientists from around the world shared their experiences of trying to repeat the purportedly simple acid-reprogramming experiments (see particularly PubPeer [3,4] and Paul Knoepfler’s blog). Such openness is rare with unpublished data, but replicating published data opened the door to free communication. At the time of writing the PubPeer page for the first of the two STAP papers had been viewed over 27,000 times. Twitter is also abuzz with commentary on the STAP situation and Reddit joined in when an alleged technician from the lab of Charles Vacanti (the papers’ final author) started discussing the protocol with interested parties.

Thus far no one has managed to fully reproduce Obokata’s results (ten attempts have been reported). Nature has not launched an official inquiry so far but, intriguingly, seems to be conducting its own crowdsourced investigation: “None of ten prominent stem-cell scientists who responded to a questionnaire from Nature‘s news team has had success”. This does not necessarily mean that the original data was fraudulent. Scientific data should, by definition, be reproducible, but all too often it is not. We all know of someone who had the most amazing cell system setup but it only works with one batch of media, or the animal experiments affected by external factors (water, food, stress). Again, science is not the sterile logical beast the public perceive it to be. If you perform an experiment twenty times and it works, then you get a new batch of media and it suddenly fails, do you scrap all your data? The answer idealistically is yes, but if you are under pressure to publish, all too often it will be no. You publish. Or perish…

It may well be that there is something about Obokata’s lab’s setup that isn’t being accurately reproduced elsewhere (the presence of her infamous “good luck” pet turtle perhaps?). For starters, most of the attempted repeats did not use the same starter cells as in the STAP papers. Furthermore, while her experiments were universally reported as “simple”, they are also something she has been expertly honing for many years. Indeed, a co-author, Teruhiko Wakayama, found that he could not create STAP cells outside of Obokata’s lab, but he is adamant that he generated them during his time there. In an ideal world, external scientists would go to Obokata’s lab, be trained by her and use her lab resources to reproduce the data.

A final positive outcome of this case is that Professor Paul Knoepfler, curator of the STAP reproducibility blog, took to Twitter to ask Nature to make the STAP papers open access (Nature is usually accessed via an institution subscription or by paying a large fee per article). Nature agreed and now anyone in the world can read the papers for themselves – a victory for open access publishing. A further request by would-be STAP creators is for the authors to release detailed methods; this currently remains unfulfilled. Indeed, Obokata herself has not issued any comment. But then again she is only being investigated for potentially accidental image duplications – a fact being overlooked by many. I sincerely hope that she will be cleared of any wrongdoing, not just because of the otherwise damaging effects on her career and the public perception of science, but also because her findings are just so amazingly groundbreaking. In the words of the X-Files: I want to believe.


"Science" Opens Up

 

By Celine Cammarata

When biologist Randy Schekman denounced high impact journals just after winning the Nobel Prize last year, his cry joined a rising tide of voices in favor of open access.  Almost exactly a year ago, Nature announced its partnership with the open access publisher Frontiers, and had even before that offered authors the option of publishing their papers under a creative commons license in exchange for a processing fee. Considering this climate, I was hardly surprised when the journal Science announced last week, via an editorial, that it will be launching a new, open-access digital journal, Science Advances.

The open access debate is often framed in terms of (not surprisingly) accessibility of research, specifically whether papers are free to read. Science, however, has approached the issue predominantly in terms of publishing space.  “The research enterprise has grown dramatically in the past decades in the number of high-quality practitioners and results,” editor in chief Marcia McNutt and AAAS CEO and Science Executive Publisher Alan Leshner state, “but the capacity for Science to accommodate those works in our journal has not kept pace.”  Science Advances, the authors explain, was born out of the desire to stop turning away potentially important papers, as the journal claims it is forced to do by space limitations.  Increased publishing space will also allow the journal to further diversify the research areas represented.

Science downplayed matters of subscription fees and availability, mentioning only briefly that in order to better serve scientists with limited resources and better disseminate authors’ work, the journal will be available to anyone with an internet connection.  As in many open access journals, the costs normally covered by subscriptions will instead be covered by authors paying a processing fee.  Science is aiming to keep those costs down, though: Science Advances’ administrative side will be run out of the existing Science offices, and the journal will not feature any commentary or editorials – only research articles and reviews.

The journal also aims for rapid publication; rather than being released weekly, as in its parent journal, papers published in Science Advances will appear as soon as they are ready.  Furthermore, papers rejected from Science’s traditional journals on the basis of space can be shuttled right along for review at Science Advances, to “speed publication, alleviate the burden on the reviewer community, and reduce the risk to authors of having to resubmit elsewhere.”  Of course, this system arguably also protects Science from inadvertently turning away the paper of the century only to have it published by a competing journal.

Despite the gathering steam of open access proponents, high impact journals largely continue to hold center stage; in this field, where prestige is often the going currency, it takes a certain degree of faith and commitment for investigators to choose less established open access platforms over well known traditional journals when it comes time to publish.  Will open access projects from Nature and Science help bridge this divide?  Science apparently expects the reputation of its traditional journals to extend to Science Advances; in relating the motivation behind the new journal, McNutt and Leshner mention that “although other journals provide publishing venues for more papers, many authors still desire to be published in Science.”  But the announcement also comes a year before Science Advances is set to launch, and before the journal’s editors have even been recruited, making the venture seem rather rushed.  Is Science joining the open access scene truly reflective of, and contributing to, a paradigm shift, or is the journal only seeking to cover its… bases?  Perhaps only time will tell.

A New Approach to Publishing Science Research

 

By Thalyana Smith-Vikos

While perusing a recent article published in the online journal eLife, I noticed a few aspects of the article that I definitely wasn’t expecting when scrolling to the bottom of the page. The decision letter, sent to the authors when their article was accepted for publication, is included, along with the names of the peer reviewers and their comments. Additionally, the authors’ response to the reviewers’ comments is also included.

eLife explains that the authors and reviewers have given their approval to include this information, and that only the major concerns identified in the reviewers’ comments are shown. Still, I was surprised to see that this information was accessible! I then discovered that other journals, including EMBO Journal from Nature Publishing Group, also follow this format of transparency, in which all of the letters between authors and editors, including reviewers’ comments, are listed on the journal’s website for each accepted article. I’m not sure how often people carefully look through these letters or are simply content with reading the paper, but the principle still stands that this information is now made public.

I immediately began to ponder why eLife and other journals have decided to include this information with published articles. As someone who has both received and written decision letters and peer review comments, I can attest that this is a very sensitive process that most scientists like to keep private. Every scientist knows that even if a paper is accepted for publication, there may still be some aspects that need to be ironed out, but for a non-scientist reading the decision letter online, this may generate confusion: why was the paper accepted if there were still issues with it?

However, displaying this information can provide more credibility to the peer review process in general: scientists and non-scientists alike can see that this paper was rigorously reviewed by experts in the field, and a lot of time and effort went into improving the manuscript based on these comments. With the rise of “predatory” journals that lack a real peer review process, we can clearly observe that eLife and other journals maintain the high standards of peer review. Sometimes it is easy to guess how long the peer review process took for any paper in any journal, because each journal will report the dates the article was received and accepted after review; however, eLife has taken it a step further and displayed the entire review process for us to assess. This extra information allows for the reader to develop a much more informed opinion of the article; readers now have the entire “backstory” of an article at their fingertips, and witnessing how the article has been revised prior to publication provides a deeper understanding of and greater appreciation for the final product.

Additionally, I was surprised that the reviewers’ names and comments were revealed. Anonymity of reviewers helps to keep the publication process more objective from the authors’ point of view. Sometimes authors may try to guess who the reviewers are, as the authors themselves can suggest or exclude potential reviewers, but that final anonymity also acknowledges that the reviewers’ comments must be respected without any personal bias. It also allows the reviewers to feel more comfortable writing their analysis of the paper. On the other hand, as this new method of peer review gains more visibility, I think scientists could become accepting of this loss of anonymity. If the status quo is no longer to give comments anonymously, then authors, reviewers and editors can be equally prepared to handle the scenario. Revealing reviewers’ names also provides assurance to the authors and to readers that the journal has indeed found experts who are appropriate reviewers for the manuscript.

I should also mention that F1000Research, the new open-access journal from Faculty of 1000, has developed a somewhat related publication process. F1000Research is truly the first “open science publisher”: after an article is submitted, it is almost immediately published online (with all of the raw data included) after a quick in-house check for any major concerns. Then, post-publication peer review ensues, in which F1000 users can post comments on the paper using a forum-type discussion model. This certainly instills the principle that science is a community, and that any scientist’s work should be assessed by this community in order to draw the best conclusions from the data. Additionally, F1000Research nominates experts in the field to be peer reviewers (whose names and affiliations are disclosed), as well.

Articles that are then revised based on peer review comments will be announced on the F1000Research site, and only these revised papers will be indexed on PubMed and other databases. In addition to “revised”, articles can also receive the tag of “update” if, for example, there is an update to a software release or other type of technology the paper describes. Articles can also be labeled as “follow up”, which are cited separately and provide new information to a review or opinion piece.  All in all, the F1000Research model allows work to be published much more quickly, and it increases the community of reviewers. As science is fast-moving, new updates and insights to previously published work can be quickly added to keep up with the pace.

 

What do you think? Do you think it’s beneficial to have all the review information disclosed?

Follow Thalyana on Twitter @ThalyanaScience

Dry Science: The Good, The Bad, and The Possibilities

Celine Cammarata

Recent years have seen a boom in so-called “dry lab” research, centered around mining large data sets to draw new conclusions, Robert Service reports in Science.  The movement is fueled in part by the increased openness of information; while research consortia used to hold rights to many such data banks, new efforts to make them freely available have unleashed a wealth of material for “dry” scientists.  So what are some of the pros and cons of this growing branch of research?

 

Computer-based research of large, publicly available data can be a powerful source of information, leading to new insights on disease, drug treatments, plant genetics, and more.  One of the most commonly encountered methods is the Genome-Wide Association Study, or GWAS, whereby researchers look for genetic traces of disease.  Such research is strengthened by the ability to collect huge amounts of data, increasing n values without having to recruit new participants.  Another perk of dry research is the increased mobility for researchers to hop among different areas of study; with no investment in maintaining animals or lab equipment specialized to any single line of investigation, researchers can study cancer genetics one year and bowel syndromes the next with little difficulty.
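At its core, a GWAS compares how often a genetic variant appears in cases versus controls, one variant at a time. A minimal sketch of that comparison, using purely hypothetical allele counts (real GWAS pipelines use dedicated tools and far stricter significance thresholds), might look like this:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table:
        [[a, b],   e.g. cases:    (risk allele, other allele)
         [c, d]]        controls: (risk allele, other allele)
    """
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    if denom == 0:
        raise ValueError("table has an empty row or column")
    # shortcut form of the Pearson statistic for 2x2 tables
    return n * (a * d - b * c) ** 2 / denom

# Hypothetical counts over 2000 case and 2000 control chromosomes,
# with the risk allele slightly enriched in cases.
stat = chi2_2x2(1100, 900, 980, 1020)
print(stat > 3.84)  # 3.84 is the nominal 5% critical value at 1 df
```

Because a GWAS runs this kind of test at hundreds of thousands of variants simultaneously, a nominal 5% threshold would drown the results in false positives; genome-wide significance is conventionally set far more stringently (on the order of 5×10⁻⁸), which is exactly the multiple-testing problem behind the “false signals” mentioned below.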

 

But getting the large amounts of data that fuel dry research can be more complicated than it seems.  Some investigators are reluctant to make their hard-earned numbers publicly available; others lack the time and manpower to do so.  And slight variations in how initial studies are conducted can make it challenging to pool data from different sources.  Furthermore, GWAS and similar experiments are themselves deceptively complicated.  Most diseases involve complex combinations of genes turned on and off, making it hard to uncover genetic fingerprints of illness, and comparing the genomes of many subjects frequently leads to false signals.  For dry research to continue growing successfully, significant advances in programming and in mathematical techniques to analyze data will be required.  Finally, making data freely open for investigators to delve into raises concerns about subject confidentiality.

 

Finally, the increase in data availability raises intriguing questions about the future of research.  Currently, dry research requires complex programs and hefty computer power, but with computer science getting ever better, will future generations need a lab to do science?  Will anyone with a decent computer and some scientific know-how be able to contribute meaningfully to the research community?  And if so, what will this mean for the traditional university-based research world?  Only time will tell.

Talking About Talking – Important Conversations About Science Communication

Celine Cammarata

Communication is part of the very blood of science – whether it’s the busy circuit of meetings to attend or the constant pressure to publish papers, sharing and discussing work is a central aspect of research.  But communication itself is a source of ongoing conversation.  The recent special section on communication in Science highlights some of the key topics, and also gives a glimpse of some of the primary tensions, such as openness versus confidentiality and traditions versus new technology.

 

Open-access publishing has gotten a lot of attention lately (see this recent piece in Nature), and with good reason.  Many scientists feel strongly about the importance of making research freely available to the public.  But do open-access journals lower standards for publishing?  John Bohannon’s sting operation, in which he submitted a blatantly flawed fake paper to hundreds of open-access journals – many of which accepted it – raises doubts about whether claims of peer review are to be trusted.

 

Of course, this is not necessarily an issue restricted to open-access journals.  A more intrinsic concern regards broad availability of research in general.  It is tempting to say that research should be freely available to all, but what about work that could have ill effects (think back to the raging debate over research on the bird flu)?  David Malakoff describes the struggles of investigators whose work has the potential to be used in weaponry, expose important preservation sites, or that otherwise might be better off kept quiet. These issues in turn raise other sensitive questions: are there some areas that are just taboo?  How much government oversight of research is too much?  When do publishing guidelines become censorship?  It is also important to appreciate that scientists are fairly good at self-regulating their work – no investigator aims to aid terrorists or otherwise cause danger, and researchers have generally shown great responsibility around such issues.

 

Open-access journals are also a poster child for newer forms of science communication and the ways in which technology is changing things.  But how much change is there really?  While scientists are intrinsically quite innovative, the field is also steeped in culture and tradition.  Diane Harley finds that while many scientists laud newer forms of communication and a shift away from published papers as a metric of success, little change is actually occurring.  This is in large part because tenure and promotions still rest, predominantly, on candidates’ publishing history, removing incentives for, and increasing the risk of, pursuing less traditional means of communication.

 

So how do we move forward?  Science Editor-in-Chief Marcia McNutt points out that as researchers, the most logical way forward is to research our options.  Why not set up studies of different peer review techniques, for example, and actually find out experimentally what works best?  By asking ourselves, and each other, the hard questions and collecting empirical information about the most successful practices, we will begin to lay the groundwork for improving science communication.