Dry Science: The Good, The Bad, and The Possibilities

Celine Cammarata

Recent years have seen a boom in so-called “dry lab” research, centered around mining large data sets to draw new conclusions, Robert Service reports in Science.  The movement is fueled in part by the increased openness of information; while research consortia used to hold rights to many such data banks, new efforts to make them freely available have unleashed a wealth of material for “dry” scientists.  So what are some of the pros and cons of this growing branch of research?

Computer-based research on large, publicly available data sets can be a powerful source of information, leading to new insights on disease, drug treatments, plant genetics, and more.  One of the most commonly encountered methods is the Genome-Wide Association Study, or GWAS, in which researchers look for genetic traces of disease.  Such research is strengthened by the ability to draw on huge amounts of existing data, increasing sample sizes without having to recruit new participants.  Another perk of dry research is increased mobility: with no investment in maintaining animals or lab equipment specialized to any single line of investigation, researchers can study cancer genetics one year and bowel syndromes the next with little difficulty.
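
For readers curious what such an analysis actually involves, here is a minimal, purely illustrative sketch (not taken from the Science piece) of the kind of single-variant association test a GWAS repeats across the genome, using made-up allele counts and Python's SciPy library.

```python
# Illustrative only: one variant's association test, of the kind a GWAS
# repeats for every variant in the genome. All numbers are hypothetical.
from scipy.stats import chi2_contingency

# Counts of a hypothetical risk allele vs. the alternative allele,
# tallied separately in disease cases and healthy controls.
#                 risk allele  other allele
allele_counts = [[320,         680],    # cases
                 [250,         750]]    # controls

chi2, p_value, dof, expected = chi2_contingency(allele_counts)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3g}")
# A larger n (more cases and controls) tightens these estimates, which is
# why pooling openly available data is so attractive.
```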

But getting the large amounts of data that fuel dry research can be more complicated than it seems.  Some investigators are reluctant to make their hard-earned numbers publicly available; others lack the time and manpower to do so.  And slight variations in how the initial studies were conducted can make it challenging to pool data from different sources.  Furthermore, GWAS and similar analyses are themselves deceptively difficult.  Most diseases involve intricate combinations of genes turned on and off, making it hard to uncover a genetic fingerprint of illness, and scanning huge numbers of genetic variants across many subjects frequently produces false signals.  For dry research to continue growing successfully, significant advances in programming and in the mathematical techniques used to analyze data will be required.  Finally, making data freely open for investigators to delve into raises concerns about subject confidentiality.
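
To see why those false signals arise, it helps to think of the multiple-comparisons problem: test enough variants and some will look significant by chance alone. The sketch below uses simulated data; the 5×10⁻⁸ cutoff is the conventional genome-wide significance threshold, shown purely for illustration.

```python
# Illustrative only: how many "hits" appear by chance when nothing is real.
import numpy as np

rng = np.random.default_rng(0)
n_variants = 1_000_000                    # a GWAS can test on the order of a million variants
null_p = rng.uniform(size=n_variants)     # p-values for variants with NO true disease link

naive_hits = int((null_p < 0.05).sum())   # "significant" at the usual 0.05 cutoff
strict_hits = int((null_p < 5e-8).sum())  # genome-wide significance threshold

print(f"False positives at p < 0.05: {naive_hits}")             # roughly 50,000
print(f"Hits at the 5e-8 genome-wide threshold: {strict_hits}")  # almost always 0
```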

Beyond these practical concerns, the increase in data availability raises intriguing questions about the future of research.  Currently, dry research requires complex programs and hefty computing power, but with both software and hardware getting ever better, will future generations need a lab to do science at all?  Will anyone with a decent computer and some scientific know-how be able to contribute meaningfully to the research community?  And if so, what will this mean for the traditional university-based research world?  Only time will tell.

Who Was Stung – Open Access or Peer-Review?

Neeley Remmers

You may have noticed this week that the science world is abuzz with talk about Open Access journals and the dangers of publishing in them versus traditional journals. The debate about whether or not to publish in Open Access journals is not new, but it has escalated due to the sting article by John Bohannon published in Science. After reading the article, instead of questioning the credibility of Open Access journals, I was left questioning the failed peer-review process that resulted in the acceptance of the fake articles.

If you are unaware of the controversy behind the Open Access movement, here is a brief synopsis. As you may have noticed, online publishing is the hottest thing since sliced bread in the world of publication (just think of the huge sales brought in by the invention of tablets and e-readers). This has led to the creation of online scientific journals that earn a profit through authorship fees rather than relying on subscription fees like most magazines, and that publish their articles online so the general public can read them for free. Those who favor traditional journals (think Science, Nature, Cell), which require an active subscription or a per-article purchase before you can read them, claim that the Open Access movement has led to the increased publication of poor-quality science. Some go even further, saying that by publishing in Open Access journals you effectively drive your career into the dumpster, as these journals are a “dumping ground” for articles rejected by the “more prestigious” traditional journals.

Personally, I commend the Open Access movement for making research articles more readily available. I cannot count the number of times I have run into a roadblock during literature searches because my library did not have a subscription to a journal that published an article with useful information for my projects. And let’s face it: unless you are a full-time professor with a couple of R01 grants supporting your salary, it simply isn’t feasible for most of us to pay $30+ for an article that may or may not be entirely useful for your project.

Getting back to the sting, here is a brief summary of what went down for those who have not yet read the article. Bohannon composed an article containing data so inaccurate that, he claimed, anyone with a high-school-level knowledge of chemistry could recognize its lack of scientific soundness. He submitted this fabricated paper to over 300 Open Access journals, and just over half of them accepted it after asking for only trivial revisions. In an article written by Curt Rice reflecting on this sting, you will find a more in-depth explanation of the sting itself and of the Open Access movement than I have provided here, along with a look at the peer-review system, the corruption that comes with heightened pressure to publish, and the flaws in the current publication process. Rice points out that what this sting really brings to light is the corruption that has crept into publishing in recent years through overpriced author fees, seen in both Open Access and traditional journals, and the flaws in the current peer-review system that allow bad science to be published, flaws to which all journals are vulnerable. This, in turn, is facilitated in part by the increased pressure on scientists to publish and the increased workload of reviewers struggling to keep up (see Celine’s recent post for more thoughts on the current state of scientific communication).

Personally, I agree with Rice that this sting does not point a finger at Open Access specifically (even though it was written in that context), but rather exposes the flaws in the current scientific publication system and calls for changes to be made. Moral of the story: this sting reinforces the practice of critically reading articles and evaluating their scientific soundness on your own before accepting the results and conclusions.

Talking About Talking – Important Conversations About Science Communication

Celine Cammarata

Communication is the lifeblood of science – whether it’s the busy circuit of meetings to attend or the constant pressure to publish papers, sharing and discussing work is a central aspect of research.  But communication itself is a source of ongoing conversation.  The recent special section on communication in Science highlights some of the key topics, and also gives a glimpse of some of the primary tensions, such as openness versus confidentiality and tradition versus new technology.

Open-access publishing has gotten a lot of attention lately (see this recent piece in Nature), and with good reason.  Many scientists feel strongly about the importance of making research freely available to the public.  But do open-access journals lower standards for publishing?  John Bohannon’s sting operation, in which he submitted a blatantly flawed fake paper to hundreds of open-access journals – many of which accepted it – raises doubts about whether claims of peer review are to be trusted.

Of course, this is not necessarily an issue restricted to open-access journals.  A more intrinsic concern regards the broad availability of research in general.  It is tempting to say that research should be freely available to all, but what about work that could have ill effects (think back to the raging debate over research on the bird flu)?  David Malakoff describes the struggles of investigators whose work has the potential to be used in weaponry, to expose important preservation sites, or that otherwise might be better off kept quiet.  These issues in turn raise other sensitive questions: are there some areas that are simply taboo?  How much government oversight of research is too much?  When do publishing guidelines become censorship?  It is also important to appreciate that scientists are fairly good at self-regulating their work – no investigator aims to aid terrorists or otherwise cause harm, and researchers have generally shown great responsibility around such issues.

Open-access journals are also a poster child for newer forms of science communication and the ways in which technology is changing things.  But how much change is there really?  While scientists are intrinsically quite innovative, the field is also steeped in culture and tradition.  Diane Harley finds that while many scientists laud newer forms of communication and a shift away from published papers as the metric of success, little change is actually occurring.  This is in large part because tenure and promotions still rest predominantly on candidates’ publishing history, removing the incentive to pursue less traditional means of communication and increasing the risk of doing so.

So how do we move forward?  Science Editor-in-Chief Marcia McNutt points out that, as researchers, the most logical way forward is to research our options.  Why not set up studies of different peer-review techniques, for example, and actually find out experimentally what works best?  By asking ourselves, and each other, the hard questions and collecting empirical information about the most successful practices, we will begin to lay the groundwork for improving science communication.

The Biotech Bust

Celine Cammarata

We’ve all heard stories of those who left academia for the greener fields of biotechnology, leaving the days of strapped grad school stipends far behind.  Indeed, the industry is doing remarkably well.  But, reporter Heidi Ledford cautions in Nature this week, that might be about to change.  Biotech growth has been particularly strong lately, with nearly record-breaking numbers of firms going public this year and their initial public offerings bringing almost $2 billion into the industry.  A number of factors may have contributed to this surge, including growing investor confidence as older biotech companies report promising results and increased ease in getting new drugs approved.  But will this turn out to be a bubble?  It’s hard to tell, in part because it is difficult to determine just how much biotech firms are worth and, thus, whether they are being valued accurately.  Many new IPOs come from companies that have yet to produce a product and may not even have clearly defined project aims.  While the caliber of the scientists and leaders at the helm of some biotech firms makes the success of their enterprises – whatever those end up being – seem likely, investors may still turn wary until results from clinical trials and other tests start rolling in.  In the meantime, the industry may need to proceed with caution to maintain growth and avoid bursting the bubble.

Measure for Measure

Celine Cammarata

What exactly makes “good” science?  To some researchers’ chagrin, a group of universities in the UK believe they have it figured out.

The Snowball project, the brainchild of eight premier British universities, seeks to set forth a universally applicable set of measures for determining the quality of research and the success of investigators, focusing on factors such as the number of grants and patents and the frequency with which papers are cited.  As described by science policy writer Colin Macilwain, the project is part of a growing trend toward “science metrics.”  But what does this mean for science and scientists?
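
Macilwain’s piece does not spell out Snowball’s formulas, but as a concrete illustration of what a citation-based metric looks like, here is a short sketch of the familiar h-index computed on hypothetical citation counts (not necessarily a measure Snowball itself uses).

```python
# Illustrative only: the h-index, one widely used citation-based metric.
def h_index(citation_counts):
    """Largest h such that at least h papers each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers.
print(h_index([42, 17, 9, 6, 3, 3, 1]))  # -> 4
```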

Two main issues arise around the use of such measures.

Leafing through the Literature

Thalyana Smith-Vikos

Highlighting recently published articles in molecular biology, genetics, and other hot topics

Small Molecules Achieve Pluripotency

Hou et al. have reached uncharted territory in stem cell research: rather than achieving pluripotency with the well-established transcription-factor cocktail or with recent advances in somatic cell nuclear transfer, they reprogrammed mouse somatic cells into pluripotent stem cells at a frequency of 0.2% using a cocktail of seven small molecules. These reprogrammed cells, termed chemically induced pluripotent stem cells (CiPSCs), were shown to resemble embryonic stem cells (ESCs) based on gene expression and epigenetic profiling, which is not the case for other types of iPSCs.

Tissue and Organ Generation from Pluripotency

Takebe et al. report the first case of successful generation of a three-dimensional vascularized organ…