Measuring the Value of Science: Keeping Bias out of NIH Grant Review


By Rebecca Delker, PhD

Measuring the value of science has always been – and likely always will be – a challenge. With regard to federal funding via grants, however, the task has grown more daunting as the number of biomedical researchers has increased substantially while available funds have contracted. As a result of this anti-correlation, funding rates for NIH grants, most notably the R01, have dropped precipitously. The most troubling consequences of the current funding environment are (1) the concentration of government funds in the hands of older, established investigators at the cost of young researchers, (2) a shift in the focus of lab heads toward securing sufficient funds to conduct research, rather than the research itself, and (3) an expectation of substantial output, which increases the demand for preliminary experiments and discourages the proposal of high-risk, high-reward projects. The federal grant system has a direct impact on how science is conducted and, in its current form, restricts intellectual freedom and creativity, promoting instead guaranteed, but incremental, scientific progress.


History has taught us that hindsight is the only reliable means of judging the importance of science. It was sixteen years after the death of Gregor Mendel – and thirty-five years after his seminal publication – before researchers acknowledged his work on genetic inheritance. The rapid advance of HIV research in the 1980s was made possible by decades of prior retroviral research. Thus, to know the value of research before publication, or even a handful of years after it, is extremely difficult, if not impossible. Nonetheless, science is an innately forward-thinking endeavor and, as a nation, we must do our best to distribute available government funds fairly to the most promising research endeavors while ensuring that creativity is not stifled. At the heart of this task lies a much more fundamental question: what is the best way to predict the value of scientific research?


In a paper published last month in Cell, Ronald Germain joins the conversation on grant reform and tackles this question by proposing a new NIH funding system that shifts the focus from project-oriented to investigator-oriented grants. He builds his system on the notion that the track record of a scientist is the best predictor of future success and research value. By switching to a granting mechanism similar to that of privately funded groups like the HHMI, he asserts, the government can distribute funds more evenly and free up time and space for creativity in research. Under the new plan, funding for new investigators would be tied directly to securing a faculty position: universities would receive “block grants” to distribute to new hires. In parallel, the individual grants of established investigators would be merged into one (or a few) grant(s) covering a wider range of research avenues. For both new and established investigators, the funding cycle would be lengthened to 5-7 years and – the most significant departure from the current system – grant renewal would depend primarily on a retrospective analysis of work completed during the prior cycle. The proposed system rests on the assumption that past performance, with regard to output, predicts future performance. As Germain remarks, most established lab heads trust a CV over a grant proposal when making funding decisions; but it is exactly this component of the proposal – of our current academic culture – that warrants a more in-depth discussion.


Germain is not the first to question the reliability of current NIH peer review. As he points out, funding decisions for project-oriented grants are greatly influenced by the inclusion of considerable preliminary data, and by form and structure over content. Others go further and argue that the peer review process is capable only of weeding out bad proposals and fails to accurately rank the good ones. This conclusion is supported by studies that establish a correlation between prior publication record, not peer review score, and research outcome. (It should be noted that a recent study following the outcomes of more than 100,000 funded R01 grants found that peer review scores do predict grant outcome, even when controlling for the effects of institute and investigator. These contradictory results cannot yet be reconciled, though anecdotal evidence falls heavily in support of the former conclusion.)


Publication decisions are not without biases of their own. Journals are businesses and, as such, benefit from publishing headline-grabbing science, creating an unintended bias against less trendy, but high-quality, work. The more prestigious the journal and the higher its impact factor, the more this pressure comes into play. Further, just as successful grant writing demands a skill set beyond the scientific ideas themselves, publication success depends on more than the research itself. An element of “story-telling” can make research far more appealing, and human perception of the work during peer review is easily influenced by name recognition of the investigator and/or institute. It is time to ask ourselves whether a past publication record is truly predictive of future potential, or whether it simply eases the way to additional papers.


In our modern academic culture, the quality of research and of scientists is often judged by quantitative measures that can mask true potential. Productivity, measured as the number of papers published in a given period of time, has gained momentum in recent years as a stand-in for the quality of a scientist. As Germain states, a “highly competent investigator” is unlikely “to fail to produce enough … to warrant a ‘passing grade’.” The conflation of competence with output has been taken to such extremes that the pioneering physicist and Nobel laureate Peter Higgs has publicly stated that he would be overlooked in today’s academia because of the requirement to “keep churning out papers.” The demand for rapid productivity and high impact factors has driven an increase in the publication of poorly validated findings, as well as in retraction rates due to scientific misconduct. The metrics currently used to value science are just as dangerous to the progress of science as the restrictions placed on research by current funding mechanisms, if not more so.


I certainly do not have a foolproof plan to fix the current funding problems; I don’t think anyone does. But I do think that we need to look at grant reform in the context of the larger issues plaguing the biomedical sciences. As a group of people who have chosen a line of work founded on doing, discovering, and inventing the impossible, we have taken the easy way out when tasked with measuring the value of research. Without the aid of hindsight, this task will never be objective, and assigning quantitative measures like impact factor, productivity, and the h-index has proven only to introduce greater bias into the system. We must embrace the subjectivity inherent in our review of scientific ideas while remaining careful not to let bias corrupt scientific progress. Measures that bring greater anonymity to the grant review process, and greater emphasis on qualitative, descriptive assessments of past work and future ideas, would lessen the influence of human bias and make funding fairer. As our culture stands, a retrospective review process focused on output, as Germain proposes, risks importing into grant review our flawed, and highly politicized, methods of judging the quality of science. I suggest that, in parallel with grant reform, we begin to change the metrics we use to measure the value of science.
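To make concrete just how coarse these numbers can be, here is a minimal sketch of the standard h-index calculation – the largest h such that an author has at least h papers with h or more citations each. The two example publication records are invented purely for illustration:

```python
def h_index(citations):
    """Return the largest h such that at least h papers
    have h or more citations each (Hirsch's definition)."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i  # the i-th most-cited paper still clears the bar
        else:
            break
    return h

# Two hypothetical, very different careers score identically:
steady  = [10, 10, 10, 10, 10]   # five solid, well-cited papers
one_hit = [500, 5, 5, 5, 5]      # one landmark paper, little else
print(h_index(steady), h_index(one_hit))  # -> 5 5
```

A single integer cannot distinguish these two records, let alone judge which body of work matters more; that is precisely the kind of information lost when such metrics drive funding decisions.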


Though NIH funding problems and the other systemic flaws of our culture seem to be at an all-time high, the number of publications addressing these issues has also increased, especially in recent years. Now, more than ever, scientists at all stages recognize the immediacy of the problems and are engaging in conversations, both in person and online, to brainstorm potential solutions. A new website serves as a forum for all who are interested to join the discussion and contribute reform ideas – grant-related or otherwise. With enough ideas, and pilot experiments from the NIH, we can ensure that the best science is funded and conducted. Onward and upward!