By Celine Cammarata
For years, groups such as the Cochrane Collaboration and the Campbell Collaboration have worked to support and promote systematic review of medical and social policy research, respectively. These reviews can then help decision-makers and practitioners on the ground – doctors, public health officials, policy developers, etc. – to make scientifically based choices without having to wade through hundreds of journal articles and sort the diverse fragments of evidence provided. In a Lancet editorial last November, authors Iain Chalmers and Magne Nylenna expounded on how systematic reviews are critical for those within science as well, particularly in the development of new research. Given these lines of reasoning, should we as scientists try to elevate systematic review to a more esteemed position in the world of research?
Systematic reviews differ from traditional narrative-style reviews in several ways. Traditional reviews generally walk readers through the current state of a field and provide qualitative descriptions of the most relevant past work. In contrast, a systematic review seeks to answer a specific research question, lays out a priori criteria to determine which studies will and will not be included in the review, uses these criteria to find all matching work (as much as possible), and combines all this evidence to answer the question, often by way of a meta-analysis.
Chalmers and Nylenna argued that many scientists fail to systematically build future work upon a thorough evaluation of past evidence. This, the authors believe, is problematic both ethically and economically, as it can lead to unnecessary duplication of work, continued research on a question that has already been answered, and waste of research animals and funding (see the Evidence Based Research Network site for more on research waste). Moreover, research synthesis as supported by Cochrane and Campbell helps package existing scientific findings into something that practitioners can use, thus greatly facilitating translational research – one of science’s hottest buzzwords, and with good reason. On the flip side, Chalmers and Nylenna argue, a field that does not actively synthesize its findings answers its overarching research questions inefficiently – a delay with significant consequences when the issue at hand has important health implications.
I think there are many reasons large-scale research synthesis is currently less than appealing to scientists. On the production side, preparing a systematic review can be extremely time consuming, and generally offers little career reward. On the usage side, some researchers may not consider a systematic review necessary or even preferable as a basis for future work – they may feel that less systematic means are actually better suited to the situation, for instance if they have less confidence in some findings than others based on personal knowledge of a study’s execution. Additionally, investigators may consider narrative reviews a sufficient basis for future studies even when those reviews do not employ meta-analysis, for instance if they were authored by leaders in the field whose expertise and scientific judgment are respected.
What would it look like to put research synthesis in a position of greater prominence? For one thing, as mentioned above, contributing to reviews would likely have to be incentivized if investigators are to be enticed away from their busy schedules, so this would constitute a change in the current academic reward structure. In addition, if scientists saw research synthesis as more valuable than individual high-profile papers, this might both necessitate and foster a more collaborative attitude. Doing research with the explicit goal of making it usable to those who will build on it and of filling specific holes in the current body of knowledge may drive very different experiments than does a goal of producing exciting, flashy papers (obviously this is not an either-or situation – in fact I think the vast majority of scientists work somewhere in the middle of the spectrum between these poles).
One step in this direction might be the growing movement toward data sharing. Another might be greater coordination within a field on methodology and research questions, which could streamline synthesis. For example, a recent Campbell review on cognitive-behavioral therapy found that of 394 potentially relevant studies, only 7 were ultimately eligible for inclusion in the review, indicating that many investigators either used insufficiently rigorous methodology, fell short of fully reporting data, or prioritized design aspects different from those the review authors needed to address the question at hand.
Should these changes be made? To me, this remains somewhat opaque. Arguments such as Chalmers and Nylenna’s are strong, and a focus on synthesis could come hand-in-hand with some refreshing changes in how science is done. But systematic review is not the only tool in the toolbox. For now, it remains a choice each scientist will have to make for her or himself.