The Fake Drug Problem

 

By Gesa Junge, PhD

Tablets, injections, and drops are convenient ways to administer life-saving medicine – but there is no way to tell what’s in them just by looking, and that makes drugs relatively easy to counterfeit. Counterfeit drugs are medicines that contain the wrong amount or type of active ingredient (the vast majority of cases), are sold in fraudulent packaging, or are contaminated with harmful substances. A very important distinction here: counterfeit drugs do not equal generic drugs. Generic drugs contain the same type and dose of active ingredient as a branded product and have undergone clinical trials, and they, too, can be counterfeited. In fact, counterfeiting can affect any drug, and although the main targets, particularly in Europe and North America, have historically been “lifestyle drugs” such as Viagra and weight-loss products, fake versions of cancer drugs, antidepressants, anti-malaria drugs and even medical devices are increasingly reported.

The consequences of counterfeit medicines can be fatal, for example, due to toxic contaminants in medicines, or inactive drugs used to treat life-threatening conditions. According to a BBC article, over 100,000 people die each year due to ineffective malaria medicines, and overall, Interpol puts the number of deaths due to counterfeit pharmaceuticals at up to a million per year. There are also other public health implications: antibiotics in doses that are too low may not help a patient fight an infection, but they can be sufficient to induce resistance in bacteria, and counterfeit painkillers containing fentanyl, a powerful opioid, are a major contributor to the opioid crisis, according to the DEA.

It seems nearly impossible to accurately quantify the global market for counterfeit pharmaceuticals, but it may be as much as $200bn, or possibly over $400bn. The profit margin of fake drugs is huge because the expensive part of a drug is the active ingredient, which can relatively easily be replaced with cheap, inert material. These inactive pills can then be sold at a fraction of the price of the real drug while still making a profit. According to a 2011 report by the Stimson Center, the large profit margin, combined with comparatively low penalties for manufacturing and selling counterfeit pharmaceuticals, makes counterfeiting drugs a popular revenue stream for organized crime, including global terrorist organizations.

Even though the incidence of drug counterfeiting is very hard to estimate, it is certainly a global problem. It is most prevalent in developing countries, where 10-30% of all medication sold may be fake, and less so in industrialized countries (below 1%), according to the CDC. In the summer of 2015, Interpol launched a coordinated campaign in 115 countries during which millions of counterfeit medicines with an estimated value of $81 million were seized, including everything from eye drops and tanning lotion to antidepressants and fertility drugs. The operation also shut down more than 2,400 websites and removed over 550 adverts for illegal online pharmacies in an effort to combat online sales of illegal drugs.

There are several methods to help protect the integrity of pharmaceuticals, including tamper-evident packaging (e.g. blister packs), which can show customers if the packaging has been opened. However, the bigger problem lies in counterfeit pharmaceuticals making their way into the supply chain of drug companies. Tracking technology in the form of barcodes or RFID chips can establish a data trail that allows companies to follow each lot from manufacturer to pharmacy shelf, and since 2013, tracking of pharmaceuticals throughout the supply chain has been required under the Drug Quality and Security Act. But this still does not necessarily let a customer know whether the tablets they bought are fake.

Ingredients in a tablet or solution can fairly easily be identified by chromatography or spectroscopy. However, these methods require highly specialized, expensive equipment that most drug companies and research institutions have access to, but that is not widely available in many parts of the world. To address this problem, researchers at the University of Notre Dame have developed a very cool, low-tech method to quickly test drugs for their ingredients: a tablet is scratched across a paper card coated with various chemicals, and the card is then dipped in water. The chemicals react with ingredients in the drug to form colors, resulting in a “color bar code” that can then be compared to known samples of filler materials commonly used in counterfeit drugs, as well as active pharmaceutical ingredients.
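To give a feel for the comparison step, here is a toy sketch of matching a card’s “color bar code” against a library of known references – a simple nearest-neighbor search. This is not the Notre Dame team’s actual software; the reagent lanes, colors, and reference entries below are invented for illustration.

```python
# Toy sketch: match a test card's color bar code against reference codes.
# Each bar code is a list of (R, G, B) colors, one per reagent lane.
# Lane colors and reference entries are invented for illustration only.

def distance(code_a, code_b):
    """Sum of per-lane Euclidean color distances between two bar codes."""
    return sum(
        ((r1 - r2) ** 2 + (g1 - g2) ** 2 + (b1 - b2) ** 2) ** 0.5
        for (r1, g1, b1), (r2, g2, b2) in zip(code_a, code_b)
    )

def best_match(test_code, references):
    """Return the name of the reference bar code closest to the test."""
    return min(references, key=lambda name: distance(test_code, references[name]))

references = {
    "genuine artemisinin": [(200, 40, 40), (30, 30, 200), (240, 240, 240)],
    "chalk filler":        [(240, 240, 240), (240, 240, 240), (240, 240, 240)],
    "starch filler":       [(240, 240, 240), (40, 40, 90), (240, 240, 240)],
}

# A freshly developed test card, read as three lane colors:
test = [(235, 238, 236), (45, 42, 95), (238, 241, 239)]
print(best_match(test, references))  # closest reference: "starch filler"
```

The real test uses human-readable color patterns rather than measured RGB values, which is exactly what makes it usable without lab equipment; the point of the sketch is only that the comparison itself is conceptually simple.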

Recently, there have also been policy efforts to address the problem. The European Commission released its Falsified Medicines Directive in 2011, which established counterfeit medicines as a public health threat and called for stricter penalties for producing and selling counterfeit medicines. The directive also established a common logo to be displayed on websites, allowing customers to verify they are buying through a legitimate site. In the US, VIPPS accredits legitimate online pharmacies, and in May of this year, a bill calling for stricter penalties on the distribution and import of counterfeit medicine was introduced in Congress. In addition, there have also been various public awareness campaigns, for example, last year’s MHRA #FakeMeds campaign in the UK, which was specifically focussed on diet pills sold online, and the FDA’s “BeSafeRx” programme, which offers resources for buying drugs safely online.

In spite of all the efforts to raise awareness and address the problem of fake drugs, a major complication remains: generic drugs, as well as branded drugs, are often produced overseas, and many are sold online, which saves cost and can bring the price of medication down, making it affordable to many people. The key will be to strike a balance between restricting counterfeiters’ access to the supply chain and not restricting access to affordable, quality medication for the patients who need it.

We need to talk about CRISPR

By Gesa Junge, PhD

You’ve probably heard of CRISPR, the magic new gene editing technique that will either ruin the world or save it, depending on what you read and whom you talk to. Or of the “Three Parent Baby,” which scientists in the UK have created?

CRISPR is a technology based on a bacterial immune defense system which uses Cas9, a nuclease, to cut up foreign genetic material (e.g., viral DNA). Scientists have developed a method by which they can modify the recognition part of the system, the guide RNA, and make it specific to a site in the genome that Cas9 then cuts. This is often described as “gene editing,” which allows disease-causing genes to be swapped out for healthy ones.
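At the sequence level, the targeting step can be thought of as pattern matching: Cas9 cuts where the guide sequence pairs with the DNA, immediately next to a short motif called a PAM (“NGG” for the standard Cas9). Here is a toy sketch of that search; the sequences are made up, the guide is shortened for readability (real guides are ~20 nucleotides), and real guide design also has to weigh off-target sites elsewhere in the genome.

```python
# Toy sketch of Cas9 target-site search: find positions where the genome
# matches the guide sequence and is immediately followed by an "NGG" PAM.
# Sequences are invented; real tools also score near-matches (off-targets).

def find_target_sites(genome, guide):
    """Return start positions where the guide matches and an NGG PAM follows."""
    sites = []
    for i in range(len(genome) - len(guide) - 2):
        if genome[i:i + len(guide)] == guide:
            pam = genome[i + len(guide):i + len(guide) + 3]
            if pam[1:] == "GG":  # the "N" position can be any base
                sites.append(i)
    return sites

genome = "TTACGTACCGGATGCAATGGTGGCCTA"  # invented stretch of DNA
guide = "ACCGGATGCAATGG"                # shortened toy guide sequence
print(find_target_sites(genome, guide))  # → [6]
```

The off-target problem mentioned above corresponds to near-matches of the guide elsewhere in the genome, which this exact-match sketch deliberately ignores.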

CRISPR is now so well known that Google finally stopped suggesting I may be looking for “crisps” instead, but the real-world applications are not so well worked out yet, and there are various issues around CRISPR, including off-target effects, and the fact that deleting genes is much easier than replacing them with something else. But after researchers at Oregon Health and Science University managed to change the mutated version of the MYBPC3 gene to the unmutated version in a viable human embryo last month, the predictable bioethical debate was reignited, and terms such as “Designer Babies” got thrown around a lot.

A similar thing happened with the “Three Parent Baby,” an unfortunate term coined to describe mitochondrial replacement therapy (MRT). Mitochondria, the cell’s energy-producing organelles, have their own DNA (making up about 0.2% of the total genome), which is separate from the genomic DNA in the nucleus, the body’s blueprint. Mitochondrial DNA can mutate just like genomic DNA, potentially leading to mitochondrial disease, which affects 1 in 5,000-10,000 children. Mitochondrial disease can manifest in various ways, ranging from growth defects to heart or kidney disease to neuropsychological symptoms. Symptoms can range from very mild to very severe or fatal, and the disease is incurable.

MRT replaces the mutated mitochondrial DNA in a fertilized egg or embryo with a healthy version provided by a third donor, which allows the mitochondria to develop normally. The UK was the first country to allow the “cautious adoption” of this technique.

While headlines need to draw attention and engage the reader for obvious reasons, oversimplifications like “gene editing” and dramatic phrases like “three parent babies” can really get in the way of broadening the understanding of science, which is difficult enough as it is. Research is a slow and inefficient process that easily gets lost in a 24-hour news cycle, and often the context is complex and not easily summed up in 140 characters. And even when the audience can be engaged and interested, the relevant papers are probably hiding behind a paywall, making fact checking difficult.

Aside from difficulties communicating the technicalities and results of studies, there is also often a lack of context in presenting scientific studies – think, for example, of chocolate and red wine, which may or may not protect against heart attacks. What is lost in many headlines is that scientific studies usually express their results as a change in the risk of developing a disease, not direct causation, and very few diseases are caused by one chemical or one food additive. On this topic, WNYC’s “On The Media” team has an edition of their Breaking News Consumer’s Handbook that is very useful for evaluating health news.

The causation vs. correlation issue is perhaps a little easier to discuss than big ethical questions that involve changing the germline DNA of human beings because ethical questions do not usually have a scientific answer, let alone a right answer. This is a problem, not just for scientists, but for everyone, because innovation often moves out of the realm of established ethics, forcing us to re-evaluate it.

Both CRISPR and MRT are very powerful techniques that can alter a person’s DNA, and potentially the DNA of their children, which makes them both promising and scary. We are not ready to use CRISPR to cure all cancers yet, and “Three Parent Babies” are not designed by anyone, but unfortunately, it can be hard to look past Designer Babies, Killer Mutations and DNA Scissors, and have a constructive discussion about the real issues, which needs to happen! These technologies exist; they will improve and eventually, and inevitably, play a role in medicine. The question is, would we rather have this development happen in reasonably well-regulated environments where authorities are at least somewhat accountable to the public, or are we happy to let countries with more questionable human rights records and even more opaque power structures take the lead?

Scientists have a responsibility to make sure their work is used for the benefit of humanity, and part of that is taking the time to talk about what we do in terms that anyone can understand, and to clarify all potential implications (both positive and negative), so that there can be an informed public discussion, and hopefully a solution everyone can live with.

 

Further Reading:

CRISPR:

National Geographic

Washington Post

 

Mitochondrial Replacement Therapy:

A paper on clinical and ethical implications

New York Times (Op-Ed)

 

Halos on Mars

By JoEllen McBride, PhD

Curiosity Discovery Suggests Early Mars Environment Suitable for Life Longer Than Previously Thought.

 

We have been searching desperately for evidence of life on Mars since the first Viking lander touched down in 1976. So far we’ve come up empty-handed, but a recent finding from the Curiosity rover has refueled scientists’ hopes.

 

NASA’s Curiosity rover is currently puttering along the Martian surface in Gale Crater. Its mission is to determine whether Mars ever had an environment suitable for life. The clays and by-products of reactions between water and sulfuric acid (a.k.a. sulfates) that fill the crater are evidence that it once held a lake that dried up early in the planet’s history. Using its suite of instruments, Curiosity is digging, sifting and burning the soil for clues to whether the wet environment of a young Mars could ever give rise to life.

 

On Tuesday, scientists announced that they discovered evidence that groundwater existed in Gale Crater long after the lake dried up. Curiosity noticed lighter colored rock surrounding fractures in the crater which scientists recognized as a tell-tale sign of groundwater. As water flows underground on Earth, oxygen atoms from the water combine with other minerals found in the rock. The newly-formed molecules are then transported by the flowing water and absorbed by the surrounding rock. This process creates ‘halos’ within the rock that often have different coloration and composition than the original rock.

 

Curiosity used its laser instrument to analyze the composition of the lighter colored rock in Gale Crater and reported that it was full of silicates. This particular region of the crater contains rock that was not present at the same time as the lake and does not contain the minerals necessary to produce silicates. So the only way these silicates could be present is if they were transported there from older rock. Using what they know about groundwater processes on Earth, NASA scientists determined that groundwater must have reacted with silicon present in older rock, creating the silicates. These new minerals then flowed to the younger bedrock and seeped in, resulting in the halos Curiosity discovered. The time it would take these halos to form provides strong evidence that groundwater persisted in Gale Crater much longer than previously thought.

 

Credit: NASA/JPL-Caltech Image from Curiosity of the lighter colored halos surrounding fractures in Gale Crater.

This news also comes on the heels of Curiosity’s first discovery of boron on Mars. On Earth, boron is present in dried-up, non-acidic lake beds. Finding boron on Mars suggests that the groundwater present in Gale Crater was most likely at a temperature and acidity suitable for microbial life. The combination of the longevity of groundwater and its acceptable acidity greatly increases the window for microbial life to form on young Mars.

 

These two discoveries have not only extended the time-frame for the habitability of early Mars but also lead one to wonder where else groundwater was present on the planet. We hopefully won’t have to wait too long to find out. Curiosity is still going strong, and NASA has already begun work on a new set of exploratory Martian robots. The next rover mission to Mars is set to launch in 2020 and will be equipped with a drill that will remove core samples of Martian soil. The samples will be stored on the planet for retrieval at a later date. What (or who) will be sent to pick up the samples is still being determined.

 

Although we haven’t found evidence for life on Mars, the hope remains. It appears Mars had the potential for life at the same time in its formation as Earth. We just have to continue looking for organic signatures in the Martian soil or determine what kept life from getting its start on the Red Planet.

 

HeLa, the VIP of cell lines

By  Gesa Junge, PhD

A month ago, The Immortal Life of Henrietta Lacks was released on HBO, an adaptation of Rebecca Skloot’s 2010 book of the same title. The book, and the movie, tell the story of Henrietta Lacks, the woman behind the first cell line ever generated, the famous HeLa cell line. From a biologist’s standpoint, this is a really unique thing, as we don’t usually know who is behind the cell lines we grow in the lab – which, incidentally, is at the centre of the controversy around HeLa cells. HeLa was the first cell line ever made, over 60 years ago, and today a PubMed search for “HeLa” returns 93,274 results.

Cell lines are an integral part of research in many fields, and these days there are probably thousands of cell lines. Usually, they are generated from patient samples which are immortalised and can then be grown in dishes, put under the microscope, frozen down, thawed and revived, have their DNA sequenced, their protein levels measured, be genetically modified, treated with drugs, and generally make biomedical research possible. As a general rule, work with cancer cell lines is an easy and cheap way to investigate biological concepts, test drugs and validate methods, mainly because cell lines are cheap compared to animal research, readily available and easy to grow, and there are few concerns around ethics and informed consent. This is because, although they originate from patients, cell lines are not considered living beings in the sense that they have feelings, lives and rights; they are for the most part considered research tools. This is an easy argument to make, as almost all cell lines are immortalised and therefore different from the original tissues patients donated, and, most importantly, they are anonymous, so that any data generated cannot be related back to the person.

But this is exactly what did not happen with HeLa cells. Henrietta Lacks’s cells were taken without her knowledge or consent after she was treated for cervical cancer at Johns Hopkins in 1951. At this point, nobody had managed to grow cells outside the human body, so when Henrietta Lacks’s cells started to divide and grow, the researchers were excited – and yet nobody ever told her, or her family. Henrietta Lacks died of her cancer later that year, but her cells survived. For more on this, there is a great Radiolab episode that features interviews with the scientists, as well as Rebecca Skloot and Henrietta Lacks’s youngest daughter, Deborah Lacks Pullum.

In the 1970s, some researchers did reach out to the Lacks family, not because of ethical concerns or gratitude, but to request blood samples. This naturally led to confusion amongst family members about how Henrietta Lacks’s cells could be alive, and be used in labs everywhere, even go to space, while Henrietta herself had been dead for twenty years. Nobody had told them, let alone explained the concept of cell lines to them.

The lack of consent and information is one side of the story, but in addition to being an invaluable research tool, cell lines are also big business: the global market for cell line development (which includes cell lines, the media they grow in, and other reagents) is worth around 3 billion dollars, and it’s growing fast. There are companies that specialise in making cell lines of certain genotypes that are sold for hundreds of dollars, and different cell types need different growth media and additives in order to grow. This adds a dimension of financial interest, and raises the question of whether the family should share in the profits derived from research involving HeLa cells.

We have a lot to be grateful to HeLa cells for, and not just biomedical advances. The history of HeLa brought up a plethora of ethical issues around privacy, information, communication and consent that arguably were overdue for discussion. Innovation usually outruns ethics, but while nowadays informed consent is standard for all research involving humans, and patient data is anonymised (or at least pseudonymised and kept confidential), there were no such rules in 1951. There was also apparently no attempt to explain scientific concepts and research to non-scientists.

And clearly we still have not fully grasped the issues at hand: in 2013, researchers sequenced the HeLa cell genome – and published it. Again, without the family’s consent. The main argument in defence of publishing the HeLa genome was that the cell line was too different from the original cells to provide any information on Henrietta Lacks’s living relatives. There may be some truth in that – cell lines change a lot over time – but even after all these years there will still be information about Henrietta Lacks and her family in there, and genetic information is personal and should be kept private.

HeLa cells have gotten around to research labs all over the world and have even gone to space and on deep sea dives. They are now even contaminating other cell lines (which could perhaps be interpreted as just karma). Sadly, the spotlight on Henrietta Lacks’s life has sparked arguments amongst the family members around the use and distribution of profits and benefits from the book and movie, and the portrayal of Henrietta Lacks in the story. Johns Hopkins say they have no rights to the cell line and have not profited from it, and they have established symposiums, scholarships and awards in Henrietta Lacks’s honour.

The NIH has established the HeLa Genome Data Access Working Group, which includes members of Henrietta Lacks’s family. Any researcher wanting to use the HeLa cell genome in their research has to request the data from this committee and explain their research plans and any potential commercialisation. The data may only be used in biomedical research, not ancestry research, and no researcher is allowed to contact the Lacks family directly.

On Science and Values

 

By Rebecca Delker, PhD

 

In 1972, nuclear physicist Alvin Weinberg defined ‘trans-science’ as distinct from science (references here, here). Trans-science – a phenomenon that arises most frequently at the interface of science and society – includes questions that, as the name suggests, transcend science. They are questions, he says, “which can be asked of science and yet which cannot be answered by science.” While most of what concerned Weinberg were questions of scientific fact that could not (yet) be answered by available methodologies, he also understood the limits of science when addressing questions of “moral and aesthetic judgments.” It is this latter category – the differentiation of scientific fact and value – that deserves attention in the highly political climate in which we now live.

Consider this example. In 2015–2016, action to increase the use of risk assessment algorithms in criminal sentencing received a lot of heat (and rightly so) from critics (references here, here). In an attempt to eliminate human bias from criminal justice decisions, many states rely on science in the form of risk assessment algorithms to guide decisions. Put simply, these algorithms build statistical models from population-level data covering a number of factors (e.g. gender, age, employment) to provide a probability of repeat offense for the individual in question. Until recently, the use of these algorithms has been restricted, but now states are considering expanding their use to sentencing. What this fundamentally means is that a criminal’s sentence depends not only on the past and present, but also on a statistically derived prediction of the future. While the intent may have been to reduce human bias, many argue that risk assessment algorithms achieve the opposite; and because the assessment is founded in data, it actually serves to generate a scientific rationalization of discrimination. This is because, while the data underpinning the statistical models do not include race, they include factors (e.g. education level, socioeconomic background, neighborhood) that are themselves revealing of centuries of institutionalized bias. To use Weinberg’s terminology, this would fall into the first category of trans-science: the capabilities of the model fall short of capturing the complexity of race relations in this country.
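To make the mechanics concrete, here is a minimal sketch of how such a model turns an individual into a “risk score.” Everything here is invented for illustration – the features, the coefficients, and the logistic form; real instruments such as COMPAS are proprietary and far more elaborate.

```python
# Minimal sketch of a risk-assessment score: a logistic model maps an
# individual's features to a probability of repeat offense. The
# coefficients below are invented; real instruments fit them to
# historical, population-level data.
import math

# Hypothetical fitted coefficients (illustrative values only):
COEFS = {"intercept": 0.5, "age": -0.03, "priors": 0.45, "education": -0.15}

def risk_score(age, priors, education_years):
    """Probability of repeat offense under the toy logistic model."""
    z = (COEFS["intercept"]
         + COEFS["age"] * age
         + COEFS["priors"] * priors
         + COEFS["education"] * education_years)
    return 1 / (1 + math.exp(-z))

# Two hypothetical defendants facing the same charge:
print(round(risk_score(age=22, priors=3, education_years=10), 2))  # ≈ 0.42
print(round(risk_score(age=45, priors=0, education_years=16), 2))  # ≈ 0.04
```

Note that even though race appears nowhere in the model, features like education level can act as proxies for it – which is precisely the criticism described above: the individual’s score is inherited from a trend line fit to other people’s histories.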

But this is not the whole story. Even if we could build a model without the above-mentioned failings, there are still more fundamental ethical questions that need addressing. Is it morally correct to sentence a person for crimes not yet committed? And, perhaps even more crucial, does committing a crime warrant one to lose their right to be viewed (and treated) as an individual – a value US society holds with high regard – and instead be reduced to a trend line derived from the actions of others? It is these questions that fall into the second category of trans-science: questions of morality that science has no place in answering. When we turn to science to resolve such questions, however, we blind ourselves from the underlying, more complex terrain of values that make up the debate at hand. By default, and perhaps inadvertently, we grant science the authority to declare our values for us.

Many would argue that this is not a problem. In fact, in a 2010 TED talk, neuroscientist Sam Harris claimed that “the separation between science and human values is an illusion.” Values, he says, “are a certain kind of fact,” and thus fit into the same domain as, and are demonstrable by, science. Science and morality become one and the same because values are facts specifically “about the well-being of conscious creatures,” and our moral duty is to maximize this well-being.

The flaw in the argument (which many others have pointed out as well) is that rather than allowing science to empirically determine a value and moral code – as he argued it could – he presupposed it. That the well-being of conscious creatures should be valued, and that our moral code should maximize it, cannot actually be demonstrated by science. I will also add that science can provide no definition of ‘well-being,’ nor has it yet – if it ever can – been able to answer the questions of what consciousness is and which creatures have it. Unless human intuition steps in, this shortcoming of science can lead to dangerous and immoral acts.

What science can do, however, is help us stay true to our values. This, I imagine, is what Harris intended. Scientific studies play an indispensable role in informing us if and when we have fallen short of our values, and in generating the tools (technology/therapeutics) that help us achieve these goals. To say that science has no role in the process of ethical decision-making is as foolish as relying entirely on science: we need both facts and values.

While Harris’ claims of the equivalency of fact and value may be more extreme than most would overtly state, they are telling of a growing trend in our society to turn to science to serve as the final arbiter of even the most challenging ethical questions. This is because in addition to the tangible effects science has had on our lives, it has also shaped the way we think about truth: instead of belief, we require evidence-based proof. While this is a noble objective in the realm of science, it is a pathology in the realm of trans-science. This pathology stems from an increasing presence in our society of Scientism – the idea that science serves as the sole provider of knowledge.

But we live in the post-fact era. There is a war against science. Fact denial runs rampant through politics and media. There is not enough respect for facts and data. I agree with each of these points; but it is Scientism, ironically, that spawned this culture. Hear me out.

The ‘anti-science’ arguments – from anti-evolution to anti-vaccine to anti-GMO to climate change denial – never actually deny the authority of science. Rather, they attack scientific conclusions by creating a pseudoscience (think: creationism), pointing to flawed and/or biased scientific reporting (think: hacked climate data emails), clinging to scientific reports that support their arguments (think: the now-debunked link between vaccines and autism), or homing in on concerns answerable by science as opposed to others (think: the safety of GMOs). These approaches are not justifiable; nor are they rigorously scientific. What they are, though, is a demonstration that even the people fighting against science recognize that the only way to do so is by appealing to its authority. As ironic as it may be, fundamental to the anti-science argument is the acceptance that the only way to ‘win’ a debate is to either provide scientific evidence or to poke holes in the scientific evidence at play. Their science may be bad, but they are working from a foundation of Scientism.

 

Scientific truth has a role in each of the above debates, and in some cases – vaccine safety, for example – it is the primary concern; but too often scientific fact is treated as the only argument worth consideration. An example from conservative writer Yuval Levin illustrates this point. While I do not agree with Levin’s values regarding abortion, the topic at hand, his points are worth considering. Levin recounts that during a hearing in the House of Representatives regarding the use of the abortion drug RU-486, a DC delegate argued that because the FDA decided the drug was safe for women, the debate should be over. As Levin summarized, “once science has spoken … there is no longer any room for ‘personal beliefs’ drawing on non-scientific sources like philosophy, history, religion, or morality to guide policy.”

When we break down the abortion debate – as well as most other political debates – we realize that it is composed of matters of both fact and value. The safety of the drug (or procedure) is of utmost importance and can, as discussed above, be determined by science; this is a fact. But, at the heart of the debate is a question of when human life begins – something that science can provide no clarity on. To use scientific fact as a façade for a value system that accepts abortion is as unfair as denying the scientific fact of human-caused climate change: both attempts focus on the science (by either using or attacking) in an effort to thwart a discussion that encompasses both the facts of the debate and the underlying terrain of values. We so crave absolute certainty that we reduce complex, nuanced issues to questions of scientific fact – a tendency that is ultimately damaging to both social progress and society’s respect for science.

By assuming that science is the sole provider of truth, our culture has so thoroughly blurred the line between science and trans-science that scientific fact and value are nearly interchangeable. Science is misused to assert a value system; and a value system is misused to selectively accept or deny scientific fact. To get ourselves out of this hole requires that we heed the advice of Weinberg: part of our duty as scientists is to “establish what the limits of scientific fact really are, where science ends and trans-science begins.” Greater respect for facts may paradoxically come from a greater respect for values – or at the very least, allowing space in the conversation for them.

 

How Science Trumps Trump: The Future of US Science Funding

 

By Johannes Buheitel, PhD

I was never the best car passenger. It’s not that I can’t trust others, but there is something quite unsettling about letting someone else do the steering while not having any power over the situation yourself. On Tuesday, November 8th, I had exactly this feeling, but all I could do was sit back and let it play out on my TV set. Of course, you all know by now that I’m talking about the recent presidential election, in which the American people (this excludes me) were tasked with casting their ballots for either former First Lady and Secretary of State Hillary Clinton or real estate mogul and former reality TV personality Donald Trump. And for all that are a bit behind on their Twitter feed (spoiler alert!): Donald Trump will be the 45th president of the United States of America following his inauguration on January 20th, 2017. Given the controversies around Trump and all the issues he stands for, there are many things that can, have been, and will be said about the implications for people living in the US, but also elsewhere. But for us scientists, the most pressing question being asked left and right is an almost existential one: What happens to science and its funding in the US?

The short answer is: We don’t know yet. Not only has there been no meaningful discussion about these issues in public (one of the few exceptions being that energy policy question by undecided voter-turned-meme Ken Bone), but, even more worryingly, there is just not enough hard info on specific policies from the future Trump administration to go on. That means we’re left to make assumptions based on the handful of words Mr. Trump and his allies have shared during his campaign. And I’m afraid those paint a dire picture of the future of American science.

Trump has not only repeatedly mentioned in the past that he did not believe in the scientific evidence around climate change (even going as far as calling it a Chinese hoax), but also reminded us of his position just recently, when he appointed known climate change skeptic Myron Ebell to the transition team of the Environmental Protection Agency (EPA). He has furthermore endorsed the widespread (and, of course, misguided) belief that vaccines cause autism. His vice president, Mike Pence, publicly doubted that smoking can cause cancer as late as 2000, and has called evolution “controversial”.

According to specialists like Michael Lubell from the American Physical Society, all of these statements are evidence that “Trump will be the first anti-science president we have ever had.” But what does this mean for us in the trenches? The first thing you should know is that science funding is more or less a function of the overall US discretionary budget, which is in the hands of the United States Congress, says Matt Hourihan, director of the R&D Budget and Policy Program for the American Association for the Advancement of Science (AAAS). This would be a relief if Congress weren’t, according to Rush Holt, president of the AAAS, on a “sequestration path that […] will reduce the fraction of the budget for discretionary funding.” In numbers, this means that when the current budget deal expires next year, spending caps might drop by another 2.3%. Holt goes on to say that a reversal of this trend has always been unlikely, even if the tables were turned, which doesn’t make the pill go down any easier. Congress might raise the caps, as it has done before, but this is of course not a safe bet, and could translate to a tight year for US science funding.

So if the budget is more or less out of the hands of Donald Trump, what power does he actually possess over matters of research funding? Well, the most powerful political instrument the president can wield is the executive order. But even this power is not unlimited and could not, for example, be used to unilaterally reverse the fundamentals of climate policy, said David Goldston from the Natural Resources Defense Council (NRDC) during a webinar hosted by the AAAS shortly after the election. In particular, backing out of the Paris agreement, as Trump has threatened to do, would take at least four years and require support from Congress (which, admittedly, is in Republican hands). And while the president might be able to “scoop out” the Paris deal through many smaller changes to US climate policy, this is unlikely to happen, at least not to a substantial degree, believes Rush Holt. The administration will soon start to feel push-back from the public, which, as Holt noted during the AAAS webinar, is hardly oblivious to the various impacts of climate change, like frequent droughts or the decline of fisheries in the country. There was further consensus among the panelists that science education funding will probably not be deeply affected: first, because this matter usually has bipartisan support, but also because only about 10% of the states’ education funding actually comes from the federal budget.

So, across the board, experts seem to be reluctantly positive. Whether this is just a serious case of denial or panic control, we don’t know, but even Trump himself has been caught calling for “investment in research and development across a broad landscape of academia,” and even seems to be a fan of space exploration. Our job as scientists now is to keep our heads high, keep doing our research to the best of our abilities, but also to keep reaching out to the public, invite people to be part of the conversation, and convince them of the power of scientific evidence. Or, in Rush Holt’s words: “We must make clear that an official cannot wish away what is known about climate change, gun violence, opioid addiction, fisheries depletion, or any other public issue illuminated by research.”

 

The Danger of Absolutes in Science Communication

 

By Rebecca Delker, PhD

Complementarity, born out of quantum theory, is the idea that two different ways of looking at reality can both be true, although not at the same time. In other words, the opposite of a truth is not necessarily a falsehood. The best-known example of this in the physical world is light, which can be both a particle and a wave depending on how we measure it. Fundamentally, this principle allows for, and even encourages, the presence of multiple perspectives to gain knowledge.

 

This is something I found myself thinking about as I witnessed the Twitter feud-turned-blog post-turned-actual news story (and here) centered around the factuality of physician-scientist Siddhartha Mukherjee’s essay, “Same but Different,” published recently in The New Yorker. Weaving personal stories of his mother and her identical twin sister with experimental evidence, Mukherjee presents the influence of the epigenome – the modifications overlaying the genome – in regulating gene expression. From this perspective, the genome encodes the set of all possible phenotypes, while the epigenome shrinks this set down to one. At the cellular level – where much of the evidence for the influence of epigenetic marks resides – this is demonstrated by the phenomenon that a single genome encodes the vastly different phenotypes of cells in a multicellular organism. A neuron is different from a lymphocyte, which is different from a skin cell, not because their genomes differ but because their transcriptomes (the complete set of genes expressed at any given time) differ. Epigenetic marks play a role here.

 

While many have problems with the buzzword status of epigenetics and the use of the phrase to explain away the many unknowns in biology (here, here), the central critique of Mukherjee’s essay was the extent to which he emphasized the role of epigenetic mechanisms in gene regulation over other well-characterized players, namely transcription factors – DNA binding proteins that are undeniably critical for gene expression. However, debating whether the well-studied transcription factors or the less well-established epigenetic marks are more important is no different than the classic chicken or egg scenario: impossible to assign order in a hierarchy, let alone separate from one another.

 

But whether we embrace epigenetics in all of its glory or we couch the term in quotation marks – “epigenetics” – in an attempt to dilute its impact, it is still worth pausing to dissect why a public exchange brimming with such negativity occurred in the first place.

“Humans are a strange lot,” remarked primatologist Frans de Waal. “We have the power to analyze and explore the world around us, yet panic as soon as evidence threatens to violate our expectations” (de Waal, 2016, p.113). This inclination is evident in the above debate, but it also hints at a more ubiquitous theme: the presence of bias stemming from one’s group identity. Though de Waal deals with expectations that cross species lines, even within our own species, group identity plays a powerful role in dictating relationships and guiding one’s perspective on controversial issues. Studies have shown that political identities, for example, can supplant information during decision-making. Pew surveys reveal that views on the issue of climate change divide sharply along partisan lines. When asked whether humans are at fault for changing climate patterns, a much larger percentage of Democrats (66%) than Republicans (24%) answered yes; however, when asked what the main contributor to climate change is (CO2), the two groups converged (Democrats: 56%, Republicans: 58%; taken from Field Notes From a Catastrophe, p. 199-200). This illustrates the potential for a divide between one’s objective understanding of an issue and one’s subjective position on that issue – the latter greatly influenced by the prevailing opinion of one’s allied group.

 

Along with group identity is the tendency to eschew uncertainty and nuance, choosing solid footing no matter how shaky the turf, effectively demolishing the middle ground. This tendency has grown stronger in recent years, it seems, likely in response to an increase in the sheer amount of information available. This increased complexity, while important in allowing access to numerous perspectives on an issue, also triggers our innate response to minimize cost during decision-making by taking “cognitive shortcuts” and receiving cues from trusted authorities, including news outlets. This is exacerbated by the rise in the use of social media and shrinking attention spans, which quench our taste for nuance in favor of extremes. The constant awareness of one’s (online) identity in relation to that of a larger group encourages consolidation around these extremes. The result is the transformation of ideas into ideologies and the polarization of the people involved.

 

These phenomena are evident in the response to Mukherjee’s New Yorker article, but they can be spotted in many other areas of scientific discourse. This, unfortunately, is due in large part to a culture that rewards results, promotes an I-know-the-answer mentality, and encourages its members to adopt a binary vision of the world where there is a right and a wrong answer. Those who critiqued Mukherjee for placing too great an emphasis on the role of epigenetic mechanisms responded by placing the emphasis on transcription factors, trivializing the role of epigenetics. What got lost in this battle of extremes was a discussion of the complementary nature of both sets of discoveries – a discussion that would bridge, rather than divide, generations and perspectives.

 

While intra-academic squabbles are unproductive, the real danger of arguments fought in absolutes and along group identity lines lies at the interface of science and society. The world we live in is fraught with complex problems, and Science, humanity’s vessel of ingenuity, is called upon to provide clean, definitive solutions. This is an impossible task in many instances, as important global challenges are not purely scientific in nature. They each contain a very deep human element. Political, historical, religious, and cultural views act as filters through which information is perceived and function to guide one’s stance on complex issues. When these issues include a scientific angle, confidence in the institution of science as a (trustworthy) authority plays a huge role.

 

One of the most divisive of such issues is that of genetically modified crops (GMOs). GMOs are crops produced by the introduction or modification of DNA sequence to incorporate a new trait or alter an existing trait. While the debate spans from concerns about the safety of GMOs for human and environmental health to economic concerns over the potentially disparate benefits to large agribusiness and small farmers, these details are lost in the conversation. Instead, the debate is reduced to a binary: pro-GMO equals pro-science, anti-GMO equals anti-science. Again, the group with which one identifies, scientists included, plays a tremendous role in determining one’s stance on the issue. Polling public opinion reveals a similar pattern to that of climate change. Even though awareness of genetic engineering in crops has remained consistently low over the years, beliefs that GMOs pose a serious health hazard have increased. What’s worse, these debates treat all GMO crops the same simply because they are produced with the same methodology. While the opposition maintains a blanket disapproval of all engineered crops, the proponents don’t fare better, responding with indiscriminate approval.

 

Last month, the National Academy of Sciences released a comprehensive, 420-page report addressing concerns about GMOs and presenting an analysis of two decades of research on the subject. While the conclusions drawn largely support the idea that GMOs pose no significant danger for human and environmental health, the authors make certain to address the caveats associated with these conclusions. Though prompted by many to provide the public with “a simple, general, authoritative answer about GE (GMO) crops,” the committee refused to participate in “popular binary arguments.” As important as the scientific analysis is this element of the report, which serves to push the scientific community away from a culture of absolutes. While the evidence at hand shows no cause-and-effect relationship between GMOs and human health problems, for example, our ability to assess this is limited to short-term effects, as well as by our current ability to know what to look for and to develop assays to do so. The presence of these unknowns is a reality in all scientific research, and to ignore them, especially with regard to complex societal issues, only serves to strengthen the growing mistrust of science in our community and broaden the divide between people with differing opinions. As one review of the report states, “trust is not built on sweeping decrees.”

 

GMO crops, though, are only one of many issues of this sort; climate change and vaccine safety, for example, have been similarly fraught. And, unfortunately, our world promises to get a whole lot more complicated. With the reduced cost of high-throughput DNA sequencing and the relative ease of genome editing, it is becoming possible to modify not just crops, but farmed animals, as well as the wild flora and fauna that we share this planet with. Like the other issues discussed, these are not purely scientific problems. In fact, the rapid rate at which technology is developing creates a scenario in which the science is the easy part; understanding the consequences and the ethics of our actions yields the complications. This is exemplified by the potential use of CRISPR-driven gene drives to eradicate mosquito species that serve as vectors for devastating diseases (malaria, dengue, and Zika, for example). In 2015, 214 million people were affected by malaria and, of those, approximately half a million died. It is a moral imperative to address this problem, and gene drives (or other genome modification techniques) may be the best solution at this time. But the situation is much more complex than here-today, gone-tomorrow. For starters, the rise in the prevalence of mosquito-borne diseases has its own complex portfolio, likely involving climate change and human-caused habitat destruction and deforestation. With limited understanding of the interconnectedness of ecosystems, it is challenging to predict the effects of mosquito specicide on the environment or on the rise of new vectors of human disease. And, finally, this issue raises questions about the role of humans on this planet and the ethics of modifying the world around us. The fact is that we are operating within a space replete with unknowns, and the path forward is not to ignore these nuances or to approach these problems with an absolutist’s mindset. This only encourages an equal and opposite reaction in others and obliterates all hope of collective insight.

 

It is becoming ever more common for us to run away from uncertainty and nuance in search of simple truths. It is within the shelter of each of our groups and within the language of absolutes that we convince ourselves these truths can be found; but this is a misconception. Just as embracing complementarity in our understanding of the physical world can lead to greater insight, an awareness that no single approach can necessarily answer our world’s most pressing problems can actually push science and progress forward. When thinking about the relationship of science with society, gaining trust is certainly important but not the only consideration. It is also about cultivating an understanding that in the complex world in which we live there can exist multiple, mutually incompatible truths. It is our job as scientists and as citizens of the world to navigate toward, rather than away from, this terrain to gain a richer understanding of problems and thus best be able to provide a solution. Borrowing the words of physicist Frank Wilczek, “Complementarity is both a feature of physical reality and a lesson in wisdom.”

 

Measuring the Value of Science: Keeping Bias out of NIH Grant Review

 

By Rebecca Delker, PhD

Measuring the value of science has always been – and, likely, will always remain – a challenge. However, this task, with regard to federal funding via grants, has become increasingly more daunting as the number of biomedical researchers has grown substantially and the available funds contracted. As a result of this anti-correlation, funding rates for NIH grants, most notably, the R01, have dropped precipitously. The most troubling consequences of the current funding environment are (1) the concentration of government funds in the hands of older, established investigators at the cost of young researchers, (2) a shift in the focus of lab-heads toward securing sufficient funds to conduct research, rather than the research itself and (3) an expectation for substantial output, increasing the demands for preliminary experiments and discouraging the proposal of high-risk, high-reward projects. The federal grant system has a direct impact on how science is conducted and, in its current form, restricts intellectual freedom and creativity, promoting instead guaranteed, but incremental, scientific progress.

 

History has taught us that hindsight is the only reliable means of judging the importance of science. It was sixteen years after the death of Gregor Mendel – and thirty-five years after his seminal publication – before researchers acknowledged his work on genetic inheritance. The rapid advance of HIV research in the 1980s was made possible by years of retroviral research conducted decades prior. Thus, to know the value of research prior to publication, or even a handful of years after, is extremely difficult, if not impossible. Nonetheless, science is an innately forward-thinking endeavor and, as a nation, we must do our best to fairly distribute available government funds to the most promising research endeavors while ensuring that creativity is not stifled. At the heart of this task lies a much more fundamental question – what is the best way to predict the value of scientific research?

 

In a paper published last month in Cell, Ronald Germain joins the conversation of grant reform and tackles this question by proposing a new NIH funding system that shifts the focus from project-oriented to investigator-oriented grants. He builds his new system on the notion that the track record of a scientist is the best predictor of future success and research value. By switching to a granting mechanism similar to that of privately funded groups like the HHMI, he asserts, the government can distribute funds more evenly, as well as free up time and space for creativity in research. Under the new plan, funding for new investigators would be directly tied to securing a faculty position by providing universities “block grants,” which are distributed to new hires. In parallel, individual grants for established investigators would be merged into one (or a few) grant(s), covering a wider range of research avenues. For both new and established investigators, the funding cycle would be increased to 5-7 years and – the most significant departure from the current system – grant renewal would depend primarily on a retrospective analysis of work completed during the prior years. The foundation for the proposed granting system relies on the assumption that past performance, with regard to output, predicts future performance. As Germain remarks, most established lab-heads trust a CV over a grant proposal when making funding decisions; but it is exactly this component of the proposal – of our current academic culture – that warrants a more in-depth discussion.

 

Germain is not the first to call into question the reliability of current NIH peer review. As he points out, funding decisions for project-oriented grants are greatly influenced by the inclusion of considerable preliminary data, as well as by form and structure over content. Others go further and argue that the peer review process is only capable of weeding out bad proposals, but fails at accurately ranking the good. This conclusion is supported by studies that establish a correlation between prior publication, not peer review score, and research outcome. (It should be noted that a recent study following the outcomes of more than 100,000 funded R01 grants found that peer review scores are predictive of grant outcome, even when controlling for the effects of institute and investigator. The contradictory results of these two studies cannot yet be explained, though anecdotal evidence falls heavily in support of the former conclusions.)

 

Publication decisions are not without biases. Journals are businesses and, as such, benefit from publishing headline-grabbing science, creating an unintended bias against less trendy, but high quality, work. The more prestigious the journal, the higher its impact factor, the more this pressure seems to come into play. Further, just as there is a necessary skill set associated with successful grant writing that goes beyond the scientific ideas, publication success depends on more factors than the research itself. An element of “story-telling” can make research much more appealing; and human perception of the work during peer review can easily be influenced by name recognition of the investigator and/or institute. I think it is time to ask ourselves if past publication record is truly predictive of future potential, or, if it simply eases the way to additional papers.

 

In our modern academic culture, the quality of research and of scientists is often judged by quantitative measures that, at times, can mask true potential. Productivity, measured as the number of papers published in a given period of time, has gained momentum in recent years as a stand-in for the quality of a scientist. As Germain states, a “highly competent investigator” is unlikely “to fail to produce enough … to warrant a ‘passing grade’.” The interchangeability of competence and output has been taken to such extremes that pioneering physicist and Nobel Prize winner Peter Higgs has publicly stated that he would be overlooked in current academia because of the requirement to “keep churning out papers.” The demand for rapid productivity and high impact factors has caused an increase in the publication of poorly validated findings, as well as in retraction rates due to scientific misconduct. The metrics currently used to value science are just as dangerous to the progress of science as the restrictions placed on research by current funding mechanisms, if not more so.

 

I certainly do not have a fail-proof plan to fix the current funding problems; I don’t think anyone does. But I do think that we need to look at grant reform in the context of the larger issues plaguing the biomedical sciences. As a group of people who have chosen a line of work founded in doing/discovering/inventing the impossible, we have taken the easy way out when approached with measuring the value of research. Without the aid of hindsight, this task will never be objective, and assigning quantitative measures like impact factor, productivity, and the h-index has proven only to generate greater bias in the system. We must embrace the subjectivity present in our review of scientific ideas while remaining careful not to vandalize scientific progress with bias. Measures that bring greater anonymity to the grant review process and greater emphasis on qualitative, descriptive assessments of past work and future ideas will help lessen the influence of human bias and make funding fairer. As our culture stands, a retrospective review process with a focus on output, as Germain proposes, runs the risk of importing into grant review our flawed, and highly politicized, methods of judging the quality of science. I urge that, in parallel with grant reform, we begin to change the metrics we use to measure the value of science.
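Of the metrics mentioned above, the h-index is the most mechanical to compute, which is part of both its appeal and its danger. A minimal sketch (the citation counts are hypothetical) shows how little of a scientist’s body of work the single number actually captures:

```python
def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each.

    `citations` is a list of per-paper citation counts (hypothetical here).
    """
    h = 0
    # Rank papers from most- to least-cited; h is the last rank whose
    # citation count still meets or exceeds the rank itself.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Two hypothetical five-paper careers with very different profiles:
print(h_index([10, 8, 5, 4, 3]))   # -> 4
print(h_index([100, 2, 1, 1, 0]))  # -> 2
```

Note that the second author’s single highly influential paper barely moves the index – exactly the kind of distortion the essay argues against.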

 

Though NIH funding-related problems and the other systemic flaws of our culture seem at an all-time high right now, the number of publications addressing these issues has also increased, especially in recent years. Now, more than ever, scientists at all stages recognize the immediacy of the problems and are engaging in conversations both in person and online to brainstorm potential solutions. A new website serves as a forum for all interested to join the discussion and contribute reform ideas – grant-related or otherwise. With enough ideas and pilot experiments from the NIH, we can ensure that the best science is funded and conducted. Onward and upward!

 

Open access: the future of science publishing?

By Florence Chaverneff

On the eve of receiving the Nobel Prize in Physiology or Medicine in 2013, Randy Schekman shook the scientific world in an altogether different manner when he announced in the Guardian newspaper that he and his group would boycott the three leading scientific journals. These bastions of scientific publishing have long been placed on a pedestal by the research community the world over and regarded as repositories of excellence in science. Their reputation is tightly associated with high ‘impact factors’, a parameter determined by article citations, which Schekman judges to be a “gimmick” and a “deeply flawed measure, pursuing which has become an end in itself – and is damaging to science”. Yet career advancement in academic research is heavily – if not exclusively – reliant on individuals getting their work published in these high-impact scientific journals, which Schekman calls “luxury journals”, comparing them to the bonuses common on Wall Street, and from which “science must break [away]”. He deems that “the result [of such a change] will be better research that better serves science and society”. The Nobel laureate touts the open access model for scientific publishing, presenting it as all-around anti-elitist, which… it is.
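For readers unfamiliar with the metric, the standard two-year impact factor Schekman criticizes is simple arithmetic: citations received in a given year to articles a journal published in the previous two years, divided by the number of citable items it published in those two years. A minimal sketch, with entirely hypothetical numbers:

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """Two-year journal impact factor.

    citations_to_prev_two_years: citations received in year Y to articles
        published in years Y-1 and Y-2.
    citable_items_prev_two_years: articles/reviews published in Y-1 and Y-2.
    (Both figures below are hypothetical.)
    """
    return citations_to_prev_two_years / citable_items_prev_two_years

# A journal whose 600 recent articles drew 24,000 citations this year:
print(impact_factor(24000, 600))  # -> 40.0
```

The simplicity is the point of the “gimmick” charge: a single ratio over a whole journal says nothing about the quality of any individual article in it.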

 

In 2001, the Budapest Open Access Initiative defined open access for peer-reviewed journal articles by its “free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of these articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself”.

 

This is how open access makes for a more level playing field: by allowing immediate dissemination of scientific findings without restrictions, and by accepting articles without highly demanding criteria while maintaining sound peer-review practices. This comes in sharp contrast to the 300-year-old model of subscription-based scientific publishing, which accepts limited numbers of articles in each issue and requires exceedingly demanding standards for acceptance. This results in significant publication delays and considerable time and effort spent polishing articles for publication. Time which could be spent… doing research.

 

While many in the community will agree on the benefits granted by this still recent and evolving model of science publishing, open access journals, being less established than older household names, and most of them lacking an impact factor, may not appear to be a prime choice for researchers. The question then can be posed: what would it take to bring about a shift in attitudes whereby open access publishing would be favored? Granting agencies and academic institutions, which contribute to setting the standards for scientific excellence, need to start being more accepting of non-traditional models of scientific publication, and to judge on the quality of research, not solely on journal impact factor. National policies encouraging open access publishing are also paramount to support such a shift. Moves in that direction are underway in the UK with a policy formulated by the Research Councils, and in the European Union with the Horizon 2020 Open Research Data Pilot project, OpenAIRE. In the US, the Fair Access to Science and Technology Research Act and the Public Access to Public Science Act, which aim “to ensure public access to published materials concerning scientific research and development activities funded by Federal science agencies”, would, if passed, be a step in the right direction. All else that is needed might be a little time.

 

Should Systematic Review be a Bigger Part of Science?

 

 

By Celine Cammarata

For years, groups such as the Cochrane Collaboration and the Campbell Collaboration have worked to support and promote systematic review of medical and social policy research, respectively. These reviews can then help decision-makers and practitioners on the ground – doctors, public health officials, policy developers, etc. – to make scientifically based choices without having to wade through hundreds of journal articles and sort the diverse fragments of evidence provided. In a Lancet editorial last November, authors Iain Chalmers and Magne Nylenna expounded on how systematic reviews are critical for those within science as well, particularly in the development of new research. Given these lines of reasoning, should we as scientists try to elevate systematic review to a more esteemed position in the world of research?

Systematic reviews differ from traditional narrative-style reviews in several ways. A traditional review generally walks readers through the current state of a field and provides qualitative descriptions of the most relevant past work. In contrast, a systematic review seeks to answer a specific research question, lays out a priori criteria to determine which studies will and will not be included in the review, uses these criteria to find all matching work (as far as possible), and combines all this evidence to answer the question, often by way of a meta-analysis.
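The meta-analysis step can take many forms; one of the simplest is inverse-variance, fixed-effect pooling, sketched below with hypothetical effect sizes and variances. This illustrates the general idea only, not the specific statistical methods any particular Cochrane or Campbell review uses:

```python
import math

def fixed_effect_pool(effects, variances):
    """Fixed-effect meta-analysis by inverse-variance weighting.

    Each study's weight is 1/variance, so more precise studies pull the
    pooled estimate harder. Inputs here are hypothetical effect sizes.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    standard_error = math.sqrt(1.0 / sum(weights))
    return pooled, standard_error

# Three hypothetical studies of the same intervention; the third is
# twice as precise (half the variance), so it carries double weight.
effect, se = fixed_effect_pool([0.30, 0.50, 0.40], [0.04, 0.04, 0.02])
print(round(effect, 3), round(se, 3))  # -> 0.4 0.1
```

The pooled estimate also comes with a smaller standard error than any single study, which is precisely why synthesized evidence can answer a question that individual studies leave open.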

Chalmers and Nylenna argued that many scientists fail to systematically build future work upon a thorough evaluation of past evidence. This, the authors believe, is problematic both ethically and economically, as it can lead to unnecessary duplication of work, continued research on a question that has already been answered, and waste of research animals and funding (see the Evidence Based Research Network site for more on research waste). Moreover, research synthesis as supported by Cochrane and Campbell helps package existing scientific findings into something that practitioners can use, thus greatly facilitating translational research – one of science’s hottest buzzwords, and with good reason. On the flip side, as Chalmers and Nylenna argue, if a field does not actively synthesize its findings, the resulting inefficiency in answering overall research questions can have significant consequences if the issue at hand has important health implications.

I think there are many reasons large-scale research synthesis is currently less than appealing to scientists. On the production side, preparing a systematic review can be extremely time consuming and generally offers little career reward. On the usage side, some researchers may not consider a systematic review necessary or even preferable as a basis for future work – they may feel that less systematic means are actually better suited to the situation, for instance if they have less confidence in some findings than others based on personal knowledge of a study’s execution. Additionally, investigators may consider narrative reviews a sufficient basis for future studies even if these reviews do not employ meta-analysis, for instance if such narrative reviews were authored by leaders in the field whose expertise and scientific judgment are respected.

What would it look like to put research synthesis in a position of greater prominence? For one thing, as mentioned above, contributing to reviews would likely have to be incentivized if investigators are to be enticed away from their busy schedules, so this would constitute a change in the current academic reward structure. In addition, if scientists saw research synthesis as more valuable than individual high-priority papers, this might both necessitate and foster a more collaborative attitude. Doing research with the explicit goal of making it usable to those who will build off it and filling specific holes in the current body of knowledge may drive very different experiments than does a goal of producing exciting, flashy papers (obviously this is not an either-or situation – in fact I think the vast majority of scientists work somewhere in the middle of the spectrum between these poles).

One step in this direction might be the growing movement of data sharing. Another might be greater coordination within a field about methodology and research questions, which could streamline synthesis. For example, a recent Campbell review on Cognitive-Behavioral Therapy found that of 394 potentially relevant studies, only 7 were ultimately eligible for inclusion in the review, indicating that many investigators either used insufficiently rigorous methodology, fell short of fully reporting data, or prioritized different design aspects than those review authors needed to address the question at hand.

 

Should these changes be made? To me, this remains somewhat opaque. Arguments such as Chalmers and Nylenna’s are strong, and a focus on synthesis could come hand-in-hand with some refreshing changes in how science is done. But systematic review is not the only tool in the toolbox. For now, it remains a choice each scientist will have to make for her or himself.