The Fake Drug Problem

 

By Gesa Junge, PhD

Tablets, injections, and drops are convenient ways to administer life-saving medicine – but there is no way to tell what’s in them just by looking, and that makes drugs relatively easy to counterfeit. Counterfeit drugs are medicines that contain the wrong amount or type of active ingredient (the vast majority of cases), are sold in fraudulent packaging, or are contaminated with harmful substances. A very important distinction here: counterfeit drugs do not equal generic drugs. Generic drugs contain the same type and dose of active ingredient as a branded product and have undergone clinical trials, and they, too, can be counterfeited. In fact, counterfeiting can affect any drug, and although the main targets, particularly in Europe and North America, have historically been “lifestyle drugs” such as Viagra and weight loss products, fake versions of cancer drugs, antidepressants, antimalarial drugs and even medical devices are increasingly reported.

The consequences of counterfeit medicines can be fatal, for example due to toxic contaminants, or due to ineffective drugs being used to treat life-threatening conditions. According to a BBC article, over 100,000 people die each year due to ineffective malaria medicines, and overall, Interpol puts the number of deaths due to counterfeit pharmaceuticals at up to a million per year. There are also other public health implications: antibiotics in doses that are too low may not help a patient fight an infection, but they can be sufficient to induce resistance in bacteria, and counterfeit painkillers containing fentanyl, a powerful opioid, are a major contributor to the opioid crisis, according to the DEA.

It seems nearly impossible to accurately quantify the global market for counterfeit pharmaceuticals, but it may be as much as $200bn, or possibly over $400bn. The profit margin on fake drugs is huge because the expensive part of a drug is the active ingredient, which can relatively easily be replaced with cheap, inert material. These inactive pills can then be sold at a fraction of the price of the real drug while still making a profit. According to a 2011 report by the Stimson Center, the large profit margin, combined with comparatively low penalties for manufacturing and selling counterfeit pharmaceuticals, makes counterfeiting drugs a popular revenue stream for organized crime, including global terrorist organizations.

Even though the incidence of drug counterfeiting is very hard to estimate, it is certainly a global problem. It is most prevalent in developing countries, where 10-30% of all medication sold may be fake, and less so in industrialized countries (below 1%), according to the CDC. In the summer of 2015, Interpol launched a coordinated campaign in 115 countries during which millions of counterfeit medicines with an estimated value of $81 million were seized, including everything from eye drops and tanning lotion to antidepressants and fertility drugs. The operation also shut down over 2,400 websites and removed 550 adverts for illegal online pharmacies in an effort to combat online sales of illegal drugs.

There are several methods to help protect the integrity of pharmaceuticals, including tamper-evident packaging (e.g. blister packs), which shows customers whether the packaging has been opened. However, the bigger problem lies in counterfeit pharmaceuticals making their way into the supply chain of drug companies. Tracking technology in the form of barcodes or RFID chips can establish a data trail that allows companies to follow each lot from manufacturer to pharmacy shelf, and as of 2013, tracking of pharmaceuticals throughout the supply chain is required under the Drug Quality and Security Act. But this still does not necessarily tell a customer whether the tablets they bought are fake.
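
To make the idea of a data trail concrete, here is a minimal sketch of what lot-level tracking might look like in code. The record format, party names, and verification logic are all invented for illustration; real systems follow the serialization requirements of the law mentioned above.

```python
# Hypothetical custody trail for one lot of medicine (illustration only)
lot_history = {
    "LOT-2017-0042": [
        ("2017-01-05", "Manufacturer plant, NJ"),
        ("2017-01-12", "Wholesaler warehouse, OH"),
        ("2017-01-20", "Corner pharmacy, IL"),
    ]
}

def verify_chain(lot_id, trusted_parties):
    """Flag a lot if it has no record or if any handoff involves an unknown party."""
    events = lot_history.get(lot_id, [])
    if not events:
        return "no record - cannot verify"
    for date, party in events:
        if party not in trusted_parties:
            return f"suspect handoff to '{party}' on {date}"
    return "chain of custody intact"

print(verify_chain("LOT-2017-0042", {
    "Manufacturer plant, NJ",
    "Wholesaler warehouse, OH",
    "Corner pharmacy, IL",
}))
```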

Ingredients in a tablet or solution can fairly easily be identified by chromatography or spectroscopy. However, these methods require highly specialized, expensive equipment; most drug companies and research institutions have access to it, but it is not widely available in many parts of the world. To address this problem, researchers at the University of Notre Dame have developed a very cool, low-tech method to quickly test drugs for their ingredients: a tablet is scratched across a paper card coated with various chemicals, and the paper is then dipped in water. The chemicals react with ingredients in the drug to form colors, resulting in a “color bar code” that can then be compared to known samples of filler materials commonly used in counterfeit drugs, as well as active pharmaceutical ingredients.
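
The comparison step is easy to picture in code. In this sketch, each reference code is a made-up string with one letter per reagent lane, and the match is scored by simple lane-by-lane agreement; the real test uses a library of experimentally determined color patterns.

```python
# Invented reference "color bar codes": one letter per reagent lane
reference_codes = {
    "acetaminophen": "ROYGB",
    "chalk (filler)": "WWYGW",
    "starch (filler)": "WOYWW",
}

def best_match(observed):
    # Score each reference by the number of lanes whose color agrees
    scores = {name: sum(o == r for o, r in zip(observed, code))
              for name, code in reference_codes.items()}
    return max(scores.items(), key=lambda kv: kv[1])

name, score = best_match("WWYGB")
print(f"Closest reference: {name} ({score}/5 lanes match)")
```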

Recently, there have also been policy efforts to address the problem. The European Commission released its Falsified Medicines Directive in 2011, which established counterfeit medicines as a public health threat and called for stricter penalties for producing and selling counterfeit medicines. The directive also established a common logo to be displayed on websites, allowing customers to verify they are buying through a legitimate site. In the US, VIPPS accredits legitimate online pharmacies, and in May of this year, a bill calling for stricter penalties on the distribution and import of counterfeit medicine was introduced in Congress. In addition, there have been various public awareness campaigns, for example last year’s MHRA #FakeMeds campaign in the UK, which was specifically focussed on diet pills sold online, and the FDA’s “BeSafeRx” programme, which offers resources on how to buy drugs online safely.

In spite of all the efforts to raise awareness and address the problem of fake drugs, a major complication remains: generic drugs, as well as branded drugs, are often produced overseas, and many are sold online, which saves costs and can bring the price of medication down, making it affordable to many people. The key will be to strike a balance between restricting counterfeiters’ access to the supply chain and not restricting access to affordable, quality medication for the patients who need it.

HeLa, the VIP of cell lines

By Gesa Junge, PhD

A month ago, The Immortal Life of Henrietta Lacks was released on HBO, an adaptation of Rebecca Skloot’s 2010 book of the same title. The book, and the movie, tell the story of Henrietta Lacks, the woman behind the first cell line ever generated, the famous HeLa cell line. From a biologist’s standpoint, this is a really unique thing, as we don’t usually know who is behind the cell lines we grow in the lab – which, incidentally, is at the centre of the controversy around HeLa cells. HeLa was established over 60 years ago, and today a PubMed search for “HeLa” returns 93,274 results.

Cell lines are an integral part of research in many fields, and these days there are probably thousands of them. Usually, they are generated from patient samples which are immortalised and can then be grown in dishes, put under the microscope, frozen down, thawed and revived, have their DNA sequenced, their protein levels measured, be genetically modified, treated with drugs; in short, they make biomedical research possible. As a general rule, work with cancer cell lines is an easy and cheap way to investigate biological concepts, test drugs and validate methods, mainly because cell lines are cheap compared to animal research, readily available, easy to grow, and there are few concerns around ethics and informed consent. This is because although they originate from patients, cell lines are not considered living beings in the sense of having feelings, lives and rights; they are for the most part considered research tools. This is an easy argument to make, as almost all cell lines are immortalised and therefore different from the original tissue the patient donated, and, most importantly, they are anonymous, so that any data generated cannot be related back to the person.

But this is exactly what did not happen with HeLa cells. Henrietta Lacks’s cells were taken without her knowledge or consent after she was treated for cervical cancer at Johns Hopkins in 1951. At that point, nobody had managed to keep human cells alive and dividing outside the body, so when Henrietta Lacks’s cells started to divide and grow, the researchers were excited; yet nobody ever told her, or her family. Henrietta Lacks died of her cancer later that year, but her cells survived. For more on this, there is a great Radiolab episode that features interviews with the scientists, as well as with Rebecca Skloot and Henrietta Lacks’s youngest daughter, Deborah Lacks Pullum.

In the 1970s, some researchers did reach out to the Lacks family, not because of ethical concerns or gratitude, but to request blood samples. This naturally led to confusion amongst family members as to how Henrietta Lacks’s cells could be alive, and be used in labs everywhere, even go to space, while Henrietta herself had been dead for twenty years. Nobody had told them, let alone explained the concept of cell lines to them.

The lack of consent and information is one side of it, but in addition to being an invaluable research tool, cell lines are also big business: the global market for cell line development (which includes cell lines, the media they grow in, and other reagents) is worth around 3 billion dollars, and it’s growing fast. There are companies that specialise in making cell lines of certain genotypes, which are sold for hundreds of dollars, and different cell types need different growth media and additives in order to grow. This adds a dimension of financial interest, and raises the question of whether the family should share in the profits derived from research involving HeLa cells.

We have a lot to be grateful to HeLa cells for, and not just biomedical advances. The history of HeLa brought up a plethora of ethical issues around privacy, information, communication and consent that arguably were overdue for discussion. Innovation usually outruns ethics; but while nowadays informed consent is standard for all research involving humans, and patient data is anonymised (or at least pseudonymised and kept confidential), there were no such rules in 1951. There was also apparently no attempt to explain scientific concepts and research to non-scientists.

And clearly we still have not fully grasped the issues at hand: in 2013, researchers sequenced the HeLa cell genome – and published it. Again, without the family’s consent. The main argument in defence of publishing the HeLa genome was that the cell line was too different from the original cells to provide any information on Henrietta Lacks’s living relatives. There may be some truth in that – cell lines change a lot over time – but even after all these years there will still be information about Henrietta Lacks and her family in there, and genetic information is personal and should be kept private.

HeLa cells have made their way into research labs around the world, and have even gone to space and on deep sea dives. They are now even contaminating other cell lines (which could perhaps be interpreted as just karma). Sadly, the spotlight on Henrietta Lacks’s life has sparked arguments amongst family members around the use and distribution of profits and benefits from the book and movie, and around the portrayal of Henrietta Lacks in the story. Johns Hopkins says it has no rights to the cell line and has not profited from it, and it has established symposia, scholarships and awards in Henrietta Lacks’s honour.

The NIH has established the HeLa Genome Data Access Working Group, which includes members of Henrietta Lacks’s family. Any researcher wanting to use the HeLa cell genome in their research has to request the data from this committee and explain their research plans, including any potential commercialisation. The data may only be used in biomedical research, not ancestry research, and no researcher is allowed to contact the Lacks family directly.

On Science and Values

 

By Rebecca Delker, PhD

 

In 1972 nuclear physicist Alvin Weinberg defined ‘trans-science’ as distinct from science (references here, here). Trans-science – a phenomenon that arises most frequently at the interface of science and society – includes questions that, as the name suggests, transcend science. They are questions, he says, “which can be asked of science and yet which cannot be answered by science.” While most of what concerned Weinberg were questions of scientific fact that could not (yet) be answered by available methodologies, he also understood the limits of science when addressing questions of “moral and aesthetic judgments.” It is this latter category – the differentiation of scientific fact and value – that deserves attention in the highly political climate in which we now live.

Consider this example. In 2015–2016, moves to increase the use of risk assessment algorithms in criminal sentencing received a lot of heat (and rightly so) from critics (references here, here). In an attempt to eliminate human bias from criminal justice decisions, many states rely on science in the form of risk assessment algorithms to guide decisions. Put simply, these algorithms build statistical models from population-level data covering a number of factors (e.g. gender, age, employment) to provide a probability of repeat offense for the individual in question. Until recently, the use of these algorithms has been restricted, but now states are considering expanding their use to sentencing. What this fundamentally means is that a criminal’s sentence depends not only on the past and present, but also on a statistically derived prediction of the future. While the intent may have been to reduce human bias, many argue that risk assessment algorithms achieve the opposite; and because the assessment is founded in data, it serves to generate a scientific rationalization of discrimination. This is because, while the data underpinning the statistical models does not include race, it includes factors (e.g. education level, socioeconomic background, neighborhood) that are, themselves, revealing of centuries of institutionalized bias. To use Weinberg’s terminology, this falls into the first category of trans-science: the capabilities of the model fall short of capturing the complexity of race relations in this country.
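
For readers who want to see what “building a statistical model from population-level factors” amounts to, here is a toy sketch in Python. Everything here is synthetic and simplified – real tools are proprietary and far more elaborate – but the structure (weighted factors in, probability of reoffense out) is the point:

```python
# Toy risk-assessment model: synthetic data, illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical factors; race is deliberately absent, but "neighborhood"
# can act as a proxy for it, which is the article's core criticism.
age = rng.integers(18, 70, n)
employed = rng.integers(0, 2, n)
education = rng.integers(0, 4, n)
neighborhood = rng.integers(0, 10, n)

X = np.column_stack([age, employed, education, neighborhood])
y = (rng.random(n) < 0.3).astype(int)  # made-up "reoffended" labels

model = LogisticRegression(max_iter=1000).fit(X, y)

# For a new individual, the model outputs a probability of repeat offense:
individual = np.array([[25, 0, 1, 7]])
print(f"Predicted reoffense probability: {model.predict_proba(individual)[0, 1]:.2f}")
```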

But this is not the whole story. Even if we could build a model without the above-mentioned failings, there are still more fundamental ethical questions that need addressing. Is it morally correct to sentence a person for crimes not yet committed? And, perhaps even more crucially, does committing a crime mean that one forfeits the right to be viewed (and treated) as an individual – a value US society holds in high regard – and should instead be reduced to a trend line derived from the actions of others? It is these questions that fall into the second category of trans-science: questions of morality that science has no place in answering. When we turn to science to resolve such questions, however, we blind ourselves to the underlying, more complex terrain of values that makes up the debate at hand. By default, and perhaps inadvertently, we grant science the authority to declare our values for us.

Many would argue that this is not a problem. In fact, in a 2010 TED talk neuroscientist Sam Harris claimed that “the separation between science and human values is an illusion.” Values, he says, “are a certain kind of fact,” and thus fit into the same domain as, and are demonstrable by, science. Science and morality become one and the same because values are facts specifically “about the well-being of conscious creatures,” and our moral duty is to maximize this well-being.

The flaw in the argument (which many others have pointed out as well) is that rather than allowing science to empirically determine a value and moral code – as he argued it could – he presupposed it. That the well-being of conscious creatures should be valued, and that our moral code should maximize it, cannot actually be demonstrated by science. I will also add that science can provide no definition of ‘well-being,’ nor has it yet – if it ever can – been able to answer the questions of what consciousness is and which creatures have it. Unless human intuition steps in, this shortcoming of science can lead to dangerous and immoral acts.

What science can do, however, is help us stay true to our values. This, I imagine, is what Harris intended. Scientific studies play an indispensable role in informing us if and when we have fallen short of our values, and in generating the tools (technology/therapeutics) that help us achieve these goals. To say that science has no role in the process of ethical decision-making is as foolish as relying entirely on science: we need both facts and values.

While Harris’ claims of the equivalency of fact and value may be more extreme than most would overtly state, they are telling of a growing trend in our society to turn to science to serve as the final arbiter of even the most challenging ethical questions. This is because in addition to the tangible effects science has had on our lives, it has also shaped the way we think about truth: instead of belief, we require evidence-based proof. While this is a noble objective in the realm of science, it is a pathology in the realm of trans-science. This pathology stems from an increasing presence in our society of Scientism – the idea that science serves as the sole provider of knowledge.

But we live in the post-fact era. There is a war against science. Fact denial runs rampant through politics and media. There is not enough respect for facts and data. I agree with each of these points; but it is Scientism, ironically, that spawned this culture. Hear me out.

The ‘anti-science’ arguments – from anti-evolution to anti-vaccine to anti-GMO to climate change denial – never actually deny the authority of science. Rather, they attack scientific conclusions by creating a pseudoscience (think: creationism), pointing to flawed and/or biased scientific reporting (think: hacked climate data emails), clinging to scientific reports that support their arguments (think: the now debunked link between vaccines and autism), or by homing in on concerns answerable by science as opposed to others (think: the safety of GMOs). These approaches are not justifiable; nor are they rigorously scientific. What they are, though, is a demonstration that even the people fighting against science recognize that the only way to do so is by appealing to its authority. As ironic as it may be, fundamental to the anti-science argument is the acceptance that the only way to ‘win’ a debate is to either provide scientific evidence or to poke holes in the scientific evidence at play. Their science may be bad, but they are working from a foundation of Scientism.

 

Scientific truth has a role in each of the above debates, and in some cases – vaccine safety, for example – it is the primary concern; but too often scientific fact is treated as the only argument worth consideration. An example from conservative writer Yuval Levin illustrates this point. While I do not agree with Levin’s values regarding abortion, the topic at hand, his points are worth considering. Levin recounts that during a hearing in the House of Representatives regarding the use of the abortion drug RU-486, a DC delegate argued that because the FDA decided the drug was safe for women, the debate should be over. As Levin summarized, “once science has spoken … there is no longer any room for ‘personal beliefs’ drawing on non-scientific sources like philosophy, history, religion, or morality to guide policy.”

When we break down the abortion debate – as well as most other political debates – we realize that it is composed of matters of both fact and value. The safety of the drug (or procedure) is of utmost importance and can, as discussed above, be determined by science; this is a fact. But, at the heart of the debate is a question of when human life begins – something that science can provide no clarity on. To use scientific fact as a façade for a value system that accepts abortion is as unfair as denying the scientific fact of human-caused climate change: both attempts focus on the science (by either using or attacking) in an effort to thwart a discussion that encompasses both the facts of the debate and the underlying terrain of values. We so crave absolute certainty that we reduce complex, nuanced issues to questions of scientific fact – a tendency that is ultimately damaging to both social progress and society’s respect for science.

By assuming that science is the sole provider of truth, our culture has so thoroughly blurred the line between science and trans-science that scientific fact and value are nearly interchangeable. Science is misused to assert a value system; and a value system is misused to selectively accept or deny scientific fact. To get ourselves out of this hole requires that we heed the advice of Weinberg: part of our duty as scientists is to “establish what the limits of scientific fact really are, where science ends and trans-science begins.” Greater respect for facts may paradoxically come from a greater respect for values – or at the very least, allowing space in the conversation for them.

 

How Science Trumps Trump: The Future of US Science Funding

 

By Johannes Buheitel, PhD

I was never the best car passenger. It’s not that I can’t trust others, but there is something quite unsettling about letting someone else do the steering while not having any power over the situation yourself. On Tuesday, November 8th, I had exactly this feeling, but all I could do was sit back and let it play out on my TV set. Of course, you all know by now that I’m talking about the past presidential election, in which the American people (this excludes me) were tasked with casting their ballots in support of either former First Lady and Secretary of State Hillary Clinton or real estate mogul and former reality TV personality Donald Trump. And for all who are a bit behind on their Twitter feed (spoiler alert!): Donald Trump will be the 45th president of the United States of America following his inauguration on January 20th, 2017. Given the controversies around Trump and all the issues he stands for, there are many things that can, have been, and will be said about the implications for people living in the US but also elsewhere. But for us scientists, the most pressing question being asked left and right is an almost existential one: what happens to science and its funding in the US?

The short answer is: we don’t know yet. Not only has there been no meaningful discussion about these issues in public (one of the few exceptions being that energy policy question by undecided voter-turned-meme Ken Bone), but, even more worryingly, there is just not enough hard information on specific policies from the future Trump administration to go on. That means we’re left to make assumptions based on the handful of words Mr. Trump and his allies have shared during his campaign. And I’m afraid those paint a dire picture of the future of American science.

Trump has not only repeatedly said in the past that he does not believe the scientific evidence around climate change (even going as far as calling it a Chinese hoax), but also reminded us of his position just recently, when he appointed known climate change skeptic Myron Ebell to the transition team of the Environmental Protection Agency (EPA). He has furthermore endorsed the widespread (and, of course, misguided) belief that vaccines cause autism. His vice president, Mike Pence, publicly doubted that smoking can cause cancer as late as 2000, and has called evolution “controversial”.

According to specialists like Michael Lubell from the American Physical Society, all of these statements are evidence that “Trump will be the first anti-science president we have ever had.” But what does this mean for us in the trenches? The first thing you should know is that science funding is more or less a function of the overall US discretionary budget, which is in the hands of the United States Congress, says Matt Hourihan, director of the R&D Budget and Policy Program for the American Association for the Advancement of Science (AAAS). This would be a relief if Congress weren’t, according to Rush Holt, president of the AAAS, on a “sequestration path that […] will reduce the fraction of the budget for discretionary funding.” In numbers, this means that when the current budget deal expires next year, spending caps might drop by another 2.3%. Holt goes on to say that a reversal of this trend was always unlikely, even if the election had gone the other way, which doesn’t make the pill any easier to swallow. Congress might raise the caps, as it has done before, but this is of course not a safe bet, and could translate into a tight year for US science funding.

So if the budget is more or less out of the hands of Donald Trump, what power does he actually possess over matters of research funding? Well, the most powerful political instrument the president can wield is the executive order. But this power, too, is not unlimited; it could not, for example, be used to unilaterally reverse the fundamentals of climate policy, said David Goldston from the Natural Resources Defense Council (NRDC) during a webinar hosted by the AAAS shortly after the election. In particular, backing out of the Paris agreement, as Trump has threatened to do, would take at least four years and would require support from Congress (which, admittedly, is in Republican hands). And while the president might be able to “scoop out” the Paris deal through many smaller changes to US climate policy, this is unlikely to happen, at least not to a substantial degree, believes Rush Holt. The administration will soon start to feel push-back from the public, which, Holt said during the AAAS webinar, is not oblivious to the various impacts of climate change, like frequent droughts or the decline of fisheries in the country. There was further consensus among the panelists that science education funding will probably not be deeply affected: first, because this matter usually has bipartisan support, but also because only about 10% of the states’ education funding actually comes from the federal budget.

So, across the board, experts seem to be reluctantly positive. Whether this is just a serious case of denial or panic control, we don’t know, but even Trump himself has been caught calling for “investment in research and development across a broad landscape of academia,” and even seems to be a fan of space exploration. Our job as scientists now is to keep our heads high, to keep doing our research to the best of our abilities, but also to keep reaching out to the public, inviting people to be part of the conversation, and convincing them of the power of scientific evidence. Or, to say it with Rush Holt’s words: “We must make clear that an official cannot wish away what is known about climate change, gun violence, opioid addiction, fisheries depletion, or any other public issue illuminated by research.”

 

Taking Genome Editing out of the Lab: Cause for Concern?

By Rebecca Delker, PhD

Genome editing – the controlled introduction of modifications to the genome sequence – has existed for a number of years as a valuable tool to manipulate and study gene function in the lab; however, because of inefficiencies intrinsic to the methods used, the technique has, until now, been limited in scope. The advent of CRISPR/Cas9 genome editing technology, a versatile, efficient and affordable technique, not only revolutionized basic cell biology research but has opened the real possibility of the use of genome editing as a therapy in the clinical setting and as a defense against pests destructive to the environment and human health.

 

CRISPR – Clustered Regularly Interspaced Short Palindromic Repeats – when teamed up with the nuclease Cas9 to form CRISPR/Cas9, serves as a primitive immune system for bacteria and archaea, able to tailor a specific response to an invading virus. During viral invasion, fragments of the invader’s foreign genome are incorporated between the CRISPR repeats, forever encoding a memory of the attack in the bacterial genome. Upon future attack by the same virus, these memories can be called upon by transcribing the fragments to RNA, which, through Watson-Crick base-pairing, guides Cas9 to the viral genome, targeting it for destruction by induced double strand breaks (DSBs).

 

While an amazing and inspiring piece of biology in its own right, CRISPR/Cas9 did not skyrocket to fame until the discovery that this RNA/nuclease team could be programmed to target specific sequences and induce DSBs in the complex genomes of all species tested. Of course, the coolness factor of CRISPR technology does not end with the induction of DSBs but rather with the use of these breaks to modify the genome. Taking advantage of a cell’s natural DNA repair machinery, CRISPR-induced breaks can be repaired by re-gluing the broken ends in a manner that results in the insertion or deletion of nucleotides – indels, for short – that disrupt gene function. More interestingly for genome editing, though, DSBs can also serve as a portal for the insertion of man-made DNA fragments in a site-specific fashion, allowing the insertion of foreign genes or the replacement of faulty genes.
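
The targeting logic itself is simple enough to sketch in a few lines of Python. The sequences below are invented, and the guide is shortened from the usual 20 nucleotides for readability, but the rule is the real one: Cas9 cuts where the guide-matching sequence (the protospacer) is followed by an NGG PAM, about 3 bp upstream of the PAM.

```python
import re

# Made-up sequences for illustration; a real guide is 20 nt long
genome = "ATGCGTACGTTAGCACGTGATCGGATATACGTTAGCACGTGATAGGCTA"
guide = "ACGTTAGCACGTGAT"

# Find every protospacer followed by an NGG PAM
for match in re.finditer(guide + "[ACGT]GG", genome):
    # Cas9 cuts ~3 bp upstream of the PAM, leaving a blunt double-strand break
    cut = match.start() + len(guide) - 3
    print(f"Target at position {match.start()}, predicted cut after position {cut}")
```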

 

CRISPR/Cas9 is not the first technology developed to precisely edit genomes. The DNA-binding (and cutting) engineered proteins, TALENs and zinc finger nucleases (ZFNs), came into focus first but, compared to the RNA-guided Cas9 nuclease, are just a bit clunky: more complex in design, lower in efficiency, and less affordable. Even before these techniques, the advent of recombinant DNA technology in the 1970s allowed the introduction of foreign DNA into the genomes of cells and organisms. Mice could be made to glow green using a jellyfish gene before the use of nucleases – just less efficiently. Now, the efficiency of Cas9 and the general ease of use of the technology, paired with the decreased costs of genome sequencing, enable scientists to edit the genome of just about any species, calling to mind the plots of numerous sci-fi films.

 

While it is unlikely that we will find ourselves in a GATTACA-like situation anytime soon, the potential application of CRISPR genome editing to human genomes has sparked conversation in the scientific literature and popular press. Though genome modification of somatic cells (regulators of body function) is generally accepted as an enhanced version of gene therapy, editing of germline cells (carriers of hereditary information) has garnered more attention because the engineered modifications are inherited by generations to come. Many people, including some scientists, view this as a line that should never be crossed and argue that there is a slippery slope between editing disease-causing mutations and creating designer babies. Attempts by a group at Sun Yat-sen University in China to test the use of CRISPR in human embryos were referred to by many as irresponsible, and their paper was rejected from top journals including Nature and Science. It should be noted, however, that this uproar occurred despite the fact that the Chinese scientists were working with non-viable embryos left over from in vitro fertilization, and with approval from the appropriate regulatory organizations.

 

Modifying human beings is unnatural; and, as such, it seems to poke and prod at our sense of morality, eliciting the knee-jerk response of no. But, designer babies aside, how unethical is it to target genes to prevent disease – the ultimate preventative medicine, if you will? It is helpful to address this question in a broader context. All medical interventions – antibiotics, vaccinations, surgeries – are unnatural, but (generally) their ethics are not questioned because of their life-saving capabilities. If we look specifically at reproductive technology, there is precedent for controversial innovation. In the 1970s, when the first baby was born by in vitro fertilization (IVF), people were skeptical of scientists making test-tube babies in labs. Now, it is a widely accepted technique, and more than 5 million babies have been born through IVF.

 

Moving the fertilization process out of the body created the unique possibility of preventing the transmission of genetic diseases from parent to child. Pre-implantation genetic diagnosis (PGD), the screening of eggs or embryos for genetic mutations, allows for the selection of disease-free embryos for implantation. More recently, the UK (although not the US) legalized mitochondrial replacement therapy – a technique that replaces faulty mitochondria of the parental egg with those of a healthy donor, either prior to or post fertilization. Referred to in the press as the creation of three-parent babies, because genetic material is derived from three sources, this technique aims to prevent the transmission of debilitating mitochondrial diseases from mother to child. To draw clearer parallels to germline editing: mitochondria – energy-producing organelles that are the likely descendants of an endosymbiotic relationship between bacteria and eukaryotic cells – contain their own genome. Thus, although mitochondrial replacement is often treated as separate from germline editing because nuclear DNA is left untouched, the genomic content of the offspring is altered. There are, of course, naysayers who don’t think the technique should be used in humans, but largely this is not because of issues of morality; rather, their opposition is rooted in questions of safety.

 

Germline editing could be the next big development in assisted reproductive technology (ART), but, like mitochondrial replacement and all other experimental therapies, safety is of utmost concern. Most notably, the high efficiency of CRISPR/Cas9 relative to earlier technologies comes at a cost. It has been demonstrated in a number of model systems, including the human embryos targeted by the Chinese group, that in addition to the desired insertion, CRISPR produces off-target mutations that could be potentially dangerous. Further, because our understanding of many genetic diseases is limited, there remains a risk of unintended consequences due to unknown gene-environment interactions or the interplay of the targeted gene with other patient-specific genomic variants. The voluntary moratorium on clinical applications of germline editing in human embryos suggested by David Baltimore and colleagues is fueled by these unknowns. They stress the importance of initiating conversations between scientists, bioethicists, and government agencies to develop policies to regulate the use of genome editing in the clinical setting. Contrary to suggestions by others (and here), these discussions should not impede the progress of CRISPR research outside of the clinical setting. As a model to follow, a group of UK research organizations have publicly stated their support for the continuation of genome editing research in human embryos as approved by the Human Fertilisation and Embryology Authority (HFEA), the regulatory organization that oversees the ethics of such research. Already, a London-based researcher has requested permission to use CRISPR in human embryos, not as a therapeutic, but to provide insight into early human development.

 

Much of the ethics of taking genome editing out of the lab is, thus, intertwined with safety. It is unethical to experiment with human lives without taking every precaution to prevent harm and suffering. Genome editing technology is nowhere near the point at which it is safe to attempt germline modifications, although clinical trials are in progress testing the efficacy of ZFN-based editing of adult cells to reduce viral titers in patients with HIV. This is not to say that we will never be able to apply CRISPR editing to germline cells in a responsible and ethical manner, but it is imperative that it be subject to regulations to assure the safety of humans involved, as well as to prevent the misuse of the technology.

 

This thought process must also be extended to the application of CRISPR to non-human species, especially because it does not typically elicit the same knee-jerk response as editing human progeny. CRISPR has been used to improve the efficiency of so-called gene drives, which guarantee inheritance of inserted genes, in yeast and fruit flies; and they have been proposed for use in the eradication of malaria by targeting the carrier of disease, the Anopheles mosquito. It is becoming increasingly important to consider the morality of our actions with regard to other species, as well as the planet, when developing technologies that benefit humanity. When thinking about the use of CRISPR-based gene drives to manipulate an entire species it is of utmost importance to take into consideration unintended consequences to the ecosystem. Though the popular press has not focused much on these concerns, a handful of scientific publications have begun to address these questions, releasing suggested safety measures.
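
To see why gene drives demand this level of caution, it helps to look at the arithmetic of their spread. Below is a deliberately simple sketch (illustrative parameters only, ignoring fitness costs and resistance alleles): under Mendelian inheritance a heterozygous carrier passes an allele to about half its offspring, while a homing drive converts the other chromosome, pushing transmission toward 100%.

```python
# Toy model of allele spread under random mating.
# p: frequency of the engineered allele in the gene pool
# t: fraction of a heterozygote's gametes carrying the allele
#    (0.5 = normal Mendelian inheritance, ~0.95 = efficient homing drive)
def spread(t, generations=12, p=0.01):
    trajectory = [round(p, 3)]
    for _ in range(generations):
        # homozygotes (p^2) always transmit the allele;
        # heterozygotes (2p(1-p)) transmit it with probability t
        p = p * p + 2 * p * (1 - p) * t
        trajectory.append(round(p, 3))
    return trajectory

print("Mendelian:", spread(t=0.5))    # allele frequency stays at ~1%
print("Gene drive:", spread(t=0.95))  # allele sweeps toward fixation in ~10 generations
```

Roughly doubling each generation while rare, the drive allele takes over the population; that speed, applied to a wild mosquito population, is exactly why the publications mentioned above urge safeguards before any release.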

 

There is no doubt that CRISPR is a powerful technology and will become more powerful as our understanding of the system improves. As such, it is critical to discuss the social implications of using genome editing as a human therapeutic and an environmental agent. Such discussions have begun with the convention in Napa attended by leading biomedical researchers and will likely continue with similar meetings in the future. This dialogue is necessary to ensure equal access to beneficial genome-editing therapies, to develop safeguards to prevent the misuse of the technology, and to make certain that the safety of humans and our planet is held in the highest regard. However, too much of the real estate in today’s press regarding CRISPR technology has been fear-oriented (for example), and we run the risk of fueling the anti-science mentality that already plagues the nation. Thus, it is equally important to focus on the good CRISPR has done and will continue to do for biological and biomedical research.

 

We are rapidly entering a time when the genomes of individuals around the world, along with those of many other organisms on the planet, will be sequenced completely; however, this is just the tip of the iceberg of our understanding of the complex translation of a genome into life. For over a decade we have known the complete genome sequence of the lab mouse, but our understanding of the cellular processes within this mouse is still growing every day. Thus, there is an important distinction to be made between knowing a DNA sequence and understanding it well enough to be able to make meaningful (and safe) modifications. CRISPR genome editing technology, as it is applied in basic biology, is helping us make this leap from knowing to understanding in order to inform the creation of remedies for diseases that impact people, animals and our planet; and it is doing so with unprecedented precision and speed.

 

We must strike a balance that enables the celebration and use of the technology to advance knowledge, while assuring that the proper regulations are in place to prevent premature use in humans and hasty release into the environment. Or, as CRISPR researcher George Church remarked: “We need to think big, but also think carefully.”

 

Self Appreciation for Postdocs: You ARE Employable!

By Sally Burn

This week was NPAW2015 – no, not National Prosthodontics Awareness Week (which shares the same acronym), but National Postdoc Appreciation Week 2015. The organizers, the National Postdoctoral Association, champion our rights year round but use this week to focus wider attention on our 90,000-strong ranks and make us feel appreciated. They have their work cut out. If we take salary, job security, academic job prospects, mental health, and work/life balance (particularly for female scientists) as metrics of institutional gratitude, it rapidly becomes clear that postdocs are not poster children for appreciation.

A postdoc, according to Wikipedia, is “a person conducting research after the completion of their doctoral studies (typically a PhD) as part of a temporary appointment, usually in preparation for an academic faculty position.” The problem for modern postdocs, particularly in the life sciences, is that “temporary” is lasting much longer than it used to, and the coveted faculty position is becoming harder to attain than it was twenty years ago. Moving on to “appreciation” – what does that mean? The first definition I found was “the recognition and enjoyment of the good qualities of someone or something.” It suddenly struck me that instead of waiting for a pat on the head from our employers or the NIH, we should instead be focusing on self appreciation of the qualities that make us good postdocs… and recognizing how valuable these qualities are in the non-academic job market.

Transferable skills are something I’ve been thinking about a lot this year, as I prepare to leave the familiar yet cruel bosom of Mother Academia. When I first started thinking about what I could do next I came up with… nothing. Zilch. Nada. I know how to do embryonic dissections and make various chemical solutions. What possible good would those skills serve in the “real” world? I was, I concluded, likely unemployable as anything other than a postdoc. But I didn’t want to be a PI. And so I reached the internal conflict that so many postdocs encounter: we are single-mindedly trained for a mythical beast of a position and when we don’t attain that position, be it through choice or otherwise, we have no idea what else we can do.

Rather than fall into a pit of despair I’ve spent much of the last year educating myself about what else is out there and, more importantly, how utterly, awesomely qualified I am for it. Turns out, postdocs are super-employable. Not convinced? Here are Scizzle’s top skills that postdocs can bring to the table:

 

1) Research skills

There is a whole world of research outside of academia. And it usually pays way better. Whether your skills are clustered at the forefront of molecular biology, all in silico, or more about standing in rivers collecting insects, they will be highly prized by some employer out there. If you want to stay in research, there are many options: biotech, pharmaceutical, medical devices, government, the list goes on. You are very unlikely to be able to continue your exact current project (possibly a relief to some of us), so think laterally about how your skill set is applicable. You currently culture lung epithelium? Great, you are an expert on epithelial cell biology – cosmetic companies would love to have you in their skin lab. Your postdoc was all about the mouse immune system? Pharmaceutical companies would welcome your expertise in developing human monoclonal antibodies.

 

2) Project management

Postdocs know A LOT about project management; it’s something we do every day. We identify a question and then design a series of experiments to answer it. In planning our experiments we must take into account time, budget, and resources. As the project progresses we must react to failures or unexpected results by designing alternate strategies, again asking whether the new plans answer the original question. Once we have data, we analyze it and ask whether it answers the question and/or suggests new paths to follow. We often have a set deadline to achieve all this by (paper submission, lab meeting, conference). All in all, postdocs are project management badasses. In the real world, this translates to being an attractive candidate for project manager jobs in the pharmaceutical industry and in many non-research environments.

 

3) Writing and communication

A common stereotype is that scientists are socially inept bad communicators. On the contrary, postdocs are communication polymaths. When you write a paper or grant you are taking your vast background knowledge and several years’ worth of data, and distilling it down into a concise summary of why the question is important, what you found, and what that means, usually for a reader outside of your niche. If you enjoy this process you may be ideal for employment at a medical communications agency. Perhaps what floats your boat is peer reviewing manuscripts, trying to decide whether a new finding adds to the field, and whether the authors really have shown what they say. If so, an editorial career could be in your future. Or maybe the biggest kick you get is presenting your work at conferences and then talking about it to anyone who’ll listen at the networking session. If you are adept at verbally communicating your science, particularly to a non-expert audience, you could thrive as a Medical Science Liaison (MSL). MSLs are experts in a field who interact with medical and academic professionals on behalf of a pharmaceutical company, conveying knowledge about a product to those involved with it.

 

4) Broad knowledge of science and the scientific process

If you are interested in science outside of your field – and are a good communicator – you may want to consider a career in science advocacy, policy, or diplomacy. Science advocacy entails relaying what scientists need, often to the government; science policy involves working on both policies that affect science and on how science shapes policies. On a more international scale, science diplomacy involves scientific collaboration between countries to solve a common problem (we’ve already discussed science diplomacy in depth – see here).

 

5) Ability to quickly assimilate new knowledge

One path taken by ex-postdocs is consultancy. A consultant may one week be asked to provide a solution to dwindling car sales, and the next be advising a pharmaceutical company on why it should switch gears and invest in biosimilars. Your postdoc wasn’t on cars or big pharma? Doesn’t matter. The key skill you have is the ability to research a topic, assimilate the knowledge, critically evaluate it, and come up with new ideas relating to it. This is what consultants do. And they often get paid very handsomely for it.

 

6) Data analysis

All those hours spent processing and looking for patterns in your data have real-world value. Data scientists are in hot demand across a range of industries. And if you have coding skills to throw into the mix (particularly Python and R) then you’re even more attractive. If not, it’s never too late to learn – pick up Python online at Codecademy and R at DataCamp.
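
If you want a taste of how directly bench-style analysis maps onto data science, here is a tiny, self-contained example (toy numbers, hypothetical experiment) of the group-summarize-compare pattern that both postdocs and data scientists live by:

```python
import pandas as pd

# Hypothetical readouts from two experimental conditions
df = pd.DataFrame({
    "condition": ["control"] * 4 + ["treated"] * 4,
    "readout":   [1.0, 1.2, 0.9, 1.1, 2.1, 1.8, 2.4, 2.0],
})

# Group, summarize, compare: the daily bread of wet-lab and industry analysis alike
summary = df.groupby("condition")["readout"].agg(["mean", "std", "count"])
print(summary)
print("Fold change:", round(summary.loc["treated", "mean"] / summary.loc["control", "mean"], 2))
```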

 

7) A sterling work ethic

NIH salary for a first-year postdoc is $42,840, or $823.85 a week. I am not unique in having worked 12-hour days, seven days a week; a first-year postdoc doing this will earn $9.81 an hour, a figure above the federal minimum wage ($7.25) but below the median wage at Costco ($13.14). While earning their $9.81 they will push themselves to get a seemingly hopeless experiment to work, all the while eschewing food, sleep, and normal human contact. Then, once the experiment finally fails, they will go home to rest, perhaps cry, definitely eat some ice cream, and then come back the next day to try something new. The capacity of the postdoc to work hard to achieve results on low pay, with little job security, and with no scope for promotion or financial reward is tremendous. Any employer would be lucky to have a postdoc join their ranks – don’t you forget it!
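
(For the skeptics, here is the back-of-the-envelope math behind those figures, using only the numbers quoted above:)

```python
annual_salary = 42840              # first-year NIH postdoc salary
weekly = annual_salary / 52        # -> ~$823.85 a week
hours_per_week = 12 * 7            # twelve-hour days, seven days a week
hourly = weekly / hours_per_week   # -> ~$9.81 an hour
print(f"${weekly:.2f}/week, ${hourly:.2f}/hour (federal minimum: $7.25)")
```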

 

Want to know more about your next move? Do what you know best – research. Attend career panels at your institution, talk to ex-postdocs who’ve moved outside of academia, and set up job searches (for example on LinkedIn or Oystir) based on your skills – just to get an idea of what is out there. Then identify which skills need working on and gain experiences to improve them. An excellent use of your time would be to scoot over to the Individual Development Plan (IDP) website, where you can generate a list of science occupations you are most suited to, based on your answers to an extensive survey of your skills, interests, and values. Your personalized IDP then sets goals for the year, to help you on the way to your ideal career.

21st Century Science: an Academic Pyramid Scheme?

 

By John McLaughlin

Academic science is traditionally built on an apprenticeship model, in which a student works under the mentorship of a principal investigator, learning the skills of the trade and preparing to be an independent researcher. After a few years of training as a post-doctoral fellow, a scientist would likely obtain a tenure-track position at a university (if choosing the academic route) and mentor the next generation of scientists, continuing the academic circle of life. In the past few decades, this situation has drastically changed.

 

As most graduate students and post-docs have probably noticed, there has been an enormous amount of discussion of the difficulty of landing a good academic job after the PhD. In searching for the causes of this phenomenon, commentators have pointed to several factors, two of the most salient being the recent stagnation in NIH funding (adjusted for inflation) and a dramatic increase in the number of PhDs awarded in the natural sciences. To provide context for the situation in the U.S.: in the past three decades about 800,000 PhDs were awarded in science and engineering fields, compared to ~100,000 tenure-track positions created in the same time frame. These forces have changed the structure of the scientific academy, the result being a new arena in which many PhDs compete for a smaller number of academic jobs, with many of those who miss out on a tenure-track post shuttling between low-paying adjunct positions with meager benefits and no possibility of tenure.

Economists studying the U.S. scientific academy, particularly the post-doctoral fellow system, have gone so far as to describe it as a “pyramid scheme.” This type of financial scheme operates by luring new investors with the promise of an easy payout; but the players nearer the top profit the most, at the expense of those at the bottom.
Post-doctoral fellows, often the main workhorses of a biology research lab, are cheap (~$40,000 starting salary in the U.S.) and replaceable, owing to the large excess of PhDs on the market; graduate students are even cheaper, as they often teach to earn their salaries. And a principal investigator (PI) running a large, well-funded lab gains status and prestige from all the grants and publications generated by their personnel.

 

Despite the less than ideal job prospects awaiting science PhDs, the government and media continue to strongly advocate education in the STEM fields, encouraging more undergraduates to pursue STEM majors and thereby increasing the number at the graduate level. While U.S. society’s general enthusiasm and respect for science is definitely positive, it is irresponsible to push so many young people into this career path without making substantial funding commitments. Certainly, not all PhD students intend to pursue a career in academia, and those who do may later find that their passion lies elsewhere, for instance in a biotechnology field. However, one should keep in mind that the past decade has also been rough for the U.S. pharmaceutical industry. Since 2000, thousands of U.S. and European industry research positions have been lost, while several “big pharma” firms plan to open new R&D centers in Asia, where costs are lower.

 

Although the outlook might seem bleak for those currently navigating these turbulent academic waters, the calls of post-doctoral advocacy organizations for increased salaries and benefits may finally be making a difference. This year, the NIH increased the base salary of its National Research Service Award post-doctoral trainees, and other institutions have increased post-doctoral pay and benefits, resulting in higher post-doc satisfaction.

 

These changes will not only improve the quality of life for current post-docs, but also alter the incentive structure of the marketplace: as laboratory personnel become more expensive, PIs will hire more selectively. Fewer PhDs will enter the post-doctoral route, opting instead to pursue a career in industry or another field entirely. It may take years for these policy changes to be fully implemented, but hopefully academic scientists will eventually be able to pursue their passion without fearing for their livelihoods or career prospects.

East of Eden: The Suboptimal State of Funding in the Natural Sciences

By Asu Erden

“Don’t stay here, go to the U.S. if you can.” I heard my fair share of invaluable insights into the world of scientific research during my time at the Pasteur Institute in Paris, but this one really stuck with me. “The difference between Europe and the United States is that, if there are about ten hypotheses you can formulate to address a specific question, in France we have to choose the three or four most likely ones to test. In the U.S., they can test all ten in a heartbeat without worrying about funding.” This romanticized view of scientific research in the U.S. held some truth when I heard it back in 2005. But while the U.S. seemed to be the Eden of scientific funding in the early 2000s, funding cuts have since had a tremendous impact on the state of research in the natural sciences on this side of the pond too.

 

A historical overview of public science funding

 

The National Institutes of Health (NIH) is the United States government’s medical research agency and the largest source of funding for medical research in the world. However, over the last decade, it has not been able to fund as many projects as it used to. NIH funding for research project grants – including the much-coveted R01 grants, on which many tenure-track careers in the natural sciences depend – increased steadily between 1995 and 2003, but has decreased by over 20% since 2004. “With shrinking government funding (or flat-lined, which is the same as shrinking), labs have to look for alternative sources, it’s just a fact of the situation,” admitted Dr. Heather Marshall, a former postdoctoral researcher in the Immunobiology Department at Yale University.

 

The percentage of successful research grant applications to the NIH was 26.8% in 1995 and reached 32.0% in 1999, before decreasing to 17.5% by 2013. “The state of funding most definitely shifted during my postdoc and it was most evident when discussing with PIs [ED: principal investigators], successful ones at that. The attitude was depressing and demoralizing. The funding percentiles of postdoctoral fellowships went down each year I was a postdoc and it became evident that attaining a fellowship was mostly out of your control,” shared Dr. Marshall. The success rate for new applications, which new faculty members rely on to start up their labs and research, went from 18.6% in 1995 to approximately 22% by 1999, before plummeting to 13.4% in 2013. According to the latest numbers, this means that investigators are 37.8% less likely to obtain an R01 or equivalent award today than they were in 2003. This decrease is partially explained by cuts that have reduced the NIH budget, in real terms, by over one fifth.
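
(A quick back-of-the-envelope check of that 37.8% figure against the rates quoted above:)

```python
# If the 2013 success rate for new applications is 13.4% and that represents
# a 37.8% relative drop from 2003, the implied 2003 rate is:
rate_2013 = 13.4
implied_2003 = rate_2013 / (1 - 0.378)  # ~21.5%
print(f"Implied 2003 success rate: {implied_2003:.1f}%")
# ...which squares with the ~22% reported for 1999 and the steady rise through 2003.
```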

 

Most of the time, it is the task of PIs – i.e. the professors running labs – to write grants lauding the importance of the research carried out in their laboratories, in order to ensure the future of their line of research and that of their trainees. Graduate students and postdoctoral researchers also apply for fellowships and grants, but the pecuniary benefits at stake only really affect their own work, not that of the lab overall. With the success rates for grants – new or continued – decreasing over the last decade and a half, grant applications have become particularly stressful endeavors. While PIs usually serve as a protective shield from this reality, it seems to be increasingly hard to keep the stress out… “I’ve been in research since 2003. PIs are communicating their stress more,” said Dr. Smita Gopinath, a postdoctoral researcher in immunology at Yale University. The situation is not as dire in Ivy League or other top-tier universities. But for PIs who do not work for private universities, loss of funding means laying off personnel or closing their lab altogether. This hinders research and, in the long run, biomedical advances.

 

With this decrease in funding for research in the natural sciences comes a lack of job security for the extremely skilled, highly educated workers that scientific researchers are. “Funding directly impacts the number of jobs available, so in that sense the state of funding absolutely played a major role in my decision to leave academia,” said Dr. Marshall. “In addition to that, I was worried that I wouldn’t get enough grants to fund the projects I wanted to take on, which would also impact my ability to train students and postdocs. That caused a lot of anxiety and I didn’t even have a job yet!”

 

The budget fluctuations at the NIH and other science funding agencies track political changes in the White House and Congress. During the Bush presidency, NIH funding initially increased between 2001 and 2003 but decreased markedly from then onwards. Entire fields of research, such as embryonic stem cell research, were defunded. In 2009, Obama restored funding to these fields, but the NIH budget has not seen increases that keep pace with inflation. Over the last few years, Republicans have increasingly criticized national agencies such as the National Science Foundation (NSF) and the NIH for the type of research they fund. “Sometimes these dollars they go to projects having little or nothing to do with the public good. Things like fruit fly research in Paris, France. I kid you not!” exclaimed McCain’s running mate, the ever-entertaining Sarah Palin, during the 2008 presidential campaign.

 

The importance of funding basic sciences

 

You may argue that basic research does not always lead to biomedical advances that can be translated into the treatment, cure, or prevention of infectious or non-infectious diseases in humans. “It’s like the mosquito bed net problem. I have so many friends that work on malaria but stop and think, ‘My stipend can buy 200 bed nets, what am I doing with my life when I could save people directly?’ This is the eternal struggle,” shared Dr. Gopinath. “Why fund basic science? Because at some point bed nets will not be effective enough.” Therein lies the central problem of basic science funding: years can pass between the research and its putative application to humans. This time lag has affected the mentality of the funding agencies. There is a growing gap between what the NSF, which has traditionally funded more basic research in the natural sciences, will support and what the NIH, which now favors more readily translatable research endeavors, will fund.

 

But you never know where the next big thing is going to come from. The nature of basic research is that, while fuelled first by scientific curiosity, it also aims to deepen our understanding of the world around us and thereby lay the groundwork for translational contributions. Let’s take Sarah Palin’s insightful comments on fruit fly research. In 1933, Dr. Thomas Hunt Morgan received the Nobel Prize for his research on the inheritance of physical traits. His animal model? The fruit fly. Since then, fruit fly research has led to the identification of genes as the unit of biological inheritance, to our understanding of how organismal development (ontogeny) works, and to the now-growing field of epigenetics. Working on fruit flies, scientists have also been able to identify key components of the immune system, which in the long run has increased our power to reduce human suffering medically. Drosophila melanogaster – one of the most studied fruit fly species – has provided much insight into the role of genes in neurological function and behavior, including human genes implicated in autism. The irony… I kid you not, Mrs. Palin!

 

Many of the drugs and treatments we use today are derived from such discoveries in the basic natural sciences. Roughly 40% of the medical drugs we use target a protein family known as G protein-coupled receptors (GPCRs), which translate signals from outside the cell into intracellular signaling. Hormones, neurotransmitters in the brain, and even light can activate these receptors, triggering biological processes such as vision, taste, smell, mood regulation, and immune responses. Drs. Brian Kobilka and Robert Lefkowitz won the Nobel Prize in Chemistry – not in Physiology or Medicine – for their studies of this class of transmembrane proteins, including solving their crystal structure. Their work on the chemical structure of GPCRs had immense ramifications for understanding how these receptors transduce signals from outside the cell by interacting with components inside the cell. Eventually, it led to a better understanding of the cellular and physiological processes these receptors are involved in, allowing the scientific community to recognize their central importance in drug targeting. From crystals we reached therapy.

 

Other examples of basic science research leading to translational advances abound. Dr. Jennifer Doudna moved to the University of California, Berkeley in 2002, where she started studying how bacteria defend themselves against the viruses that infect them, known as bacteriophages. In particular, she was interested in the clustered regularly interspaced short palindromic repeats – CRISPR – in bacterial genomes that enable these microbes to destroy bacteriophages that previously infected them. With help from collaborators, Dr. Doudna was able to identify Cas9 as the protein that cuts the viral DNA. Thus was born the CRISPR-Cas9 system. Since its development as a genome-editing tool in 2012, this system has allowed the editing of multiple cell lines commonly used in research, as well as organ-specific genetic editing in mice. The method allows scientists to make mouse lines with permanent gene silencing – also known as knock-outs – in a matter of weeks, where it previously took years of breeding to remove the gene of interest. Moreover, CRISPR-Cas9 allows researchers to delete genes of interest in fully developed mice, as opposed to embryonic deletions, which can prove fatal if the genes are required during development. The technique is set to enable great medical advances, especially if applied to the genome editing of hematopoietic cells to treat blood disorders such as sickle cell anemia, immunodeficiencies such as AIDS, and cancer. When Dr. Doudna’s research was funded, no one knew the implications it would have. It took over ten years to go from a better understanding of bacterial defenses against viruses to an incredibly potent tool that may enable cures for many human ailments.
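To give a concrete sense of how targeted the system is: Cas9 is steered to a DNA site by a guide RNA matching a roughly 20-nucleotide sequence, and the commonly used S. pyogenes Cas9 only cuts where that sequence sits next to an “NGG” motif called the PAM. The toy Python sketch below – a minimal illustration, with an invented function name and example sequence – scans the forward strand for such candidate sites; real guide design involves many more checks (off-target matches, GC content, and so on).

def find_candidate_guides(seq, guide_len=20):
    """Return (position, guide) pairs for every NGG PAM on the forward strand."""
    seq = seq.upper()
    candidates = []
    # The PAM is any nucleotide followed by "GG"; the guide-matching sequence
    # is the 20 nt immediately 5' of it, so start scanning at index guide_len.
    for i in range(guide_len, len(seq) - 2):
        if seq[i + 1:i + 3] == "GG":
            candidates.append((i - guide_len, seq[i - guide_len:i]))
    return candidates

# Invented 40 nt example sequence; prints two candidate sites.
for pos, guide in find_candidate_guides("ATGCGTACCGTTAGCATCGGAACCTGGATCGGATTACGCA"):
    print(pos, guide)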

 

Better-tailored funding for the natural sciences requires better communication

 

There is a disconnect between the public’s understanding of the role their taxes play in funding fundamental scientific advances and scientists’ pleas for politicians not to cut their funding further. As Dr. Marshall pointed out, “We can’t really expect all government officials to have strong science backgrounds if they are also expected to have strong backgrounds in law, history, economics etc., but we absolutely need to have our representatives surrounded by scientists. So from that perspective, [better science funding] does start with the general public.” The nature of scientific funding, as with all funding, is that it is limited. “We can’t rely on our achievements alone. We need to put it out there and communicate the importance of scientific research,” shared Dr. Gopinath. “We need to be managers and communicators. We do get training to collaborate with other scientists and communicate on that level. Communicating science to non-scientists is not at all on our radar!”

 

It is usually the university press office that decides what does or does not get communicated about the scientific research carried out on campus. While the onus should not be on scientists to carry out science communication by themselves, additional training of PhD students, postdoctoral fellows, and PIs is required. When a journalist picks up the phone to talk about the latest advance in stem cell research, she should not face a public relations wall. Perhaps unbeknownst to the public, researchers have to meet an astounding number of training requirements annually to be able to continue carrying out their research: radiation safety, animal care and use, biosafety, laboratory chemical safety, medical surveillance for animal handlers, blood-borne pathogens, and so on and so forth. To these requirements we should add science communication. The ramifications are countless!

Neurodevelopment and the Health-Wealth Gap

 

By Danielle Gerhard

 

The famous Roman poet Virgil said that the greatest wealth is health. But what if your actual wealth affects your access to health?

 

It is estimated that more than 45 million Americans, or 14.5% of the population, live below the poverty line, according to the most recent Census Bureau survey. Although slightly lower than in previous years, the poverty rate for children under 18 is still startlingly high: 19.9%. Poverty dictates how an individual lives their life and, most importantly, what resources they have easy access to. Proper nutrition, environmental stimulation, basic healthcare, and family nurturing are all resources shown to aid healthy development, yet all are lacking in low-income communities.

 

An individual’s zip code is considered to be as much of a risk factor for their health as their genetics. Dr. Melody Goodman of Washington University in St. Louis researches the contribution of social risk factors to health disparities in local communities. One particular area in St. Louis, known as the Delmar Divide, is a stark example of how location is predictive of education and health. To the south of Delmar Boulevard is a largely white community with an average income of $47,000, where 67% of residents have a bachelor’s degree. Directly north of Delmar Boulevard is a predominantly African American community with a lower average income of $22,000, where only 5% of residents have a bachelor’s degree. Health also follows the so-called Delmar Divide: cancer, heart disease, and obesity are only a few of the conditions plaguing these neglected, low-income neighborhoods at higher rates.

 

Because our brains develop rapidly during childhood, they are especially vulnerable to stress and environmental changes. Recently, scientists have extended their efforts to better understand the long-lasting effects of income and environment on the brain and behavior. A number of studies have looked at the behavioral consequences of growing up in disadvantaged families, including increased risk for behavioral disorders, developmental delays, and learning disabilities. Fewer human studies have looked into the long-lasting effects of childhood poverty on brain regions known to be critical for executive function, attention, and memory. Two recently published studies attempt to investigate this very question using large-scale, longitudinal designs in children and young adults between 3 and 20 years of age from different socioeconomic backgrounds.

 

One longitudinal, multi-site study published in JAMA Pediatrics investigated whether childhood poverty causes significant structural impairments in brain regions known to be important for academic performance. Key regions targeted in the study include the frontal lobes, involved in behavioral inhibition and emotion regulation; the temporal lobes, important for language and memory; and the hippocampus, a region shown to be critical for long-term memory as well as spatial and contextual memory. Demographic information and neuroimaging data were collected from nearly 400 economically diverse participants, and the analyses controlled for potential confounding factors such as health problems during or after pregnancy, complicated medical histories, familial history of psychiatric disorders, and behavioral deficits.

 

As hypothesized, children raised in low-income families had lower scores on the Wechsler Abbreviated Scale of Intelligence (WASI), which measures intelligence via verbal and performance IQ, and on the Woodcock-Johnson III Tests of Achievement (WJ-III), a test of math skills and reading comprehension. Anatomically, children raised in low-income families showed reductions in gray matter volume – the tissue where most of the brain’s cell bodies are housed – in the frontal and temporal lobes as well as in the hippocampus, with the largest deficits seen in children living well below the federal poverty line.

 

Another study, recently published in Nature Neuroscience, reported similar findings. The authors investigated whether poverty, defined by parental education level and income, is predictive of neurodevelopmental deficits in key brain regions. As hypothesized, income was related to structural impairments in brain regions important for reading, language, and executive skills. Similar to the study published in JAMA Pediatrics, this study found the strongest effects in children from the poorest families.

 

These studies highlight the importance of access to beneficial resources during childhood and adolescence, and how drastically income and environment can affect the development of brain regions key to success in adulthood. A number of programs for social change, guided by empirical data and public policy, are being implemented in disadvantaged communities. Sending healthcare workers out of the clinic and into these communities is a step in the right direction. However, some clinicians argue that this is unsustainable and instead advocate going further: training individuals who already live in these communities, or having healthcare providers move into them.

 

Furthermore, initiatives focusing on children and adolescents in particular could prevent more problems, possibly irreversible ones, from occurring down the road. Interventions directed towards reducing income inequality, improving nutrition, and increasing access to educational opportunities could drastically redirect a child’s trajectory into adulthood. Early education programs targeting children aged 3 to 5 have been shown to improve future educational attainment and earnings, as well as to reduce crime and adult poverty.

 

An unhealthy, broken social support system nurtures an unhealthy, broken environment in disadvantaged regions lacking basic resources. Scientific knowledge can help direct public policy initiatives towards programs that could have greater impacts on society. A continued dialogue among scientists, politicians, and community activists is vital to the health not only of the children growing up in low-income communities but, arguably, of our society as a whole. Placing funds and resources solely towards ameliorating adult poverty is akin to placing a band-aid on the problem. Today’s children are tomorrow’s adults; helping today’s children helps tomorrow’s adults.

Measuring the Value of Science: Keeping Bias out of NIH Grant Review

 

By Rebecca Delker, PhD

Measuring the value of science has always been – and will likely always remain – a challenge. However, this task, with regard to federal funding via grants, has become increasingly daunting as the number of biomedical researchers has grown substantially and the available funds have contracted. As a result of these opposing trends, funding rates for NIH grants, most notably the R01, have dropped precipitously. The most troubling consequences of the current funding environment are (1) the concentration of government funds in the hands of older, established investigators at the cost of young researchers, (2) a shift in the focus of lab heads toward securing sufficient funds to conduct research, rather than the research itself, and (3) an expectation of substantial output, which increases the demand for preliminary experiments and discourages the proposal of high-risk, high-reward projects. The federal grant system has a direct impact on how science is conducted and, in its current form, restricts intellectual freedom and creativity, promoting instead guaranteed, but incremental, scientific progress.

 

History has taught us that hindsight is the only reliable means of judging the importance of science. It took sixteen years after the death of Gregor Mendel – and thirty-five years after his seminal publication – for researchers to acknowledge his work on genetic inheritance. The rapid advance of HIV research in the 1980s was made possible by retroviral research conducted decades prior. Thus, to know the value of research before publication, or even a handful of years after, is extremely difficult, if not impossible. Nonetheless, science is an innately forward-thinking endeavor and, as a nation, we must do our best to distribute available government funds fairly to the most promising research endeavors, while ensuring that creativity is not stifled. At the heart of this task lies a much more fundamental question: what is the best way to predict the value of scientific research?

 

In a paper published last month in Cell, Ronald Germain joins the conversation on grant reform and tackles this question by proposing a new NIH funding system that shifts the focus from project-oriented to investigator-oriented grants. He builds his new system on the notion that the track record of a scientist is the best predictor of future success and research value. By switching to a granting mechanism similar to that of privately funded groups like the Howard Hughes Medical Institute (HHMI), he asserts, the government can distribute funds more evenly, as well as free up time and space for creativity in research. Under the new plan, funding for new investigators would be directly tied to securing a faculty position: universities would receive “block grants” to distribute to new hires. In parallel, the individual grants of established investigators would be merged into one (or a few) grants covering a wider range of research avenues. For both new and established investigators, the funding cycle would be lengthened to 5-7 years and – the most significant departure from the current system – grant renewal would depend primarily on a retrospective analysis of work completed during the prior cycle. The foundation of the proposed system is the assumption that past performance, with regard to output, predicts future performance. As Germain remarks, most established lab heads trust a CV over a grant proposal when making funding decisions; but it is exactly this component of the proposal – of our current academic culture – that warrants a more in-depth discussion.

 

Germain is not the first to call into question the reliability of current NIH peer review. As he points out, funding decisions for project-oriented grants are greatly influenced by the inclusion of considerable preliminary data, as well as by form and structure over content. Others go further and argue that the peer review process is only capable of weeding out bad proposals, but fails at accurately ranking the good ones. This conclusion is supported by studies that establish a correlation between prior publication record, not peer review score, and research outcome. (It should be noted that a recent study following the outcomes of more than 100,000 funded R01 grants found that peer review scores are predictive of grant outcome, even when controlling for the effects of institute and investigator. The contradictory results of these two studies cannot yet be explained, though anecdotal evidence falls heavily in support of the former conclusions.)

 

Publication decisions are not without biases of their own. Journals are businesses and, as such, benefit from publishing headline-grabbing science, creating an unintended bias against less trendy, but high-quality, work. The more prestigious the journal and the higher its impact factor, the more this pressure seems to come into play. Further, just as there is a skill set associated with successful grant writing that goes beyond the scientific ideas, publication success depends on more factors than the research itself. An element of “story-telling” can make research much more appealing, and human perception of the work during peer review can easily be influenced by name recognition of the investigator and/or institute. I think it is time to ask ourselves whether a past publication record is truly predictive of future potential, or whether it simply eases the way to additional papers.

 

In our modern academic culture, the quality of research and of scientists is often judged by quantitative measures that, at times, can mask true potential. Productivity, measured as the number of papers published in a given period of time, has gained momentum in recent years as a stand-in for the quality of a scientist. As Germain states, a “highly competent investigator” is unlikely “to fail to produce enough … to warrant a ‘passing grade’.” The interchangeability of competence and output has been taken to such extremes that the pioneering physicist and Nobel laureate Peter Higgs has publicly stated that he would be overlooked in current academia because of the requirement to “keep churning out papers.” The demand for rapid productivity and high impact factors has caused an increase in the publication of poorly validated findings, as well as in retraction rates due to scientific misconduct. The metrics currently used to value science are just as dangerous to its progress as the restrictions placed on research by current funding mechanisms, if not more so.

 

I certainly do not have a fail-proof plan to fix the current funding problems; I don’t think anyone does. But I do think that we need to look at grant reform in the context of the larger issues plaguing the biomedical sciences. As a group of people who have chosen a line of work founded on doing, discovering, and inventing the impossible, we have taken the easy way out when tasked with measuring the value of research. Without the aid of hindsight, this task will never be objective, and assigning quantitative measures like impact factor, productivity, and the h-index has proven only to generate greater bias in the system. We must embrace the subjectivity present in our review of scientific ideas while remaining careful not to vandalize scientific progress with bias. Measures that bring greater anonymity to the grant review process, and greater emphasis on qualitative, descriptive assessments of past work and future ideas, would help lessen the influence of human bias and make funding fairer. As our culture stands, a retrospective review process focused on output, as Germain proposes, runs the risk of importing into grant review our flawed, and highly politicized, methods of judging the quality of science. I urge that, in parallel with grant reform, we begin to change the metrics we use to measure the value of science.
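For readers unfamiliar with the h-index mentioned above: a researcher has index h if h of their papers have each been cited at least h times. The short Python sketch below is a minimal illustration of that standard definition; the citation counts are invented.

def h_index(citations):
    """Return the largest h such that at least h papers have >= h citations."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# Five papers with these citation counts yield an h-index of 3 -- the same
# score whether the top paper is a landmark or an increment, which is
# precisely the bluntness criticized above.
print(h_index([25, 8, 5, 3, 1]))  # -> 3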

 

Though NIH funding problems and the other systemic flaws of our culture seem to be at an all-time high right now, the number of publications addressing these issues has also increased, especially in recent years. Now, more than ever, scientists at all stages recognize the immediacy of the problems and are engaging in conversations, both in person and online, to brainstorm potential solutions. A new website serves as a forum for all interested to join the discussion and contribute reform ideas – grant-related or otherwise. With enough ideas, and with pilot experiments from the NIH, we can ensure that the best science is funded and conducted. Onward and upward!