The Danger of Absolutes in Science Communication

 

By Rebecca Delker, PhD

Complementarity, born out of quantum theory, is the idea that two different ways of looking at reality can both be true, although not at the same time. In other words, the opposite of a truth is not necessarily a falsehood. The best-known example of this in the physical world is light, which can behave as either a particle or a wave depending on how we measure it. Fundamentally, this principle allows for, and even encourages, multiple perspectives as a route to knowledge.

 

This is something I found myself thinking about as I witnessed the Twitter feud turned blog post turned actual news story centered on the factuality of physician-scientist Siddhartha Mukherjee’s essay, “Same but Different,” published recently in The New Yorker. Weaving personal stories of his mother and her identical twin sister together with experimental evidence, Mukherjee presents the influence of the epigenome – the modifications overlaying the genome – in regulating gene expression. From this perspective, the genome encodes the set of all possible phenotypes, while the epigenome shrinks this set down to one. At the cellular level – where much of the evidence for the influence of epigenetic marks resides – this is demonstrated by the phenomenon that a single genome encodes the vastly different phenotypes of the cells in a multicellular organism. A neuron is different from a lymphocyte, which is different from a skin cell, not because their genomes differ but because their transcriptomes (the complete set of genes expressed at any given time) differ. Epigenetic marks play a role here.

 

While many have problems with the buzzword status of epigenetics and the use of the term to explain away the many unknowns in biology, the central critique of Mukherjee’s essay was the extent to which he emphasized the role of epigenetic mechanisms in gene regulation over other well-characterized players, namely transcription factors – DNA-binding proteins that are undeniably critical for gene expression. However, debating whether the well-studied transcription factors or the less well-established epigenetic marks are more important is no different from the classic chicken-or-egg scenario: it is impossible to assign order in a hierarchy, let alone separate the two from one another.

 

But whether we embrace epigenetics in all of its glory or we couch the term in quotation marks – “epigenetics” – in an attempt to dilute its impact, it is still worth pausing to dissect why a public exchange brimming with such negativity occurred in the first place.

“Humans are a strange lot,” remarked primatologist Frans de Waal. “We have the power to analyze and explore the world around us, yet panic as soon as evidence threatens to violate our expectations” (de Waal, 2016, p. 113). This inclination is evident in the above debate, but it also hints at a more ubiquitous theme: the presence of bias stemming from one’s group identity. Though de Waal deals with expectations that cross species lines, even within our own species, group identity plays a powerful role in dictating relationships and guiding one’s perspective on controversial issues. Studies have shown that political identities, for example, can supplant information during decision-making. Pew surveys reveal that views on climate change divide sharply along partisan lines. When asked whether humans are at fault for changing climate patterns, a much larger percentage of Democrats (66%) than Republicans (24%) answered yes; however, when asked what the main contributor to climate change is (CO2), the two groups converged (Democrats: 56%, Republicans: 58%; taken from Field Notes From a Catastrophe, pp. 199-200). This illustrates the potential for a divide between one’s objective understanding of an issue and one’s subjective position on it – the latter greatly influenced by the prevailing opinion of one’s allied group.

 

Along with group identity is the tendency to eschew uncertainty and nuance, choosing solid footing no matter how shaky the turf, effectively demolishing the middle ground. This tendency has grown stronger in recent years, it seems, likely in response to an increase in the sheer amount of information available. This increased complexity, while important in allowing access to numerous perspectives on an issue, also triggers our innate response to minimize cost during decision-making by taking “cognitive shortcuts” and receiving cues from trusted authorities, including news outlets. This is exacerbated by the rise in the use of social media and shrinking attention spans, which quench our taste for nuance in favor of extremes. The constant awareness of one’s (online) identity in relation to that of a larger group encourages consolidation around these extremes. The result is the transformation of ideas into ideologies and the polarization of the people involved.

 

These phenomena are evident in the response to Mukherjee’s New Yorker article, but they can be spotted in many other areas of scientific discourse. This, unfortunately, is due in large part to a culture that rewards results, promotes an I-know-the-answer mentality, and encourages its members to adopt a binary vision of the world where there is a right and a wrong answer. Those who critiqued Mukherjee for placing too great an emphasis on the role of epigenetic mechanisms responded by placing the emphasis on transcription factors, trivializing the role of epigenetics. What got lost in this battle of extremes was a discussion of the complementary nature of both sets of discoveries – a discussion that would bridge, rather than divide, generations and perspectives.

 

While intra-academic squabbles are unproductive, the real danger of arguments fought in absolutes and along group-identity lines lies at the interface of science and society. The world we live in is fraught with complex problems, and Science, humanity’s vessel of ingenuity, is called upon to provide clean, definitive solutions. This is an impossible task in many instances, as important global challenges are not purely scientific in nature. Each contains a very deep human element. Political, historical, religious, and cultural views act as filters through which information is perceived, and they guide one’s stance on complex issues. When these issues include a scientific angle, confidence in the institution of science as a (trustworthy) authority plays a huge role.

 

One of the most divisive of such issues is that of genetically modified crops (GMOs). GMOs are crops produced by the introduction or modification of DNA sequence to incorporate a new trait or alter an existing one. While the debate ranges from concerns about the safety of GMOs for human and environmental health to economic concerns over the disparate benefits to large agribusiness and small farmers, these details are lost in the conversation. Instead, the debate is reduced to a binary: pro-GMO equals pro-science, anti-GMO equals anti-science. Again, the group with which one identifies, scientists included, plays a tremendous role in determining one’s stance on the issue. Polling of public opinion reveals a pattern similar to that of climate change. Even though awareness of genetic engineering in crops has remained consistently low over the years, beliefs that GMOs pose a serious health hazard have increased. What’s worse, these debates treat all GMO crops the same simply because they are produced with the same methodology. While the opposition maintains a blanket disapproval of all engineered crops, the proponents don’t fare better, responding with indiscriminate approval.

 

Last month, the National Academy of Sciences released a comprehensive, 420-page report addressing concerns about GMOs and presenting an analysis of two decades of research on the subject. While the conclusions drawn largely support the idea that GMOs pose no significant danger to human or environmental health, the authors take care to address the caveats associated with these conclusions. Though prompted by many to provide the public with “a simple, general, authoritative answer about GE (GMO) crops,” the committee refused to participate in “popular binary arguments.” As important as the scientific analysis is this element of the report, which serves to push the scientific community away from a culture of absolutes. While the evidence at hand shows no cause-and-effect relationship between GMOs and human health problems, for example, our ability to assess this is limited to short-term effects, as well as by our current ability to know what to look for and to develop assays to do so. The presence of these unknowns is a reality in all scientific research, and to ignore them, especially with regard to complex societal issues, only serves to strengthen the growing mistrust of science in our community and broaden the divide between people with differing opinions. As one review of the report states, “trust is not built on sweeping decrees.”

 

GMO crops, though, are only one of many issues of this sort; climate change and vaccine safety, for example, have been similarly fraught. And, unfortunately, our world promises to get a whole lot more complicated. With the reduced cost of high-throughput DNA sequencing and the relative ease of genome editing, it is becoming possible to modify not just crops but farmed animals, as well as the wild flora and fauna with which we share this planet. Like the other issues discussed, these are not purely scientific problems. In fact, the rapid rate at which technology is developing creates a scenario in which the science is the easy part; understanding the consequences and the ethics of our actions yields the complications. This is exemplified by the potential use of CRISPR-driven gene drives to eradicate mosquito species that serve as vectors for devastating diseases (malaria, dengue, and Zika, for example). In 2015, 214 million people were affected by malaria and, of those, approximately half a million died. It is a moral imperative to address this problem, and gene drives (or other genome-modification techniques) may be the best solution at this time. But the situation is much more complex than here-today, gone-tomorrow. For starters, the rise in the prevalence of mosquito-borne diseases has its own complex portfolio, likely involving climate change and human-caused habitat destruction and deforestation. With limited understanding of the interconnectedness of ecosystems, it is challenging to predict the effects of mosquito specicide on the environment or on the rise of new vectors of human disease. And, finally, this issue raises questions about the role of humans on this planet and the ethics of modifying the world around us. The fact is that we are operating within a space replete with unknowns, and the path forward is not to ignore these nuances or to approach these problems with an absolutist’s mindset. That only encourages an equal and opposite reaction in others and obliterates all hope of collective insight.

 

It is becoming ever more common for us to run away from uncertainty and nuance in search of simple truths. It is within the shelter of our groups and within the language of absolutes that we convince ourselves these truths can be found; but this is a misconception. Just as embracing complementarity in our understanding of the physical world can lead to greater insight, an awareness that no single approach can necessarily answer our world’s most pressing problems can actually push science and progress forward. When thinking about the relationship of science with society, gaining trust is certainly important, but it is not the only consideration. It is also about cultivating an understanding that in the complex world in which we live there can exist multiple, mutually incompatible truths. It is our job as scientists and as citizens of the world to navigate toward, rather than away from, this terrain to gain a richer understanding of problems and thus be best positioned to provide solutions. Borrowing the words of physicist Frank Wilczek, “Complementarity is both a feature of physical reality and a lesson in wisdom.”

 

Beyond Neuromania

By Celine Cammarata

As someone within the field, I get the sense that neuroscience – in some form or another – appears in the media nearly every day. Indeed, the term “neuromania,” originally coined by Raymond Tallis, has come into use to describe both the lofty claims made about the power of neuroscience to answer nearly every question and the general mainstream-media frenzy surrounding the field. Scholars have paid increasing attention to this, and it is often regarded as a problem, but more recent work suggests that, despite the mania, neuroscience is still not widely understood or even considered by the public at large. So does all the hype conceal a true lack of public interest?

It’s undeniable that neuroscience is the target of extensive and potentially problematic media attention. In a 2012 Neuron editorial, O’Connor, Rees and Joffe examined the coverage of neuroscience-related topics in six UK newspapers from 2000-2010 and found that not only did the number of articles related to brain research nearly double from the early to the late 2000s, but the topics also changed and the implications of neuroscience research were often exaggerated. Whereas in the past neuroscience was generally reported on in relation to physical pathology, the authors found that in their sample the most common context for discussing neuroscience was that of brain enhancement and protection – topics that are both more widely applicable to a broad audience and that suggest a newly emerging sense of ownership over one’s brain. O’Connor et al. write that “although clinical applications retained an important position in our sample, neuroscience was more commonly represented as a domain of knowledge relevant to ‘ordinary’ thought and behavior and immediate social concerns. Brain science has been incorporated into the ordinary conceptual repertoire of the media, influencing public understanding of a broad range of events and phenomena.”

Such issues are also highlighted in Satel and Lilienfeld’s 2013 book Brainwashed: The Seductive Appeal of Mindless Neuroscience, in which the authors explore – and lament – the at times unrestrained application of functional magnetic resonance imaging (fMRI) to answer questions from “Pepsi or Coke?” to “does free will exist?”. The tantalizing ability to see the brain has carried neuroscience into the realms of marketing, politics, law and more, not to mention changing the way we think about more standard brain-research topics such as addiction. But, the authors point out, pictures of a brain alone cannot address every level of analysis and are not inherently of greater scientific value than other research methodologies. Tracking the physical footprint of a desire, attitude, or propensity in the brain does not in and of itself tell you why or how these things emerged, nor can it necessarily be used to assign guilt, decide what is a “disease” and what is not, or determine how people choose their politicians – and yet this is precisely what neuroscience is often touted to do.

Both of these works, and many others, are based on the premise that neuroscience has become markedly pervasive, nearly omnipresent. Fascinatingly, though, the brain craze seems to stop short of making the final leap from the media to public consciousness. To be sure, public interest in neuroscience does exist – someone must be buying the growing number of brain-centered books popping up at Barnes and Noble, right? – but a 2014 paper by the same authors as the Neuron piece found that the public in general is not nearly as interested in neuroscience as the media frenzy and the emergence of the brain in societal matters might suggest.

To probe how everyday citizens think about neuroscience, the authors conducted open-ended interviews in which a sample of Londoners, chosen to span age, gender and socioeconomic divides, were asked to share what came to mind when they considered research on the brain. These interviews were then examined and the themes touched upon quantified, and the results clearly indicated that neuroscientific research has largely failed to penetrate the mindset of the public at large. Participants consistently indicated that they thought of brain research as a distant enterprise quite removed from them, performed by some unknown “other” (who was consistently described as a man in a white lab coat). Brain research was widely conflated with neurological medicine and brain surgery, and was almost entirely assumed to focus on medical application – the concept of basic science on cognition, emotion, or other mental phenomena appeared nearly unheard of.

Consistent with this, although most participants were quick to tag brain research as “interesting,” they also reported that it was not of particular interest to them specifically except in the context of illness. That is, above all the brain was something that might go wrong, and unless it did, participants gave it little thought at all. The authors connect this to an earlier concept of “dys-appearance,” the idea that much of the body is inconspicuous and ignored so long as it is healthy, and only attracts attention when there is some kind of dysfunction.

Based on these findings, O’Connor and Joffe concluded that despite the rapid advancement and intrusion of neuroscience into more and more areas of inquiry, research on the brain nonetheless continues to have little relevance to the public’s daily lives. As they put it, “heightened public visibility should not be automatically equated with heightened personal engagement.”

So is neuroscience flooding our popular culture, or simply washing up then falling away like a rolling wave, never really lasting in our overall societal consciousness? For the moment, it appears to be both. Perhaps the concern over “neuromania” need not be so heated, but also perhaps we need to do more to understand how our work can take the extra step to become more relevant to those outside the lab.

So You Want to Be a… Freelance Medical Writer

By Elizabeth Ohneck, PhD

In the first post of our So You Want to Be a… series, we talked to Elizabeth Ohneck about her career as a medical writer. This week, Elizabeth interviewed Ginny Vachon, who runs her own medical writing company, Principal Medvantage, to find out what it takes to go it alone and become a freelance medical writer.

 

What does a freelance medical/science writer do?

Medical writers can do many different types of writing, but in general, medical writing is centered on taking information and making it accessible and informative for the correct audience. For example, taking raw data and writing a manuscript for other physicians is really different than summarizing recent findings for the general public. Freelance medical writers are contractors, and can be called in by pharmaceutical companies, communications agencies, medical associations, or other groups to help with specific projects that can’t be handled ‘in house,’ for whatever reason. There’s a ton of variety and opportunity to learn about different diseases. Some freelancers specialize, and write mostly about certain medical areas, or for certain audiences.

 

How did you get where you are now?

I have a BA in Biology from Agnes Scott College and my PhD is from Emory University. As I was nearing the end of my PhD I realized I had no clue what I wanted to do next. I totally froze because I knew I had choices, but I didn’t know how to take the next step. I realized that before I could pick a direction, I needed to learn about all of the different things I could do and how the people who were doing those things spent their days. So, I joined Women in Bio Atlanta and started going to events held by Emory and by WIB. I went to a WIB event on women in business where I heard Emma Nichols, who owns Nascent Medical Communications (formerly Hitt Medical Writing), talk about her experiences as a freelance medical writer and entrepreneur. I spoke with her after the event, and ended up doing a number of projects for her. After getting some experience, I started my own company! She has a great podcast, Medical Writers Speak, that is full of information about both medical writing and the business side of freelancing. The American Medical Writers Association also has a helpful website, a training course, and chapter meetings where you can meet other medical writers and take short courses.

 

What are the key skills needed to be successful at this job, and did you develop any of them during grad school?

I think that the most important thing is a willingness to tackle any subject and learn about it. I think that as a Ph.D. student, I learned that discomfort and anxiety are totally normal when learning something new, and usually happen right before you understand something! I also had my daughter during my third year of graduate school, and developing the level of organization that I needed to ‘do it all’ has been awesome.
Medical writing is really great in that you can get a little bit of experience as a contractor before you graduate. Even if you end up not being wild about medical writing, you have a new skill to set you apart. Who on earth doesn’t want to hire someone who is skilled in communicating complex ideas?

 

What would be your advice to a PhD wanting a similar job as yours?

I would say to listen to the Medical Writers Speak podcast, go to the AMWA website, and start developing samples – writing for a blog or university paper is a great start (the manuscript you wrote with your PI isn’t the best sample). I think a lot of people who are trying to break into medical writing have a hard time with the transition from being a scientist or physician who can write to being a writer who understands science. I think it’s important to recognize that while obtaining an MD or PhD is really hard, it is only a piece of the puzzle. The thought of sharpening your writing skills should be an exciting one! I know I heard this said at a lot of ‘alternate career’ events, but what you do next should not be a ‘back-up plan’; it should be an exciting new set of goals! Also, after doing a ton of lab work, I really had a hard time sitting all day. Now I have to be a lot more deliberate about exercise and working with my hands in other ways.

 

What are the top three things on your To Do list right now?

A typical day usually starts with assessing deadlines. I usually have a few projects going on at once, so organization is really important. Today I have to check in with a client who owes me a transcript of an interview, look over a manuscript I finished two days ago with ‘fresh eyes’ before sending it off, and do some bookkeeping (scanning receipts from a recent work trip out of town).

 

What are your favorite parts of the job? What are your least favorite or most challenging parts?

My favorite part is that I get to solve problems for clients. Usually I get called in when people are stretched thin. It’s nice to be able to help companies when they are growing. My least favorite thing is the sitting. I have a standing desk now, which helps, but I miss the constant motion of lab work.

 

Is there anything you miss about academia? What was the biggest adjustment in moving from the bench to your current position?

Yes, of course! I miss being an ‘expert’ in a scientific area. As a writer, I learn just enough about a subject to write well about it. I have totally lost money on jobs before because I get sucked into a topic and next thing I know I am well-versed in how a specific trial recorded adverse events, but it doesn’t matter because that wasn’t what I was supposed to be doing. Especially as a freelancer, it’s all about doing what needs to be done to complete a project. I miss the freedom of diving into a single sentence in a paper to figure out the nature of a problem. The hardest part about making the mental switch was understanding that my role is to produce clear and meaningful content, not to assist in guiding the direction of research or marketing, or whatever the problem is I am writing about. Again, the switch from being a scientist to a writer.

 

How do you see your field developing over the next ten years?

I think that the ways in which medical writers develop content over the next few years will change to include more interactive platforms. I expect that soon doctors and patients will be unsatisfied with brochures, which will not only seem old-fashioned but will be insufficient for the increasingly complex decision-making that accompanies personalized medicine. Medical writing will probably soon include more content for apps. I don’t know that the clinicians of tomorrow will put up with PowerPoint-based CME, or that posters will remain paper-based and non-interactive. It is hard to predict how communication will change in ten years’ time, but I think the medical writers who are most flexible and willing to learn will be the most successful.

 

What kind of positions do people in your position move on to?

One of the coolest things about freelance medical writing is that it can serve as a grand tour of many different types of biomedical businesses. You get to work with many types of companies (big, small, growing, pharma, CROs, communications firms, medical associations – you name it). You also get to work with the people in a company and see what they are like and see many different styles of working (fast, slow, organized, totally insane – you name it!). You can really observe and learn about what suits you. Many companies who need freelancers also need an on-staff medical writer, or someone smart in medical affairs, or marketing, or communications. Showing up and being organized and pleasant can prompt a job offer.

 

And finally: In the event of a zombie apocalypse, what skills would a freelance medical writer bring to the table?

I could be sure that every conceivable population of clinicians is well aware of how to identify, appropriately treat, and report zombie-related medical events. In addition, all potential patient populations will be well aware of how to seek out specialists, should they experience symptoms. Because I’m a freelancer, I am available to handle any writing needs that crop up as various new anti-zombie therapies emerge.