Taking the Reins – Part I

Neeley Remmers

Today I’m going to talk about a topic that affects everyone – from graduate students to post-docs to mentors: mentoring. Mentoring is a key component in developing new scientists, yet as scientists embark on the faculty career path, no one takes the time to really teach effective mentorship, even though it becomes a significant part of one’s career as a principal investigator. Instead, the common approach throughout scientific history has simply been to train young scientists into clones that resemble their mentors. However, thanks to the increases in funding and graduate-program admissions seen in the earlier part of this millennium, we now have more scientists than ever before. Training young scientists to become clones of their mentors is no longer an effective mentoring strategy, because today’s scientists have many job opportunities beyond the standard academic route.

Fortunately, there are scientists out there who are studying effective mentoring strategies and trying to put together resources to teach them, but until this becomes an integral part of training young scientists, we as students and post-docs are all going to have to take a more active role in our own mentorship. After all, our mentors are only human and, unfortunately, lack the ability to read our minds to find out what kind of help we need and the best way to provide it. The number one tool that can make an immediate impact in improving your relationship with your mentor is effective communication. However, before we get into effective communication strategies, we need to take a step back and be sure we first understand our own personalities and goals, and the personality of our mentor.

A couple of weeks ago, I wrote a blog post describing the many careers available to scientists and the MyIDP tool created by Science Careers. First, I highly recommend you make use of this tool so you can begin to visualize the career path you want for yourself. Once you know the area of science in which you want to build your career, do a little research to find out what you can do while still in graduate school or your first post-doc position to set yourself up for an introductory position in that field. Once you have a plan worked out, convey your career goals to your mentor so they can help you attain them in any way they can, because regardless of what you may currently believe, your mentor really wants you to succeed…even if they use “tough love” to show it.

Now that you have completed step one and defined a career path, it’s time to learn about your personality traits and how they may affect your communication with others and your success in the workforce. A very good tool for this is the Myers-Briggs personality test. This is not a test you can simply do online by yourself; you need to either contact an MBTI center or suggest that your GSA or post-doc society hold a workshop where an MBTI professional administers the test and explains each of the personality traits. I recently attended one, and it was like a HUGE lightbulb went off in my head. I was able to easily recognize the traits that applied to me, my mentor, and my colleagues, and I learned skills to better communicate with people whose traits are opposite to mine. Before I go further, let me briefly explain the traits central to the MBTI personality types. There are four personality pairs: extroversion and introversion, intuition and sensing, thinking and feeling, and judging and perceiving. First, it is essential to note that these traits are considered preferences: at this stage in your life you might test as an introvert, but as you grow over the next five years your preferences may shift toward extroversion. Additionally, you might always remain an introvert by nature yet develop the skills of an extrovert to help you in certain situations. The easiest pair to understand is the first – extroversion and introversion – which describes how one derives energy and interacts with the world. As you can imagine, extroverts love to socialize, prefer to be around other people, and derive their energy from social interactions, whereas introverts get their energy from being left alone to think things through. The second pair – intuition and sensing – relates to how one takes in information.
For example, a sensor will look at an abstract picture and focus on the details they can see within it, whereas someone who is intuitive will look for hidden meanings and tends to think in terms of the bigger picture. The third pair – thinking and feeling – relates to how you make decisions. A thinker relies on the facts regardless of the specific situation, whereas a feeler treats each situation differently. For example, in terms of mentorship, a thinking mentor will treat everyone the same way, whereas a feeling mentor will look at each person in their lab individually and tailor their interactions to each individual’s personality. The last pair – judging and perceiving – relates to how you live your outer life. For example, when planning a vacation, a judger will make a detailed itinerary listing what they will do each day and leave no room for spontaneity, whereas the only plans a perceiver will make are to book the plane tickets and accommodations in advance, deciding what to do each day as it comes. For more thorough definitions of each personality trait, please visit the Myers Briggs Foundation website at: http://www.myersbriggs.org/.

In the next post, I’ll discuss how knowing your personality traits can help you improve your mentorship.

Cancer Is Contagious?

Sophia David

Most people know cancer as an aggressive yet non-infectious disease that, fortunately, is not passed from one person to another. There are, of course, infectious agents such as human papillomavirus (HPV) that can lead to cancer in some individuals, but barring these cases, we know you can’t “catch” cancer from another individual. However, evidence obtained in recent years challenges this paradigm. Here, I will discuss two fascinating examples of truly “transmissible” cancers in the natural world: one that occurs in Tasmanian devils and the other in dogs. I will then briefly consider the implications of this research for our own species – is it possible for human cancers to become transmissible?

 

During a person’s lifetime, cells will divide a staggering ten thousand million million times. This provides a lot of opportunities for mistakes, and so somatic mutations do occur. Usually these are repaired, or the affected cells are eliminated, through pathways involving tumor suppressor genes such as p53 or BRCA1, and they do not cause major problems. However, if mutations occur in genes that regulate the cell division process, and such mutations accumulate, cancer can result.
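To get a feel for the scale of that number, here is a quick back-of-envelope sketch. The division count comes from the paragraph above; the per-base mutation rate and genome size are round, illustrative assumptions, not figures from the studies discussed here.

```python
# Back-of-envelope: how many somatic mutation events might a lifetime
# of cell division produce? All rates below are rough assumptions.

divisions_per_lifetime = 1e16   # "ten thousand million million" divisions
mutation_rate_per_bp = 1e-9     # assumed point mutations per base pair per division
genome_size_bp = 3e9            # approximate size of the human genome

mutations_per_division = mutation_rate_per_bp * genome_size_bp
total_mutation_events = mutations_per_division * divisions_per_lifetime

print(f"~{mutations_per_division:.0f} new mutations per cell division")
print(f"~{total_mutation_events:.1e} mutation events over a lifetime")
```

Even with every figure rounded, the point survives: mutation events are astronomically common, so the repair and surveillance pathways mentioned above must be extremely reliable.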

 

As we usually understand it, a cancerous clone of cells that arises in an individual remains restricted to that individual. Such clones are, in effect, self-destructive: they often cause the death of their host and die with it. However, two types of cancer have overcome this constraint of existing within only one host and, remarkably, have acquired the ability to spread between individuals. In this way, a cancer clone can persist long after the death of the individual that gave rise to it.

 

The first known transmissible cancer is devil facial tumor disease (DFTD), which plagues the Tasmanian devil population. Until about 500 years ago, Tasmanian devils inhabited all of Australia, but hunting caused a massive decline in their population, which is now highly fragmented and restricted to Tasmania. Exacerbating this decline is DFTD, first reported in 1996: a highly virulent disease that causes tumors around the mouth and face of the animals. The disease has been killing large numbers of devils through starvation, and there are fears that, at the current rate of population decline, Tasmanian devils will be extinct within 20 years.

 

When scientists searched for clues as to what causes DFTD, the obvious suspect was a cancer-causing virus sweeping through the population. It therefore came as a great surprise when, in 2006, a study published in Nature reported that no virus or other infectious agent is involved. Instead, the scientists reported that it is the cancer itself that is transmissible.

 

The authors had studied the karyotype (the number and appearance of chromosomes) of tumor cells from different individuals. Normally, Tasmanian devils have 14 chromosomes, including the two sex chromosomes. However, all the tumor cells possessed the same karyotype, containing only 13 chromosomes with identical rearrangements. This provided strong evidence that the tumor cells arose in one individual and then spread through the population.

 

Later, another study, published in Science, looked at the genetic sequences of tumor cells taken from different individuals. The authors found that the sequences were all highly related, as well as distinct from normal devil sequences. miRNA expression profiling also showed that the tumor cells closely resemble peripheral nerve cells, in particular Schwann cells, suggesting that the cancer is of neural origin.

 

However, we know that cancers are not usually transmitted between individuals. A physical mode of transfer would be required and, in any case, foreign cells would normally be rejected by the recipient’s immune system. It turns out that devils have overcome both of these barriers. Firstly, Tasmanian devils are extremely aggressive animals, and fights between individuals can leave severe wounds on the head and face. During fighting, cancerous cells are believed to be released from ulcerated oral tumors and transferred into the facial wounds of another devil. Secondly, the decline in devil numbers and the resultant inbreeding have led to a population with extremely low genetic diversity. A lack of major histocompatibility complex (MHC) variation means that the DFTD cells are not always recognized as foreign by the immune system, a factor that further facilitates cancer transmission.

 

The second known example of a transmissible cancer is one that contrasts with DFTD in a number of ways. It is called Canine Transmissible Venereal Tumor (CTVT) disease, and occurs in all breeds of dogs throughout the world but at a low frequency. Unlike DFTD, which is transmitted through fighting, CTVT disease is transmitted through sexual contact. The tumors grow as small nodules on sexual organs.

 

It is clear that CTVT disease is a better-adapted clone than DFTD. It exhibits very low virulence – a necessary condition for sexual transmission, since the host must survive and remain healthy enough to mate – and normally regresses without any treatment. This contrasts with the highly virulent cancerous clone in devils, which could wipe out the entire devil population and thereby bring about its own destruction. The likely reason for the difference is that CTVT disease is much older: it is thought to have originated up to 65,000 years ago, possibly in a dog/wolf ancestor, giving the cellular clone a long period of time to adapt to its host. DFTD, on the other hand, is thought to have arisen only within the last 20 years or so, leaving little time for adaptation. Scientists are therefore hopeful that, if the devil population survives, DFTD will also become less virulent over time.

 

An interesting question is, of course, should humans be worried about transmissible cancers? Unsurprisingly, there is very little evidence on this topic, given the unethical nature of the experiments that would be required. However, there have been very occasional reports of transmission events. For example, one study in 1996 reported the accidental transmission of a malignant sarcoma from a patient to a surgeon who injured his hand during an operation. However, such cases are believed to be extremely rare, and the risk of transmissible cancers is thought to be very low thanks to the high genetic diversity within human populations.

 

 

The Forest or the Trees? The Role of Single Glomeruli in Olfaction

Celine Cammarata

 

Headline-grabbing neuroscience often focuses on cells or regions that are active in particular situations, quick to claim that the location of some emotion or sensation has been tracked down. But in many ways, what matters is not only which nucleus is excited, but how such activity comes together to form a code. One particularly interesting system in which to explore neural coding is the olfactory system.

 

While we humans tend to rely heavily on vision, for many species the most informative stimuli are odors. Mammals have a complex olfactory system in which each odor receptor neuron in the olfactory epithelium expresses only one type of odor receptor; in the olfactory bulb, the axons of all neurons expressing the same receptor converge onto one or a few glomeruli, bundles of axon terminals from which olfactory information is passed on via the olfactory tract to the cortex and other higher brain areas. Most scents activate several different receptors, and thus a mix of different glomeruli, so olfaction is generally thought to work through a population code; that is, it is the combinatorial pattern of activated glomeruli that represents a given smell. But new research focuses on the information transmitted by individual glomeruli alone.

 

The fact that odors generally activate multiple receptors makes it difficult to stimulate individual glomeruli in order to investigate their properties. To get around this, researchers bred mice with channelrhodopsin expressed selectively in neurons carrying one particular odor receptor, M27. An optic fiber implanted above the M27 glomerulus then allowed “monoglomerular” stimulation with light.

 

It turns out mice can glean a fair bit of information from a single glomerulus. For starters, activity in one glomerulus alone was enough to elicit the perception of odor; animals trained to lick when they detected a scent quickly learned to lick in response to light stimulation. Furthermore, mice could detect stimulation of the M27 glomerulus even while simultaneously smelling odors, suggesting that although most odors are represented at the population level, a change in even a single unit of that population is perceptible.

 

A task in which mice were asked to lick only for the stronger of two light stimulations revealed that animals can discriminate the intensity of activity in a single glomerulus. Animals could also learn to lick only when stimulation occurred at certain points in their sniffing cycle, indicating that individual glomeruli can transmit temporal information as well.

 

So what does all this mean for the study of olfaction and neural coding? Importantly, while the present study used behavior, and thus by implication perception, as a readout, it is not yet precisely clear how a single glomerulus affects downstream neurons. However, the current research does demonstrate that although olfaction primarily proceeds through population coding, even the single units comprising that population are individually capable of transmitting a range of meaningful information, suggesting an impressive degree of precision. In turn, this may help explain how olfaction-dependent animals have such a remarkably discriminating sense of smell.

Grow Your Own

Sally Burn

Imagine the scene: one of your descendants, sometime in the near future, in the aftermath of a particularly grisly altercation between their finger and a carving knife. Sans finger they dash over to the medicine cabinet, pop open a new tube of SOCK (Sox2, Oct4, c-Myc, Klf4. Copyright me, pharmaceutical companies of the future), rub it into the wound, and then wait for a new finger to grow. The finger is regenerated at its natural location from their own cells and is thus readily accepted by their body. Science fiction? Possibly, but a novel technique published this month in Nature may have laid the foundations to start exploring such regenerative therapies.

 

Maria Abad and colleagues, in the lab of Manuel Serrano in Madrid, have developed a method to reprogram adult cells into iPS (induced pluripotent stem) cells within a living mouse. These cells then differentiated in vivo into a number of tissue types, all without the need for any invasive or surgical action. To achieve this, they took advantage of the cocktail of factors known to dedifferentiate adult cells into iPS cells: Oct4, Sox2, Klf4, and c-Myc. This combination was shown in 2007 to induce reprogramming to a pluripotent state, acting like a “reset” switch. Over the last few years, many research groups have taken advantage of this finding to create iPS cells from a variety of adult cell types in the lab. Experiments have also shown that the resulting iPS cells can then be differentiated into distinct tissue types, both in a petri dish and when transplanted into animals. However, until now, iPS cells had never been generated in vivo and then differentiated into tissues in their natural environment. The Madrid team generated transgenic mice in which expression of the genes encoding each reprogramming factor can be induced using the antibiotic doxycycline. By transiently giving mice doxycycline, they could turn on the genes and provide the animal’s own cells with the cues to become iPS cells.

 

The positive side of what happened next is that differentiated adult cells were indeed “reset” to become iPS cells. Moreover, at the transcriptome level, these in vivo iPS cells were actually closer to ES cells than to iPS cells generated in vitro, and they had features of totipotency. The in vivo iPS cells then went on to differentiate into a range of tissues. So far so good. However, the tissues they formed were within teratomas – a type of tumor containing multiple tissues that, whilst normal in themselves, are not supposed to be located in that part of the body. The teratomas occurred in multiple organs and contained tissues derived from all three embryonic germ layers, demonstrating that the in vivo iPS cells are pluripotent.

 

As you might expect, the mice died fairly rapidly from their tumors (within 6-10 weeks on one treatment protocol). There is therefore a lot of fine-tuning to be done before such a technique could be used in humans. Firstly, a method to localize the reprogramming to a specific organ or area will need to be developed. Moreover, we will need a way to control exactly which specialized cell types the iPS cells then differentiate into. A third issue is that the mice in this study were transgenic, with production of the four reprogramming factors induced by doxycycline from artificially inserted transgenes. Genetically engineering humans is obviously rife with scientific and ethical hurdles, so another requirement is to find a way to produce the reprogramming factors in the human body without the need for genetic modification. This could conceivably be achieved by supplying the factors exogenously, in the form of an injection or topical cream. Finally, the technique would need to be refined so that tumors don’t develop even at the treated site. These are pretty huge problems to address. However, this initial study has demonstrated that regenerative medicine may be more feasible than previously thought, by showing that in vivo reprogramming is indeed possible.

 

Most Scientific Papers Contain Irreproducible Results. Can We Fix It?

Alisa Moskaleva

It’s the start of a new academic year. Every year at this time my department retreats to some scenic spot, welcomes new PhD students and post-docs, and shares our scientific achievements. This year, one of our faculty, Dr. Daria Mochly-Rosen, gave a talk entitled “How to Improve Robustness of Academic Data.” Its contents were an unpleasant shock and a call to action that I wanted to share with the audience of Scizzle.

Here is the unpleasant shock. Dr. Mochly-Rosen cited a Nature paper from March 2012, in which scientists at Amgen looking for new cancer drugs repeated experiments from 53 publications that reported cancer-related discoveries in cell lines or animal models. In many cases, they contacted the principal investigators and repeated the experiments with their advice and reagents and sometimes even in their laboratories. They could reproduce the critical, title-making result in only 6 publications, or just 11% of their sample. Yet, 21 of the 53 articles were published in supposedly high-quality journals with an impact factor of more than 20, and articles with irreproducible data were cited between 3 and 1909 times. The follow-up articles would expand on aspects of the irreproducible observation, legitimizing it without re-testing it. Amgen scientists noted that although many modern drugs were made possible by biomedical research, it’s been hard for drug companies to find scientific results they can bank on. To them, the scientific literature is not self-correcting and is not reliable to a disturbing extent.
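Because 6 out of 53 is a small sample, the headline 11% carries real statistical uncertainty. A quick Wilson score interval – a standard way to put a confidence interval on a proportion, applied here to the figures quoted above – makes that explicit:

```python
import math

# Reproducibility figures quoted from the Amgen commentary: 6 of 53
# landmark results reproduced. The Wilson score interval shows the
# uncertainty around that ~11% point estimate.
reproduced, attempted = 6, 53
p_hat = reproduced / attempted
z = 1.96  # 95% confidence

denom = 1 + z**2 / attempted
center = (p_hat + z**2 / (2 * attempted)) / denom
half_width = z * math.sqrt(p_hat * (1 - p_hat) / attempted
                           + z**2 / (4 * attempted**2)) / denom

low, high = center - half_width, center + half_width
print(f"point estimate: {p_hat:.1%}")             # ~11.3%
print(f"95% interval:   {low:.1%} to {high:.1%}")  # roughly 5% to 23%
```

Even at the optimistic end of that interval, fewer than a quarter of the landmark results held up, so the sampling uncertainty does not soften the conclusion.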

But even if you don’t care about drug companies and their profit margins, 11% or even 30% or 50% reproducibility of scientific findings is bad. Even a single irreproducible result, especially in a high-impact journal, can misdirect the efforts of dozens of researchers. Reproducibility should be as close to 100% as possible, so that scientists do not waste time and money following up erroneous findings, and so that they can honestly say to the general tax-paying public that scientific results can be trusted.

Why are so few results reproducible? Setting aside deliberate scientific misconduct, perhaps the ultimate cause is pressure on researchers from funding agencies to produce many, preferably high-impact, papers on tight schedules and budgets. What the Amgen scientists observed were experiments performed only once, data sets cherry-picked to look nice, reagents used even when there was published evidence that they were inappropriate, positive and negative controls that were missing or not shown, and investigators who were not blinded to which of their samples was the control and which was the experiment. It cannot be stressed enough that the irreproducible results came from well-meaning but sloppy scientists, not nefarious, data-fudging ones.

This absence of malice is good news, because it should be easier to persuade scientists to publish more reproducible results once they realize the magnitude of the problem. As solutions go, Nature has published “Reproducibility: Six red flags for suspect work” – and then two more red flags – to make it easier to find trustworthy scientific papers. I want to share Dr. Mochly-Rosen’s advice: publish every important detail of your method and every control, either in the main text or in that wonderful Internet-age invention, the Supplementary Materials. And, of course, do your science carefully. Wouldn’t it be wonderful to live in a world where no one has to waste time and money troubleshooting something from a cryptic Methods section that recursively refers to a previous paper, Researcher et al.? And wouldn’t it be equally wonderful to never use an antibody, or siRNA, or chemical inhibitor published to do one particular thing, only to find out that it has many side effects? I dare you to imagine what it would take to build a world where scientific results are reproducible, as they should be.

 

Piled Higher and Deeper: Bioinformatics

Neeley Remmers

I was perusing the table of contents of the current issue of Clinical Cancer Research and saw an abstract for a paper entitled “Uncovering the Molecular Secrets of Inflammatory Breast Cancer Biology: An Integrated Analysis of Three Distinct Affymetrix Gene Expression Datasets” by Steven J. Van Laere et al. This paper looks for molecular signatures distinct to inflammatory breast cancer (IBC) by analyzing Affymetrix microarray data from 137 IBC patients compared to 232 control patients who did not have IBC. After a lot of data mining with the help of the PAM50 algorithm, they did find a molecular signature unique to IBC versus normal patients, though they would need to do similar comparisons against other forms of breast cancer to see if there are distinctions that set them apart. My initial reaction after reading this abstract, and seeing how many patients they had to analyze and compare, was a sense of being overwhelmed. To do clinical or translational research, you have to work with these large data sets to account for the many variances that come with studying human samples, which means you also need a good understanding of – and willingness to do – bioinformatics.
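To make the idea of mining such a data set a little more concrete, here is a toy sketch of a per-gene tumor-versus-control comparison. The data are simulated and the gene names invented; the actual study used the PAM50 algorithm and far more sophisticated statistics, so this only illustrates the basic shape of the problem.

```python
import math
import random
import statistics

# Toy differential-expression sketch: score each gene with a Welch
# t-statistic and a log2 fold change between tumors and controls.
# Simulated data and invented gene names -- not the study's PAM50 pipeline.

random.seed(0)

def welch_t(a, b):
    """Welch's t-statistic for two independent samples."""
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    return (statistics.mean(a) - statistics.mean(b)) / se

# Simulated log2 expression values: 137 "IBC" arrays vs. 232 controls,
# mirroring the sample sizes quoted above. GENE_A is shifted up in
# tumors; GENE_B is not.
tumors = {
    "GENE_A": [random.gauss(8.0, 0.5) for _ in range(137)],
    "GENE_B": [random.gauss(5.0, 0.5) for _ in range(137)],
}
controls = {
    "GENE_A": [random.gauss(6.0, 0.5) for _ in range(232)],
    "GENE_B": [random.gauss(5.0, 0.5) for _ in range(232)],
}

results = {}
for gene in tumors:
    t_stat = welch_t(tumors[gene], controls[gene])
    log2_fc = statistics.mean(tumors[gene]) - statistics.mean(controls[gene])
    results[gene] = (log2_fc, t_stat)
    print(f"{gene}: log2 fold change = {log2_fc:+.2f}, Welch t = {t_stat:+.1f}")
```

A real microarray analysis repeats a comparison like this across tens of thousands of probes and then corrects for multiple testing, which is exactly why dedicated bioinformatics and biostatistics support is so valuable.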

Personally, I think it takes a special kind of person to do bioinformatics and, for that matter, biostatistics. If you are fortunate enough to work in an institution that has a bioinformatics and biostatistics core, consider yourself lucky. I recently have been honing my bioinformatics skills by analyzing RNA-sequencing data trying to figure out which activation and chemotaxis pathways in leukocytes are turned on upon treating them with my protein of interest. I had an appreciation for those who make a living in this field, but after countless hours in front of my computer creating different gene lists and analyzing them with Ingenuity I have an even greater appreciation for what bioinformaticians and biostatisticians do. My brain was not wired to understand or generate the many algorithms now available to help us perform these complex analyses and generate the statistics needed to validate the findings, but I applaud those who can. Personally, I think there should be a national bioinformatician/biostatistician appreciation day.

The Overdiagnosis Problem

Kelly Jamieson Thomas

Technological advancements over the past thirty years have provided us with the tools to screen for and detect cancer at an early stage. The goal of improved screening techniques is to identify early-stage cancer, treat the cancer before it progresses, and effect a decrease in mortality and morbidity. Although there has been an increase in early-stage cancer detection, there has not been a corresponding decrease in later-stage disease. For cancer screening to work effectively, it must identify cancers that are fast growing and that will result in patient death if left untreated. But screening identifies all types of cancers – some fast growing, some slow growing, and some indolent (so slow growing that they would never cause the patient harm). Improved cancer screening has saved many lives, but it has also resulted in overdiagnosis.

 

Overdiagnosis is the diagnosis of a disease that will never cause symptoms or death during a patient’s lifetime. In the case of cancer, many abnormalities detected at an early stage by screening may meet the current pathologic definition of cancer under the microscope, yet will never progress. Identification of these slow-growing and indolent lesions results in overdiagnosis, overtreatment with unnecessary and painful interventions, and increased healthcare costs.

 

By identifying several patterns in cancer incidence and mortality between 1975 and 2010, a period during which cancer screening increased dramatically, we can begin to understand where screening is most effective and where it results in overdiagnosis. Screening, early detection, and removal of precancerous lesions in colon and cervical cancer appear to have effectively reduced late-stage disease. Screening for breast cancer, prostate cancer, thyroid cancer, and melanoma, by contrast, seems to detect more cancers that are clinically insignificant. Screening for high-risk lung cancer, if adopted, may also follow this pattern.

 

Historically, fast-growing cancers have not been effectively identified via screening, whereas slow-growing, progressive cancers with a long latency and a precancerous lesion are ideal for detection by screening. As an example, both colonic polyps and cervical intraepithelial neoplasia are slow growing and effectively treated when identified through screening. This type of analysis allows clinicians to deploy screening techniques where they genuinely reduce cancer morbidity and mortality.

 

Both the National Cancer Institute (NCI) and the authors of “Overdiagnosis and Overtreatment in Cancer: An Opportunity for Improvement,” published in the Journal of the American Medical Association, have highlighted the current problem of overdiagnosis and mapped out a path towards solving it. One proposal offered by the NCI was to alter cancer terminology to better reflect diagnosis. In an interview conducted on C-SPAN, Dr. Barnett Kramer, Director of the Division of Cancer Prevention at the NCI, stated, “One of the themes [of a 2012 NCI meeting] was that, unfortunately, the term cancer doesn’t fit all sizes. It is an outdated term, at least for some lesions, that doesn’t convey the mounting evidence that we have about the natural behavior of cancers, some of which don’t act like cancer at all.” We need to discard the 100-year-old cancer terminology we’ve been using and adopt new ways to characterize the “cancers” we’re detecting with sensitive screening technology. The term “cancer” should be used only when there is a high likelihood that a lesion will be lethal if left untreated, and premalignant conditions should exclude the words “cancer” and “neoplasia.” There is precedent for renaming lesions that don’t grow quickly – cervical carcinoma in situ, for example, has been renamed cervical intraepithelial neoplasia, a terminology that more precisely characterizes the lesion.

 

Changes in current terminology could be coupled with the compilation of registries characterizing lesions of low malignant potential. To create such a registry, molecular diagnostics that identify indolent or low-risk lesions must be developed and validated. One possibility would be to classify cancers that are most likely indolent as IDLE (indolent lesions of epithelial origin). Prognosis of precancerous lesions is extremely complicated, so both patients and clinicians would benefit from access to precise information on the risk of progression to invasive cancer. Dissemination of such information, together with more exact terminology, would help physicians and patients choose treatments that need not include the most invasive intervention, and would help mitigate overdiagnosis.

 

There is a clear need to design more sophisticated screening programs that are catered to the biology of the detected disease. To accomplish this, we need improved characterization of disease biology and disease dynamics, and a better understanding of whether a given cancer is indolent or aggressive. This is necessary to avoid overtreatment, in which patients undergo unnecessary, expensive, and painful procedures. Clearly, the advancements we’ve made in early cancer detection have saved many lives, but with the changing landscape of our technology, we need to address the potential negative effects of overdiagnosis and overtreatment in order to protect patients from unnecessary treatment. Moving forward, cancer screening will be best utilized to identify conditions that are more likely to be associated with morbidity and mortality, so that patients with these types of lesions may be properly and swiftly treated. To effect these changes, a multidisciplinary effort across the medical, advocacy, pathology, imaging, and surgical communities will be necessary.

 

Every Postdoc Should Take Control of Their Career: Here’s How

Tara Burke

[highlight]Stay informed on changes to NIH training policies and career opportunities[/highlight]

 

This week is National Postdoctoral Appreciation Week! Congratulations to those who have made it through graduate school and are continuing their research training. The postdoctoral fellowship is a time to further develop your bench skills and fine-tune your investigative prowess. It’s also a time to seriously consider the next step in your career. However, this task may seem daunting, since postdoctoral scientists often feel tied to the bench with little time and support to pursue career options. To add to this, federal budget cuts are severely affecting academic research funding (with the possibility of more cuts looming), and the prospect of an academic career is becoming unattainable for more and more postdoctoral fellows. While federal budget cuts are not the only reason for the precarious job situation, they have certainly exacerbated the problem and brought it to the forefront. The decline of tenure-track and other academic positions is trickling down to trainees, leaving large numbers of postdoctoral fellows fighting for fewer and fewer jobs.

 

Most recently, the NIH has created a new website to update researchers and the public on new policies implemented to address the problems trainees are facing. Some of the new initiatives taken by the NIH include establishing a grant program to nurture innovative training proposals, tracking all trainees who receive federal funding, from undergraduates to postdocs, and improving graduate and postdoctoral researcher training. These changes are a step in the right direction toward updating an antiquated system. However, some argue that the NIH has glossed over the big problems facing the biomedical training infrastructure. Additionally, a recent survey from an organization in the U.K. urges postdoctoral fellows to take more responsibility for their career development. Most likely, a combination of increased support and guidance from institutions and a more proactive role by postdocs themselves will produce fellows who are better prepared for the ever-changing job climate.

 

How do postdoctoral trainees avoid falling victim to the uncertain job market? In addition to taking full advantage of the career services at your institution as early as possible, a highly recommended first step is to join your associated scientific society, the National Postdoctoral Association (NPA), and/or other associations. I especially recommend joining the American Association for the Advancement of Science (AAAS). (AAAS has a special deal right now: if you join AAAS or are already a member, you can join the NPA for only $20.) It’s an excellent way to stay abreast of the funding situation, keep up with changes in career trends and opportunities, and participate in funding advocacy. All scientific societies have a membership fee; however, the fees are often lower for graduate students and postdocs. Also, some membership fees, such as NPA fees, are tax deductible. Furthermore, staying connected to their services is key, so I further recommend joining their LinkedIn groups or following them on Twitter to maximize your exposure to their services and information. Joining these societies is a great starting point for those interested in the scientific community outside of their lab or institution and, better yet, their content can easily be explored at your computer while you are spinning your samples or treating your cells!

How Tumors Prosper in the Brain

Celine Cammarata

Glioblastomas, one of the most aggressive and prominent forms of brain tumor, appear to be organized in a hierarchy, with stem cell-like Brain Tumor Initiating Cells (BTICs) dividing to produce the cells that compose the bulk of the tumor. New research indicates that these cancerous ringleaders rely on a particular molecule, a glucose transporter that allows them to out-compete other cells and flourish in challenging environments.

 

In a phenomenon known as the Warburg effect, cancer cells tend to shift their metabolic processes away from oxygen-dependent mechanisms in favor of anaerobic respiration. Because anaerobic processes are less efficient, tumors require particularly high levels of glucose to meet the energy demands of their perpetual growth. Not only do glioblastomas live in the brain, where the blood-brain barrier makes it particularly tough to obtain glucose from the vascular system, but BTICs actually seem to thrive in areas where vascular supply is low.

 

The cells appear to derive this resilience from the Glut3 glucose transporter, whose affinity for glucose is about five times that of the more common Glut1 transporter. Glut3 is fairly common in neurons, which also face the challenge of glucose acquisition in the brain, but is especially enriched in BTICs: protein levels of Glut3 in BTICs are 300% of those seen in non-BTIC glioblastoma cells.

 

This impressive glucose supply system allows BTICs to out-compete other cells for glucose. Accordingly, the prevalence of BTICs increased when dissociated glioblastoma cells were exposed to reduced glucose, due to the combined effects of BTICs selectively surviving the low-glucose environment and non-BTICs taking on BTIC characteristics to adapt. That is, low glucose availability actually triggered increased levels of the cells that appear to be the source of tumors. Moreover, reduced glucose also increased the ability of glioblastoma cells to promote tumor growth in vivo when grafted into healthy mice.

 

It seems that these Glut3 transporters play an essential role that is specific to BTICs. RNA interference targeting Glut3 had a much greater effect on the glucose intake of BTICs than on other glioblastoma cells, exemplifying its particular importance to these initiators. Furthermore, RNA interference greatly reduced tumor growth, and increased survival, when BTICs were implanted in vivo.

 

The role of Glut3 and the resilience it confers on BTICs appears to have clinical weight as well. In multiple data sets, Glut3 expression in brain tumor patients was correlated with higher-grade tumors and shorter survival times. In addition, several other cancers that are also thought to involve tumor initiating cells showed similar correlations with Glut3. Because such initiators are key to tumor formation, this revelation of the crucial importance of one molecule may provide valuable insight for therapy.

Antibiotics and Your Gut

Alisa Moskaleva

Taking antibiotics can make you feel worse instead of better, and it has nothing to do with antibiotic resistance. Antibiotics not only attack your infection but also kill some of the beneficial bacteria in your intestines. These beneficial bacteria keep at bay such pathogenic bacteria as Salmonella typhimurium, which causes stomach flu, and Clostridium difficile, which triggers a nasty variety of diarrhea. For most people, antibiotics do more good than harm, and their beneficial bacteria recover within a few days, but the Centers for Disease Control and Prevention in the U.S. estimates that C. difficile contributes to 14,000 American deaths every year. How exactly beneficial bacteria protect you from pathogenic bacteria was not known, and in a recent paper published in Nature, Sonnenburg et al. used mice to figure out how this might work. They not only explained how protection from pathogens by beneficial bacteria might work in general, but also identified one specific molecule involved that can be studied to come up with new drugs against S. typhimurium and C. difficile.

 

Think of your intestines as a buffet for bacteria. At every meal, your body delivers the food you consumed, pre-processed by your teeth and stomach for easy access to nutrients. And even between meals, the lining of your intestines is covered in yummy mucus that bacteria can break down for nourishment. Normally, there are many different kinds of bacteria in your intestines that are in balance with each other and with your body. Antibiotics kill some of these bacteria and upset the balance. Sonnenburg et al. investigated what happens to S. typhimurium and C. difficile under these conditions.

 

To establish themselves in the body and cause disease, S. typhimurium and C. difficile have to compete with beneficial bacteria for nutrients. Sonnenburg and colleagues hypothesized that antibiotics kill off some of the competition and free up a source of nutrients for S. typhimurium and C. difficile. Indeed, they found increased levels of sialic acid, an energy-rich molecule that comes from intestinal mucus, in streptomycin-treated mice compared to untreated mice. Further, they mutated S. typhimurium and C. difficile to make them unable to digest sialic acid, and found that these mutants did not grow as well as normal S. typhimurium and C. difficile inside streptomycin-treated mice. And normal C. difficile produced more mRNA for genes involved in sialic acid metabolism when infecting streptomycin-treated mice than when infecting untreated mice. Because all of these experiments used mice with normal beneficial bacteria, they are the most relevant for drawing parallels with humans. The results strongly suggest that streptomycin kills beneficial bacteria that normally eat sialic acid, and the extra sialic acid feeds S. typhimurium and C. difficile, but Sonnenburg and colleagues went further.

 

The most conclusive evidence that sialic acid is a nutrient that S. typhimurium and C. difficile exploit came from experiments with special mice. The researchers started with mice that had no bacteria in their intestines, beneficial or otherwise. They then added the bacterium Bacteroides thetaiotaomicron, a naturally occurring beneficial intestinal bacterium, and found that mice colonized with B. thetaiotaomicron had more sialic acid in their intestines than bacteria-free mice, presumably because B. thetaiotaomicron breaks down intestinal mucus to release sialic acid but then can’t eat it. When they further added S. typhimurium and C. difficile, the pathogens thrived. Then, they mutated B. thetaiotaomicron to make it unable to produce sialic acid, and, as expected, S. typhimurium and C. difficile didn’t do so well. S. typhimurium and C. difficile also didn’t grow well in mice that received Bacteroides fragilis, a relative of B. thetaiotaomicron that can both produce and metabolize sialic acid. However, when mice colonized with the mutant B. thetaiotaomicron unable to produce sialic acid were fed extra sialic acid, S. typhimurium and C. difficile once again thrived in them. By working with mice that had just one bacterium, B. thetaiotaomicron or B. fragilis, in their intestines, instead of the normally occurring complex mix, Sonnenburg et al. showed that S. typhimurium and C. difficile can grow to disease-causing numbers by eating sialic acid.

 

Sialic acid is unlikely to be the whole story. Sonnenburg et al. used only streptomycin, and other antibiotics probably have different effects on beneficial intestinal bacteria. Also, they found that in the presence of B. thetaiotaomicron, S. typhimurium increases production of mRNA for genes that metabolize not only sialic acid but also fucose and potentially other nutrients found in the intestines. They did some experiments with fucose, but, perhaps because C. difficile can’t metabolize fucose, these were not as thorough as those with sialic acid. And of course, it remains to be seen whether what happens in mice also happens in humans. Nonetheless, this study suggests that taking some sort of drug to decrease the amount of sialic acid, or of intestinal nutrients in general, along with antibiotics may help prevent S. typhimurium and C. difficile infections. And it showed that a diverse and balanced community of intestinal bacteria prevents pathogens from taking hold simply by not leaving them anything to eat.