What A Marshmallow Can Say About Your Brain

By Deirdre Sackett

In the 1970s, researchers at Stanford University performed a simple experiment. They offered children the chance to eat a single marshmallow right now, or wait 15 minutes to receive two marshmallows. Out of 600 children in the study, only about ⅓ were able to wait long enough for two treats. Most attempted to wait, but couldn’t make it through the whole 15 minutes. A minority of kids ate the marshmallow immediately.

 

Feeding marshmallows to children in the name of science may seem like a waste of federal funds. But it turns out that the ability to wait for a treat can actually predict a lot about someone’s personality and life trajectory.

 

Since the 1970s, many scientific groups have repeated the “marshmallow test” (some of which have been hilariously documented). In some iterations, researchers recorded whether each child chose an immediate versus a delayed treat, and then tracked the children’s characteristics as they grew up. Amazingly, the children’s choices predicted some important attributes later in life. Generally, the more patient children who waited for the bigger reward went on to score higher on the SAT, have a lower body mass index (BMI), and be more socially and cognitively competent than the kids who couldn’t wait and immediately ate one treat.

 

The “marshmallow test” measures a cognitive ability called delay discounting. The idea is that a big reward becomes less attractive (or “discounted”) the longer you have to wait for it. As such, delay discounting is a measure of impulsivity: how long are you willing to wait for something really good before choosing a quicker, but less ideal, option?
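Psychologists often formalize this idea with a hyperbolic discounting model, in which a reward’s subjective value falls off with delay at a rate set by an impulsivity parameter. Below is a minimal Python sketch of that standard model (Mazur’s hyperbolic form); the k values are purely illustrative and are not fit to any marshmallow-test data.

```python
def discounted_value(amount, delay_min, k=0.05):
    """Hyperbolic discounting: subjective value = amount / (1 + k * delay).
    Higher k means steeper discounting, i.e., a more impulsive chooser.
    The k values used below are illustrative, not fit to real data."""
    return amount / (1 + k * delay_min)

# One marshmallow now vs. two marshmallows in 15 minutes:
one_now = discounted_value(1, 0)                       # 1.00
two_later_patient = discounted_value(2, 15, k=0.02)    # ~1.54 -> worth waiting
two_later_impulsive = discounted_value(2, 15, k=0.20)  # 0.50 -> eat it now
```

For a patient chooser (small k), two treats in 15 minutes still “feel” bigger than one treat now; for an impulsive chooser (large k), the delayed option’s subjective value collapses below the immediate one.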

 

While it’s okay to make the occasional spur-of-the-moment choice, steep delay discounting (increased impulsivity) is often a symptom of problem gambling, ADHD, bipolar disorder, and other mental health issues. Drug addiction in particular is accompanied by increased impulsive choice: drug users will choose immediate rewards (such as drugs of abuse) over delayed, long-term rewards (e.g., family life, socializing, or jobs), discounting delayed options more steeply and settling on immediate options faster than non-drug users do. This isn’t just a human flaw; exposing rats to cocaine also increases their impulsivity in delay discounting tasks.

 

Interestingly, aspects of the “marshmallow test” hint at this impulsivity-addiction link. In 2011, researchers did a follow-up study with the (now adult) children from the original 1970s Stanford experiment. The scientists imaged the subjects’ brains while they performed a delayed-gratification task in which they had to wait for a reward. They found that patient and impulsive individuals had very different activity in two specific brain regions involved in drug addiction.

 

Firstly, the study found that impulsive individuals had greater activity in the ventral striatum, a brain region heavily linked to drug addiction and impulsivity. The greater activity in this region may imply that impulsive individuals process information about rewards differently than patient individuals. That is, the way their brain is wired may cause them to want their rewards right now.

 

Secondly, the impulsive individuals had less activity in the prefrontal cortex, which is responsible for “putting on the brakes” on impulsive actions. This finding suggests that impulsive individuals may lack the neural “supervisor” that stops them from acting on their impulses. Drug addicts show similarly reduced prefrontal activity. So in addition to predicting lower standardized test scores, higher BMIs, and lower social competence, the marshmallow test suggests that impulsive individuals may have brain activity resembling that of drug users.

 

While it seems like a silly experiment, the marshmallow test is a great starting point to help increase our understanding of impulsivity. Using this information, researchers can start to develop treatments for impulsive behavior that negatively affects people’s lives. Specifically, treating impulsivity in drug addicts could help as part of the rehabilitation process. So think about that the next time you reach for that sweet treat!

 

Escape from Exhausting Learning and Insufferable Experiences: Sleep May Do the Trick

 

By Yue Liu

The 2004 film 50 First Dates depicts a romance in which the hero must win the heroine’s love anew each day: she suffers from a fictional form of anterograde amnesia that erases each day’s memories overnight. In reality, a unique form of human amnesia strikingly similar to the heroine’s was reported in 2010. After a car accident, the patient FL could recall events that had happened before the accident, and could remember things from the same day after the crash, but following a night’s sleep she could no longer retrieve the previous day’s memories. What had happened to those memories during her sleep?

The role of sleep in memory has been described by a “two-stage” model: during the day, we temporarily store a remarkable amount of information in the hippocampus, a brain area named for its structural resemblance to the seahorse. While we sleep, the hippocampus gradually disengages, and memories are handed over to the neocortex for long-term storage. In brief, we consolidate our memories during sleep by transferring them from the hippocampus to the neocortex. If this transfer during sleep is disrupted, as may be the case for patient FL, temporary memories are lost, whereas permanent memories already stored in the neocortex remain intact.

In this month’s issue of Nature Neuroscience, Michaël Zugaro’s lab in France provided the first direct evidence for this two-stage model of memory. They observed fine temporal coupling of oscillatory activity between the hippocampus and neocortex in animals during deep sleep. When the animals’ learning periods (20 minutes) were long enough to trigger memory consolidation, the oscillatory coupling between the hippocampus and neocortex during sleep grew stronger. When the learning periods (3 minutes) were too short, however, the strength of the hippocampo-cortical coupling did not increase, and the memories were not consolidated. Interestingly, in the latter animals, boosting the hippocampo-cortical dialogue during sleep promoted memory consolidation that otherwise would not have happened, given the short learning period.
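To get an intuition for what “coupling” means here, one can ask how often a hippocampal event (such as a ripple) is promptly followed by a cortical event (such as a spindle). The short numpy sketch below computes one such index from two lists of event times; the event names, the 100 ms window, and the approach itself are illustrative assumptions for this article, not the authors’ actual analysis pipeline.

```python
import numpy as np

def coupling_index(ripple_times, spindle_times, window=0.1):
    """Fraction of hippocampal ripples followed by a cortical spindle
    onset within `window` seconds: a crude proxy for hippocampo-cortical
    temporal coupling (illustrative only)."""
    ripples = np.asarray(ripple_times, dtype=float)
    spindles = np.sort(np.asarray(spindle_times, dtype=float))
    nxt = np.searchsorted(spindles, ripples)  # first spindle at/after each ripple
    coupled = np.zeros(ripples.size, dtype=bool)
    valid = nxt < spindles.size
    coupled[valid] = (spindles[nxt[valid]] - ripples[valid]) <= window
    return coupled.mean() if ripples.size else 0.0

# Tightly coupled event trains score high; unrelated trains score low.
print(coupling_index([1.0, 2.0, 3.0], [1.05, 2.08, 7.0]))  # ~0.67
```

Boosting coupling experimentally, as the authors did, would correspond to making cortical events reliably follow hippocampal ones within such a window.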

This study offered the first causal link between the hippocampo-cortical dialogue during sleep and memory consolidation. It also invites a fantasy: can we learn much more quickly (in 3 rather than 20 minutes)? Can we study less during the day and receive a special electrical therapy at night that selectively enhances the hippocampo-cortical oscillatory coupling? Someday, an electrical device might be hooked up to a human brain to monitor and record the electrical activity associated with various experiences. We might program the device to tighten the hippocampo-cortical coupling during the night for a specific experience, strengthening that particular memory.

How about erasing a particular memory during sleep? In another 2004 film, Eternal Sunshine of the Spotless Mind, a couple whose relationship has fallen apart turn to a special procedure that wipes out their memories of each other during sleep, while their romantic episodes replay. The basis of this fantasy procedure may be the vulnerability of memories as they are replayed during sleep. Human imagination may yet propel scientists to develop a strategy that makes erasing memories possible in reality. Someday, we may ease insufferable emotional pain, such as that resulting from posttraumatic stress disorder (PTSD), by disrupting the replay of fear-, stress-, or anxiety-associated memories.

When science and technology can make it possible to easily save and delete our memories, we may escape from laborious learning and unpleasant memories just by clicking “save” or “delete” on electrical devices connected to our brains. But remember, our memories sculpt who we are. After this technological intervention, will you still be you?

 

 

Beyond Neuromania

By Celine Cammarata

As someone within the field, I see neuroscience – in some form or another – appear in the media nearly every day. Indeed, the term “neuromania”, originally coined by Raymond Tallis, has come into use to describe both the lofty claims made about the power of neuroscience to answer nearly every question and the general mainstream media frenzy surrounding the field. Scholars have paid increasing attention to this phenomenon, often regarding it as a problem, but more recent work suggests that despite the mania, neuroscience is still not widely understood, or even considered, by the public at large. So does all the hype conceal a true lack of public interest?

It’s undeniable that neuroscience is the target of extensive and potentially problematic media attention. In a 2012 Neuron editorial, O’Connor, Reese and Joffe examined the coverage of neuroscience-related topics in six UK newspapers from 2000 to 2010 and found that not only did the number of articles related to brain research nearly double from the early to the late 2000s, but the topics also changed and the implications of neuroscience research were often exaggerated. Whereas in the past neuroscience was generally reported on in relation to physical pathology, the authors found that in their sample the most common context for discussing neuroscience was that of brain enhancement and protection – topics that are both more widely applicable to a broad audience and that suggest a newly emerging sense of ownership over one’s brain. O’Connor et al. write that “although clinical applications retained an important position in our sample, neuroscience was more commonly represented as a domain of knowledge relevant to ‘ordinary’ thought and behavior and immediate social concerns. Brain science has been incorporated into the ordinary conceptual repertoire of the media, influencing public understanding of a broad range of events and phenomena.”

Such issues are also highlighted in Satel and Lilienfeld’s 2013 book Brainwashed: The Seductive Appeal of Mindless Neuroscience, in which the authors explore – and lament – the at times unrestrained application of functional magnetic resonance imaging (fMRI) to questions ranging from “Pepsi or Coke?” to “does free will exist?”. The tantalizing ability to see the brain has carried neuroscience into the realms of marketing, politics, law and more, not to mention changing the way we think about more traditional brain-research topics such as addiction. But, the authors point out, pictures of a brain alone cannot address every level of analysis, and they are not inherently of greater scientific value than other research methodologies. Tracking the physical footprint of a desire, attitude, or propensity in the brain does not in and of itself tell you why or how these things emerged, nor can it necessarily be used to assign guilt, decide what is a “disease” and what is not, or determine how people choose their politicians – and yet this is precisely what neuroscience is often touted to do.

Both of these works, and many others, are based on the premise that neuroscience has become markedly pervasive, nearly omnipresent. Fascinatingly, though, the brain craze seems to stop short of making the final leap from the media to public consciousness. To be sure, public interest in neuroscience does exist – someone must be buying the growing number of brain-centered books popping up at Barnes & Noble, right? – but a 2014 paper by the same authors as the Neuron piece found that the public in general is not nearly as interested in neuroscience as the media frenzy and the emergence of the brain in societal matters might suggest.

To probe how everyday citizens think about neuroscience, the authors conducted open-ended interviews in which a sample of Londoners, chosen to span age, gender and socioeconomic divides, were asked to share what came to mind when they considered research on the brain. The interviews were then examined and the themes they touched upon quantified, and the results clearly indicated that neuroscientific research has largely failed to penetrate the mindset of the public at large. Participants consistently indicated that they thought of brain research as a distant enterprise quite removed from them, performed by some unknown “other” (consistently described as a man in a white lab coat). Brain research was widely conflated with neurological medicine and brain surgery, and was almost universally assumed to focus on medical applications – the concept of basic science on cognition, emotion, or other mental phenomena appeared nearly unheard of.

Consistent with this, although most participants were quick to tag brain research as “interesting,” they also reported that it was not of particular interest to them specifically except in the context of illness. That is, above all the brain was something that might go wrong, and unless it did, participants gave it little thought at all. The authors connect this to the earlier concept of “dys-appearance”: the idea that much of the body remains inconspicuous and ignored so long as it is healthy, attracting attention only when there is some kind of dysfunction.

Based on these findings, O’Connor and Joffe concluded that despite the rapid advancement and intrusion of neuroscience into more and more areas of inquiry, research on the brain continues to have little relevance to the public’s daily lives. As they put it, “heightened public visibility should not be automatically equated with heightened personal engagement.”

So is neuroscience flooding our popular culture, or simply washing up then falling away like a rolling wave, never really lasting in our overall societal consciousness? For the moment, it appears to be both. Perhaps the concern over “neuromania” need not be so heated, but also perhaps we need to do more to understand how our work can take the extra step to become more relevant to those outside the lab.

Ripples in the Pond: Psychological Interventions Can Spread to the Whole Group

 

By Celine Cammarata

In light of frightening outbreaks of preventable diseases like measles, the impact that an individual can have on the community through a biological intervention – in this case, immunization – has become pressingly clear. Less obvious is that analogous ideas may apply to psychological treatments: a recent paper in the journal Psychological Science reports that an intervention to fight stereotype threat among minority middle schoolers actually changed the academic outcomes of their entire classrooms.

The authors’ findings stemmed from two previous studies in which 7th grade students in a largely lower- and middle-class middle school engaged in affirmative writing exercises designed to combat stereotype threat – the fear of confirming negative stereotypes about a group one belongs to. In both original experiments, students (regardless of race) were randomly assigned to an experimental or a control condition; all students completed short writing assignments in class, with those in the experimental group prompted to write about their most important values and those in the control asked to write about their least important values. Writing about important values was hypothesized to combat stereotype threat and associated stress, and thus foster higher academic achievement.

 

In both original experiments, these hypotheses appeared to be supported: African American students in the experimental condition achieved significantly higher final grades than African American students in the control condition. In line with the assumption that European American students suffer little reduction in potential achievement from stereotype threat, no effect of intervention versus control was seen among these racial-majority students (although a small number of students of other races participated in the experiments, the authors focused on African American vs. European American children).

 

In the present paper, the same authors reanalyzed data from these two experiments, but now asked whether, independently of the individual impacts already seen, the density of African American experimental-condition students in a classroom had any effect on the performance of the classroom as a whole. Although the original experiments had already demonstrated that among European American students the experimental group did no better than the control, it remained possible that all students benefitted on average from some classmates having been helped by the intervention. To quantify the presence of a “cluster” of treated students (i.e., African American students in the experimental group), the authors built a classroom-level measure combining the difference in the number of African American students assigned to the experimental versus the control condition with the proportion of students in the classroom who had participated in the study at all.
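As a loose illustration of such a classroom-level “dose,” consider a score that grows when more students were treated and when a larger share of the classroom took part in the study. The Python sketch below is a hypothetical reconstruction for intuition only: the function name and the exact formula are my own simplification, not the composite defined in the paper.

```python
def treatment_density(n_treated, n_control, n_participants, class_size):
    """Toy classroom-level 'treatment density' score (hypothetical).
    n_treated:      African American students in the experimental group
    n_control:      African American students in the control group
    n_participants: students in the class who took part in the study
    class_size:     total number of students in the classroom
    NOT the authors' exact composite; it only captures the idea of a
    net treated count scaled by overall study participation."""
    net_treated = n_treated - n_control
    participation = n_participants / class_size
    return net_treated * participation / class_size

# A class of 30 with 6 treated students, 2 controls, 20 participants:
print(treatment_density(6, 2, 20, 30))  # ~0.089
```

The essential point is that the predictor lives at the level of the classroom, not the individual student, which is what lets the authors ask whether untreated classmates also benefited.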

 

The results indicated that, above and beyond the impacts on the individual students who received the experimental intervention, the density of treated students strongly predicted final grades throughout the classroom. This in turn led to the exciting conclusion that the benefits of the psychological intervention were somehow transferring from the treated individuals to others in their environment – revealing a previously unappreciated, and potentially very meaningful, ecological power of this comparatively small intervention.

 

So how were effects spreading from the treated students to their classmates? The authors ruled out direct impacts of stereotype-threat reduction; for instance, it did not appear that the improved academic performance of African American students who received the intervention reduced the negative stereotypes felt by other African Americans, because the benefits of treatment density in a classroom spread to other students regardless of race. Furthermore, a separate experiment on subliminal stereotyping did not suggest a general reduction in the presence of stereotypes and the consequent stereotype threat. Instead, the bolstering of the treated students’ academic success – rather than how that bolstering was achieved – may have driven the transmissible benefits. The authors suggest that these students’ higher performance might have changed behavioral norms in the classroom in ways that fostered success; additionally, where the treatment helped previously struggling students boost their performance, it may have freed up teachers to focus on other struggling students, improving performance overall. This is supported by the finding that treatment density in a classroom had the greatest positive impact on students at the lower end of academic performance.

 

While much remains to be clarified about the mechanisms, this paper provides exciting evidence that targeted psychological interventions can result in significant ecological changes: from an epicenter of treated individuals, benefits can spread to everyone like ripples in a pond.

Double-Strand Breaks for the Win

 

By Rebecca Delker, PhD

The blueprint of an organism is its genome, the most fundamental code necessary for life. The carefully ordered – and structured – composition of As, Ts, Cs and Gs provides the manual that each cell uses to carry out its diverse functions. As such, unintended alterations to this code often produce devastating consequences, manifesting themselves in disease phenotypes. From mutations to insertions and deletions, changes in the sequence of nucleotides alter the cell’s interpretation of the genome, like changing the order of words in a sentence. However, arguably the most threatening alteration is the double-strand break (DSB), a fracture in the backbone of the helical structure that splits a linear piece of DNA in two, as if cut by molecular scissors. While the cell has a complex set of machinery designed to repair the damage, this process can be erroneous, generating deletions or, even worse, translocations – permanently reordering the pages of the manual and ultimately transforming the cell. Given the central role translocations can play in oncogenic transformation, DSBs have understandably received a bad rap; but, as can be expected, not all is black and white, and it’s worth asking whether there is an upside to DSBs.

 

One such commendable pursuit of the DSB serves to expand the capabilities of our genome. While it is true that the genome is the most basic code necessary for life, many of the processes within a cell actually require changes to the code. These can occur at all levels of the Central Dogma – modifications of proteins, RNA, and even DNA. B- and T-lymphocytes, cells that provide a good amount of heft to our immune system, are notable for their DNA editing skills. Tasked with protecting an organism from billions of potential pathogens, B- and T-cells must generate receptors specific for each unique attack. Rather than encoding each of these receptors in the genome – an impossibility due to size restrictions – B- and T-lymphocytes use DSBs to cut and paste small gene fragments to build a myriad of different receptor genes, each with a unique sequence and specificity (reviewed here). For immune cells, and for the survival of the organism, these DSBs are essential. Although tightly controlled, DNA rearrangements in immune cells are mechanistically similar to the sinister DSB-induced translocations that promote cancer formation; however, rather than causing disease, they help prevent it.

 

New research published this summer points to exciting, and even more unusual, uses of DSBs in the regulation of gene expression. In a quest to understand the molecular effects of DSBs that are causally linked to a variety of neurological disorders, Ram Madabhushi, Li-Huei Tsai and colleagues instead discovered a necessary role for DSBs in the response of neurons to external stimuli. To adapt to the environment and generate long-term memories, changes in the “morphology and connectivity of neural circuits” occur in response to neuronal activation. This synaptic plasticity relies on a rapid increase in the expression of a select set of early-response genes responsible for initiating the cascade of cellular changes needed for synaptogenic processes. In their paper published in Cell this summer, the authors reveal that the formation of DSBs in the promoters of early-response genes induces their expression in response to neuronal stimulation.

 

By treating neuronal cells with etoposide, an inhibitor of type-II topoisomerase enzymes (TopoII) that causes DSB formation, the researchers expected to find that DSBs interfere with transcription. In fact, most genes found to be differentially expressed in cells treated with the drug showed a decrease in expression; however, a small subset of genes, including the early-response genes, actually increased. Through a series of in vivo and ex vivo experiments, the researchers showed that even in the absence of drug treatment, DSB formation in the promoters of early-response genes is critical for gene expression – beautifully weaving a connection between neuronal activation, DSB formation and the rapid initiation of transcription in this subset of genes.

 

The serendipitous discovery of etoposide’s positive effect on gene expression led the researchers to focus on the role of topoisomerases, the guardians of DNA torsion, in DSB formation. Because DNA is a helical structure composed of intertwined strands, nuclear processes like replication and transcription over- or under-twist the helix, leading the DNA molecule to twist around itself to relieve the torsional stress and form a supercoiled structure. Topoisomerases return DNA to its relaxed state by generating breaks in the DNA backbone – single-strand breaks by type I enzymes and DSBs by type II – untwisting the DNA and religating the ends. While etoposide can artificially force sustained DSBs, physiological TopoII-induced breaks are typically too transient to be recognized by DNA repair proteins. The finding that TopoIIβ-induced DSBs at the promoters of neuronal early-response genes are persistent and recognized by DNA repair machinery suggests a non-traditional role for TopoII enzymes, and DSBs, in transcription initiation and regulation.

 

In fact, the contribution of TopoII and DSBs to the regulation of neuronal genes may not be so niche. Another recently published study found a similar relationship between transcriptional activation and Topo-mediated DSB formation. Using the primordial germline cells of C. elegans as a model system, Melina Butuči, W. Matthew Michael and colleagues found that the abrupt increase in transcription as embryonic cells switch from dependence on maternally provided RNA and protein to activation of their own genome induces widespread DSB formation. Amazingly, TOP-2, the C. elegans ortholog of TopoII, is required for break formation; but, in contrast to neuronal activation, these DSBs occur in response to transcription rather than as a causative agent.

 

These recent studies build upon a growing recognition of a potentially intimate relationship between DSBs, torsion and transcription. DNA repair proteins, as well as topoisomerase enzymes, have been shown to physically interact with transcription factors and gene regulatory elements; topoisomerases I and II facilitate the transcription of long genes; and, as in neuronal cells, studies of hormone-induced gene expression in cell culture reveal an activation mechanism by which TopoIIβ induces DSBs selectively in the promoters of hormone-sensitive genes. Thus, DSBs may constitute a much broader mechanism for the regulation of gene-specific transcription than previously thought.

 

Given the grave danger associated with creating breaks in the genome, it is curious that DSBs evolved to be an integral component of the regulation of transcription – an inescapable and ubiquitously employed process. However, as we expand our understanding of transcription to include the contribution of the higher-order structure of DNA, the utility of this particular evolutionary oddity comes into focus. Genomic DNA is not naked, but rather wrapped around histone proteins and packaged in the 3D space of the nucleus such that genomic interactions influence gene expression. Changes in the torsion and supercoiling of DNA have been associated with histone exchange, as well as with changes in the affinity of DNA-binding proteins for DNA. In addition, the requirement for topoisomerase in the transcription of long genes arises early, as RNA polymerase transitions from initiation to elongation, suggesting that the role of TopoI and TopoII is not to relieve transcription-induced torsion, but rather to resolve an inhibitory, likely 3D, genomic structure that is specific to genes of longer length. A similar mechanism may be at play at the neuronal early-response genes. In these cells, genomic sites of TopoIIβ binding and DSB formation significantly overlap binding sites of CTCF – a crucial protein involved in genomic looping and higher-order chromatin structure – and, again, DNA breaks may function to collapse a structure constraining gene activation. Whatever the exact mechanisms at play, these results inspire further inquiry into the relationship between DSBs, genome topology and transcription.

 

A cell’s unique interpretation of the genome via distinct gene expression programs is what generates cell diversity in multicellular organisms. Immune cells, like B- and T-lymphocytes, are different from neurons, which are different from skin cells, despite working from the same genomic manual. In B- and T-cells, DSBs are essential to piece together DNA fragments in a choose-your-own-adventure fashion to produce a reorganization of the manual necessary for cell function. And, as is emphasized in this growing body of research, DSBs function along with a variety of other molecular mechanisms to highlight, underline, dog-ear, and otherwise mark-up the genome in a cell-specific manner to facilitate the activation and repression of the correct genes at the correct time. Here, DSBs may not reorder the manual, but, nevertheless, play an equally important role in promoting proper cell function.

 

Neurodevelopment and the Health-Wealth Gap

 

By Danielle Gerhard

 

The famous Roman poet Virgil said that the greatest wealth is health. But what if your actual wealth affects your access to health?

 

It is estimated that more than 45 million Americans, or 14.5% of the population, live below the poverty line, according to the most recent Census Bureau survey. Although slightly lower than in previous years, the poverty rate for children under 18 is still startlingly high: 19.9%. Poverty dictates how individuals live their lives and, most importantly, what resources they can easily access. Proper nutrition, environmental stimulation, basic healthcare, and family nurturing all aid healthy development, yet all are lacking in low-income communities.

 

An individual’s zip code is considered as much of a risk to health as genetics. Dr. Melody Goodman of Washington University in St. Louis researches the contribution of social risk factors to health disparities in local communities. One area of St. Louis, known as the Delmar Divide, is a stark example of how location predicts education and health. To the south of Delmar Boulevard is a largely white community with an average income of $47,000, where 67% of residents have a bachelor’s degree. Directly north of Delmar Boulevard is a predominantly African American community with an average income of $22,000, where only 5% of residents have a bachelor’s degree. Health also follows the so-called Delmar Divide: cancer, heart disease and obesity are only a few of the diseases occurring at higher rates in these neglected, low-income neighborhoods.

 

Because our brains develop rapidly during childhood, they are especially vulnerable to stress and environmental changes. Scientists have recently extended their efforts to better understand the long-lasting effects of income and environment on the brain and behavior. A number of studies have examined the behavioral consequences of growing up in a disadvantaged family, including increased risk for behavioral disorders, developmental delays, and learning disabilities. Fewer human studies have examined the long-lasting effects of childhood poverty on brain regions known to be critical for executive function, attention and memory. Two recently published studies investigate this very question using large-scale, longitudinal designs in participants between 3 and 20 years of age from different socioeconomic backgrounds.

 

One longitudinal, multi-site study published in JAMA Pediatrics investigated whether childhood poverty causes significant structural impairments in brain regions known to be important for academic performance. Key regions targeted in the study include the frontal lobes, involved in behavioral inhibition and emotion regulation; the temporal lobes, important for language and memory; and the hippocampus, a region critical for long-term memory as well as spatial and contextual memory. Demographic information and neuroimaging data were collected from nearly 400 economically diverse participants, controlling for potential confounding factors such as health problems during or after pregnancy, complicated medical histories, familial history of psychiatric disorders, and behavioral deficits.

 

As hypothesized, children raised in low-income families had lower scores on the Wechsler Abbreviated Scale of Intelligence (WASI), which measures intelligence via verbal and performance IQ, and on the Woodcock-Johnson III Tests of Achievement (WJ-III), a test of math skills and reading comprehension. Anatomically, children raised in low-income families showed reductions in gray matter (the tissue where most of the brain’s cell bodies are housed) in the frontal and temporal lobes as well as in the hippocampus, with the largest deficits seen in children living well below the federal poverty line.

 

Another study, recently published in Nature Neuroscience, reported similar findings. The authors investigated whether poverty, defined by parental education level and income, is predictive of neurodevelopmental deficits in key brain regions. As hypothesized, income was related to structural impairments in brain regions important for reading, language, and other executive skills. Like the study published in JAMA Pediatrics, this study found the strongest effects in children from the poorest families.

 

These studies highlight the importance of access to beneficial resources during childhood and adolescence, and how income and environment can drastically affect the developmental trajectory of brain regions key to success in adulthood. A number of programs for social change, guided by empirical data and public policy, are being implemented in disadvantaged communities. Sending healthcare workers out of the clinic and into these communities is a step in the right direction. However, some clinicians argue that this is unsustainable and instead advocate going further: training individuals who already live in these communities, or having healthcare providers move into them.

 

Furthermore, initiatives focusing on children and adolescents in particular could prevent more problems, possibly irreversible ones, from occurring down the road. Interventions directed towards reducing income inequality, improving nutrition, and increasing access to educational opportunities could drastically redirect a child’s trajectory into adulthood. Early-education programs targeting children aged 3-5 have been shown to improve future educational attainment and earnings as well as to reduce crime and adult poverty.

 

An unhealthy, broken social support system nurses an unhealthy, broken environment in disadvantaged regions lacking basic resources. Scientific knowledge can help direct public policy initiatives towards programs with greater impacts on society. A continued dialogue among scientists, politicians, and community activists is vital to the health not only of the children growing up in low-income communities but, arguably, of our society as a whole. Placing funds and resources solely towards ameliorating adult poverty is akin to putting a band-aid on the problem. Today’s children are tomorrow’s adults; helping today’s children helps tomorrow’s adults.

What's Keeping You Up at Night?

If you want to sleep, turn off your electronic device.

The light-emitting devices might be keeping you awake!

 

By Jesica Levingston Mac Leod, PhD

 

It is well established by now that staring at your phone, iPad or computer screen before going to sleep may delay your “real sleeping” time. Continued exposure to light excites the receptors in your eyes, and therefore your brain, sending the signal that you must stay awake longer. This might not be a problem if you enjoy lying around in bed, tossing from side to side, but most people have to get to work early or have other commitments that haunt them the morning after a bad night of sleep. Insomnia is actually a serious disorder; the lack of restful sleep can have a negative effect on your daytime life and can result in poor performance at work. A recent study, published in the journal SLEEP, showed that reducing sleep from 8 hours to 4 hours makes memories less accessible in stressful situations.

 

Last December, a study in Boston added more evidence to the hypothesis that blue light negatively affects the secretion of melatonin, the hormone that helps regulate sleep and wake cycles. Dr. Chang and collaborators reported in PNAS that using blue-light-emitting electronic devices before bedtime reduces a person’s alertness and interferes with their circadian rhythm. In this study, they compared the effects of reading from a light-emitting device versus reading from a paper book, and found that the e-readers delayed sleep by up to an hour compared to the old-fashioned paper books.

 

A recent study, published in the Journal of Biological Rhythms, also tried to answer the question: can access to artificial light modify our sleeping patterns? The answer was YES, it does! Sounds pretty legit, right?

 

Dr. De la Iglesia and collaborators studied two native communities in northern Argentina: the Tobas and the Qom. The two indigenous communities share a similar sociocultural and ethnic heritage, but one key difference separates them: only the Tobas have access to electricity. The Qom community therefore regulates its lifestyle by natural light, like our ancestors before the almighty Mr. Edison’s invention.

 

The researchers provided participants from both communities with motion-tracking wristbands to follow their activity during both the summer and winter seasons. They found that in the summer the Tobas tended to get less daily sleep, about 43 minutes less per day, than those living under natural light conditions. Not surprisingly, this was due to a later daily bedtime and sleep onset in the community with electricity, with similar sleep offset and rise times in both communities. In the winter, the Qom slept around 56 minutes more per day than those with access to electricity, which again reflected their earlier bedtimes and sleep onsets compared to the Tobas. The authors concluded: “The access to inexpensive sources of artificial light and the ability to create artificially lit environments must have been key factors in reducing sleep in industrialized human societies.”

 

But reading the conclusion, you learn something else: the Toba community had TVs, which kept its members awake even later.

 

How do you get that pleasant sleep? Listen to lullabies: soft melodies ranging from 60 to 80 beats per minute. Take a warm bath, since the drop in body temperature that follows can help cue sleep. Another option is to pay extra attention to your breath: focusing on how air moves through your body can relax you and reduce stress. My favorite solution is to meditate! At least try – a lot of people accidentally fall asleep while trying to meditate anyway ;).

 

If you are a device-addicted insomniac, at least decrease the brightness of your screen. Tonight, have a nice encounter with Morpheus and remember that the rest of the human race will appreciate not dealing with a cranky sleepless person tomorrow.

 

A Micro Solution to a Macro Problem?

 

By Danielle Gerhard

Recent estimates by the National Institute of Mental Health (NIMH) suggest that approximately 25% of American adults will experience a mental illness in a given year. Individuals living with a serious mental illness are more likely to develop a chronic medical condition and to die earlier. In young adults, mental illness results in higher high school dropout rates. A dearth of effective medications leaves many individuals unable to hold a job, costing America $193 billion in lost earnings per year. These saddening statistics shed light on the need for better drugs to treat mental illness.

 

Traditionally, treating a mental illness like depression, anxiety or schizophrenia involves a delicate and perpetually changing combination of drugs that target levels of neurotransmitters in the brain. Neurotransmitters are chemicals produced by the brain and used by cells to communicate with one another. Drugs used to treat mental illness either increase or decrease the release, reuptake or degradation of these chemicals. The current paradigm is that disease results solely from neurotransmitter imbalance, so research has predominantly focused on the specific types of cells that release these chemicals: neurons. However, neurons make up only approximately 50% of all cells in the human brain. The other 50% are glial cells, which maintain and protect the neurons of the brain and body.

 

One type of glial cell, the microglia, are specialized macrophage-like immune cells that migrate into the brain during development and reside there throughout life. Microglia are the primary immune cells of the brain and act as first responders, quickly mounting responses to foreign pathogens and promoting adaptive immune actions. Microglia adapt to changes in their microenvironment by extending or retracting their processes, scavenging their surroundings for dead neurons and cellular debris to maintain neuronal health. Moreover, microglia have been shown to be involved in the induction and maintenance of long-term potentiation, an event critical for the synaptic plasticity underlying learning and memory. Only in the past decade or so has this cell type begun to surface as a potential mediator in the development and continuation of mental illness. As a result of decades of neuron-focused experiments, the functions of microglia have either been misunderstood or overlooked altogether. Two recently published experiments challenge our conventional understanding of the etiology of mental illness.

 

A new study published in the January 29th issue of the scientific journal Nature Communications by Dr. Jaime Grutzendler’s team at Yale University highlights a novel role for microglia in Alzheimer’s disease (AD). Late-onset AD is thought to result from the accumulation of the protein β-amyloid (Aβ) into plaques, a process of aggregation driven by reduced Aβ clearance. Because microglia with an activated morphology are found wrapped around areas of high Aβ accumulation, it has been hypothesized that they actually contribute to weakening neuronal projections by releasing small neurotoxic proteins called cytokines that affect cell communication. Aβ can exist as mature, inert fibrillar Aβ but can also revert to an intermediary state, protofibrillar Aβ, which is toxic to neurons.

 

Dr. Grutzendler’s lab set out to further investigate the role of microglia in Aβ plaque expansion with respect to the different forms of Aβ. Using two-photon imaging and high-resolution confocal microscopy, the team at Yale showed that, for the most part, microglia formed tight barriers around Aβ plaques with their processes, but in some instances left plaque “hotspots” exposed. These plaque “hotspots” were associated with greater axonal and neuronal damage.

 

These findings indicate that microglia generate protective barriers around Aβ plaques that shield neurons from the neurotoxic effects of protofibrillar Aβ. Of note, studies using aged mice revealed that microglia were less effective at surrounding plaques, leading to increased neuronal damage. Microglial regulation decreases with age, rendering neurons more vulnerable to environmental insults. This cell type is therefore a likely key mediator of the neuronal death that leads to cognitive decline and emotional disturbances in patients suffering from AD and other neurodegenerative diseases.

 

Another recently published study, from Dr. Linda Watkins of the University of Colorado, Boulder, highlights a novel role for microglia in addiction, a chronic disease that afflicts many individuals with mental illness. The study, published in the February 3rd issue of the scientific journal Molecular Psychiatry, examines the role of microglia in the rewarding and reinforcing effects of cocaine.

 

It has long been understood that drugs of abuse activate the brain’s dopamine (DA) system, increasing DA release from the ventral tegmental area (VTA) to the nucleus accumbens (NAc), a pathway important for their rewarding effects. Cocaine achieves this effect by blocking dopamine transporters (DATs) on the cell surface, resulting in increased levels of synaptic DA and sustained neuronal activity. Efforts have therefore focused on targeting DATs to prevent the rewarding effects of cocaine and ultimately reduce addiction.

 

In addition to these established dogmas, recent studies have shown that cocaine also activates the brain’s immune system. Microglia express Toll-like receptor 4 (TLR4) and its binding protein MD-2, which are important for recognizing pathogens and triggering the release of pro-inflammatory molecules such as interleukin-1β (IL-1β). Using an animal model of addiction in combination with in silico and in vitro techniques, Dr. Watkins’ team found that cocaine activates the TLR4/MD-2 complex on microglia, resulting in an upregulation of IL-1β mRNA in the VTA and increased release of DA in the NAc. Administration of the selective TLR4 antagonist (+)-naloxone blocked cocaine-induced DA release and the rewarding effects of cocaine in rodent self-administration models. Overall, the study concludes that TLR4 activation on microglial cells contributes to the rewarding and reinforcing properties of cocaine; drugs targeting this system could thus prove successful in treating addiction.

 

Through these studies and similar reports, it is becoming apparent that mental illness is more than a chemical imbalance in the brain and therefore shouldn’t be studied as such. The two studies highlighted in this article show the diverse role of microglia in the development and maintenance of mental illnesses. A more in-depth understanding of how this cell type interacts with already identified neural systems underlying mental disorders could result in the development of better-tailored drug design.

 

Winter’s Sleep: Insights Into Neurodegeneration

 

By Susan Sheng

Winter has come for most places in North America, and for many creatures that means settling in somewhere and hibernating until the weather warms up. During hibernation, a number of physiological changes occur, such as decreased metabolic rates and lowered core temperatures, in order to conserve energy. Interestingly, the brains of hibernators also undergo morphological changes; specifically, scientists have shown that there is a loss of synaptic protein clustering in hibernating animals, and that upon rewarming to normal body temperatures, these synapses can be rapidly reformed. This process of synapse dismantling and reformation has been proposed as a model of adult synaptic plasticity.

Synapse loss is a hallmark of neurodegenerative diseases, so a group of UK scientists decided to investigate whether the mechanisms underlying synapse dismantling and reassembly in hibernating animals could give insight into the maintenance and subsequent loss of synapses in models of neurodegenerative disease.

First, Peretti and colleagues showed that, like other small hibernators, laboratory mice demonstrated synaptic dismantling in the hippocampus upon artificial cooling to core temperatures of 16-18°C and subsequent reassembly upon rewarming. Mechanistically, Peretti and colleagues linked an RNA-binding protein, RBM3 (RNA-binding motif protein 3), to synaptic reassembly. RBM3 has previously been shown to be upregulated in hypothermic conditions and to have neuroprotective effects; it promotes global protein synthesis and is expressed in both neurons and glia (Chip et al., 2011). Here, Peretti and colleagues showed that RBM3 is upregulated following artificial cooling, and that this upregulation persisted for up to 6 weeks in wild-type mice.

What is interesting is that RBM3 seems to be dysregulated in two mouse models of neurodegenerative disease: the 5xFAD model of Alzheimer’s disease, and prion disease (tg37+/- mice infected with Rocky Mountain Laboratory prions). Both models have a delayed onset of synaptic loss and associated behavioral and learning deficits under normal conditions (around 4 months in the 5xFAD model, and 7 weeks post-infection in the prion model). In both models, prior to the onset of disease symptoms, animals that were cooled and rewarmed showed synaptic structural plasticity and upregulated RBM3 levels similar to those of wild-type mice. Animals in the disease stage, however, failed to reassemble synapses upon rewarming and failed to upregulate RBM3 after cooling. Efforts to boost RBM3 levels, either through early therapeutic cooling to raise endogenous protein levels or through viral overexpression, rescued these structural deficits and the consequent behavioral changes, while viral knockdown of RBM3 accelerated disease progression. In all, it appears that RBM3 is important in synapse structure, either in the formation of new synapses or perhaps in the maintenance of existing ones.

In humans, therapeutic cooling is already used in certain clinical settings, such as cardiac arrest, stroke, traumatic brain/spinal injury, and neonatal encephalopathy, with varying amounts of evidence as to its efficacy. A 2009 study in human Alzheimer’s disease patients has shown that RBM3 mRNA is significantly downregulated compared to age-matched controls. It could be interesting to see whether RBM3 is a viable drug target for human treatments, given the effects of RBM3 overexpression in the rodent models. Alternatively, depending on when the downregulation of RBM3 occurs relative to disease progression, it could be a predictor or a diagnostic marker for the onset of Alzheimer’s disease.

The “Big Data” Future of Neuroscience

 

By John McLaughlin

In the scientific world, the increasingly popular trend towards “big data” has overtaken several disciplines, including many fields in biology. What exactly is “big data?” This buzz phrase usually signifies research with one or more key attributes: tackling problems with the use of large high-throughput data sets, large-scale “big-picture” projects involving collaborations among several labs, and heavy use of informatics and computational tools for data collection and analysis. Along with the big data revolution has come an exploding number of new “omics”: genomics, proteomics, regulomics, metabolomics, connectomics, and many others which promise to expand and integrate our understanding of biological systems.

 

The field of neuroscience is no exception to this trend, and has the added bonus of capturing the curiosity and enthusiasm of the public. In 2013, the United States’ BRAIN Initiative and the European Union’s Human Brain Project were both announced, each committing hundreds of millions of dollars over the next decade to funding a wide variety of projects, directed toward the ultimate goal of completely mapping the neuronal activity of the human brain. A sizeable portion of the funding will be directed towards informatics and computing projects for analyzing and integrating the collected data. Because grant funding will be distributed among many labs with differing expertise, these projects will be essential for biologists to compare and understand one another’s results.

 

In a recent “Focus on Big Data” issue, Nature Neuroscience featured editorials exploring some of the unique conceptual and technical challenges facing neuroscience today. For one, scientists seek to understand brain function at multiple levels of organization, from individual synapses up to the activity of whole brain regions, and each level of analysis requires its own set of tools with different spatial and temporal resolutions. For example, measuring the voltage inside single neurons will give us very different insights from an fMRI scan of a large brain region. How will the data acquired using disparate techniques become unified into a holistic understanding of the brain? New technologies have allowed us to observe tighter correlations between neural activity and organismal behavior. Understanding the causes underlying this behavior will require manipulating neuronal function, for example by using optogenetic tools that are now part of the big data toolkit.

 

Neuroscience has a relatively long history; the brain and nervous system have been studied in many different model systems which greatly range in complexity, from nematodes and fruit flies, to zebrafish, amphibians, mice, and humans. As another commentary points out, big data neuroscience will need to supplement the “vertical” reductionist approaches that have been successfully used to understand neuronal function, by integrating what has been learned across species into a unified account of the brain.

 

We should also wonder: will there be any negative consequences of the big data revolution? Although the costs of data acquisition and sharing are decreasing, putting the data to good use is still very complicated, and may require full-time computational biologists or software engineers in the lab. Will smaller labs, working at a more modest scale, be able to compete for funds in an academic climate dominated by large consortia? From a conceptual angle, the big data approach is sometimes criticized for not being “hypothesis-driven,” because it places emphasis on data collection rather than addressing smaller, individual questions. Will big data neuroscience help clarify the big-picture questions or end up muddling them?

 

If recent years are a reliable indicator, the coming decades in neuroscience promise to be very exciting. Hopefully we can continue navigating towards the big picture of the brain without drowning in a sea of data.