For a Healthier 2018!

Dancing together is good for your health!

 

By Jesica Levingston Mac Leod, PhD

 

Social dancers know the amazing feeling a synchronized dance can bring. When you and your follower or leader are connected, and it feels like you are one mind and body following the music, it is mystical and magical… Well, it turns out that synchronized dancing is also good for your health. I started dancing salsa because a good friend was crazy about it and recommended it, which inspired me to join a class. Until then I had been a solitary belly dancer, only joining team dances where you have a choreography and, if you are coordinated enough, you feel that celestial connection with the other dancers… but without any physical contact.

In social dances like salsa, bachata, tango, zouk or swing, on the other hand, connection is the basis of a good dance. Nobody wants to be the person stepping to the left when five other dancers move to the right while performing on a stage in front of hundreds of people, just as nobody enjoys turning the wrong way after misreading a dance partner’s lead, or watching a follower do a completely different step than the one the leader indicated. Furthermore, science says that being “in sync” with the group or your direct dance partner may help improve your health. In a nutshell, a recent study found that synchronizing with others while dancing raised pain tolerance and encouraged people to feel closer to one another.

This year, Dr. Burzynska et al., at Colorado State University, separated 174 healthy adults aged 60 to 79, who had no signs of memory loss or impairment, into three activity groups: walking, stretching and balance training, or dance classes. The activities were carried out three times a week for six months; those in the dance group practiced and learned a country dance choreography. Brain scans were done on all participants and compared with scans taken before the activities began. Not surprisingly, the participants in the dancing group performed better and had less deterioration in their brains than the other groups. Their most recent study, published in November, “The Dancing Brain: Structural and Functional Signatures of Expert Dance Training,” showed that dancers’ brains differed from non-dancers’ at both the functional and structural levels. Most of the group differences were skill-relevant and correlated with objective laboratory measures of dance skill and balance. Their results suggest that “long-term, versatile, combined motor and coordination training may induce neural alterations that support performance demands.”

Moreover, it is well established that dance-based therapies provide outstanding results in the treatment of dementia, autism and Parkinson’s. Indeed, dance therapy improves motor and cognitive functions in patients with Parkinson’s disease, and dancing has been suggested to be a powerful tool to improve motor-cognitive dual-task performance in adults. Dance movement therapy also has known benefits for cancer patients’ physical and psychological health and quality of life. Another study, by Domane and collaborators, working with a cohort of overweight and physically inactive women, showed that Zumba fitness is indeed an efficacious health-enhancing activity for adults. Park likewise concluded that “a 12-week low- to moderate-intensity exercise program appears to be beneficial for obese elderly women by improving risk factors for cardiovascular disease”.

Dancing helps generate positive connections with others, and this is one of the evolutionary reasons you are “called” to the dance floor when a song you like starts playing, and why you will probably start your dance by coordinating with or copying others. This behavior probably signaled tribe membership for early humans, and also brought couples together in a more romantic way, creating emotional bonds. Coordinated dances are as old as music and found across many different cultures; for example, the haka, nowadays performed by rugby players, was a native group dance used to intimidate rival tribes.

As for the chemistry of dancing: like any other exercise, it releases endorphins (the hormones of happiness and pain relief). For example, a study from the University of London, in which anxiety sufferers enrolled in one of four settings (an exercise class, a music class, a math class or a dance class), showed that only the last group displayed “significantly reduced anxiety.”

In the most recent study from the same London university, Tarr and collaborators used pain thresholds as an indirect measure of endorphin release (more endorphins mean we tolerate pain better) in 264 young people in Brazil. The volunteers were divided into groups of three, and they did either high- or low-exertion dancing that was either synchronized or unsynchronized. The high-exertion moves were standing, full-bodied movements, while the low-exertion groups did small hand movements sitting down. The researchers measured feelings of closeness before and after via a questionnaire, and pain threshold by attaching and inflating a blood pressure cuff on the arm and determining how much pressure the volunteers could stand.

Most of the volunteers who did full-bodied, exertive dancing had higher pain thresholds compared with those in the low-exertion groups. Most importantly, synchronization led to higher pain thresholds even if the synchronized movements were not exertive: when volunteers saw others doing the equivalent movement at the same time, their pain thresholds increased.

The results also showed that synchronized activity encouraged bonding and feelings of closeness more than unsynchronized dancing. “Dance which combined high energy and synchrony had the greatest effects. So the next time you find yourself at an awkward Christmas party or at a wedding wondering whether or not to get up and groove, just do it,” says Dr. Tarr.

Coming back to the dance floor, I reached out to the best bachata DJ, Brian el Matatan, for his opinion on the wellness of dancing: “I enjoy the dancing for a few reasons. There’s the enjoyment & challenge of using what I’ve learned; socially as well as choreographed performance. Also, there is the rush of endorphins similar to “runner’s high”. There’s also the socializing aspect of dancing. It’s like having a conversation without speaking.” Well said, DJ!
He also offered some advice for followers: dance with many different types of leaders if you’d like to improve your following. There are many different leads, and there is experience to be gained in social dancing that would not be gained in a dance class. Also, feel free to ask a leader to dance, and be courteous in how you decline a dance. Most importantly: communicate. Don’t “lead” a leader into thinking his lead is better than it really is, for your sake and that of your fellow followers. For example, if he almost ended your life with that risky move, let him know, so that he doesn’t try it on you or anyone else again (at least not without figuring out how to do the move properly). And some advice for leaders: be VERY courteous in how you ask for a dance, try not to take rejection personally, be patient with follows who may not be on the same skill level as you, and don’t almost end her life with risky moves.

Lastly, I asked the most sensual dancer, scientist, and project manager, Debbie McCabe, for her advice for followers. She commented: “The lady’s job is to surrender and connect to her partner… it is a 3-minute love affair and energy exchange. I love Bachata because I can get out of my head and just feel, express my sensuality, be playful and connect… it balances out my left-brained day job.”

More than 20 years ago, scientists found a connection between music and enhanced performance, or altered neuropsychological activity, involving Mozart’s music, from which the theory of “The Mozart Effect” was derived. The basis of the Mozart Effect is thought to lie in the super-organization of the cerebral cortex, which might resonate with the superior architecture of Mozart’s music. Basically, listening to Mozart’s Sonata for Two Pianos, K. 448, enhances performance on spatial tasks for a period of approximately 20 minutes.

So dear reader, please stop complaining and making excuses and just dance! Or at least listen to music; as the outstanding jazz singer Tamar Korn once told me when I was in distress, “music heals.”

 

This post was originally published on Dec 30, 2015 and was updated with new research on Dec 12, 2016 and on Dec 19, 2017.

The Fake Drug Problem

 

By Gesa Junge, PhD

Tablets, injections, and drops are convenient ways to administer life-saving medicine – but there is no way to tell what’s in them just by looking, and that makes drugs relatively easy to counterfeit. Counterfeit drugs are medicines that contain the wrong amount or type of active ingredient (the vast majority of cases), are sold in fraudulent packaging, or are contaminated with harmful substances. A very important distinction here: counterfeit drugs do not equal generic drugs. Generic drugs contain the same type and dose of active ingredient as a branded product and have undergone clinical trials, and they, too, can be counterfeited. In fact, counterfeiting can affect any drug, and although the main targets, particularly in Europe and North America, have historically been “lifestyle drugs” such as Viagra and weight loss products, fake versions of cancer drugs, antidepressants, anti-malaria drugs and even medical devices are increasingly reported.

The consequences of counterfeit medicines can be fatal, for example, due to toxic contaminants in medicines, or inactive drugs used to treat life-threatening conditions. According to a BBC article, over 100,000 people die each year due to ineffective malaria medicines, and overall, Interpol puts the number of deaths due to counterfeit pharmaceuticals at up to a million per year. There are also other public health implications: Antibiotics in too low doses may not help a patient fight an infection, but they can be sufficient to induce resistance in bacteria, and counterfeit painkillers containing fentanyl, a powerful opioid, are a major contributor to the opioid crisis, according to the DEA.

It seems nearly impossible to accurately quantify the global market for counterfeit pharmaceuticals, but it may be as much as $200bn, or possibly over $400bn. The profit margin on fake drugs is huge because the expensive part of a drug is the active ingredient, which can relatively easily be replaced with cheap, inert material. These inactive pills can then be sold at a fraction of the price of the real drug while still making a profit. According to a 2011 report by the Stimson Center, the large profit margin, combined with comparatively low penalties for manufacturing and selling counterfeit pharmaceuticals, makes counterfeiting drugs a popular revenue stream for organized crime, including global terrorist organizations.

Even though the incidence of drug counterfeiting is very hard to estimate, it is certainly a global problem. It is most prevalent in developing countries, where 10-30% of all medication sold may be fake, and less so in industrialized countries (below 1%), according to the CDC. In the summer of 2015, Interpol launched a coordinated campaign in 115 countries during which millions of counterfeit medicines with an estimated value of $81 million were seized, including everything from eye drops and tanning lotion to antidepressants and fertility drugs. The operation also shut down over 2400 websites and 550 adverts for illegal online pharmacies in an effort to combat online sales of illegal drugs.

There are several methods to help protect the integrity of pharmaceuticals, including tamper-evident packaging (e.g. blister packs) which can show customers if the packaging has been opened. However, the bigger problem lies in counterfeit pharmaceuticals making their way into the supply chain of drug companies. Tracking technology in the form of barcodes or RFID chips can establish a data trail that allows companies to follow each lot from manufacturer to pharmacy shelf, and as of 2013, tracking of pharmaceuticals throughout the supply chain is required as per the Drug Quality and Security Act. But this still does not necessarily let a customer know if the tablets they bought are fake or not.
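To make the idea of a serialized data trail concrete, here is a minimal sketch in Python of how a lot’s chain of custody might be recorded and checked. The record format, route list and checking logic are illustrative assumptions, not the actual data standard the Drug Quality and Security Act mandates.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CustodyEvent:
    lot_number: str   # serialized identifier read from the barcode/RFID chip
    holder: str       # party holding the lot at this step
    timestamp: str    # when the hand-off was scanned

def verify_trail(events: List[CustodyEvent], expected_route: List[str]) -> bool:
    """Flag a lot as suspect if its recorded holders deviate from the
    expected manufacturer-to-pharmacy route."""
    return [e.holder for e in events] == expected_route

# Hypothetical trail for one lot:
trail = [
    CustodyEvent("LOT-001", "manufacturer", "2017-01-05"),
    CustodyEvent("LOT-001", "wholesaler",   "2017-01-12"),
    CustodyEvent("LOT-001", "pharmacy",     "2017-01-20"),
]
print(verify_trail(trail, ["manufacturer", "wholesaler", "pharmacy"]))  # True
```

A lot that appears on a pharmacy shelf without a complete, expected trail would fail this kind of check, which is the basic idea behind supply-chain serialization.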

Ingredients in a tablet or solution can fairly easily be identified by chromatography or spectroscopy. However, these methods require highly specialized, expensive equipment that most drug companies and research institutions have access to, but which is not widely available in many parts of the world. To address this problem, researchers at the University of Notre Dame have developed a very cool, low-tech method to quickly test drugs for their ingredients: a tablet is scratched across a piece of paper coated with various chemicals, and the paper is then dipped in water. The chemicals on the paper react with ingredients in the drug to form colors, resulting in a “color bar code” that can then be compared to known samples of filler materials commonly used in counterfeit drugs, as well as active pharmaceutical ingredients.

Recently, there have also been policy efforts to address the problem. The European Commission released their Falsified Medicines Directive in 2011 which established counterfeit medicines as a public health threat and called for stricter penalties for producing and selling counterfeit medicines. The directive also established a common logo to be displayed on websites, allowing customers to verify they are buying through a legitimate site. In the US, VIPPS accredits legitimate online pharmacies, and in May of this year, a bill calling for stricter penalties on the distribution and import of counterfeit medicine was introduced in Congress. In addition, there have also been various public awareness campaigns, for example, last year’s MHRA #FakeMeds campaign in the UK,  which was specifically focussed on diet pills sold online, and the FDA’s “BeSafeRx” programme, which offers resources to safely buying drugs online.

In spite of all the efforts to raise awareness and address the problem of fake drugs, a major complication remains: generic drugs, as well as branded drugs, are often produced overseas, and many are sold online, which saves cost and can bring the price of medication down, making it affordable to many people. The key will be to strike a balance between restricting counterfeiters’ access to the supply chain and not restricting access to affordable, quality medication for the patients who need it.

We need to talk about CRISPR

By Gesa Junge, PhD

You’ve probably heard of CRISPR, the magic new gene editing technique that will either ruin the world or save it, depending on what you read and whom you talk to? Or the Three Parent Baby, which scientists in the UK have created?

CRISPR is a technology based on a bacterial immune defense system which uses Cas9, a nuclease, to cut up foreign genetic material (e.g., viral RNA). Scientists have developed a method by which they can modify the recognition part of the system, the guide RNA, and make it specific to a site in the genome that Cas9 then cuts. This is often described as “gene editing” which allows disease-causing genes to be swapped out for healthy ones.
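As a toy illustration of that targeting logic (a sketch, not a real guide-design tool): Streptococcus pyogenes Cas9 cuts where the genomic sequence matches the guide RNA and is immediately followed by an “NGG” PAM motif. A few lines of Python can scan a DNA string for such sites; the sequences below are made up, and real guides are about 20 nucleotides long.

```python
import re

def find_cas9_sites(genome: str, guide: str) -> list:
    """Return start positions where the guide sequence matches the genome
    and is immediately followed by an NGG PAM (N = any base)."""
    pattern = re.escape(guide) + r"[ACGT]GG"
    return [m.start() for m in re.finditer(pattern, genome)]

genome = "TTACGCTGATTACGATCGATCGGAGCTAGCTA"   # made-up DNA
guide  = "GATTACGATCGAT"                     # shortened for readability
print(find_cas9_sites(genome, guide))        # [7] -> Cas9 would be directed to cut here
```

The hard parts in practice, off-target matches elsewhere in a three-billion-base genome and what the cell does after the cut, are exactly the open issues mentioned below.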

CRISPR is now so well known that Google finally stopped suggesting I may be looking for “crisps” instead, but the real-world applications are not so well worked out yet, and there are various issues around CRISPR, including off-target effects, and also the fact that deleting genes is much easier than replacing them with something else. But, after researchers at Oregon Health and Science University managed to change the mutated version of the MYBPC3 gene to the unmutated version in a viable human embryo last month, the predictable bioethical debate was reignited, and terms such as “Designer Babies” got thrown around a lot.

A similar thing happened with the “Three Parent Baby,” an unfortunate term coined to describe mitochondrial replacement therapy (MRT). Mitochondria, the cells’ organelles for providing energy, have their own DNA (making up about 0.2% of the total genome), which is separate from the genomic DNA in the nucleus, the body’s blueprint. Mitochondrial DNA can mutate just like genomic DNA, potentially leading to mitochondrial disease, which affects 1 in 5,000-10,000 children. Mitochondrial disease can manifest in various ways, ranging from growth defects to heart or kidney disease to neuropsychological symptoms. Symptoms can range from very mild to very severe or fatal, and the disease is incurable.

MRT replaces the mutated mitochondrial DNA in a fertilized egg or embryo with a healthy version provided by a third donor, which allows the mitochondria to develop normally. The UK was the first country to allow the “cautious adoption” of this technique.

While headlines need to draw attention and engage the reader for obvious reasons, oversimplifications like “gene editing” and dramatic phrases like “three parent babies” can really get in the way of broadening the understanding of science, which is difficult enough as it is. Research is a slow and inefficient process that easily gets lost in a 24-hour news cycle, and often the context is complex and not easily summed up in 140 characters. And even when the audience can be engaged and interested, the relevant papers are probably hiding behind a paywall, making fact checking difficult.

Aside from difficulties communicating the technicalities and results of studies, there is also often a lack of context in presenting scientific studies – think, for example, of chocolate and red wine, which may or may not protect from heart attacks. What is lost in many headlines is that scientific studies usually express their results as a change in the risk of developing a disease, not a direct causation, and very few diseases are caused by one chemical or one food additive. On this topic, WNYC’s “On The Media” team has an issue of their Breaking News Consumer Handbook that is very useful for evaluating health news.

The causation vs. correlation issue is perhaps a little easier to discuss than big ethical questions that involve changing the germline DNA of human beings because ethical questions do not usually have a scientific answer, let alone a right answer. This is a problem, not just for scientists, but for everyone, because innovation often moves out of the realm of established ethics, forcing us to re-evaluate it.

Both CRISPR and MRT are very powerful techniques that can alter a person’s DNA, and potentially the DNA of their children, which makes them both promising and scary. We are not ready to use CRISPR to cure all cancers yet, and “Three Parent Babies” are not designed by anyone, but unfortunately, it can be hard to look past Designer Babies, Killer Mutations and DNA Scissors, and have a constructive discussion about the real issues, which needs to happen! These technologies exist; they will improve and eventually, and inevitably, play a role in medicine. The question is, would we rather have this development happen in reasonably well-regulated environments where authorities are at least somewhat accountable to the public, or are we happy to let countries with more questionable human rights records and even more opaque power structures take the lead?

Scientists have a responsibility to make sure their work is used for the benefit of humanity, and part of that is taking the time to talk about what we do in terms that anyone can understand, and to clarify all potential implications (both positive and negative), so that there can be an informed public discussion, and hopefully a solution everyone can live with.

 

Further Reading:

CRISPR:

National Geographic

Washington Post

 

Mitochondrial Replacement Therapy:

A paper on clinical and ethical implications

New York Times (Op-Ed)

 

Is Your Deodorant Bad For Your Health?

 

By Jesica Levingston Mac Leod, PhD

Body odor (BO) is part of our evolution, and the ability to smell has evolved with us, making people fall in love with, or run away from, a smelly person. Sweat’s first job is to cool our body down and avoid overheating, but it can also be triggered by stress, anxiety or other hormonal changes. Sweat by itself doesn’t smell; rather, the bacteria located near the glands, for example in the armpits, break down the sweat, generating the “BO.” How do we deal with the stinky fact? We apply deodorants and/or antiperspirants. Deodorants contain ingredients like triclosan, which make the skin too salty or acidic for the bacteria to grow in those areas. Deodorants therefore don’t stop you from sweating, but antiperspirants will do the trick: they contain ingredients like aluminum and zirconium, which are taken up through the pores, react with water and swell, forming a gel that blocks the sweat.

Last year, Mandriota and collaborators demonstrated in a mouse cancer model that concentrations of aluminum comparable to those measured in the human breast are able to transform cultured mammary epithelial cells, allowing them to form tumors and to metastasize. Moreover, aluminum salts have been linked with DNA damage, oxidative stress, and estrogen action. In 2004, a woman reported aluminum poisoning after using antiperspirants for four years; after she stopped using these products, her aluminum levels dropped and she recovered.

Breast cancer develops when cells with mutations in their DNA start growing uncontrollably, generating a tumor. Most breast cancers develop in the upper outer quadrant of the breast, near the lymph nodes that are exposed to antiperspirants. This fact was the starting point for theories that underarm cosmetic products could be carcinogenic. One of the first publications on this subject dates from 2002; it was population-based (ages 20-74, 1,606 patients) and found no correlation between breast cancer and antiperspirant use. A second article found a relationship between an earlier age at breast cancer diagnosis and more frequent regular use of antiperspirants/deodorants and underarm shaving.

Aluminum salts have been linked to an increased risk of developing breast cancer, but so far the research on this has been quite inconsistent. Last month, a new study of 418 women (ages 20 to 85) examined their self-reported history of underarm cosmetic product use and their health status, in order to unveil a bit more about the link between antiperspirants and breast cancer. Linhart and colleagues, from Austria, studied the relationship between the use of underarm cosmetic products and the risk of breast cancer. They divided the group in two: half of the women were breast cancer patients and the other half healthy controls. They then measured the concentration of aluminum in the breast tissue of some of the women. The results showed that the risk of breast cancer was increased, with an odds ratio of 3.88, in women who described using underarm products multiple times per day starting before their 30th birthday. Importantly, “aluminum traces were found in the breast tissue in both cancer patients and healthy controls and it was significantly associated to self-reported underarm cosmetic products use”. In fact, the median concentrations of aluminum were 5.8 (2.3-12.9) nmol/g in tissues from breast cancer patients versus 3.8 (2.5-5.8) nmol/g in controls. The conclusion is that more-than-daily use of these cosmetic products at younger ages may lead to the accumulation of aluminum in breast tissue and increase the risk of breast cancer.
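For readers unfamiliar with the statistic: an odds ratio (OR) compares the odds of an exposure (here, frequent underarm product use before age 30) between cases and controls. From a generic 2×2 table with $a$ exposed cases, $b$ unexposed cases, $c$ exposed controls and $d$ unexposed controls,

$$\mathrm{OR} = \frac{a/b}{c/d} = \frac{a\,d}{b\,c},$$

so an OR of 3.88 means the odds of reporting frequent early use were nearly four times higher among the breast cancer patients than among the healthy controls.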

Although the American Cancer Society claims that “there are no strong epidemiologic studies in the medical literature that link breast cancer risk and antiperspirant use”, after the Linhart investigation, and knowing that 1 in 8 women will be diagnosed with breast cancer in her lifetime, I will avoid antiperspirants with aluminum. Nobody wants to be called “stinky”, so some actions to take are to wash your clothes after working out, shower regularly and/or clean your armpits with water and soap as soon as you “smell something”, apply deodorant, and consult with your doctor about the best way to keep your body odor under control. The last resort: perfume. If you can’t win the fight… hide.

The Science of Solar Eclipses

By JoEllen McBride, PhD 

As the sky darkens on August 21st, we will stand in awe of the first total solar eclipse to cross over the contiguous U.S. in almost 40 years. This is also a chance for scientists to do what they do best– science!

 

Total Eclipse of the Sun

Every month, the Moon passes between the Earth and Sun during its New Moon phase. We can’t see the New Moon because the side that faces us isn’t illuminated by the Sun, but it’s up there. Solar eclipses happen only when the Moon is in the New Moon phase and crosses the plane of the Earth-Sun orbit. All other New Moons are either too high or too low relative to that plane to cover the Sun.

 

A total solar eclipse is even more special. The cosmos has gifted us with a spectacular coincidence. The distance between the Moon and Earth is about 400 times smaller than the distance between the Sun and Earth. This wouldn’t be interesting except for the fact that the Moon’s diameter is also about 400 times smaller than the Sun’s. Once the Moon hits that sweet spot in its orbit around Earth, it completely covers the Sun.
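You can check the coincidence with the small-angle approximation: an object’s apparent size on the sky is roughly its diameter divided by its distance. With round numbers for the two diameters and distances,

$$\theta_{\rm Moon} \approx \frac{3{,}474~\text{km}}{384{,}400~\text{km}} \approx 0.52^\circ,
\qquad
\theta_{\rm Sun} \approx \frac{1{,}391{,}000~\text{km}}{149{,}600{,}000~\text{km}} \approx 0.53^\circ .$$

Both subtend about half a degree, which is why the Moon can just barely cover the Sun.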

 

That also means that sometimes a solar eclipse occurs and the Moon doesn’t completely cover the Sun. These are partial or annular eclipses and it just means that the Moon was too far from Earth to hide the Sun completely.

 

A solar eclipse occurs approximately every year and a half (give or take a few months). What makes them seem so rare is that our planet is mostly ocean, so the chances of a solar eclipse passing over land with people on it are reduced. That’s why Monday’s total solar eclipse passing over the entire mainland U.S. is such a big deal! Don’t let Neil deGrasse Tyson put a damper on it!

 

Predicting Eclipses

It is true that for centuries many human societies saw solar eclipses as omens and bringers of terrible things. But once we figured out that they were predictable, we quickly used them to learn about the universe. The first eclipse prediction was made by Thales of ancient Greece, around 610 or 585 BCE. Thales made the prediction using the idea of deductive geometry borrowed from the Egyptians, which Euclid much later formalized into what is now known as Euclidean geometry. The historical record suggests that Thales’s prediction only worked one time, though, because there are no other accounts of anyone successfully predicting an eclipse until Ptolemy used Euclidean geometry in 150 CE.

 

So how can scientists use this periodic alignment of celestial bodies to their advantage? The Sun is a pretty reliable part of our day, so having it gone for a few moments allows us to study the reaction of animals to an abrupt change in their environment. You’ll hear birds stop singing, and frogs and crickets begin chirping as the sky darkens. Mammals will begin their bedtime rituals, too. But we can learn the most about the Sun itself from a solar eclipse.

 

Image of the corona created by placing a disc over the Sun to mimic a solar eclipse. These instruments, called coronagraphs, still allow a little sunlight to get through, which can mess up measurements of the corona. So scientists still rely on real-deal total solar eclipses to study the corona in detail.

Grab a Corona

The Sun has an outer atmosphere extending millions of miles above its surface called the corona. At temperatures reaching a few million degrees Fahrenheit, the corona is significantly hotter than the Sun’s surface. The corona was first observed in 968 CE during a solar eclipse, and for many centuries scientists debated whether this bright wispy envelope was part of the Sun or the Moon. It wasn’t recognized as being part of the Sun until the eclipse of 1724, and then verified over a century later in 1842. During the 1932 and 1940 solar eclipses, scientists determined just how hot the corona is: iron atoms there are stripped of their electrons, which can only happen if the atoms are heated to millions of degrees. This mystery still summons solar physicists to all parts of the planet to observe solar eclipses, and this eclipse is no different. They’re still not sure why the corona is so hot.

 

Get You Some Flare

Solar eclipses also allow scientists to study another extremity of the Sun: solar flares. Solar flares, or prominences, are as spectacular as they are dangerous– especially today. They can disrupt satellites and other communications devices as well as short out electrical grids, so it is crucial that we understand as much as we can about them. The first solar prominence was observed, with the naked eye, during a partial solar eclipse in 334 CE. Knowing this probably would have helped Birger Wassenius during the total solar eclipse of 1733: he noticed solar flares but suspected they were coming from the Moon. It wasn’t until a solar eclipse in 1842 that scientists verified the ejections were coming from the Sun.

 

The Sun goes through cycles of solar flare activity about every 11 years. This year, the Sun is approaching a low point in its activity, so scientists will use this total eclipse to study how flares differ from when the Sun is more active.

 

Other Notable Discoveries Thanks to Solar Eclipses

The element helium was discovered in the Sun’s light during the 1868 and 1869 solar eclipses and named after the Sun (helios means Sun in Greek); helium wasn’t identified on Earth until 1895. Another big win for physics came during the 1919 solar eclipse. Scientists used the darkened sky to verify that the Sun is massive enough to bend the light of faraway stars before it reaches us. Stars whose light passed close to the Sun’s edge appeared shifted from their usual positions, exactly as predicted. This proved part of Einstein’s theory of relativity: massive objects bend space around them.
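The deflection the 1919 expedition measured follows from general relativity: a light ray passing a mass $M$ at closest approach $b$ is bent by an angle

$$\delta = \frac{4GM}{c^2 b},$$

which for a ray grazing the Sun’s edge works out to about 1.75 arcseconds, twice what a naive Newtonian calculation gives and consistent with what the expedition measured.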

 

Solar eclipses are awe-inspiring and also useful to science. So make sure you grab your eclipse glasses or pinhole cameras or fists and get out there!

 

What A Marshmallow Can Say About Your Brain

By Deirdre Sackett

In the 1970s, researchers at Stanford University performed a simple experiment. They offered children the chance to eat a single marshmallow right now, or wait 15 minutes to receive two marshmallows. Out of 600 children in the study, only about ⅓ were able to wait long enough for two treats. Most attempted to wait, but couldn’t make it through the whole 15 minutes. A minority of kids ate the marshmallow immediately.

 

Feeding marshmallows to children in the name of science may seem like a waste of federal funds. But it turns out that the ability to wait for a treat can actually predict a lot about someone’s personality and life trajectory.

 

Since the 70s, many scientific groups have repeated the “marshmallow test” (some of which have been hilariously documented). In some iterations, researchers recorded whether each child chose an immediate versus delayed treat, and then tracked the children’s characteristics as they grew up. Amazingly, the children’s choices predicted some important attributes later in life. Generally, the more patient children who waited for the bigger reward went on to score higher on the SAT, have a lower body mass index (BMI), and be more socially and cognitively competent than the kids who couldn’t wait and immediately ate the single treat.

 

The “marshmallow test” measures a cognitive ability called delay discounting. The concept is that a big reward becomes less attractive (or “discounted”) the longer you need to wait for it. As such, delay discounting is a measure of impulsivity – how long are you willing to wait for something really good, before choosing a quicker, but less ideal, option?
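One standard way behavioral researchers model this drop-off is a hyperbolic discounting curve, V = A / (1 + kD): the subjective value V of a reward of amount A falls with delay D, and the parameter k captures impulsivity (bigger k, steeper discounting). A minimal Python sketch with made-up k values shows how the same marshmallow choice can flip:

```python
def discounted_value(amount: float, delay_min: float, k: float) -> float:
    """Hyperbolic discounting: subjective value of a delayed reward."""
    return amount / (1 + k * delay_min)

# One marshmallow now vs. two marshmallows in 15 minutes,
# for two hypothetical children with different impulsivity (k):
for label, k in [("patient (k=0.01)", 0.01), ("impulsive (k=0.20)", 0.20)]:
    now, later = discounted_value(1, 0, k), discounted_value(2, 15, k)
    print(f"{label}: now={now:.2f}, later={later:.2f} ->",
          "waits" if later > now else "eats now")
```

For the patient child the delayed two marshmallows are still worth more than one right now (1.74 vs. 1.00), while for the impulsive child the wait cuts their value below the immediate treat (0.50 vs. 1.00).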

 

While it’s okay to make the occasional spur-of-the-moment choice, poor delay discounting (increased impulsivity) is often a symptom of problem gambling, ADHD, bipolar disorder, and other mental health issues. Drug addiction in particular is accompanied by increased impulsive choice: drug users will choose immediate rewards (such as drugs of abuse) over delayed, long-term rewards (family life, socializing, or jobs), and in delay discounting tasks they opt for the immediate option more readily than non-drug users. This isn’t just a human flaw; exposing rats to cocaine also increases their impulsivity during delay discounting tasks.

 

Interestingly, aspects of the “marshmallow test” hint at this impulsivity-drug addiction link. In 2011, researchers did a follow-up study with the (now adult) children from the original 1970s Stanford experiment. The scientists imaged the subjects’ brains while they performed a delayed gratification task in which they had to wait for a reward. They found that patient versus impulsive individuals had very different activity in two specific brain regions involved in drug addiction.

 

Firstly, the study found that impulsive individuals had greater activity in the ventral striatum, a brain region heavily linked to drug addiction and impulsivity. The greater activity in this region may imply that impulsive individuals process information about rewards differently than patient individuals. That is, the way their brain is wired may cause them to want their rewards right now.

 

Secondly, the impulsive individuals had less activity in the prefrontal cortex, which is responsible for “putting on the brakes” on impulsive actions. This finding suggests that impulsive individuals may lack the neural “supervisor” that stops them from acting on their impulses. Drug addicts show similarly reduced prefrontal activity. So in addition to predicting worse performance on standardized tests, higher BMIs, or lower social competence, the marshmallow test predicts that impulsive individuals may have brain activity similar to that of drug users.

 

While it seems like a silly experiment, the marshmallow test is a great starting point to help increase our understanding of impulsivity. Using this information, researchers can start to develop treatments for impulsive behavior that negatively affects people’s lives. Specifically, treating impulsivity in drug addicts could help as part of the rehabilitation process. So think about that the next time you reach for that sweet treat!

 

One ring to rule them all: The cohesin complex

By Johannes Buheitel, PhD

In my blog post about mitosis (http://www.myscizzle.com/phases-of-mitosis/), I explained some of the challenges a human cell faces when it tries to disentangle its previously replicated chromosomes (for an overview of the cell cycle, see also http://www.myscizzle.com/cell-cycle-introduction/) and segregate them in a highly ordered fashion into the newly forming daughter cells. I also mentioned a protein complex, which is integral for this chromosomal ballet, the cohesin complex. To recap, cohesin is a multimeric ring complex, which holds the two chromatids of a chromosome together from the time the second sister chromatid is generated in S phase until their separation in M phase. This decreases complexity, and thereby increases the fidelity of chromosome segregation, and thus, mitosis/cell division. And while this feat should already be enough to warrant devoting a whole blog post to cohesin, you will shortly realize that this complex also performs a myriad of other functions during the cell cycle, which really makes it “one ring to rule them all”.

Figure 1: The cohesin complex. The core complex consists of three subunits: Scc1/Rad21, Smc1, and Smc3. They interact to form a ring structure, which embraces (“coheses”) sister chromatids.

But let’s back up a little first. Cohesin’s integral ring structure is composed of three proteins: Smc1, Smc3 (structural maintenance of chromosomes), and Scc1/Rad21 (sister chromatid cohesion/radiation sensitive). These three proteins attach to each other in a more or less end-to-end manner, thereby forming a circular structure (see Figure 1; ONLY for the nerds: Smc1 and -3 each fold back onto themselves to form long intramolecular coiled-coils, bringing their own N- and C-termini together at one end. This means that these two proteins actually interact via their middle parts, forming the so-called “hinge”, rather than truly “end-to-end”). Cohesin obviously gets its name from the fact that it causes “cohesion” between sister chromatids, which was first described 20 years ago in budding yeast. The theory that the protein complex does so by embracing DNA inside the ring’s lumen was properly formulated in 2002 by the Nasmyth group, and much evidence supporting this “ring embrace model” has been brought forth over the last two decades, making it widely (but not absolutely) accepted in the field. According to our current understanding, cohesin is already loaded onto DNA (along the entire length of the decondensed one-chromatid chromosome) in telophase, i.e. only minutes after chromosome segregation, by opening/closing its Smc1-Smc3 interaction site (or “entry gate”). When the second sister chromatid is synthesized in S phase, cohesin establishes sister chromatid cohesion in a co-replicative manner (only once the second sister chromatid exists can you actually start talking about “cohesion”). Early in the following mitosis, in prophase to be exact, the bulk of cohesin is removed from chromosome arms in a non-proteolytic manner by opening up the Smc3-Scc1/Rad21 interface (or “exit gate”; this mechanism is also called the “prophase pathway”). However, a small but very important fraction of cohesin molecules, located at the chromosomes’ centromere regions, remains protected from this removal mechanism in prophase. This not only ensures that sister chromatids remain cohesed until the metaphase-to-anaphase transition, but also provides us with the stereotypical image of an X-shaped chromosome. The last stage in the life of a cohesin ring is its removal from centromeres, a tightly regulated process involving proteolytic cleavage of cohesin’s Scc1/Rad21 subunit (see Figure 2).

Figure 2: The cohesin cycle. Cohesin is topologically loaded onto DNA in telophase by opening up the Smc1-Smc3 interface (“entry gate”). Sister chromatid cohesion is established during S phase, coinciding with the synthesis of the second sister. In prophase of early mitosis, the bulk of cohesin molecules is removed from chromosome arms (the “prophase pathway”) by opening up the interface between Scc1/Rad21 and Smc3 (“exit gate”). Centromeric cohesin is ultimately proteolytically removed at the metaphase-to-anaphase transition.

As you can see, during the 24 hours of a typical mammalian cell cycle, cohesin is pretty much always directly associated with the entire genome (the exceptions being chromosome arms during most of mitosis, i.e. 20-40 minutes, and entire chromatids during anaphase, i.e. ~10 minutes). This means that cohesin has at least the potential to influence a whole bunch of other chromosomal events, like DNA replication, gene expression and DNA topology. And you know what? Turns out it does!

Soon after cohesin was described as this guardian of sister chromatid cohesion, it also became clear that there is just more to it. Take DNA replication for example. There is good evidence that initial cohesin loading is already topological (meaning, the ring closes around the single chromatid). That poses an obvious problem during S phase: While DNA replication machineries (“replisomes”) zip along the chromosomes trying to faithfully duplicate the entire genome in a matter of just a couple of hours, they encounter – on average – multiple cohesin rings that are already wrapped around DNA. Simultaneously, cohesin’s job is to take those newly generated sister chromatids and hold them tightly to the old one. Currently, we don’t really know how this works, whether the replisome can pass through closed cohesin rings, or whether cohesin gets knocked off and reloaded after synthesis. What we do know, however, is that cohesion establishment and DNA replication are strongly interdependent, with defects in cohesion metabolism causing replication phenotypes and vice versa.

Cohesin has also been shown to have functions in transcriptional regulation. It was observed quite early that cohesin can act as an insulation factor, blocking long-range promoter-enhancer association. Today we have good evidence showing that cohesin binds to chromosomal insulator elements that are usually associated with the CTCF (CCCTC-binding factor) transcriptional regulator. Here, the ring complex is thought to help CTCF’s agenda by creating internal loops, i.e. inside the same sister chromatid!

Studying cohesin has, of course, not only academic value. Because of its pleiotropic functions, defects in human cohesin biology can cause a number of clinically relevant issues. Since actual cohesion defects would cause mitotic failure (which most surely results in cell death), most cohesin-associated diseases are believed to be caused by misregulation of the complex’s non-canonical functions in replication/transcription. These so-called cohesinopathies (e.g. Roberts syndrome and Cornelia de Lange syndrome) are congenital birth defects with widely ranging symptoms, which usually include craniofacial/upper limb deformities as well as mental retardation.

It is important to mention that cohesin also has a very unique role in meiosis, where it not only coheses sister chromatids but also chromosomal homologs (the two maternal/paternal versions of a chromosome, each consisting of two cohesed sisters). As a reminder, a human female’s lifetime supply of oocytes is produced before puberty. These oocytes are arrested in prophase I (prophase of the first meiotic division) with fully cohesed homologs and sisters, and resume meiosis one by one with each menstrual cycle. This means that some oocytes might need to keep up their cohesion (between sisters AND homologs) over decades, which, considering the half-life of your average protein, can be challenging. This has important medical relevance, as cohesion failure is believed to be the main cause behind missegregation of homologs, and thus age-related aneuploidies such as trisomy 21.

After twenty years of research, the cohesin complex still manages to surprise us regularly, as new functions in new areas of cell cycle regulation come to light. Currently, extensive research is conducted to better understand the role of certain cohesin mutations in cancers such as glioblastoma, or Ewing’s sarcoma. And while we’re still far away from completely understanding this complex complex, we already know enough to say that cohesin really is “one ring to rule them all”.

 

The WTF Star: Alien Mega Structure or Mega Version of Jupiter System?

 

JoEllen McBride, PhD

 

The Kepler telescope, despite technical issues, has observed over 100,000 stars in our galaxy. Its database is full of stars that show the tell-tale sign of an orbiting planet– a periodic and repeatable dimming of the starlight. But one stellar dimming sequence doesn’t follow the expected protocol and it has astronomers getting creative to explain why.

 

Flux Lost

Tabby’s star, or more fondly, the WTF (Where’s the Flux?) star, is a yellow star slightly larger than our Sun, located over 1,200 light-years away in the constellation Cygnus the Swan. You can’t see it with the naked eye, but through a small 5-inch telescope you can see it just fine.

 

Kepler continuously observed the region of space where WTF lives from 2009 to 2013. Then in 2015, citizen scientists analyzing the data noticed something very peculiar about WTF’s brightness. In March of 2011, the star dimmed by 22% of its original brightness, suggesting something big was passing in front of it. Then, 700 days later in 2013, the star dimmed significantly again, but this time did so irregularly– suggesting that not just one but many large objects were passing in front of the star. This is where the science gets interesting.

 

Light curve for Tabby’s star. By JohnPassos (Own work), CC BY-SA 4.0, via Wikimedia Commons.

 

When astronomers study the light from stars, we create graphs called light curves. Light curves describe how the brightness of a star changes over a period of time. We choose a star, take images of it periodically and measure how bright it is. If the star’s brightness decreases, we will record a lower brightness value than in previous measurements.

 

Usually, when a star has planets orbiting it, the dimming will be periodic– tied to the orbit of the planet. So we will measure a smooth dip in the brightness of the star at regular intervals as the planet passes in front. What’s so spectacular about WTF’s brightness is that there is a single, smooth dip in brightness followed 700 days later by irregular but large decreases that lasted for 100 days before the brightness returned to normal levels.
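To make that difference concrete, here’s a minimal simulation sketch in Python (all numbers made up) contrasting an ordinary periodic transit signal with a single deep, isolated dip like the one Kepler recorded:

```python
import numpy as np

days = np.arange(0, 1000)     # observation time in days
flux = np.ones(days.shape)    # normalized brightness; 1.0 = undimmed

# Ordinary exoplanet: a shallow 1% box-shaped dip every 50 days, lasting 2 days
period, duration, depth = 50, 2, 0.01
flux[(days % period) < duration] -= depth

# WTF-like event: one isolated 22% dip with no repeating counterpart
flux[(days >= 700) & (days < 703)] -= 0.22

# A planet's dips recur at fixed intervals; the deep dip does not,
# which is what made Tabby's star stand out from normal transit signals.
```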

 

After ruling out issues with the Kepler telescope and intrinsic variability of WTF, the lead scientists considered more celestial explanations for the irregular dimming. Debris from a violent collision, like the one that formed our Moon, would probably create enough large particles to recreate the dimming– but the likelihood of us catching such a one-off event is extremely small. A large conglomerate of comet fragments also seemed like a reasonable and likely cause. But we’ve never observed this before, so we can only make educated guesses as to what that light curve would look like.

 

Other scientists have jumped in on the task of explaining these dips with suggestions ranging from weird internal variations with the WTF star itself to unfinished alien megastructures. But recently, a group of researchers has proposed an explanation that’s a little more familiar and easily testable.

 

Follow the Gravity Train

To understand their proposal, we need to discuss a little-known fact (at least, I didn’t know this) about our solar system’s largest planet, Jupiter. All massive bodies in our solar system exert a gravitational force on other massive bodies. If we think of space as a bed sheet held taut at its corners and place a bowling ball at the center, the ball would create a pit or well in the sheet due to the mass of the ball. If we then place a baseball somewhere else on the sheet, the sheet will also bend due to the mass of the baseball. The larger well in the sheet due to the bowling ball will overlap in some places with the well in the sheet due to the baseball. This is sort of how gravitational forces interact with each other.

 

But space is a bit more complicated. The interaction of the gravitational forces of two massive bodies ends up creating what are known as Lagrange points. In our sheet analogy, these would appear as five additional wells created at specific locations around the bowling ball-baseball system. In space, these points orbit the more massive body at the same speed as the smaller body. Any objects living at these points are stuck following the smaller body around the larger one, never catching up or falling behind.
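More formally, in a frame rotating with the two bodies at angular speed $\omega$, the five Lagrange points are the spots where the two gravitational pulls and the centrifugal term balance, i.e. where the effective potential has zero gradient:

$$\Phi_{\rm eff}(\mathbf{r}) = -\frac{GM_1}{|\mathbf{r}-\mathbf{r}_1|} - \frac{GM_2}{|\mathbf{r}-\mathbf{r}_2|} - \frac{1}{2}\,\omega^2 |\mathbf{r}|^2 .$$

L4 and L5, the points that host Jupiter’s Trojan swarms, sit along the planet’s orbit, 60 degrees ahead of and behind it.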

 

In the case of the Sun-Jupiter system, there are three Lagrange points that lie along Jupiter’s orbit and are home to thousands of asteroids. The two large “Trojan” swarms are located on either side of Jupiter in its orbit around the Sun, and the smaller “Hilda” swarm is always located on the opposite side of the Sun from Jupiter.

 

There is evidence for Trojan-type regions in other exoplanet systems, and planet formation theory shows that these regions can exist long after planets form. This makes their detection more probable than one-off events like planetary collisions, or never-before-observed events like swarms of comet fragments.

 

Computer, Enhance

Researchers in Spain took a known idea and made it bigger to explain the weird dimming of the WTF star. Their proposal suggests the first, smoother dimming event is due to a large ringed planet– almost five times larger than Jupiter. This large planet would also have larger Trojan swarms, which would explain the irregular dips in brightness 700 days later. Since the Jovian system has two Trojan regions, the astronomers expect there to be another irregular dimming episode in February 2021, corresponding to the second Trojan region. Then, two years later in 2023, the giant ringed planet should pass in front of the star again, starting the approximately 12-year cycle over.

 

Their hypothesis even accounts for a smaller May 2017 dimming event, which occurred at the same time their theoretical planet would have been passing behind the WTF star. If this system is similar to Jupiter’s, the dimming could be explained by a Hilda-like swarm of asteroids, which would dim the star but not as significantly as the Trojan swarms.

 

You should still hold some reservations about this prediction, though. The number of asteroids needed to produce such a large dimming is huge– like the same mass as Jupiter huge. No one has a clue whether this sort of configuration would even be stable. The team is working on a computer model of the system and plans to release those results in a forthcoming paper. But the key to a successful hypothesis is that it is easily testable, and the Trojan hypothesis gives us something to look forward to in 2021. We only have to wait 4 years to see if these researchers are right or if we need to go back to the drawing board to figure out what’s going on with the WTF star.

Halos on Mars

By JoEllen McBride, PhD

Curiosity Discovery Suggests Early Mars Environment Suitable for Life Longer Than Previously Thought.

 

We have been searching desperately for evidence of life on Mars since the first Viking lander touched down in 1976. So far we’ve come up empty-handed but a recent finding from the Curiosity rover has refueled scientists’ hopes.

 

NASA’s Curiosity rover is currently puttering along the Martian surface in Gale Crater. Its mission is to determine whether Mars ever had an environment suitable for life. The clays and by-products of reactions between water and sulfuric acid (a.k.a. sulfates) that fill the crater are evidence that it once held a lake that dried up early in the planet’s history. Using its suite of instruments, Curiosity is digging, sifting and burning the soil for clues to whether the wet environment of a young Mars could ever give rise to life.

 

On Tuesday, scientists announced that they had discovered evidence that groundwater existed in Gale Crater long after the lake dried up. Curiosity noticed lighter-colored rock surrounding fractures in the crater, which scientists recognized as a tell-tale sign of groundwater. As water flows underground on Earth, oxygen atoms from the water combine with other minerals found in the rock. The newly-formed molecules are then transported by the flowing water and absorbed by the surrounding rock. This process creates ‘halos’ within the rock that often have different coloration and composition than the original rock.

 

Curiosity used its laser instrument to analyze the composition of the lighter-colored rock in Gale Crater and reported that it was full of silicates. This particular region of the crater contains rock that was not present at the same time as the lake and does not contain the minerals necessary to produce silicates. So the only way these silicates could be present is if they were transported there from older rock. Using what they know about groundwater processes on Earth, NASA scientists determined that groundwater must have reacted with silicon present in older rock, creating the silicates. These new minerals then flowed to the younger bedrock and seeped in, resulting in the halos Curiosity discovered. The time it would take these halos to form provides strong evidence that groundwater persisted in Gale Crater much longer than previously thought.

 

Credit: NASA/JPL-Caltech. Image from Curiosity of the lighter-colored halos surrounding fractures in Gale Crater.

This news also comes on the heels of the first discovery of boron by Curiosity on Mars. Boron on Earth is present in dried-up, non-acidic water beds. Finding boron on Mars suggests that the groundwater present in Gale Crater was most likely at a temperature and acidity suitable for microbial life. The combination of the longevity of groundwater and its acceptable acidity greatly increases the window for microbial life to form on young Mars.

 

These two discoveries have not only extended the time-frame for the habitability of early Mars but also lead one to wonder where else groundwater was present on the planet. We hopefully won’t have to wait too long to find out. Curiosity is still going strong, and NASA has already begun work on a new set of exploratory Martian robots. The next rover mission to Mars is set to launch in 2020 and will be equipped with a drill that will remove core samples of Martian soil. The samples will be stored on the planet for retrieval at a later date. What (or who) will be sent to pick up the samples is still being determined.

 

Although we haven’t found evidence for life on Mars, the hope remains. It appears Mars had the potential for life at the same time in its formation as Earth. We just have to continue looking for organic signatures in the Martian soil or determine what kept life from getting its start on the Red Planet.

 

HeLa, the VIP of cell lines

By Gesa Junge, PhD

A month ago, The Immortal Life of Henrietta Lacks was released on HBO, an adaptation of Rebecca Skloot’s 2010 book of the same title. The book, and the movie, tell the story of Henrietta Lacks, the woman behind the first cell line ever generated, the famous HeLa cell line. From a biologist’s standpoint, this is a really unique thing, as we don’t usually know who is behind the cell lines we grow in the lab. Which, incidentally, is at the centre of the controversy around HeLa cells. HeLa was made over 60 years ago, and today a PubMed search for “HeLa” returns 93,274 results.

Cell lines are an integral part of research in many fields, and these days there are probably thousands of cell lines. Usually, they are generated from patient samples which are immortalised and can then be grown in dishes, put under the microscope, frozen down, thawed and revived, have their DNA sequenced, their protein levels measured, be genetically modified, treated with drugs, and generally make biomedical research possible. As a general rule, work with cancer cell lines is an easy and cheap way to investigate biological concepts, test drugs and validate methods, mainly because cell lines are cheap compared to animal research, readily available, easy to grow, and there are few concerns around ethics and informed consent. This is because although they originate from patients, cell lines are not considered living beings in the sense that they have feelings and lives and rights; they are for the most part considered research tools. This is an easy argument to make, as almost all cell lines are immortalised and therefore different from the original tissues patients donated, and, most importantly, they are anonymous, so that any data generated cannot be related back to the person.

But this is exactly what did not happen with HeLa cells. Henrietta Lacks’s cells were taken without her knowledge or consent after she was treated for cervical cancer at Johns Hopkins in 1951. At that point, nobody had managed to grow cells outside the human body, so when Henrietta Lacks’s cells started to divide and grow, the researchers were excited, and yet nobody ever told her, or her family. Henrietta Lacks died of her cancer later that year, but her cells survived. For more on this, there is a great Radiolab episode that features interviews with the scientists, as well as Rebecca Skloot and Henrietta Lacks’s youngest daughter, Deborah Lacks Pullum.

In the 1970s, some researchers did reach out to the Lacks family, not because of ethical concerns or gratitude, but to request blood samples. This naturally led to confusion amongst family members about how Henrietta Lacks’s cells could be alive, and be used in labs everywhere, even go to space, while Henrietta herself had been dead for twenty years. Nobody had told them, let alone explained the concept of cell lines to them.

The lack of consent and information is one side; but in addition to being an invaluable research tool, cell lines are also big business: the global market for cell line development (which includes cell lines, the media they grow in, and other reagents) is worth around 3 billion dollars, and it’s growing fast. There are companies that specialise in making cell lines of certain genotypes that are sold for hundreds of dollars, and different cell types need different growth media and additives in order to grow. This adds a dimension of financial interest, and raises the question of whether the family should share in the profit derived from research involving HeLa cells.

We have a lot to be grateful to HeLa cells for, and not just biomedical advances. The history of HeLa brought up a plethora of ethical issues around privacy, information, communication and consent that arguably were overdue for discussion. Innovation usually outruns ethics, but while nowadays informed consent is standard for all research involving humans, and patient data is anonymised (or at least pseudonymised and kept confidential), there were no such rules in 1951. There was also apparently no attempt to explain scientific concepts and research to non-scientists.

And clearly we still have not fully grasped the issues at hand: in 2013, researchers sequenced the HeLa cell genome – and published it. Again, without the family’s consent. The main argument in defence of publishing the HeLa genome was that the cell line was too different from the original cells to provide any information on Henrietta Lacks’s living relatives. There may be some truth in that; cell lines change a lot over time, but even after all these years there will still be information about Henrietta Lacks and her family in there, and genetic information is personal and should be kept private.

HeLa cells have made their way into research labs around the world, and have even gone to space and on deep sea dives. They are now even contaminating other cell lines (which could perhaps be interpreted as karma). Sadly, the spotlight on Henrietta Lacks’s life has sparked arguments amongst family members around the use and distribution of profits and benefits from the book and movie, and the portrayal of Henrietta Lacks in the story. Johns Hopkins say they have no rights to the cell line and have not profited from it, and they have established symposiums, scholarships and awards in Henrietta Lacks’s honour.

The NIH has established the HeLa Genome Data Access Working Group, which includes members of Henrietta Lacks’s family. Any researcher wanting to use the HeLa cell genome in their research has to request the data from this committee and explain their research plans, including any potential commercialisation. The data may only be used in biomedical research, not ancestry research, and no researcher is allowed to contact the Lacks family directly.