Cassini’s Sacrifice

 

By JoEllen McBride, PhD

Our solar system is full of potential. From Earth to the frozen surface of Pluto, hydrocarbons and other complex organic molecules are surprisingly common. With every new space mission, we find the ingredients of life on more of our celestial neighbors.

 

The newest location to add to our list of places with potential for life comes from NASA’s Cassini spacecraft, which began its study of Saturn in 2004. In the 13 years Cassini has studied Saturn and its moons, it has solved many mysteries and discovered some startling similarities to our own planet.

 

Saturn, at first glance, seems nothing like Earth. It is a gas giant, made mostly of hydrogen and helium, with a possible Earth-sized core at its center. But Cassini revealed that phenomena occurring in the gas giant’s atmosphere also occur on Earth. Cassini recorded video of lightning strikes on Saturn, the first ever captured on a planet other than our own. Since Saturn has no interference from mountains and other land features, jet streams can flow unimpeded, forming a continuous hexagonal shape at the north pole; scientists are still unsure why that specific shape is created. Saturn also develops a planet-wide storm every 30 years or so, and one just happened to show up while Cassini was around in 2011, about 10 years early. From the data collected by Cassini, scientists were able to determine that these storms form in a way similar to thunderstorms on Earth. But instead of adjacent warm and cold fronts mixing, layers of warm water vapor and cool hydrogen gas mix. The storms take time to develop because water vapor is much heavier than hydrogen, so it normally sits below the hydrogen fog. This gives the elevated hydrogen gas time to cool; once it cools enough, it becomes denser and sinks into the warmer water vapor below. The two mix and, voilà, a Saturnian thunderstorm is born. The storm also kicked up hydrocarbons from the lower atmosphere, which surprised scientists.

 

Although Saturn itself probably can’t harbor life, two of its moons, Titan and Enceladus, are ripe with the ingredients. The Cassini spacecraft made numerous orbits around Titan and even sent a probe (Huygens) down to its surface. Titan has land features similar to Earth’s, with lakes, mountains, ice caps, and deserts. The difference is that on Titan, methane and ethane, rather than water, fill the lakes and serve as the starting material for the complex molecules found on the moon.

 

Enceladus was the biggest surprise to come out of the Cassini mission. This moon is essentially a smaller version of Jupiter’s moon Europa: both are covered by a global liquid ocean topped with a thick shell of ice. But there is one big difference: Enceladus has hydrothermal vents deep within its ocean, just like on Earth, and these vents violently force liquid through cracks in the ice. The resulting plumes are huge and powerful, extending hundreds of miles into space and traveling at hundreds of miles an hour. The Cassini spacecraft revealed that these plumes are chock-full of hydrocarbons, building blocks necessary for life. This tells scientists that there is potential for life in the oceans of Enceladus, and possibly Europa.

 

The other moons that Cassini visited also revealed some startling information. Tethys has bright arcs of light that can only be seen at infrared wavelengths; scientists are puzzled as to what they are and what causes them. The spongy-looking moon Hyperion builds up a static charge as it tumbles around Saturn. Mimas, aka the Death Star moon, was thought to be a dead world but shows evidence of a liquid ocean underneath its cratered surface. The moon is close in size to Enceladus but has no visible jets or plumes, so the liquid is trapped beneath the surface. Why these two moons are so different, and whether Mimas’ ocean is full of hydrocarbons, is something scientists hope to study in the future.

 

The potential for life in the Saturnian system is the main reason Cassini’s mission will come to a destructive end. The spacecraft is running out of fuel, meaning that scientists on Earth will eventually lose the ability to control it. Our own planet is surrounded by defunct satellites just waiting to crash into other orbiting objects. The scientists in charge of the mission worry that if Cassini were left to orbit Saturn, it could eventually crash into Enceladus. That could introduce foreign microbes and chemicals, devastating any microbial life on the moon or ruining the chances of it ever forming. Instead, Cassini is performing its last dance with Saturn, orbiting the planet so closely that it passes between the rings and the gaseous atmosphere. After 22 such orbits, the spacecraft will dive into Saturn’s clouds on September 15, 2017, sacrificing its own metallic body for the sake of billions of potential life forms on the moons of Saturn.

 

Once Thought Elusive, A Black Hole Will Get A Close-up

 

By JoEllen McBride, PhD

Light can’t escape it, but Matthew McConaughey can use it to ‘solve gravity’. They’re among the most massive things in our universe, but we can’t actually see them. Black holes were predicted by Einstein’s theory of general relativity in the early 1900s and have intrigued both scientists and the public for over a century. Until recently, we could only see their effects on visible matter that gets too close, but an Earth-sized telescope is about to change all that.

 

The term black hole sounds silly, but it’s pretty descriptive of this invisible phenomenon. Astronomers call things black or dark when we can’t actually see them with current technology. A black hole forms when a star is so massive that its own gravity pushes in harder than the molecules and atoms that make it up can push out. The star collapses, shrinking to almost nothing. But matter can’t just disappear, so this incredibly small object still has mass, which can exert a gravitational influence on stars or gases that get too close. If our Sun became a black hole out of nowhere (don’t worry, this can’t happen), the Earth and other planets would not notice a difference gravitationally. We’d all continue orbiting as before; things would just get a lot colder. I guess that’s one way to wash away the rain.
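Just how small is “almost nothing”? General relativity gives a concrete answer: the Schwarzschild radius, r = 2GM/c², the size a mass must shrink to before light can no longer escape it. A quick back-of-the-envelope sketch (standard physical constants, SI units):

```python
# Schwarzschild radius: how small an object must get before
# light can no longer escape its gravity.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # mass of the Sun, kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius for a non-rotating mass, in meters."""
    return 2 * G * mass_kg / c**2

r_sun = schwarzschild_radius(M_sun)
print(f"Sun as a black hole: radius of about {r_sun / 1000:.1f} km")
# The Sun would have to shrink from ~696,000 km to ~3 km in radius --
# yet its pull at Earth's distance would be exactly the same.
```

The last comment is the point of the paragraph above: the mass, and therefore the gravity felt at a distance, doesn’t change, only the size does.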

 

So that’s how a single black hole forms, but you’ve probably heard references to ‘supermassive’ black holes before. These black holes have masses of many millions or billions of suns. So what died and made that massive of a ‘hole’? Supermassive black holes are not the product of a single object but are most likely built up through the mergers of many smaller black holes. We recently found evidence of this process from the ground-based gravitational wave detector LIGO, which can detect the waves produced when two smaller black holes merge. We also know supermassive black holes exist because we have seen their influence on luminous objects such as stars and heated gas. We see jets of gas shooting out of the centers of galaxies at close to light speed, and something incredibly massive at the center of our own galaxy causes nearby stars to orbit at incredible speeds. The simplest explanation for these observations is that galaxies have supermassive black holes at their centers.

 

But there is another way we could ‘see’ a black hole, one that was impossible before this year. As stated before, light cannot escape a black hole, but anything that becomes trapped in its gravitational well orbits for some time before it disappears. So there must be a point where we can still see material just before it’s lost forever, like an object swirling around the edge of a whirlpool just before falling down the drain. This boundary is known as the event horizon, and it’s basically the closest we can get to seeing a black hole. Currently, the supermassive black hole at the center of our galaxy, named Sagittarius A*, isn’t taking any material in, but that doesn’t mean the region around the event horizon is empty. Luminous material can orbit near the event horizon for a very long time; we just need to look at the right wavelength with a big enough camera.

 

The center of our galaxy is 8 kiloparsecs, or about 150,000,000,000,000,000 miles, away. To put that in perspective, that’s about 10^14 times the distance between the U.S. coasts, 10^11 times the Earth-Moon distance, and roughly 6,000 times the distance to the next closest star, Alpha Centauri. It’s really far away. The width of Sagittarius A*’s event horizon is estimated to lie somewhere between the widths of Mercury’s and Pluto’s orbits around our Sun. At its widest estimate, the event horizon of Sagittarius A* would span one-millionth of a degree on the sky. For comparison, the full moon spans about half a degree. So we’re gonna need a bigger telescope: an Earth-sized one.
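These scale comparisons are easy to sanity-check. In the sketch below the coast-to-coast figure (~2,800 miles) is my own round-number assumption; the parsec-to-miles conversion is standard, and the Alpha Centauri ratio works out to roughly 6,000:

```python
# Rough scale comparisons for the distance to the galactic center.
MILES_PER_PARSEC = 1.917e13

d_center = 8_000 * MILES_PER_PARSEC   # 8 kiloparsecs, in miles
d_coasts = 2.8e3                      # US coast to coast, miles (assumed)
d_moon   = 2.39e5                     # Earth-Moon distance, miles
d_alpha  = 1.34 * MILES_PER_PARSEC    # Alpha Centauri, ~1.34 parsecs

print(f"{d_center:.2e} miles to the galactic center")
print(f"{d_center / d_coasts:.0e} x the coast-to-coast distance")
print(f"{d_center / d_moon:.0e} x the Earth-Moon distance")
print(f"{d_center / d_alpha:.0f} x the distance to Alpha Centauri")
```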

 

Enter the Event Horizon Telescope (EHT). This network of telescopes operates at radio wavelengths and uses a technique that increases the effective size of a telescope without having to build a huge dish. The EHT combines telescopes in Arizona, Hawaii, Mexico, Chile, Spain, and the South Pole to create an Earth-sized radio dish. A good analogy I’ve found is to picture yourself and five friends standing at various spots around the edge of a pond. You all know where you are located with respect to each other and the pond’s surface. Each of you has a stopwatch and has placed a bobber in the water directly in front of you. If a pebble is dropped somewhere in the middle of the pond, each of you waits until your bobber starts moving, then records the time and the up-and-down motions the bobber makes as the peaks and troughs of the wave pass by. After you’ve recorded enough bobs, you can meet back up with your friends and work out where the pebble was dropped, and how big it was, from the ripples and when they reached each of your locations. The EHT works similarly, except the friends are telescopes pointed at Sagittarius A* and the water ripples are light waves.
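The pond analogy can even be sketched numerically: if each observer knows their position and records when the ripple arrives, the drop point is simply the spot that best explains all the arrival times. A toy version follows (the positions, wave speed, and search grid are all invented for illustration; real VLBI correlation is vastly more involved, but the localization idea is the same):

```python
import itertools
import math

# Toy "pond" localization: observers at known spots time a ripple's arrival.
observers = [(0, 0), (10, 0), (0, 10), (10, 10), (5, -3)]  # positions, meters
wave_speed = 0.5          # meters per second (made up)
true_drop = (6.2, 3.7)    # where the pebble actually lands

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Each observer records when the ripple reaches their bobber (drop at t = 0).
arrivals = [dist(o, true_drop) / wave_speed for o in observers]

# Afterwards, the friends compare notes: grid-search for the candidate
# point whose predicted arrival times best match the recorded ones.
def misfit(p):
    return sum((dist(o, p) / wave_speed - t) ** 2
               for o, t in zip(observers, arrivals))

grid = [(x / 10, y / 10) for x, y in
        itertools.product(range(0, 101), range(-31, 101))]
best = min(grid, key=misfit)
print("estimated drop point:", best)   # lands on (6.2, 3.7)
```

The EHT’s real reconstruction replaces the stopwatches with atomic clocks and the grid search with interferometric correlation, but the principle, recovering a source from arrival-time differences at known stations, carries over.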

 

Over 10 days at the beginning of April, these telescopes were in constant contact, monitoring the weather at each site to coordinate their observations as best they could. Radio waves usually penetrate almost anything, but the wavelengths these telescopes observe are blocked by water vapor, so clouds and rain mean no observing. On April 15th, they finished their run, having successfully obtained five days’ worth of observations. Now each site has to mail hard drives full of data to a central location, where the images can be properly aligned. The South Pole Telescope can only send out packages after its winter season ends in October, but data is already coming in from the other sites.

 

If everything went as planned, the combined images should be the highest-resolution images ever taken of a black hole. The arrangement allows astronomers to measure objects as small as a billionth of a degree across. The estimated size of Sagittarius A*’s event horizon is larger than this, so a faint ring surrounding darkness should be visible in the final images. Hopefully, Sagittarius A* was ready for its close-up, because humans are eager to see how our own depictions of black holes match up.

 

 

Want to Watch History Burn? Check Out a Meteor Shower!

 

By JoEllen McBride, PhD

 

Fireballs streaking across the sky. Falling or shooting stars catching your eye. Meteors have fascinated humans for as long as we’ve kept records. Depending on the time of year, on a clear night you can see anywhere from 2 to 16 meteors an hour enter our atmosphere and burn up right before your eyes. If you really want a performance, look up during one of the many meteor showers that happen throughout the year. These shows can provide anywhere from 10 to 100 meteors an hour! But what exactly is burning up to create these cosmic showers?

 

To answer this question we need to go back in time to the formation of our solar system. Our galaxy is full of dust particles and gas. If these tiny particles get close enough, they’ll be gravitationally attracted and forced to hang out together. The bigger a blob of gas and dust gets, the more gas and dust it can attract from its surroundings. As more and more particles occupy the same space, they collide with each other, heating the blob up. At a high enough temperature, the ball of now-hot gas can fuse hydrogen (and, later in its life, heavier elements), which sustains the burning orb. Our Sun formed just like this, about 4.6 billion years ago.

 

Any remaining gas and dust orbiting our newly created Sun coalesced into the eight planets and the numerous dwarf planets and asteroids we know of today. Even though the major planets have done a pretty good job of clearing large debris out of their orbits, many tiny particles and clumps of pristine dust remain, slowly orbiting closer and closer to the Sun. If these 4.5-billion-year-old relics cross Earth’s path, our planet smashes into them and they burn up in our atmosphere. These account for many of the meteors that whiz through our atmosphere unexpectedly.

 

The predictable meteor showers, on the other hand, are a product of the gravitational influence of the larger gas giant planets. These behemoths forced many of the smaller bodies that dared to cross them out into the farthest reaches of our solar system. Instead of being ejected from the solar system completely, a few remain gravitationally bound to the Sun in orbits that take them from beyond the Kuiper belt to the realm of the inner planets. As these periodic visitors approach our central star, their surfaces warm, vaporizing the ice that holds together clumps of ancient dust. The closer the body gets to the Sun, the more ice turns to vapor, leaving behind a trail of particulates. We humans see the destruction of these icy balls as beautiful comets that periodically grace our night skies. But the trail of dust remains long after the comet heads back to the edge of our solar system.

 

The dusty remains of our cometary visitors slowly orbit the Sun along the comet’s path. There are a few well-known dust lanes that our planet plows into annually. Some of these showers produce exciting downpours with over a hundred meteors an hour and others barely produce a drip. April begins the meteor shower season and the major events for 2017 are listed below.

| Shower | Dates | Peak | Peak Time (UT) | Moon Phase at Peak | Progenitor |
|---|---|---|---|---|---|
| Lyrid (N) | Apr 16-25 | Apr 22 | 12:00 | Crescent | Thatcher 1861 I |
| Eta Aquarid (S) | Apr 19-May 28 | May 6 | 2:00 | Gibbous | 1P/Halley |
| Delta Aquarid (S) | Jul 21-Aug 23 | Jul 30 | 6:00 | First Quarter | 96P/Machholz |
| Perseid (N) | Jul 17-Aug 24 | Aug 12/13 | 14:00/2:30 | Third Quarter | 109P/Swift-Tuttle |
| Orionid | Oct 2-Nov 7 | Oct 21 | 6:00 | First Quarter | 1P/Halley |
| Taurids | Sep 7-Nov 19 | Nov 10/11 & Nov 4/5 | 12:00 | Crescent & Full | 2P/Encke |
| Leonid | Nov | Nov 17 | 17:00 | New | 55P/Tempel-Tuttle |
| Geminid | Dec 4-16 | Dec 14 | 6:30 | Crescent | 3200 Phaethon* |
| Quadrantid (N) | Dec 26-Jan 10 | Jan 3 | 14:00 | Full | 2003 EH1 |

S= best viewed from Southern Hemisphere locations

N= best viewed from Northern Hemisphere locations

*This is an asteroid with a weird orbit that takes it very close to the Sun!

 

Here is a list of things you can do to ensure the best meteor viewing experience.


  • Check the weather. If it’s going to be completely overcast, your meteor shower is ruined.
  • Is the Moon up? Is it more than a crescent? If the answer to both of these is yes, you will have a harder time seeing meteors. The big, bright ones will still shine through, but those are rare.
  • When trying to catch a meteor shower, make sure the constellation the shower radiates from is actually up that night. Hint: meteor showers are named after the constellation they appear to radiate from.
  • You need the darkest skies possible, so get away from cities and towns. The International Dark Sky Association has a dark sky place finder you can use. Your best bet is an empty field far from man-made light pollution.
  • Make sure trees and buildings aren’t obscuring your view.
  • It takes about 30 minutes for your eyes to completely adjust to the darkness. If you have a flashlight, cover it with red photography gel to help keep your eyes adjusted.
  • Ditch the cell phone. Cell phones ruin your night vision: every time you look at your screen, your eyes have to readapt to the dark when you look back up at the sky. There are apps you can download that dim your screen (iPhone, Android), but your eyes will still need time to adjust if you glance at your phone. Also, looking away almost guarantees the biggest meteor will streak by at just that moment.
  • Dress comfortably. In the fall and winter, wear warm clothes and keep hot chocolate or coffee on hand. In the spring and summer, some cool beverages will enhance your experience. Bring blankets to lie on or comfortable chairs so you can keep your eyes on the skies.


Follow these guidelines and you’ll have the best chance of watching 4.5 billion years of history burn up before your very eyes.

Paperfuges and Foldscopes: The Case for Low-Tech Science

 

By Gesa Junge, PhD

 

If you have ever been inside a lab, you will know that centrifuges and microscopes come in various shapes, sizes, and degrees of sophistication, but in some form they are used every day in most research labs around the world. Microscopes and centrifuges are pretty basic lab equipment, although some versions can be very high-end: for example, high-speed centrifuges that can cool down to fridge temperatures, or electron microscopes that can magnify structures up to 2 million times. But even basic centrifuges and microscopes cost a few thousand dollars, and they require electricity and maintenance. These are not big issues for most universities and established research institutes, but for scientists working in the field, or in developing countries, money and electricity can be hard to come by.

With this in mind, Manu Prakash from Stanford University developed a centrifuge and a microscope made of paper. Yes, you read that right. The centrifuge is basically a paper disk on two strings that you pull to make the disk spin (kind of like a whirligig, remember those?); check out this video from Wired Magazine. The whole thing costs 20 cents and fits into a jacket pocket, but it can spin samples at up to 12,500 rpm, which is fast. Fast enough, for example, to separate blood into blood cells and plasma, a key step in many diagnostic procedures.
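How fast is 12,500 rpm for such a small disk? The standard centrifuge formula for relative centrifugal force (RCF, in multiples of g) gives a feel for it. Note the ~4 cm sample radius below is my own illustrative assumption, not a published spec of the paperfuge:

```python
# Relative centrifugal force from spin rate and rotation radius:
#   RCF (in g) = 1.118e-5 * radius_cm * rpm^2   (standard formula)
def rcf(rpm, radius_cm):
    return 1.118e-5 * radius_cm * rpm ** 2

# Assume the sample sits ~4 cm from the center of the paper disk.
g_force = rcf(12_500, 4)
print(f"~{g_force:,.0f} x g")   # several thousand g
```

Benchtop protocols typically pellet blood cells at a few thousand g, so a spin rate in this range is plausibly enough for plasma separation, which is the claim in the paragraph above.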

The foldscope, meanwhile, is basically origami. It is printed on paper; you cut out the parts, fold them up, and insert a lens. The microscope does need electricity, but it can run on a battery for up to 50 hours, and the sample can be mounted on a piece of tape as opposed to a glass slide. The lens determines the magnification, which can go up to 2,000x. For reference, we can distinguish individual human cells easily at 10x; nuclei become clearly visible at 20x and bacteria at 40x. Using different-colored LEDs, the foldscope can even be converted into a fluorescence microscope, meaning it can be used to analyse different stains in tissues.

The paperfuge and the foldscope are implementations of an emerging concept called “frugal science”, and aim to bring scientific advances to inaccessible and under-developed regions. And while Manu Prakash’s ideas are very low-tech approaches, the idea of making science useful to everyone also benefits from innovation and advanced technology. For example, Dr Samuel Sia at Columbia University has developed a smartphone dongle technology called mChip which can diagnose HIV from a finger prick’s worth of blood. This device contains all the necessary reagents, which mix at the push of a button, and it plugs into the headphone jack of a phone as a power supply. Testing takes about 15 minutes and costs about $1 (the dongle is $100), which is a huge improvement over current methods. Along similar lines, a company called QuantumMDx in Newcastle in the UK is developing a handheld DNA testing tool, which could be used to identify strains of pathogens. And electronics company Philips has come up with the Minicare I-20, a handheld device that can measure troponin I levels from a single drop of blood taken from a pinprick. Troponin I is a marker of damaged heart muscle, and is often measured in emergency departments.

All of these innovations address a really important, and sometimes overlooked, point: science and technology, in all their greatness and cool fascination, will only benefit humanity if applied in the community in a way that leads to real-life changes. As with so many resources, scientific expertise and technology, and therefore the benefits of science, are distributed incredibly unevenly around the world. For example, malaria and AIDS drugs are still not reaching many of the people who need them, be it for financial, infrastructural, political, or organisational reasons. Diagnostic tests often require well-equipped labs and trained technicians. And while they are limited in their applications for research, the paperfuge and the foldscope have the potential to revolutionize diagnostics as well as education around the world. Cutting-edge research may require more sophisticated centrifuges that spin faster, microscopes with better resolution, computers to store the images, and teams of scientists analyzing the data. But the frugal science approach is well-suited to diagnosing diseases, or to helping a high school science class understand what cells are.

If you would like to find out more about the foldscope, check out Manu Prakash’s very cool TED talk. More information on Dr Sia’s mChip can be found here.

 

On Science and Values

 

By Rebecca Delker, PhD

 

In 1972 nuclear physicist Alvin Weinberg defined ‘trans-science’ as distinct from science (references here, here). Trans-science – a phenomenon that arises most frequently at the interface of science and society – includes questions that, as the name suggests, transcend science. They are questions, he says, “which can be asked of science and yet which cannot be answered by science.” While most of what concerned Weinberg were questions of scientific fact that could not (yet) be answerable by available methodologies, he also understood the limits of science when addressing questions of “moral and aesthetic judgments.” It is this latter category – the differentiation of scientific fact and value – that deserves attention in the highly political climate in which we now live.

Consider this example. In 2015-2016, moves to expand the use of risk assessment algorithms in criminal sentencing received a lot of heat (and rightly so) from critics (references here, here). In an attempt to eliminate human bias from criminal justice decisions, many states rely on science in the form of risk assessment algorithms to guide decisions. Put simply, these algorithms build statistical models from population-level data covering a number of factors (e.g. gender, age, employment) to provide a probability of repeat offense for the individual in question. Until recently, the use of these algorithms had been restricted, but now states are considering expanding their use to sentencing. What this fundamentally means is that a criminal’s sentence depends not only on the past and present, but also on a statistically derived prediction of the future. While the intent may have been to reduce human bias, many argue that risk assessment algorithms achieve the opposite; and because the assessment is founded in data, it serves to generate a scientific rationalization of discrimination. This is because, while the data underpinning the statistical models does not include race, it includes factors (e.g. education level, socioeconomic background, neighborhood) that are themselves revealing of centuries of institutionalized bias. To use Weinberg’s terminology, this falls into the first category of trans-science: the capabilities of the model fall short of capturing the complexity of race relations in this country.
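To make concrete what “a statistical model built from population-level data” means here, consider a stripped-down sketch: a logistic model that turns a defendant’s features into a reoffense probability. Everything below (the feature names, weights, and example people) is invented for illustration; real risk-assessment tools are proprietary and far more complex. The sketch only shows the structural point made above: a proxy feature such as neighborhood can shift the score even when race is nowhere in the inputs.

```python
import math

# Toy risk-assessment model: logistic regression over a few features.
# All weights and feature names are hypothetical, for illustration only.
WEIGHTS = {
    "age":                      -0.04,  # older -> lower predicted risk
    "prior_offenses":            0.55,
    "unemployed":                0.40,
    "neighborhood_arrest_rate":  1.20,  # a proxy that can encode bias
}
BIAS = -1.0

def reoffense_probability(person):
    score = BIAS + sum(WEIGHTS[k] * person[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-score))   # squash score to a 0-1 probability

# Two people identical in every respect except where they live:
# the model scores them differently, though "race" is not an input.
a = {"age": 30, "prior_offenses": 1, "unemployed": 0,
     "neighborhood_arrest_rate": 0.1}
b = {"age": 30, "prior_offenses": 1, "unemployed": 0,
     "neighborhood_arrest_rate": 0.9}
print(reoffense_probability(a), reoffense_probability(b))
```

Person b receives a higher predicted risk purely because of the neighborhood variable, which is exactly the kind of proxy effect the critics describe.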

But this is not the whole story. Even if we could build a model without the above-mentioned failings, there are still more fundamental ethical questions that need addressing. Is it morally correct to sentence a person for crimes not yet committed? And, perhaps even more crucially, does committing a crime warrant losing one’s right to be viewed (and treated) as an individual, a value US society holds in high regard, and instead being reduced to a trend line derived from the actions of others? These questions fall into the second category of trans-science: questions of morality that science has no place in answering. When we turn to science to resolve such questions, however, we blind ourselves to the underlying, more complex terrain of values that makes up the debate at hand. By default, and perhaps inadvertently, we grant science the authority to declare our values for us.

Many would argue that this is not a problem. In fact, in a 2010 TED talk, neuroscientist Sam Harris claimed that “the separation between science and human values is an illusion.” Values, he says, “are a certain kind of fact,” and thus fit into the same domain as, and are demonstrable by, science. Science and morality become one and the same because values are facts specifically “about the well-being of conscious creatures,” and our moral duty is to maximize this well-being.

The flaw in the argument (which many others have pointed out as well) is that rather than allowing science to empirically determine a value and moral code, as he argued it could, he presupposed one. That the well-being of conscious creatures should be valued, and that our moral code should maximize it, cannot actually be demonstrated by science. I will also add that science can provide no definition of ‘well-being,’ nor has it yet been able (if it ever can) to answer the questions of what consciousness is and which creatures have it. Unless human intuition steps in, this shortcoming of science can lead to dangerous and immoral acts.

What science can do, however, is help us stay true to our values. This, I imagine, is what Harris intended. Scientific studies play an indispensable role in informing us if and when we have fallen short of our values, and in generating the tools (technology/therapeutics) that help us achieve these goals. To say that science has no role in the process of ethical decision-making is as foolish as relying entirely on science: we need both facts and values.

While Harris’ claims of the equivalency of fact and value may be more extreme than most would overtly state, they are telling of a growing trend in our society of turning to science as the final arbiter of even the most challenging ethical questions. This is because, in addition to the tangible effects science has had on our lives, it has also shaped the way we think about truth: instead of belief, we require evidence-based proof. While this is a noble objective in the realm of science, it is a pathology in the realm of trans-science. This pathology stems from the increasing presence in our society of Scientism, the idea that science serves as the sole provider of knowledge.

But we live in the post-fact era. There is a war against science. Fact denial runs rampant through politics and media. There is not enough respect for facts and data. I agree with each of these points; but it is Scientism, ironically, that spawned this culture. Hear me out.

The ‘anti-science’ arguments, from anti-evolution to anti-vaccine to anti-GMO to climate change denial, never actually deny the authority of science. Rather, they attack scientific conclusions by creating a pseudoscience (think: creationism), by pointing to flawed and/or biased scientific reporting (think: hacked climate data emails), by clinging to scientific reports that support their arguments (think: the now-debunked link between vaccines and autism), or by homing in on concerns answerable by science as opposed to others (think: the safety of GMOs). These approaches are not justifiable; nor are they rigorously scientific. What they are, though, is a demonstration that even the people fighting against science recognize that the only way to do so is by appealing to its authority. As ironic as it may be, fundamental to the anti-science argument is the acceptance that the only way to ‘win’ a debate is either to provide scientific evidence or to poke holes in the scientific evidence at play. Their science may be bad, but they are working from a foundation of Scientism.

 

Scientific truth has a role in each of the above debates, and in some cases – vaccine safety, for example – it is the primary concern; but too often scientific fact is treated as the only argument worth consideration. An example from conservative writer Yuval Levin illustrates this point. While I do not agree with Levin’s values regarding abortion, the topic at hand, his points are worth considering. Levin recounts that during a hearing in the House of Representatives regarding the use of the abortion drug RU-486, a DC delegate argued that because the FDA decided the drug was safe for women, the debate should be over. As Levin summarized, “once science has spoken … there is no longer any room for ‘personal beliefs’ drawing on non-scientific sources like philosophy, history, religion, or morality to guide policy.”

When we break down the abortion debate – as well as most other political debates – we realize that it is composed of matters of both fact and value. The safety of the drug (or procedure) is of utmost importance and can, as discussed above, be determined by science; this is a fact. But, at the heart of the debate is a question of when human life begins – something that science can provide no clarity on. To use scientific fact as a façade for a value system that accepts abortion is as unfair as denying the scientific fact of human-caused climate change: both attempts focus on the science (by either using or attacking) in an effort to thwart a discussion that encompasses both the facts of the debate and the underlying terrain of values. We so crave absolute certainty that we reduce complex, nuanced issues to questions of scientific fact – a tendency that is ultimately damaging to both social progress and society’s respect for science.

By assuming that science is the sole provider of truth, our culture has so thoroughly blurred the line between science and trans-science that scientific fact and value are nearly interchangeable. Science is misused to assert a value system; and a value system is misused to selectively accept or deny scientific fact. To get ourselves out of this hole requires that we heed the advice of Weinberg: part of our duty as scientists is to “establish what the limits of scientific fact really are, where science ends and trans-science begins.” Greater respect for facts may paradoxically come from a greater respect for values – or at the very least, allowing space in the conversation for them.

 

Can Chocolate be Good for You? The Dark and Light Side of the Force

By Jesica Levingston Mac leod, PhD

It is that time of the year again: San Valentin (aka Valentine’s Day), the best excuse to give and, more importantly, to EAT a lot of chocolate. But maybe a better gift than receiving chocolate is knowing that eating chocolate might be good for your health.

In the beginning, chocolate was “created” as a medicine, a healthy beverage, around 1900 BC by Mesoamerican peoples. The Aztecs and Maya gave it the name “xocolatl”, meaning bitter water, as the early preparations of the cacao seeds had an intensely bitter taste. Almost one year ago, a longitudinal study done on the US East Coast connected eating chocolate with better cognitive function. Yay! Great news, right? The scientists gathered information over a period of 30 years (starting in 1976) from 968 subjects (aged 23-98 years) in the Syracuse-Maine area. The results showed that more frequent chocolate consumption was meaningfully associated with better performance on the global composite score, visual-spatial memory and organization, working memory, scanning and tracking, abstract reasoning, and the mini-mental state examination. Importantly, they pointed out that, with the exception of working memory, these relations were not attenuated after statistical control for cardiovascular, lifestyle, and dietary factors across the participants.

More good news arrived last summer: an Italian research team announced that flavanol-rich chocolate improves arterial function and working memory performance, counteracting the effects of sleep deprivation. The researchers investigated the effect of flavanol-rich chocolate consumption on cognitive skills and cardiovascular parameters after sleep deprivation in 32 healthy participants, who underwent two baseline sessions after one night of undisturbed sleep and two experimental sessions after one night of total sleep deprivation. Two hours before each testing session, participants were assigned to consume chocolate bars either rich or poor in flavanols. During the tests, the participants were evaluated with a psychomotor vigilance task and a working memory task, and their systolic blood pressure (SBP), diastolic blood pressure (DBP), flow-mediated dilation and pulse-wave velocity were measured. As you might know, sleep deprivation increases SBP and DBP. In this study, SBP, DBP and pulse pressure were lower after the flavanol-rich treatment than after the flavanol-poor treatment. Sleep deprivation impaired flow-mediated dilation, and flavanol-rich, but not flavanol-poor, chocolate counteracted this alteration. Flavanol-rich chocolate also mitigated the increase in pulse-wave velocity and preserved working memory accuracy in women after sleep deprivation. Flow-mediated dilation correlated with working memory accuracy in the sleep-deprived condition.

The European Food Safety Authority accepted the following statement for cocoa products containing 200 mg of flavanols: “cocoa flavanols help maintain the elasticity of blood vessels, which contributes to normal blood flow”. Consistent with this, the Italian study found that flavanol-rich chocolate counteracted vascular impairment after sleep deprivation and restored working memory performance. In another study, led by Columbia University Medical Center scientists, dietary cocoa flavanols (naturally occurring bioactives found in cocoa) reversed age-related memory decline in healthy older adults. One possibility is that the improvement in cognitive performance is due to the effects of cocoa flavonoids on blood pressure and on peripheral and central blood flow. Along the same lines, weekly chocolate intake has been shown to be beneficial for arterial stiffness.

But there is some bad news! A review of 13 scientific articles on this topic found no evidence that dark chocolate reduces blood pressure overall. However, the reviewers judged the evidence for an association with increased flow-mediated vasodilatation (FMD) to be strong, and the evidence for an improvement in blood glucose and lipid metabolism to be moderate. Specifically, their analysis showed that chocolates containing around 100 mg of epicatechin can reliably increase FMD, and that cocoa flavanol doses of around 900 mg or above may decrease blood pressure if consumed over longer periods: “Out of 32 cocoa product samples analyzed, the two food supplements delivered 900 mg of total flavanols and 100 mg epicatechin in doses of 7 g and 20 g and 3 and 8 g, respectively. To achieve these doses with chocolate, you will need to consume 100 to 500 g (for 900 mg flavanols) and 50 to 200 g (for 100 mg epicatechin). Chocolate products marketed for their purported health benefits should therefore declare the amounts of total flavanols and epicatechin”. The method of manufacturing dark chocolate retains epicatechin, whereas milk chocolate does not contain substantial amounts of it.
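The dose arithmetic behind those gram ranges is simple division; here is a minimal sketch, with the caveat that the flavanol concentrations per gram below are illustrative assumptions (real chocolates vary widely), not values taken from the review:

```python
# Back-of-the-envelope dose arithmetic: grams of chocolate needed
# to reach a target dose of a compound, given its concentration.
def grams_needed(target_mg, mg_per_gram):
    """Grams of chocolate required to deliver target_mg of a compound."""
    return target_mg / mg_per_gram

# Assumed range: a dark chocolate carrying roughly 2-9 mg of total
# flavanols per gram (hypothetical values for illustration).
low = grams_needed(900, 9)   # richest end: 100 g for the 900 mg dose
high = grams_needed(900, 2)  # poorest end: 450 g for the 900 mg dose
```

Under these assumed concentrations, the result spans roughly 100 to 450 g, in line with the 100 to 500 g range quoted above, which is why declaring actual flavanol content on labels matters so much.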

The first epidemiological “indication” of beneficial health effects of chocolate was found in Kuna natives in Panama, who have a low prevalence of atherosclerosis, type 2 diabetes, and hypertension. This correlated with their daily intake of a homemade cocoa beverage. These traits disappeared after migration to urban areas and the accompanying changes in diet.

 

There are many claims about the potential health benefits of chocolate, including an anti-oxidative effect from polyphenols, an anti-depressant effect from high serotonin levels, inhibition of platelet aggregation and prevention of obesity-dependent insulin resistance. Chocolate contains quercetin, a powerful antioxidant that protects cells against damage from free radicals. It also contains theobromine and caffeine, which are central nervous system stimulants, diuretics and smooth muscle relaxants, as well as valeric acid, which is a stress reducer. However, some chocolate products also contain sugar and other additives that might not be so good for your health.

 

Oh well, maybe the love of chocolate is like any other romantic affair: blind and passionate. Apparently, the beneficial dosage is 10 g of dark chocolate per day (>70% cocoa), so enjoy it as long as the serotonin boost from rewarding yourself with a new treat lasts.

 

Happy Valentine’s Day!

 

 

A Short History of Fast Radio Bursts

 

By JoEllen McBride, PhD

Humans have gazed at the stars since the beginning of recorded history. Astronomy was the first scientific field our distant ancestors recorded information about. Even now, after thousands of years of study, we’re still discovering new things about the cosmos.

Fast radio bursts (FRBs) are the most recent astronomical mystery. These short-lived, powerful signals from space occur at frequencies you can pick up with a ham radio. But don’t brush the dust off your amateur radio kit just yet. Although they are powerful, FRBs occur infrequently and are over in a flash, which is exactly why astronomers only recently noticed them. The first FRB was discovered in 2007 in data taken in 2001, and the majority of FRBs are still found in old data. Their short duration meant astronomers overlooked them as background signals, but closer inspection revealed a property unique to radio signals originating from outside our galaxy.

 

Signal or Noise?

Radio signals are light waves with very long wavelengths and low frequencies. Visible light (the wavelengths of light that bounce off objects, hit our eyes and allow us to see) has wavelengths a few hundred times smaller than the thickness of a human hair. The wavelength of radio waves can be anywhere from a centimeter to kilometers long. The longer the wavelength, the lower the frequency, and the more the signal is delayed by free-floating space particles. This is because space is not a perfect vacuum: there are dust grains, atoms, electrons and all kinds of small particles floating around out there. As light travels through space, it can be slowed down by these loitering particulates. Larger distances mean more chances for the light to interact with particles, and these interactions are strongest at the lowest frequencies, where radio waves sit.

Radio signals from within our own galaxy are close enough that they are not noticeably affected by this delay. But sources far outside the Milky Way have very large distances to travel, so by the time a signal reaches our telescopes it has interacted with many particles. This produces a streak, or ‘whistle’, where the higher radio frequencies in the signal reach our telescopes first and the lower ones arrive shortly afterwards.
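The frequency-dependent lag described above can be sketched with the standard pulsar-astronomy dispersion formula, where the delay scales with the electron column density along the line of sight (the “dispersion measure”) and with the inverse square of the frequency. The dispersion-measure value below is an illustrative assumption, not a number from any particular burst:

```python
# A minimal sketch of the dispersive delay that turns a burst into a 'whistle'.
def dispersion_delay_ms(dm_pc_cm3, freq_ghz):
    """Extra travel time (in milliseconds) a radio signal accumulates from
    free electrons along its path, relative to an infinitely high frequency.
    dm_pc_cm3 is the dispersion measure in the usual units of pc/cm^3."""
    K = 4.149  # standard dispersion constant, in ms GHz^2 cm^3 / pc
    return K * dm_pc_cm3 * freq_ghz ** -2

# An extragalactic-scale dispersion measure (assumed value for illustration):
dm = 500.0  # pc/cm^3
lag = dispersion_delay_ms(dm, 1.2) - dispersion_delay_ms(dm, 1.5)
# The 1.2 GHz part of the burst arrives later than the 1.5 GHz part,
# producing the downward-sweeping streak astronomers search for.
```

Because the lag grows with distance (through the dispersion measure), a whistle far steeper than anything Galactic electrons could produce is the telltale sign that a burst came from outside the Milky Way.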

When astronomers started noticing these whistles at unexpected frequencies, they no longer believed they were background noise but signals from the far reaches of space. They needed another piece to the puzzle though to determine exactly what was causing these interstellar calls.

 

It Takes Two to Find a FRB

The signals discovered in archival data appeared to be one-and-done events, which meant they could not be observed again with a bigger telescope to get a more precise location. Without a precise position on the sky, astronomers couldn’t tell where the signals were coming from, and so had no idea what was producing them. What astronomers needed was a signal detected by two different telescopes at the same time: one telescope to broadly search for the signal and a second, much larger telescope to accurately determine its location. So they began to meticulously watch the sky for new FRBs. The first real-time observation of an FRB came in May of 2014. Although it was seen by only one telescope, so its precise location was unknown, it gave astronomers a way to detect future ‘live’ bursts. In May and June of 2015, a search by another team of astronomers yielded the first ever repeating FRB.

The Arecibo radio telescope (yes, the one from GoldenEye) detected the first signals; the team then requested follow-up observations from the Very Large Array to more precisely pin down the location. Once they had a location, yet another team of astronomers could take pictures at visible frequencies to see what was lurking in that region of space. They found a teeny tiny galaxy, known as a dwarf galaxy, at a distance of 3 billion light years from Earth. This galaxy is full of the cold gas necessary to create new stars, which means many stars are being born and the huge, bright ones are living fast and dying young.

 

Who or What is Calling Us?

Where FRBs come from is important because it allows astronomers to choose between the two plausible theories for what causes them. The energy produced by these bursts is impressive, so the most likely culprits take us into the realm of the small and massive: supermassive black holes (SMBHs) and neutron stars. One idea suggests that FRBs could be the result of stars or gas falling into the SMBH at the center of every galaxy. If this were the case, we would expect FRBs to occur in the central regions of a galaxy, not near the edges. Neutron stars, on the other hand, are formed after the death of massive stars. These stars are typically 10 to 30 times more massive than our Sun, so they do not live for long. Astronomers expect a galaxy creating lots of new stars to also create lots of neutron stars as the most massive stars die first. Star formation can occur anywhere in a galaxy but is most commonly observed in the outer regions.

This repeating FRB is located pretty far from the center of a galaxy going through a period of intense star birth, which lends credence to neutron stars being the source. Of course, we are looking at a single data point here, and there is no reason to assume that all FRBs share a single cause. We need more real-time observations of FRBs so we can figure out where they are located and whether or not they always come from dwarf galaxies. FRB searches have been added to three radio-frequency surveys, known as CHIME, UTMOST and HIRAX, that will detect and locate these powerful signals with great precision.

It looks like we can continue to look forward to another few millennia of cosmic discoveries.

End Crisis, Bridges and Scattered Genes: Chromatin Bridges and their Role in Genomic Stability

By Gesa Junge, PhD

Each of our cells contains about two meters of DNA, which needs to fit into cells that are often less than 100 µm in diameter; to make this possible, the DNA is tightly packed into chromosomes. As a human cell prepares to divide, the 23 pairs of chromosomes neatly line up and attach to the spindle apparatus via their middle point, the centromere. The spindle apparatus is part of the cell’s scaffolding, and it pulls the chromosomes to opposite ends of the cell as the cell divides, so that every new daughter cell ends up with exactly one copy of each chromosome. This is important; cells with more or fewer than one copy of a chromosome are called aneuploid cells, and aneuploidy can lead to genetic disorders such as Down Syndrome (three copies of chromosome 21).

In some cancer cells, chromosomes with two centromeres (dicentric chromosomes) can be detected; these can arise when the ends of two chromosomes fuse during a process called telomere crisis. Telomeres are a sort of buffer zone at the ends of the chromosome, consisting of repeats of non-coding DNA sequences, meaning no genes are located there. Because one of the DNA strands is not replicated continuously but in fragments, the telomeres get shorter over the lifespan of a cell, and short telomeres can trigger cell cycle arrest before the telomeres get so short that genetic information is lost. But occasionally, and especially in cancer cells, chromosome ends fuse and a chromosome becomes dicentric. It can then attach to the spindle apparatus at two points and may end up being pulled apart as the two daughter cells separate, sort of like a rope tied to two cars driving in opposite directions. This stretched string of chromosome is referred to as a chromatin bridge.

Researchers at Rockefeller University are studying these chromatin bridges and their relevance for the health of the cell. A paper by John Maciejowski and colleagues found that chromatin bridges actually stay intact for quite a long time. Chromosomes are pretty stable, and so the chromatin bridges lasted for an average of about 9 hours (3-20 h) before snapping and quickly being pulled back into the original cell (see video). Also, the nucleus of the cell was often heart-shaped as opposed to the usual round shape, which suggests that the chromatin bridge physically pulls on the membrane surrounding the nucleus, the nuclear envelope. Indeed, proteins that make up the nuclear envelope (e.g. LAP2) were seen on the chromatin bridge, suggesting that the bridges take part of the nuclear envelope with them as the cells divide. Also, cells with chromatin bridges had temporary disruptions to their nuclear envelope at some point after the bridge was resolved, more so than cells without chromatin bridges.

The chromatin bridges also stained positive for replication protein A (RPA), which binds single stranded DNA. DNA usually exists as two complementary strands bound together, and the two strands really only separate to allow for DNA to be copied or transcribed to make protein. Single-stranded DNA is very quickly bound by RPA, which stabilises it so it does not loop back on itself and get tangled up in secondary structures. The Rockefeller study showed that a nuclease, a DNA-eating enzyme, called TREX1 is responsible for generating the single-stranded DNA on chromatin bridges. And this TREX1 enzyme seems to be really important in resolving the chromatin bridges: cells that do not have TREX1 resolve their chromatin bridges later than cells that do have TREX1.

So how are chromatin bridges important for cells, the tissue and the organism (i.e. us)? The authors of this study suggest that chromatin bridges can lead to a phenomenon called chromothripsis. In chromothripsis, a region of a chromosome is shattered and then put back together in a fairly random order and with some genes facing the wrong direction. Think of a new, neatly color-sorted box of crayons that falls on the floor, and then someone hastily shoves all the crayons back in the box with no consideration for color coordination or orientation. Chromothripsis occurs in several types of cancers, but it is still not really clear how often, in what context and exactly how the genes on a chromosome end up in such a mess.

According to this study, chromothripsis may be a consequence of telomere crisis, and chromatin bridges could be part of the mechanism: a chromosome fuses ends with another chromosome and acquires two centromeres. The dicentric chromosome attaches to opposite spindle poles and is pulled apart during cell division, generating a chromatin bridge. The bridge is attacked by TREX1, which converts it into single-stranded DNA; eventually the bridge snaps, and in the process the DNA scatters, returns to the parent cell and is haphazardly reassembled there, leaving a chromothripsis region.

The exact mechanisms still need to be studied, and the paper mentions a few important discussion points. For example, all the experiments were performed in cell culture, and the picture may look very different in a tumor in a human being. And what exactly causes the bridge to break? Also, there is probably more than one potential mechanism linking telomere crisis to chromothripsis. But it is a very interesting study that shines some light on the somewhat bizarre phenomenon of chromothripsis and the importance of telomere crisis.

Reference: Maciejowski et al, Cell. 2015 Dec 17; 163(7): 1641–1654.

 

 

Forging a Connection Between “Doing” and “Feeling”: How Behavioral Activation Therapy Can Alleviate Depression

 

By Lauren Tanabe, PhD

 

A few weeks ago, I stumbled across a short description of a recent study out of the University of Exeter, published in The Lancet: researchers found that behavioral activation (BA) therapy works as well as cognitive behavioral therapy (CBT) as a therapeutic intervention for depression. CBT has previously been shown to be as effective as antidepressants.

 

According to the Society of Clinical Psychology, depression may cause people to “disengage from their routines and withdraw from their environment.” Over time, this isolating avoidance behavior can intensify depression as people “lose opportunities to be positively reinforced through pleasant experiences, social activity, or experiences of mastery.” Behavioral activation therapy aims to alter the patient’s avoidance behavior by increasing exposure to “sources of reward,” and by helping people to understand the connection between their behavior and their mood.

 

In lay terms: activity influences how you feel. If you sit at home alone, this may worsen depression. If you coax yourself to engage in some kind of social activity, or to work towards a goal (chores, a hobby, work), this may lessen depressive symptoms.

 

This seems straightforward enough. When I first learned of the study, scrolling through a blurb in Scientific American entitled “Depressed? Do What You Love,” I must admit, I audibly scoffed: Really? We need a study to tell us this? At the time, it seemed rather obvious and mostly common sense that doing what you love would lead to feelings of happiness (or if not happiness, a lessening of depression). I reached out to the lead author on the study, Dr. David Richards of the University of Exeter, and proceeded to pose question after skeptical question. Dr. Richards patiently and thoroughly answered each one. I was most curious about how he would respond to one question in particular:

 

Some might say that it’s not surprising that doing what you enjoy can ward off depression. Why do we need a study to tell us this?

 

“If it were that obvious, then why would we have got to the point of recommending complex therapies like CBT [cognitive behavioral therapy] which focus on changing the way we think? Or why wouldn’t people have figured it out for themselves? … BA is not just doing what you enjoy. It is increasing the opportunities for positive reinforcement and reducing avoidance caused by aversive experiences. Depression is self-reinforcing and before you know it you can find yourself in a position where you cannot see a way out, just by having started on what at the time seemed like a sensible path of avoiding things you don’t like. Although there is an element of the common sense to BA that you suggest, in actual fact people often get stuck and what BA does is help them make some important connections between activity and mood which then leads to a personalised programme of re-activation …

 

As I read his email, my emotions ranged from incredulous to enlightened.  I mulled over his words in the following days. Perhaps, like most who suffer from depression, I want to believe that I am actively doing what I can to wriggle my way out of its clutches. Especially since it often takes an inordinate amount of effort and cognitive calisthenics for me to admit to myself (or anyone else) that I need help in the first place. I’ve painstakingly evaluated my thoughts and actions with a therapist and I know in great detail why I’m depressed. I’ve finally filled that antidepressant prescription, that old, familiar frenemy I hate to get in touch with again, after so many independent years. These actions should be enough to cure me. And yet, each morning, as I wash down my pill, vicious thoughts gnaw into me for being weak, followed by the washing over of a listless acceptance in the belief that I am broken, followed by the eking seepage of a meek hope. That tiny bit of hope – that these tyrannical thoughts will dissipate and I’ll finally be free – carries me through the day. The daily ritual of self-flagellation even (especially) for seeking help is simply exhausting. So, maybe there is something more I could be doing to help myself.

 

Dr. Richards went on to write, “Western tradition often stresses that if we are ‘ill’ we must cure the sickness inside us before taking our place in the world again. What BA does is tell people that they do not need to do this. So although you might think that is common sense, you would be surprised at how many people are applying a ‘fix me first’ principle and are surprised by the BA rationale …”

 

Behavioral activation therapy highlights a subtle, yet significant, shift in how treatment for depression is viewed, in general. A common analogy used in describing this type of therapy is that it works from the “outside-in” rather than the “inside-out.” That is, if you’re depressed you don’t wait to feel better and then participate in fulfilling activities (a common and somewhat intuitive strategy). Rather, the participation in meaningful work will alter your outlook and mitigate the depression. This, I could relate to.

 

I could recall myriad examples of times when I knew that sitting on the couch and binge-watching bad TV or going to bed at 7 pm was not going to lead to fulfillment of any kind, and much more likely just make me feel worse about myself, but I did it anyway. Why? Likely a strange dichotomy of wanting to make myself feel better from a quick-fix of escapism coupled with a twisted hatred of myself – I couldn’t possibly excel at anything other than existing as a gluttonous zombie, so why bother? And then, of course, there is not wanting to be a burden to others or to bring them down. Practicing self-imposed isolation in order to avoid becoming the archetypal “Debbie Downer” feels necessary to preserve relationships and save face.

 

But, clearly, this approach doesn’t work for most. It certainly didn’t for me.

 

The Exeter study was a well-controlled, randomized analysis of over 400 men and women who either received CBT or BA therapy. One year after treatment, both groups reported at least a 50% reduction in symptoms and were equally likely to experience remission. Both groups also contained some participants already taking antidepressants (ADs).

 

I asked Dr. Richards if he thought that being on medication could make someone more receptive to the therapy. He did not believe so, “We stratified the randomisation to ensure both groups had the same likelihood of being on ADs. The key thing is that for most of them, the drugs had not worked, evidenced by the fact that they had been on them for a considerable while before starting BA. We chose this because this is the reality of clinical services – psychological therapists have to work with patients who are on tablets as well as undergoing therapy. It’s the real world.”

 

The real world is replete with people suffering from depression (approximately 350 million worldwide), many of whom do not have access to adequate treatment. According to the study, BA therapy is a more cost-effective option (about 20% less expensive than CBT), as treatment can be delivered by less specialized health workers. This is important. Wide-scale treatment options are critical, especially in low-income countries where the treatment gap can be as much as 80–90%.

 

When I first read about BA, I mistakenly thought the goal was to do what makes you happy. But this is not the case. I asked Dr. Richards about this: “It’s not at all about making you happy. It’s about the function of behaviour in the short- and long-term. People learn to see the connection between activity and mood and choose activities where their experience is that this will be a more positive experience – achieving things, reducing avoidance.”

 

When asked why he believes BA therapy works, Dr. Richards responded, “It is because what we do has a profound connection with how we feel. Experiencing this connection is the core.”

 

I think I’ll be adding aspects of BA therapy to my current repertoire. As much as I sometimes want to avoid others, I’ll make the extra (albeit sometimes painful) effort to socialize with friends and to engage in tasks that “rational me” knows will lead to fulfillment (even if “depressed me” fights it). It will be a slow process, but no better time than the new year to forge new habits, new behaviors, and hopefully, resurrect a happier version of myself.

Happy Science: Turn That Frown Upside Down

 

By Lori Bystrom, PhD

Although 2016 may have been rough for some, most of us are still optimistic about 2017. We all want to start the New Year happy. Fortunately, more and more businesses, institutions, and even countries and cities are focusing on happiness. Companies, such as Happy City in Bristol, England, aim to keep people content. The Center for Health and Happiness at Harvard is gaining a lot of interest (see The Atlantic article). Many countries and cities are being ranked based on happiness (check out the happy planet index). And just look at the United Arab Emirates, which appointed its first “minister for happiness” early in 2016. All of this sounds wonderful and promising, but what about the happiness of scientists?

 

I am not sure that many people would equate “science” with “happiness.” As an academic scientist, I recall happy moments of scientific discovery, but those are memorable partly because they are rare. I particularly recall one glorious moment during the last month of a project, after many failures and nearly passing out, when I was finally able to get publishable results. Oh, how long I waited for that happy day! More often, however, the scientific work environment can be tough, especially in academia, with little positive feedback, low pay, and competitive, high-pressure work. All of this is compounded by the constant trials and tribulations of everyday science, which can take a toll on the mental health of scientists.

 

It may seem grim to view scientific work as so difficult, but this does not stop many scientists from loving science. In fact, I think science would probably be very boring if amazing results were easy to achieve all the time (well, it would still be a little bit exciting). I think many scientists would agree that there is something appealing about the challenge of discovering and exploring the unknown in order to reach that happy eureka moment. That said, scientific research does not need to be as depressing as it often is; there is much that could be done to make scientists less unhappy.

 

So what can be done to make or keep scientists more content with their job? More money is always nice, of course, but this is not always the answer, nor is it always feasible. There are, however, other approaches that may improve work conditions for scientists. Here are a few ideas for promoting happiness in the lab, especially for academic scientists.

Listen to all; respect all

This may seem obvious, but it was often a major problem in the labs where I have worked. It is always good to listen to what everyone has to say about a project, yet I have witnessed people at all levels of academic institutions fail to encourage this kind of healthy discourse. Everyone should have a voice. Not only that, but scientific discovery comes in many shapes and forms. Even a young scientist may see something that a more experienced scientist might miss because they are too lost in the details. Different perspectives may help shed light on problems that initially seem too complex to solve.

 

Excluding people from intellectual discussion hinders the creative flow of research and leads to unhappy scientists. Most researchers would like to provide some feedback about projects in the lab. Scientists should not be treated like robots; this only makes them angry. And once a few colleagues are upset, they are liable to take it out on everyone else, leaving the lab infested with unhappy scientists.

 

Define and communicate expectations

All mentors should have their students and/or employees define what expectations they have of each other at the beginning of a project. These expectations should be revisited over time, as they will likely change. You may not see eye to eye with your mentor, but they are not mind readers either, which is why expectations on both sides should be reviewed. And if you are not a good match, it is better to find out early.

 

In addition to short-term goals, both mentor and mentee should also define their long-term goals. Not everyone wants to go to medical school, be a PI/advisor, or stay in the lab, and therefore scientists should work on projects that are relevant to their long-term and short-term interests. Furthermore, if you need advice you should ask, and if someone asks for advice, you should respond. No one has a crystal ball and so dead silence does not help anyone. Ultimately, good communication prevents miscommunication and an unhappy lab environment.

 

Aim to achieve little goals

It is not reasonable for everyone in a lab to expect to be first author on a paper in a high-ranking journal after a short stint in research (I have had many summer students expect this!). For this reason, I think it is important for scientists to aim for small goals, especially at the beginning of a project or career. By starting out small, you can build a strong foundation for a big project. For students who are new to a particular field of science, or to science at all, it may be good to have them work on projects already in progress by developing small, relevant side projects, and to provide them small incentives (e.g., their name on a poster or something they can put on their CV). This way there is less confusion in the lab and the outcome of the project is more likely to keep everyone happy.

 

More senior scientists can also benefit from taking on smaller projects within or outside of their own research. This can help because many scientists may not see the light at the end of the tunnel when their project is not working as planned. In such an eventuality, having smaller projects on the side allows them to take a break from their main project by using their expertise in small doses to help other projects, especially those close to being finished. This, in turn, may help them better visualize their problem, keep their publication record up, and boost their morale.

 

Keep everything organized and transparent

I think organization is crucial in a lab. This pertains not only to cleaning the lab space, decluttering lab supplies and maintaining instruments, but also to finances. It would be nice if someone at the institution other than the PI had to deal with this; unfortunately, that is not usually the case. It is a lot to ask that one person manage research, lab drama, and also lab finances, and I find that academic scientists (myself included) are not trained to do this very well. Therefore, it may be good if everyone in the lab sat down and discussed what is needed for various projects in order to make sure the money is well spent. If financial conditions are becoming a problem in the lab — as they often are, even in the most prosperous labs — it may also be good to have everyone write a budget for their project. Transparency about the financial situation of the lab also helps people understand the state of the lab (and perhaps the mood of their PI/boss), and may encourage scientists to think twice about paying extra for something unnecessary (do we really need pink pipettes?). A lab that is organized and financially transparent prevents unnecessary stress and avoids a lot of unnecessary resentment.

 

Don’t forget to socialize… and take a break!

I know that I could not have survived my lab experiences without the support of other fellow scientists. I found that lunchtime was something I looked forward to every day because not only could I eat (nom nom), but I could vent about experiment problems, laugh, or learn from other colleagues. I think all mentors should encourage this and celebrate any scientific achievement with some kind of social event, even if it only occurs during lab meeting. A friendly and social environment makes for happier scientists.

 

Additionally, overworking scientists leads to less efficient and less productive work. When a big deadline is approaching, it might be good for scientists to take a few hours off to recompose themselves; this might be a good time to take a walk or eat a nice dinner. If there is no time for that, it might be good to take a longer vacation after the deadline has passed, so you can come back refreshed and ready to tackle more challenging moments in science. I know I have denied myself vacations many times because I thought that if I just kept working I would get the data I needed sooner. Unfortunately, this often led to less productive work. This is why it might be good to ensure that all researchers take vacations, especially after stressful periods.

 

Perhaps in the future, there will be organizations to help manage the happiness of scientists, although I am not holding my breath for something like this to appear anytime soon. Maybe the first step, however, is acknowledging that there is a lot of unnecessary unhappiness in the lab and that we should try to do something about it. The future of science will be better if we keep scientists smiling 🙂