A Spooky Story – How Science Became Science Fiction?

 

By JoEllen McBride, PhD

 

Which came first, science or science fiction? Today it is difficult to tell which has a greater influence on the other. But before the invention of the battery, scientists relied on uncontrollable static discharges to produce electricity and used only the facts in front of them to come up with new scientific ideas. The events surrounding the creation of a chemically produced source of electricity not only transformed the fields of chemistry, physics and biology; they ushered in a new genre of literature known as science fiction – providing a new way to motivate scientists and scientific advances.

 

The tale begins as any good sci-fi story does. In 1780, the effects of static electricity on living creatures intrigued the Italian scientist Luigi Galvani. In order to create a static charge, he had to rub frog skin together. Once a charge built up, he would apply it to the skinned frogs and record the results. One day while skinning frogs, he inadvertently charged a metal scalpel lying close to where he worked. When he touched the now electrically charged scalpel to the sciatic nerve of a dead frog, the frog’s leg kicked! This reanimation led Galvani to postulate that motion in living things is controlled by electricity that flows from the nerves to the muscles.

 

Galvani’s frenemy, the physicist Alessandro Volta, had a different hypothesis. Volta knew that Galvani would hang his frogs up by different types of metal wires. He speculated that the different metallic properties of the wires, combined with the moist environment of the frog’s muscles, transmitted the electricity from the scalpel into the muscles, causing the leg to move. He tested his hypothesis by replacing the frog leg with cloth soaked in brine and recorded an electric current through the attached wires. Volta believed he had disproved ‘galvanism’ and spent much of his life debating its merits with Galvani. Yet today we have an entire field of physiology, electrophysiology, built on the fact that nerves and muscles really do carry electrical signals. So while Galvani may not have discovered ‘animal electricity’ when he reanimated his frog leg, his hypothesis was not far from the truth.

 

When Volta wasn’t bashing galvanism, he spent his time tweaking his frog-leg circuit to produce electricity without friction. For as long as anyone could remember, scientists had to spend time and energy generating static electricity and storing it in Leyden jars, glass jars with metal foil lining their inner and outer surfaces. The size of the jar limited the amount of electricity stored, and the electrical output could not be controlled. Around 1800, Volta discovered that if he interleaved enough zinc, copper and brine-soaked cloth, he could produce a steady and usable amount of electricity without the need for friction or jars. Volta’s invention provided an independent and controllable electric source, and many scientists rushed to replicate his results.

 

In 1800, two British scientists, William Nicholson and Anthony Carlisle, were constructing their own voltaic pile and needed a way to measure the electricity produced. They tried to connect their electroscope to the battery but could not get a reliable connection, so they decided to use water as an intermediary between the electroscope contacts and the battery. But when they hooked up the circuit, the water seemed to vanish!

 

Being scientists, they knew this wasn’t witchcraft. After a few tests they confirmed that the water was not disappearing but being decomposed into oxygen and hydrogen. They had discovered electrolysis. Many scientists, most notably Sir Humphry Davy, would go on to decompose other compounds and discover new elements such as potassium, sodium, calcium and magnesium. Davy would eventually hire Michael Faraday as his apprentice, and Faraday would soon transform the fields of electricity and magnetism through his studies of electric currents.

 

People in intellectual circles were aware of the fascinating scientific findings of Galvani, Volta, Nicholson, Carlisle and Davy. Born around the time that Volta made his first battery, Mary Shelley spent her entire life in the company of intellectuals. She hungered for knowledge at a young age and eventually became a prolific writer. She very likely read Davy’s book Elements of Chemical Philosophy, published in 1812, as her husband owned a copy and they enjoyed studying together.

 

While Shelley was on holiday with her husband during the summer of 1816, their friend Lord Byron proposed that they all write their own ghost stories. Shelley grew anxious as the nights passed and she still could not come up with a story. A few nights later, the group’s discussion turned to what gives beings life, and Shelley suggested that electricity could be used to reanimate a corpse, since ‘galvanism’ had been shown to give dead creatures motion. That very night she lay awake, her mind focused on reanimation and subconsciously fueled by her own scientific knowledge; it’s no wonder her ‘waking dream’ included visions of a monster brought to life by science.

 

In her telling, Dr. Frankenstein did not use electricity to animate his monster; that interpretation first appeared in the 1931 film and nearly every telling after. But the influences of galvanism are clear. Unfortunately for all the Dr. Frankensteins and Frankenweenies out there, electrophysiology tells us that electrical signals are generated and detected in cells, muscles and organs throughout a living body; they are a product of living tissue, not something that can be poured back into dead tissue. This means it would be impossible to reanimate a dead creature with a jolt of electricity.

 

Mary Shelley’s Frankenstein created a new genre of storytelling. Science fiction authors are motivated by recent scientific findings to explore further applications and possibilities. Science fiction stories, in turn, have influenced many young people to pursue careers in scientific fields. So this Halloween, when you’re watching Frankenstein or playing Captain Kirk as you ask your phone what the weather will be like for trick-or-treating, remember it’s all possible because an Italian scientist accidentally electrified some frog legs.

 

 

Can we reprogram adult cells into eggs?

 

By Sophie Balmer, PhD

 

Oogenesis is the process by which female gamete precursors develop into eggs ready for fertilization. Reproducing these key steps in culture constitutes a major advance in developmental biology. Last week, a research group from Japan succeeded in doing exactly that and published their results in the journal Nature: they replicated the entire cycle of oogenesis in vitro, starting from adult skin cells. Upon fertilization of these in vitro eggs and transfer into adult females, they even obtained pups that grew normally to adulthood, providing a new platform for the study of developmental biology.

 

Gamete precursor cells, called primordial germ cells, first appear early during embryonic development. These precursors then migrate to the gonads, where they remodel their genome through the two meiotic divisions to produce either mature oocytes or sperm, depending on the sex of the embryo. For oocyte maturation, these two divisions occur at different times: the first before or shortly after birth and the second at puberty. The second meiotic division is incomplete, and the oocytes remain blocked in metaphase until fertilization by the male gamete. This final event initiates embryonic development, thereby closing the cycle of life.

 

Until last week, only parts of this life cycle could be reproduced in culture. For years, scientists have known how to collect and culture embryos, fertilize them and transfer them into adult females to initiate gestation. This process, called in vitro fertilization (IVF), has been applied successfully to humans and has revolutionized the lives of millions of people with specific infertility issues, allowing them to have babies. However, only a subset of infertility problems can be solved by IVF.

Additionally, in 2012, the same Japanese group recreated another part of female gamete development: Dr. Hayashi and colleagues generated mouse primordial germ cells in vitro that, once transplanted into female mice, recapitulated oogenesis. Both embryonic stem (ES) cells and induced pluripotent stem (iPS) cells were used for this procedure. ES cells can be derived from embryos before their implantation in the uterus, while iPS cells are derived by reprogramming adult cells. Finally, a couple of months ago, another group also reported being able to transform primordial germ cells collected from mouse embryos into mature oocytes.

 

However, replicating the full cycle of oogenesis from pluripotent cell lines in a single procedure is unprecedented. To achieve this, the researchers proceeded in several steps: first, they produced primordial germ cells in vitro, either from skin cells (following their de-differentiation into iPS cells) or directly from ES cells. Second, they produced primary oocytes in a specific in vitro environment called “reconstituted ovaries”. Third, they induced maturation of the oocytes up to their arrest in meiosis II. This process took approximately the same time as it would in the female mouse, and it is impressive how indistinguishable the in vitro oocytes are from their in vivo counterparts. Of course, this culture system also produced a lot of non-viable eggs, and only a few made it through the whole process. For example, during the first step of directed differentiation, over half of the oocytes showed chromosome mispairing during meiosis I, about 10 times more than in vivo. Additionally, only 30% completed meiosis I, as shown by the extrusion of the first polar body. However, analysis of other parameters, such as the methylation pattern of several genes, showed that maternal imprinting was almost complete and that most of the mature oocytes had a normal number of chromosomes. Transcriptional profiling also showed very high similarities between in vitro and in vivo oocytes.

The in vitro oocytes were then used for IVF and transferred into female mice. Amazingly, some of them developed into pups that were viable, grew up to be fertile and had a normal life expectancy without apparent abnormalities. However, the efficiency of this technique is very low, as only 3.5% of the transferred embryos were born (compared to over 60% for routine IVF procedures). Embryos that did not make it to the end of the pregnancy showed delayed development at various stages, highlighting that the culture conditions could probably be improved so that the oocytes give rise to more viable embryos.

Looking at the entire process, the success rate for obtaining eggs ready for transfer is around 7-14%, depending on the starting cell line. Considering how much time these cells spend in culture, this rate seems reasonably good. However, as mentioned above, only a few embryos develop to birth. Nonetheless, this work constitutes a major advance in the field of developmental biology and will allow researchers to look in greater detail at the entire process of oogenesis and fertilization without worrying about the number of animals needed. We can also expect that, as with every protocol, it will be fine-tuned in the near future. It is already very impressive that the protocol led to viable pups from six different cell line populations.
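To get a feel for what those two figures mean together, here is a rough back-of-the-envelope estimate of the overall yield. It simply multiplies the reported rates, which treats them as independent fractions; that is an illustrative simplification, not a number reported in the paper.

```python
# Back-of-the-envelope estimate of overall yield, combining the two rates
# quoted above. Multiplying them assumes the steps are independent, which is
# a simplification for illustration only, not a figure from the paper.

birth_rate = 0.035                    # live pups per transferred embryo (3.5%)

for egg_rate in (0.07, 0.14):         # transfer-ready eggs per starting cell (7-14%)
    overall = egg_rate * birth_rate
    print(f"egg yield {egg_rate:.0%}: roughly {overall:.2%} live pups per starting cell")

# i.e. somewhere around 0.2-0.5% of starting cells end up as live pups,
# which is why "only a few develop to birth".
```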

 

Besides its potential for increasing our knowledge of oogenesis, the impact of such research might reach beyond the scope of developmental biology. Not surprisingly, these results came with their share of concerns that this protocol would soon be used in humans. How amazing would it be if women who cannot use IVF could have babies starting from their own skin cells? Years ago, when IVF was introduced to the world, most people thought that “test-tube” babies were a bad idea. Today, it is used as a routine treatment for infertility problems. However, there is a humongous difference between extracting male and female gametes and engineering them. I do not believe that this protocol will be used on humans any time soon, because it requires too many manipulations that we still have no idea how to control. Nonetheless, in theory, this possibility could be attractive. Also, for the most sceptical among us, one of the major reasons why this protocol cannot be adapted to humans right now is that we cannot generate human “reconstituted ovaries”. This step is key for mouse oocytes to grow in vitro, and it requires collecting gonadal somatic cells from embryos, which is impossible in humans. So, until another research group manages to produce somatic gonadal cells from iPS cells, there is no need to start freaking out 😉

 

 

In the Life of a Cell

An introduction to the cell cycle

 

By Johannes Buheitel, PhD

“Omnis cellula e cellula”. We have all probably heard or read this sentence sometime during college or grad school and no, it’s not NYU’s university motto. This short Latin phrase, popularized by the German physician and biologist Rudolf Virchow, states a simple fact which nevertheless represents a fundamental truth of biology: “All cells come from cells”. It’s so fundamental that we often take for granted that the basis for all of those really interesting little pathways and mechanisms that we study is life itself; and, moreover, that life is not simply “created” from thin air but can actually only derive from other life. Macroscopically, you (and Elton John) might call this “the circle of life”, but microscopically, we’re talking about nothing less than the cell cycle. But what is the cell cycle exactly? What has to happen when, and how does the cell maintain this order of events?

The cell cycle’s main purpose is to generate two identical daughter cells from one mother cell by first duplicating all of the mother cell’s genetic content to obtain two copies of each chromosome (DNA replication), and then carefully distributing those two copies into the newly forming daughter cells (mitosis and cytokinesis). These two major chromosomal events take place during S phase (DNA replication) and M phase (mitosis), which alternate during consecutive cycles and are separated by two “gap” phases (G1 between M and S phase, and G2 between S and M phase; FYI: everything outside M phase is also sometimes called interphase). It goes without saying that the temporal order of events, G1 to S to G2 to M phase, must be maintained at all times; just imagine trying to divide without previously having replicated your DNA! And not only is the order important, but each phase must also be given enough time to faithfully fulfill its purpose. How is this achieved?

If you want to boil it down, there are two main principles that drive the cell cycle: timely expression and degradation of key proteins, and irreversible, switch-like transitions called checkpoints. So let’s try to get an overview of each of these principles.

[Figure: overview of the different phases of the cell cycle: G1 (“gap”), S (“synthesis”), G2 and M (“mitosis”) phases.]

In the early eighties, a scientist called Tim Hunt performed a series of experiments, not knowing that they would turn into a body of work that would ultimately win him a Nobel Prize. For these experiments, he radioactively labeled proteins in sea urchin embryos (yes, you read correctly!) and stumbled across one that exhibited an interesting pattern of abundance over time: it appeared and vanished in a fashion that was not only cyclic but also seemed to be in sync with the embryos’ division cycles. Dr. Hunt had just found the first member of a protein family that later turned out to be one of the main drivers of the cell cycle: the cyclins. What do cyclins do? Cyclins are co-activators of cyclin-dependent kinases, or Cdks, whose job it is to phosphorylate certain target proteins in order to regulate their function in a cell cycle-dependent fashion. Since Cdks are pretty much around all the time, they need the cyclins to tell them when to be active and when not to be. There are a variety of different Cdks, which interact very specifically with various cyclins. For example, cyclin D interacts with Cdk4 and 6 to drive the transition from G1 to S phase, while a complex between cyclin B and Cdk1 is required for mitotic entry. This system allows for enough complexity to explain how the proper length of each phase is assured (slow accumulation of a specific cyclin until the respective Cdk can be fully activated), but also how the correct order of events is maintained: it turns out that the expression of, say, the cyclin assigned to start replication (cyclin E) depends on the activity of the Cdk/cyclin complex of the previous phase (in this example, Cdk4/6-cyclin D) via phosphorylation-dependent regulation of transcription factors.
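To make those two ideas concrete, here is a deliberately toy sketch of the logic in Python: a phase lasts until “its” cyclin has accumulated enough to fully activate the partner Cdk, and the next cyclin is only expressed once the previous complex is active. The cyclin D-Cdk4/6 and cyclin B-Cdk1 pairings come from the text above; the cyclin E partner (Cdk2), the accumulation rate and the activation threshold are illustrative assumptions, not measured values.

```python
# Toy model of cyclin/Cdk-driven phase progression. Phase order is enforced
# because the next cyclin is only "expressed" once the previous complex is
# fully active; phase length emerges from slow cyclin accumulation.
# Rates and thresholds are invented for illustration only.

CYCLE = [
    ("G1", "cyclin D-Cdk4/6"),   # drives the G1 -> S transition (per the text)
    ("S",  "cyclin E-Cdk2"),     # cyclin E starts replication (Cdk2 partner assumed)
    ("G2", "cyclin B-Cdk1"),     # required for mitotic entry (per the text)
]

def run_one_cycle(threshold=1.0, rate=0.2):
    previous_complex_active = True           # upstream growth signals licence entry into G1
    for phase, cdk_complex in CYCLE:
        if not previous_complex_active:
            raise RuntimeError(f"cannot express the {phase} cyclin: previous complex inactive")
        activity, steps = 0.0, 0
        while activity < threshold:          # the phase lasts until the Cdk is fully activated
            activity += rate                 # slow accumulation of the cyclin
            steps += 1
        print(f"{phase}: {cdk_complex} fully active after {steps} (arbitrary) time steps")
        previous_complex_active = True       # its activity licences expression of the next cyclin
    print("M: sister chromatids segregated, cyclin B degraded, back to G1")

run_one_cycle()
```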

The second principle I was talking about is checkpoints. A checkpoint is a way for the cell to take a short breath and check whether things are running smoothly so far and, if they are not, to halt the cell cycle in order to give itself some time to either resolve the issue or, if that’s not working out, throw in the towel (i.e. apoptosis). Researchers keep describing more and more checkpoint-like pathways that react to different stimuli all over the cell cycle, but canonically we distinguish three main ones: the restriction checkpoint at the G1 to S phase transition, the DNA damage checkpoint at the G2 to M phase transition, and the spindle assembly checkpoint (SAC) during mitosis at the transition from metaphase to anaphase. What do these checkpoints look for, or in more technical words, what requirements have to be met for a checkpoint to become satisfied? The restriction checkpoint integrates a variety of internal and external signals, but is ultimately satisfied by proper activation of S phase Cdk complexes (see above). The DNA damage checkpoint’s main function is to give the cell time to correct DNA damage, which naturally occurs during genome replication but can also be introduced chemically or by ionizing radiation. Therefore, it remains unsatisfied as long as the DNA damage kinases ATM and ATR are active. Finally, the SAC governs one of the most intricate processes of the cell cycle: the formation of the mitotic spindle, including proper attachment of each and every chromosome to its microtubules. After a checkpoint becomes satisfied, one or more positive feedback loops spring into action and effectively jump-start the cell cycle again.
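The checkpoint logic boils down to three boolean gates the cycle has to pass. The sketch below mirrors the conditions named above (S-phase Cdk activation, ATM/ATR silenced, every chromosome attached to the spindle); the data structure and function names are illustrative shorthand of my own, not anyone’s actual model.

```python
# Minimal sketch of the three canonical checkpoints as boolean gates.
# Conditions follow the text above; names and structure are illustrative only.

from dataclasses import dataclass

@dataclass
class CellState:
    s_phase_cdk_active: bool       # restriction checkpoint: S-phase Cdk complexes activated?
    atm_atr_active: bool           # DNA damage checkpoint: damage kinases still signalling?
    unattached_chromosomes: int    # SAC: chromosomes not yet attached to spindle microtubules

def restriction_checkpoint(c: CellState) -> bool:        # G1 -> S
    return c.s_phase_cdk_active

def dna_damage_checkpoint(c: CellState) -> bool:          # G2 -> M
    return not c.atm_atr_active                            # satisfied only once ATM/ATR are silent

def spindle_assembly_checkpoint(c: CellState) -> bool:     # metaphase -> anaphase
    return c.unattached_chromosomes == 0

cell = CellState(s_phase_cdk_active=True, atm_atr_active=True, unattached_chromosomes=3)

for name, gate in [("restriction", restriction_checkpoint),
                   ("DNA damage", dna_damage_checkpoint),
                   ("spindle assembly", spindle_assembly_checkpoint)]:
    verdict = "satisfied -> proceed" if gate(cell) else "unsatisfied -> halt (repair or apoptosis)"
    print(f"{name} checkpoint: {verdict}")
```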

As one can imagine, all of these processes must be exquisitely controlled to ensure the mission’s overall success. In future posts, we will explore those mechanisms in more detail and will furthermore discuss how a handful of biochemical failures can turn this wonderful circle of life into a wicked cycle of death.

How a Cancer’s Genome Can Tell Us How to Treat it

By Gesa Junge, PhD

 

Any drug that is approved by the FDA has to have completed a series of clinical trials showing that the drug is safe to use and brings a therapeutic benefit, usually longer responses, better disease control, or fewer toxicities.

Generally, a Phase I study of a potential cancer drug will include fewer than a hundred patients with advanced disease who have no more treatment options, and it often includes many (or all) types of cancer. The focus in Phase I studies is on safety, and on finding the best dose of the drug to use in subsequent trials. Phase II studies involve larger patient groups (around 100 to 300), and the aim is to show that the treatment works and is safe in the target patient population, while Phase III trials can involve thousands of patients across several hospitals (or even countries) and aim to show a clinical benefit compared to existing therapies. Choosing the right patient population to test a drug in can make the difference between a successful and a failed drug. Traditionally, Phase II and III trial populations are based on tumour site (e.g. lung or skin) and/or histology, i.e. the tissue where the cancer originates (e.g. carcinomas are cancers arising from epithelial tissues, while sarcomas develop in connective tissue).

However, as our understanding of cancer biology improves, it is becoming increasingly clear that the molecular basis of a tumour may be more relevant to therapy choice than where in the body it develops. For example, about half of all cutaneous melanoma cases (the most aggressive form of skin cancer) have a mutation in a signalling protein called B-Raf (BRAF V600). B-Raf is usually responsible for transmitting growth signals to cells, but while the normal, unmutated protein does this in a very controlled manner, the mutated version provides a constant growth signal, causing the cell to grow even when it shouldn’t, which leads to the formation of a tumour. A drug that specifically targets and inhibits the mutated version of B-Raf, vemurafenib, was approved for the treatment of skin cancer in 2011, after trials showed it led to longer survival and better response rates compared to the standard therapy at the time. It worked so well that patients in the comparator group were switched to the vemurafenib group halfway through the trial.

While B-Raf V600 mutations are especially common in skin cancer, they also occur in various other cancers, although at much lower percentages (often less than 5%), for example in lung and colorectal cancer. And since inhibition of B-Raf V600 works so well in B-Raf-mutant skin cancer, should it not work just as well in lung or colon cancer with the same mutation? As the incidence of B-Raf V600 mutations is so low in most cancers, it would be difficult to find enough people to conduct a traditional trial and answer this question. However, a recently published study at Sloan Kettering Cancer Centre took a different approach: it included 122 patients with non-melanoma cancers positive for B-Raf V600 and showed that lung cancer patients with B-Raf V600 mutations responded well to vemurafenib, but colorectal cancer patients did not. This suggests that the importance of the mutated B-Raf protein for the survival of the tumour cells is not the same across cancer types, although at this point there is no explanation as to why.

Trials in which the patient population is chosen based on tumour genetics are called basket trials, and they are a great way to study the effect of a certain mutation on various different cancer types, even if only very few cases show this mutation. A major factor here is that DNA sequencing has come a long way and is now relatively cheap and quick to do. While the first genome sequenced as part of the Human Genome Project cost about $2.7bn and took over a decade to complete, a tumour genome can now be sequenced for around $1,000 in a matter of days. This technological advance may make it possible to routinely determine the DNA sequence of a patient’s tumour and assign them to a therapy (or a study) based on this information.

The National Cancer Institute is currently running a trial which aims to evaluate this model of therapy. In the Molecular Analysis for Therapy Choice (MATCH) Trial, patients are assigned to a therapy based on their tumour genome. Initially, only ten treatments were included and the study is still ongoing, but an interim analysis after the 500th patient had been recruited in October 2015 showed that 9% of patients could be assigned to a therapy based on mutations in their tumour, a proportion that is expected to increase as the trial is expanded to include more treatments.

This approach may be especially important for newer types of chemotherapy, which are targeted to a tumour-specific mutation that usually causes a healthy cell to become a cancer cell in the first place, as opposed to older-generation chemotherapeutic drugs, which target rapidly dividing cells and are a lot less selective. Targeted therapies may only work in a smaller number of patients, but they are usually much better tolerated and often more effective, and molecular-based treatment decisions could be a way to allow more patients access to effective therapies faster.

AIDS Attack: Priming an Immune Response to Conquer HIV

By Esther Cooke, PhD

HIV infection remains a prominent pandemic. Last year, an estimated 36.7 million people worldwide were living with HIV, two million of whom were newly infected. The HIV pandemic most severely affects low- and middle-income countries, yet in September 2016 doctors in Saskatchewan, Canada called for a state of emergency over rising HIV rates.

Since the mid-20th century, we have seen vaccination programmes rein in the spread of gnarly diseases such as measles, polio, tetanus, and smallpox, to name but a few. But why is there still no HIV vaccine?

When a pathogen invades a host, the immune system responds by producing antibodies that recognise and bind to a unique set of proteins on the pathogen’s surface, or “envelope”. In this way, the pathogen loses its function and is engulfed by defence cells known as macrophages. Memory B cells, a type of white blood cell, play a pivotal role in mounting a rapid attack upon re-exposure to the infectious agent. The entire process is known as adaptive immunity – a phenomenon which is exploited for vaccine development.

The cornerstone of adaptive immunity is specificity, which can also become its downfall in the face of individualistic intruders, such as HIV. HIV is an evasive target owing to its mutability and highly variable envelope patterns. Memory B cells fail to remember the distinctive, yet equally smug, faces of the HIV particles. This lack of recognition hampers a targeted attack, allowing HIV to nonchalantly dodge bullet after bullet, and maliciously nestle into its host.

For HIV and other diverse viruses, such as influenza, a successful vaccination strategy must elicit a broad immune response. This is no mean feat, but researchers at The Scripps Research Institute (TSRI), La Jolla and their collaborators are getting close.

The team have dubbed their approach to HIV vaccine design a “reductionist” strategy. Central to this strategy are broadly neutralizing antibodies (bnAbs), which feature extensive mutations and can combat a wide range of virus strains and subtypes. These antibodies slowly emerge in a small proportion of HIV-infected individuals. The goal is to steer the immune system in a logical fashion, using sequential “booster” vaccinations to build a repertoire of effective bnAbs.

Having already mapped the best antibody mutations for binding to HIV, Professor Dennis Burton and colleagues at TSRI, as well as collaborators at the International AIDS Vaccine Initiative, set out to prime precursor B cells to produce the desired bnAbs. They did this using an immunogen – a foreign entity capable of inducing an immune response – that targets human germline B cells. The results were published September 8, 2016 in the journal Science.

“To evaluate complex immunogens and immunization strategies, we need iteration – that is, a good deal of trial and error. This is not possible in humans, it would take too long,” says Burton. “One answer is to use mice with human antibody systems.”

The immunogen, donated by Professor William Schief of TSRI, was previously tested in transgenic mice with an elevated frequency of bnAb precursor cells, in which germline targeting was easier than it would be in humans. In their most recent study, the Burton lab experimented in mice with a genetically humanised immune system, developed by Kymab of Cambridge, UK. This proved hugely advantageous, enabling them to study the activation of human B cells in a more robust mouse model. Burton speaks of their success:

“It worked! We could show that the so-called germline-activating immunogen triggered the right sort of antibody response, even though the cells making that kind of response were rare in the mice.”

The precursor B cells represented fewer than one in 60 million of the total B cells in the Kymab mice, yet almost one third of the mice exposed to the immunogen produced the desired activation response. This indicates a remarkably high targeting efficiency and provides an incentive to evaluate the technique in humans. Importantly, even better immunisation outcomes are anticipated in humans owing to a higher precursor cell frequency. Burton adds that clinical trials of precursor activation will most likely begin late next year. If successful, development of the so-called reductionist vaccination strategy could one day spell serious trouble for HIV and other tricky targets alike.