Spooky Science

Sally Burn

Happy Halloween! It’s been a spooky old few months for science. Here is a selection of the creepiest papers to rise recently from the publishing crypt:

 

“Dracula’s children: Molecular evolution of vampire bat venom”…

… This is the fantastic title of a ghoulish offering from the Venom Evolution Lab at the University of Queensland. Vampire bat venom is secreted by the submaxillary gland of blood-sucking bats. It contains toxins which interfere with the prey’s normal hemostatic response to injury, resulting in prolonged bleeding from the attack wound and providing the bat with an epic meal. Researchers in Bryan Fry’s lab set out to characterize the components of vampire bat venom and understand their molecular evolutionary history. They isolated novel isoforms of known venom toxins and also detected previously unknown peptides, many of which showed homology to proteins involved in blood clotting and vasodilation. The researchers found that mutation of the venom proteins’ surface chemistry may facilitate evasion of the host’s immune response. They also uncovered new molecular information about the appropriately named Draculin, a major component of vampire bat venom. Fry’s lab isolated a large fragment of Draculin’s amino acid sequence and discovered that it produces a mutated form of the lactotransferrin scaffold. These findings shed light on the molecular evolution of vampire bat venom and may aid in the search for novel compounds to use in anti-venom drugs.

 

Prepare to be spooked

Which is the scariest family in the world? The Addams Family? The Munsters? No, it’s the Halloween family of genes of course! Members of this family encode cytochrome P450 enzymes and play roles in insect development and reproduction. The genes have eerie names like phantom, spook, spookier, disembodied, and shade. A recent paper in BMC Molecular Biology reports the cloning and functional characterization of spook in the planthopper insect Sogatella furcifera. The lab of Guo-Qing Li characterized the expression pattern of the gene and showed that loss of expression caused delayed development and lethality. They also found evidence for a role in ecdysteroid hormone synthesis in the order Hemiptera – a known function of spook in other insect orders.

 

Halloween II

It’s definitely that Halloween time of year. Guo-Qing Li’s group, clearly working at full speed in time for Halloween, have now uncovered a role for a second Halloween gene in planthopper ecdysteroid hormone synthesis: shade. Taking a similar approach to that in their first paper, they isolated the gene and characterized its expression pattern. Once more they found that loss of expression resulted in nymphal lethality. This phenotype could be rescued by providing the insects with ecdysteroid hormone, supporting a role for shade in the hormone’s synthesis in Hemiptera. So, kids, when you are out trick-or-treating tonight try to remember what you are really celebrating: the synthesis of insect molting hormones – hurrah!

Lab NIGHTMARES!

Sally Burn

Labs are scary places. We know this from the movies, which paint a pretty vivid picture of what can go down in a lab: the creation of a zombie-inducing rage virus, the rising up of a bioengineered beasty, the lone scientist working late into the night and meeting a grisly demise… Granted, some real-life scientists might argue that the scariest thing is how poorly movie labs are tethered to reality (hello favorite new Tumblr), but they are just being humorless curmudgeons because, as the movies tell us, that is generally what scientists are.

But the truth is labs ARE scary places. Think about it, you are surrounded by an arsenal of dangerous chemicals and equipment. You may be there alone late at night, accompanied only by the rhythmic tick-tick-tick of the incubator, the hum of the freezers, and… wait… what’s that? It’s getting closer… an eerie creaking… closer still… and then it’s usually just the night janitor. But the mind plays tricks when you’re under the stress of publish or perish. What if all your cell lines die? What if your data drive goes up in flames? These are real lab horrors.

So now, in the spirit of Halloween, we proudly present Scizzle’s compendium of Lab Nightmares. I polled leading scientists about their experiences (this data set is known as “Facebook friends”). All names have been changed to protect identity and/or save embarrassment. Also, readers who have recently eaten may wish to avoid the final story.

 

Losing your data or samples…

This is a lab nightmare that would reduce me to a cold sweat during the last months of my PhD. It’s easy to stop data loss – just back up. But what if the backup catches fire (as happened last week to a friend in the final throes of her PhD)? Well, that’s fine, because obviously you backed it up to a third drive which nestles cozily in your backpack at all times. But hang on. What if you get hit by a truck? Like a really massive truck, so big it vaporizes your hard disk on impact? Now a sane human would obviously point out that if the impact was that major you’re probably dead anyway. NON-SCIENTIST FOOL! This does not matter. Only data matters! I’m in the last month of my PhD! I’ve eaten only microwave meals for six weeks!! I know not what sleep is.

Luckily, our stressed-out data worries usually turn out to be unfounded. But sometimes it does all go wrong. Pity the poor grad student who one day found…

I Can Read Your Mind!

Celine Cammarata

Can neuroscientists read your mind?

That depends on what exactly you consider to be mind reading.  Recent years have brought notable advances in methods and application of decoding human neural activity, as described recently in Nature.  This, in turn, has allowed researchers to make strong predictions about what a subject is seeing or thinking based on brain scans alone.

 

The concept is elegantly simple: by feeding a computer algorithm data on neural activity as well as information about what the subject is experiencing at the time, the computer can gradually learn how activity patterns correlate with the outside world.  After being sufficiently trained in this way, the algorithm can work in the other direction, using neural activity to predict what stimuli are being experienced.  In humans, most work has focused on vision, and the primary means of surveying neural activity is fMRI.  Participants are shown pictures or video clips while the fMRI detects overall patterns of activity in the brain, and both streams of information are routed to a computer.  With time and training, algorithms can decode fMRI data to determine what a participant is viewing.  Researchers have used the same principle in attempts to decode dreams and intentions, although the difficulty of isolating and controlling such stimuli, as well as their subjectivity, makes them more daunting to decode than vision.
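
For the curious, the train-then-invert idea can be sketched in a few lines of code. The example below is purely hypothetical: it simulates "voxel" activity for two made-up stimulus classes, learns each class's average pattern from labeled trials, and then decodes new trials by finding the nearest learned pattern. Real fMRI decoding uses far richer models and heavy preprocessing; this is only the skeleton of the concept.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 50  # a toy stand-in for fMRI voxels

# Each (invented) stimulus class evokes a characteristic activity pattern.
templates = {"face": rng.normal(0, 1, n_voxels),
             "house": rng.normal(0, 1, n_voxels)}

def simulate_trial(stimulus):
    # A single "scan": the class pattern plus measurement noise.
    return templates[stimulus] + rng.normal(0, 0.5, n_voxels)

# Training phase: average labeled trials to estimate each class's pattern.
centroids = {s: np.mean([simulate_trial(s) for _ in range(20)], axis=0)
             for s in templates}

# Decoding phase: predict the stimulus from activity alone,
# by picking the closest learned centroid.
def decode(activity):
    return min(centroids, key=lambda s: np.linalg.norm(activity - centroids[s]))

correct = sum(decode(simulate_trial(s)) == s
              for s in templates for _ in range(50))
accuracy = correct / 100
print(accuracy)
```

With clean simulated data the decoder is nearly perfect; the hard part in practice is that real brains are noisier, higher-dimensional, and different from person to person.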

 

The primary application of decoding techniques, though, is not to pry into our inner lives but rather to glean insight into how the brain codes information in the first place.  Simply trying to deduce the coding scheme through observing neural behavior is a staggering task, so instead many investigators work by comparing theoretical models to empirical evidence based on these decoding methods.  For instance, work in other animals suggested that humans might have regions of visual cortex that were partial to shapes with edges, but these had been hard to spot with traditional imaging techniques.  Decoding patterns of activity triggered by different visual stimuli revealed that edges do indeed have a distinct signature on the visual response.

 

It’s true that decoding neural activity could in theory also help reveal things ranging from consumers’ unspoken preferences to whether someone is lying, but employing the technique in these ways faces notable challenges.  Current methods rely on a training period to learn the associations between stimuli and neural activity for an individual before sound predictions can be made; getting around this would require a much greater ability to generalize across human brains than is currently possible.  Furthermore, the necessity of a large, noisy, and extremely expensive fMRI scanner makes decoding as it’s known today largely impractical for most commercial or law enforcement applications.

 

Nonetheless, the possibility remains for the future, and not surprisingly this possibility has raised numerous legal and ethical questions.  Some argue that one’s inner thoughts are too private to be so probed; some believe that intentions, memories and the like are no different from other types of information; and many point out that we might not know how to correctly interpret the workings of someone’s mind even if we could decode it.

 

So, can neuroscientists read your mind?  For now, let’s just say, “not yet.”

 

Want to be the first to know when mind reading is for real? Create your feed, it’s free

A Peek Inside Mouse Development

Sally Burn

The humble laboratory mouse is one of the greatest tools researchers have to model human development and disease. A common approach is to create a transgenic model of a human disorder, often by “knocking out” a gene in mice and then examining the effects. Transgenic mouse models are of particular use for characterizing disorders that disrupt embryonic development. When a disease progresses through childhood or adult life we can gather information about its pathogenesis, even taking samples from the patient for research along the way. However, genetic diseases that disrupt embryonic development often result in death during gestation or at birth, limiting opportunities to observe how the disease manifested. By examining embryonic development in mouse models we can get an idea of the timeline of events involved.

Unfortunately mouse embryonic development can usually only be examined in a fairly spasmodic manner. The roughly twenty days of mouse gestation cannot be observed in a single fluid motion; instead researchers must euthanize the mothers and remove the embryos for examination at set points throughout gestation. The embryos cannot survive outside the mother and so all that can be achieved is a snapshot of that moment in development. Imagine that instead of watching a movie all you get is a series of film stills, which you must piece together to try to get the full story, potentially missing out on key plot twists. Now, in an effort to address this problem, researchers are turning to a non-invasive imaging technique used routinely in humans: high frequency ultrasound.

In utero ultrasounds were first reported in mice nearly twenty years ago but are still not that widely used.

Division Doppelgangers

Alisa Moskaleva

 

Cyclin A is a confounded nuisance for cell biologists. Noticed serendipitously in 1982 in sea urchins and clams in an experiment that earned a share of the 2001 Nobel Prize in Physiology or Medicine, cyclin A and its doppelganger protein, cyclin B, help cells of all animals grow and divide properly. Cells stockpile both proteins before dividing, use them to control division, and then degrade them after they have served their purpose. If cells are deprived of cyclin A or cyclin B, they can’t divide. If cells have too much of these proteins they start dividing early and get stuck, unable to separate into two new cells. But whereas cyclin B sticks around until the step before the two new cells separate, when the two copies of the cell genome are all set to separate, cyclin A disappears several minutes earlier, when those two copies of the genome are nowhere near ready to split. Why does a responsible regulator like cyclin A leave its post so scandalously early? And why does a cell need cyclin A to regulate division when it has cyclin B present and willing?

Lilian Kabeche and Duane A. Compton begin to answer both of these questions in their October 3 Nature paper. They took a close, microscope-assisted look at what goes on during cell division. The general process of cell division has been known for over a hundred years. Before starting to divide, the cell replicates its contents, including its DNA, so it can pass on a copy to both cells of the new generation. Then, during the prometaphase stage, the cell packs up its DNA really tightly and simultaneously builds up lots of microtubules, which are long fibers of protein that act as miniature ropes and sprout from two opposite sides of the dividing cell. The microtubules attempt to lasso the DNA, so that half of the DNA is attached to microtubules from one end of the cell and the other half is attached to microtubules from the other end of the cell. At this time cyclin A disappears. Then, at a stage called metaphase when the DNA is all lined up in the middle of the cell and properly attached to microtubules, cyclin B disappears. What follows is separation of the two copies of DNA to the two sides of the cell, pulled by microtubules; this is called anaphase. Finally, in telophase the two cells pinch off from each other and resume growing.

Kabeche and Compton focused on how cyclin A may be regulating the way microtubules attach to DNA. The big blob of DNA inside a cell is quite easy to see under a microscope, but it’s much harder to see the thin individual microtubules. Thus, Kabeche and Compton labeled microtubules with a photoactivatable fluorescent protein, a protein that can be made to glow by shining a certain wavelength of light on it. Then they looked for microtubules that approached DNA, shone light on them to make them glow, and assessed whether the glowing microtubules would stay in place or wander off. They observed that in prometaphase microtubules were much more likely to wander off than in metaphase. This makes sense. In metaphase, the DNA is organized and aligned, so it should be easy for microtubules to grab it. In prometaphase, by contrast, the DNA is still unorganized and in the process of aligning, so mistakes in attaching microtubules are likely. Microtubules from both sides of the cell may grab the same copy of DNA. Or microtubules from only one side of the cell may grab both DNA copies. These attachment mistakes, if not corrected, would distribute DNA unevenly or even tear it up, leading to deleterious mutations. So, it’s good that microtubules in prometaphase do not attach stably. When Kabeche and Compton gave cells extra cyclin A, they saw that microtubules would wander much longer than normally even in cells that were in metaphase and had their DNA aligned properly. And when Kabeche and Compton deprived cells of cyclin A, they noticed that the DNA separated unevenly, suggesting that microtubules attached at the wrong place.

All of these observations suggest that cyclin A somehow makes microtubules restless, whereas cyclin B, still present when microtubules make stable attachments, does not. The cell uses cyclin A to control the attachment of microtubules to DNA, and then disposes of it, while relying on cyclin B to control the separation of DNA copies. Given its distinct function, cyclin A disappears not early, but at precisely the right time. If it were to stick around, microtubules would never attach stably to DNA and division would never proceed. On the other hand, if it were not present at all, microtubules would attach too early and in all the wrong places, leading to mistakes in partitioning the genome to the new generation. Of course, there are many vexing questions that remain to be answered, the most obvious of which is: how does cyclin A cause microtubules to no longer attach to DNA? It looks like cyclin A has many more mysteries to reveal.

Want to stay on top of cyclin A and cyclin B and their effect on microtubules? Create your feed, it’s free

New Spin on Pancreatic Cancer Diagnostics

Neeley Remmers

Recently, 15-year-old Jack Andraka made national headlines for the diagnostic assay he created to detect pancreatic cancer. Before I get into discussing the science behind his assay, let me first give Jack a major shout-out for having the courage to pursue his idea in the first place. He gave a presentation of his theory to 200 professors at Johns Hopkins University, a feat that would terrify most of us even after having graduated with our PhDs, let alone at the age of 15. He impressed Dr. Anirban Maitra, a world-renowned pathologist and scientist in pancreatic cancer, and gained an invitation into Dr. Maitra’s lab to work on his idea. Jack was motivated to design a diagnostic assay for pancreatic cancer because his uncle died from the disease.

For those of you who don’t know the field, there is currently no effective way to diagnose pancreatic cancer, especially in its earliest stages. There are a number of reasons for this: clinical symptoms do not present until the disease has progressed to metastasis, at which point it is very hard to treat; most symptoms are some combination of abdominal pain, back pain, jaundice, and weight loss, which can be caused by any number of diseases; and the anatomical position of the pancreas makes it very hard to image, so standard imaging techniques cannot be used to screen for early lesions as they can with breast cancer. As you can see, there is a huge need for a blood-based or other bodily-fluid-based screening test to adequately diagnose pancreatic cancer. Many scientists and physicians, myself included, have been diligently working on this issue for a number of years and have generated multiple platforms for early diagnosis, including Dr. Brian Haab’s antibody-lectin arrays (most recently Cao Z et al., Mol Cell Proteomics 2013), Dr. Clausen and Dr. Blixt’s glycopeptide arrays to detect auto-antibodies (Pedersen JW et al., Int J Cancer 2011; Blixt O et al., J Proteome Res 2010), and antibodies conjugated to Qdots, sphero beads, nanotubes, and other fluorescent tags. The reasoning behind conjugating the antibody that recognizes your biomarker of choice to a fluorescent tag is that fluorescence provides greater sensitivity than colorimetric assays, which rely on the enzymatic cleavage of substrates such as DAB or ABTS. All these platforms are innovative and effective in their own right, but the real trick to designing an efficacious diagnostic assay is choosing the right biomarker to detect. This is where things get messy in the field of pancreatic cancer diagnostics.

Since very few patients are diagnosed in the early stages of pancreatic cancer, there is a very limited supply of samples from the early lesions, which are called PanINs. In fact, a large portion of the PanINs actually studied were pulled out of tumors from patients that had advanced disease, so you have to be really careful with conclusions drawn from these “PanINs” since we don’t know if they behave the same as true PanINs that arise prior to malignancy. Therefore, at this point in time, we are limited to studying the vast number of secreted proteins that can be found in blood, plasma, or urine of advanced pancreatic cancer patients. There are a limited number of studies (I know of 2 done in the UK) where blood samples were collected from patients over 20+ years in which a portion of these patients developed pancreatic cancer, and these samples are available for use in retrospective studies to see if the protein chosen to be used as a biomarker for pancreatic cancer is found in these patients early on when they would have presumably had early-stage disease. As you can see, there are a lot of challenges in this field that can make it rather daunting, and more often than not the biomarkers chosen to be used for diagnosis end up failing.

What did Jack Andraka do, other than being a 15-year-old studying this challenging field, to make such a splash in the headlines? The platform Jack designed is unique in that he attached his nanotube-conjugated antibodies to nothing more than filter paper, instead of a more expensive spotted plate, making his assay the cheapest one yet. The biomarker he has chosen to detect in patients’ bodily fluids is mesothelin. Mesothelin has been investigated as a potential biomarker of pancreatic cancer since at least 2004, but has yet to be shown to be truly better than all the other potential pancreatic cancer biomarkers. Jack’s test is still in preliminary stages, so we have yet to see how efficacious it really is at diagnosing pancreatic cancer early on. Additionally, using nanotubes conjugated to antibodies is not new either. In fact, a student in one of our collaborating labs at my university was doing the exact same thing five years ago; however, because we did not have an instrument on campus that would allow him to detect the nanotubes, he had to drop the project. The biggest flaw in Jack’s design is that it relies entirely on the detection of a single biomarker. It is pretty well-known and accepted in the field that in order to have a highly specific, effective diagnostic test you need to assay for a panel of established biomarkers that can distinguish pancreatic cancer from other benign and malignant diseases. Thus, he may need to go back to the drawing board to determine which panel of biomarkers performs best in detecting pancreatic cancer. However, I do think his platform shows promise and may yet turn into a clinically useful early diagnostic assay.

 

Want to stay on top of pancreatic cancer and mesothelin? Create your feed, it’s free

Cleaning Your Body When You Sleep

Celine Cammarata

Sleep is a great mystery for scientists.  Nearly all living things do it, and sleep deprivation quickly leads to cognitive deficits, health problems, and death, so we can safely assume that sleep is important.  But for what exactly, no one is sure.  This week, a new paper in Science has made a splash by showing compelling evidence that sleep plays a key role in washing waste products from the brain, leaving it clean and refreshed for a new day of use.

 

Waste products are a natural part of life; all cellular processes produce waste, and being particularly busy cells, neurons tend to churn out a lot.  But unlike most parts of the body, where the lymphatic system takes care of clearing metabolic waste, in the brain proteins are washed out from the space surrounding cells via the exchange of clean cerebrospinal fluid (CSF) from the ventricular system in and around the brain with interstitial fluid containing waste products.

 

In the current study, the authors examined how readily labelled CSF traveled around the brains of mice in various states, and found that CSF influx to the brain was about 95% lower when animals were awake than during sleep.  Similar comparatively high CSF flow was seen when mice were anesthetized.

 

The investigators hypothesized that the observed difference in CSF flow may be due to differences in the interstitial space when animals were asleep or awake; reduced space between cells in awake mice could impede the movement of CSF.  When they tested this, they found that indeed, interstitial space was significantly greater during sleep or anesthesia, making an easier route for CSF.

 

Better flow of CSF means solutes are more easily flushed out of the area surrounding cells.  The authors demonstrated that β-amyloid, a major waste product in the brain, was cleared much more efficiently in sleeping and anesthetized mice than in awake animals, as was an inert test tracer, 14C-inulin.

 

The finding that anesthesia acts similarly to natural sleep suggests that it is the animal’s state, rather than circadian rhythms, that dictates the solute clearing properties of sleep, possibly via changes in cell volume that would in turn affect interstitial space.  Because they’re known to be important in arousal, adrenergic neurotransmitters are a good candidate to signal such changes.  The authors found that, consistent with this idea, inhibiting adrenergic neurotransmitters in awake animals improved CSF flow.

 

These findings suggest that a key role of sleep, and the reason sleep is so critical to brain function, may have to do with clearing waste products and restoring the brain to a healthy, clean state for the next day’s use.

Dry Science: The Good, The Bad, and The Possibilities

Celine Cammarata

Recent years have seen a boom in so-called “dry lab” research, centered around mining large data sets to draw new conclusions, Robert Service reports in Science.  The movement is fueled in part by the increased openness of information; while research consortia used to hold rights to many such data banks, new efforts to make them freely available have unleashed a wealth of material for “dry” scientists.  So what are some of the pros and cons of this growing branch of research?

 

Computer-based research of large, publicly available data can be a powerful source of information, leading to new insights on disease, drug treatments, plant genetics, and more.  One of the most commonly encountered methods is the Genome-Wide Association Study, or GWAS, whereby researchers look for genetic traces of disease.  Such research is strengthened by the ability to collect huge amounts of data, increasing n values without having to recruit new participants.  Another perk of dry research is the increased mobility for researchers to hop among different areas of study; with no investment in maintaining animals or lab equipment specialized to any single line of investigation, researchers can study cancer genetics one year and bowel syndromes the next with little difficulty.
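
To make the GWAS idea a bit more concrete, here is a toy sketch (all numbers invented) of the statistical core: for a single genetic variant, compare how often the risk allele appears in cases versus controls with a chi-square test. A real pipeline repeats this across millions of variants and adds corrections for population structure and multiple testing, none of which is shown here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cases, n_controls = 1000, 1000

def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 table of counts."""
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row * col / table.sum()  # counts expected if no association
    return ((table - expected) ** 2 / expected).sum()

# Simulate allele counts at one variant: each person carries two alleles,
# and the risk allele is (by construction) more common in cases (30%)
# than in controls (20%).
case_alleles = rng.binomial(2 * n_cases, 0.30)
control_alleles = rng.binomial(2 * n_controls, 0.20)

table = [[case_alleles, 2 * n_cases - case_alleles],
         [control_alleles, 2 * n_controls - control_alleles]]
stat = chi_square_2x2(table)
print(stat)  # large values suggest the variant is associated with disease
```

The deceptive complexity the article mentions lives outside this snippet: with a million variants tested, even pure noise produces many large statistics, which is why GWAS results demand stringent significance thresholds.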

 

But getting the large amounts of data that fuel dry research can be more complicated than it seems.  Some investigators are reluctant to make their hard-earned numbers publicly available; others lack the time and manpower to do so.  And slight variations in how initial studies are conducted can make it challenging to pool data from different sources.  Furthermore, GWAS and similar experiments themselves are deceptively complicated.  Most diseases involve complicated combinations of genes turned on and off, making it hard to uncover genetic fingerprints of illness, and comparing the genomes of many subjects frequently leads to false signals.  For dry research to continue growing successfully, significant advances in programming and in mathematical techniques to analyze data will be required.  Finally, making data freely open for investigators to delve into raises concerns about subject confidentiality.

 

Finally, the increase in data availability raises intriguing questions about the future of research.  Currently, dry research requires complex programs and hefty computer power, but with computer science getting ever better, will future generations need a lab to do science?  Will anyone with a decent computer and some scientific know-how be able to contribute meaningfully to the research community?  And if so, what will this mean for the traditional university-based research world?  Only time will tell.

Bees, Teas and Rational Design

Chris Spencer

BEES. You will have read me waxing lyrical about bees before, and I’m going to do it again. We all know the conventional benefits of bees (honey, pollination), but I will admit they come with one sharp and aggressive downside – the stinger.

However, I’m here to make the case that the stinger isn’t all bad. Okay, the stinger itself is pretty bad, particularly if you are allergic, but the venom itself seems to have a few properties that we can take advantage of. The major active component of apitoxin (bee venom) is melittin, a 26 amino acid peptide. This peptide has been implicated as a potent antimicrobial – it has been shown to inhibit the Lyme disease pathogen Borrelia and also the opportunistic fungal pathogen Candida. In February this year, nanoparticles containing melittin were shown to destroy the HIV envelope, killing the virus.

A paper published in July this year by Shin et al. gives a perspective on yet another property of melittin: it is anti-angiogenic.

Angiogenesis is the process whereby new blood vessels grow from existing vessels. It’s a normal physiological process, and one which is very important. Certain cancers arise because of the misregulation of this (and other) processes. One pathway in angiogenesis involves the regulation of vascular endothelial growth factor (VEGF) – a signal protein which stimulates the growth of blood vessels. VEGF is itself regulated by HIF-1 – a transcription factor. Improper regulation of VEGF is often what allows cancers to metastasize. If, therefore, there were an inhibitor that we could use to neutralize VEGF, there would be an opportunity to curb the invasiveness of cancer in patients before it has a chance to develop.

Cue melittin. Shin et al. have shown that melittin acts to inhibit HIF-1, and furthermore, melittin was shown to have a potent anti-angiogenic effect in tumours.

What can be learned from this then? I’m not suggesting for one second that the more research that we do, the more we’ll come to realize that melittin is actually some sort of panacea that will cure everything forever. But one can certainly say that diseases that are currently seen as incurable will one day be very treatable using compounds that we have not yet discovered. One such example is a study by Somsak et al. that suggests that green tea helps with a malaria infection (one major cause of morbidity in malaria infection is renal failure, however green tea extract provides some protection against kidney damage).

Perhaps soon enough, the days of having to screen compound after compound to see if they have an inhibitory effect on a disease effector will be behind us. With the field of computer aided ligand design coming on in leaps and bounds in recent years, it might not be foolish to hope that, with knowledge of a disease pathway, it might be possible to simply design and manufacture a specific inhibitor that can be used to stop the progression of an illness. It’s a future which sounds incredibly appealing, and maybe it’s not too far off.

But then no one would be singing the praises of bee stings.

Want to keep up with melittin? Create your feed for bee sting today!

Leafing through the Literature

Thalyana Smith-Vikos

Highlighting recently published articles in molecular biology, genetics, and other hot topics

Can I get some of your gut bacteria?

While there have been many reports popping up in the literature that demonstrate a connection between gut microbiome and diet, Ridaura et al. have elegantly shown how the mammalian microbiome interacts with diet in a specific yet alterable manner that can be transmitted across individuals. The researchers transplanted fecal microbiota from adult female human twins (one obese, one lean) into mice fed diets with varying levels of saturated fats, fruits and vegetables. Body and fat mass did depend on fecal bacterial composition. Strikingly, mice that had been given the obese twin’s microbiota did not develop an increase in body mass or obesity-related phenotypes when housed next to mice that had been given the lean twin’s microbiota. The researchers saw that, for certain diets, specific bacteria were transmitted from the lean mouse to the obese mouse’s microbiota.

Want to keep up with gut microbiota? Create your feed!

 

In vivo reprogramming

Abad et al. have performed reprogramming of adult cells into induced pluripotent stem cells (iPSCs) in vivo. By activating the transcription factor cocktail of Oct4, Sox2, Klf4 and c-Myc in mice, the researchers observed teratomas forming in multiple organs, and the pluripotency marker NANOG was expressed in the stomach, intestine, pancreas and kidney. Hematopoietic cells were also de-differentiated via bone marrow transplantation. Additionally, transcriptome comparisons showed that the iPSCs generated in vivo were more similar to embryonic stem cells than in vitro iPSCs are. The authors also report that in vivo iPSCs display totipotency features.

Want to keep up with Abad et al.? Create your feed!

 

Connection between pluripotency and embryonic development

Lee and colleagues have discovered that some of the same pluripotency factors (Nanog, Oct4/Pou5f1 and SoxB1) are also required for the transition from maternal to zygotic gene activation in early development. Using zebrafish as a model, the authors identified several hundred genes that are activated during this transition, which is required for gastrulation and for clearance of maternal mRNAs in the zebrafish embryo. In fact, nanog, sox19b and pou5f1 were the most highly translated transcription factor mRNAs prior to this transition, and a triple knockdown prevented embryonic development, as well as the activation of many zygotic genes. One of the genes that failed to activate was miR-430, which the authors had previously shown to be required for the maternal-to-zygotic transition. Thus, Nanog, Oct4 and SoxB1 induce the maternal-to-zygotic transition in part by activating miR-430.

 

A microRNA promotes sugar stability

Pedersen and colleagues report that a C. elegans microRNA, miR-79, targets two factors critical for proteoglycan biosynthesis: a chondroitin synthase and a uridine 5′-diphosphate-sugar transporter. Loss-of-function mir-79 mutants display neurodevelopmental abnormalities due to altered expression of these biosynthetic factors. The researchers discovered that dysregulation of the two miR-79 targets disrupts neuronal migration through the glypican pathway, identifying a crucial role for this conserved microRNA in proteoglycan homeostasis.

Struggling to keep up with all the miRs? Create your feed for miR-430 or miR-79.

 

Establishing heterochromatin in Drosophila

It is known that RNAi and the heterochromatin factor HP1 are required for organizing heterochromatin structures and silencing transposons in S. pombe. Gu and Elgin built on this information by studying loss-of-function mutants and shRNA lines of genes of interest in an animal model, Drosophila, during early and late development. The Piwi protein (involved in piRNA function) appeared to be required only in early embryonic stages for silencing chromatin in somatic cells. Loss of Piwi led to decreased HP1a, and the authors concluded that Piwi targets HP1a when heterochromatin structures are first established, but that this targeting does not continue in later cell divisions. HP1a, in contrast, was required both for the primary assembly of heterochromatin structures and for their maintenance during subsequent cell divisions.

 

The glutamate receptor has a role in Alzheimer’s

Um and colleagues conducted a screen of transmembrane postsynaptic density proteins that might couple amyloid-β oligomers (Aβo) bound by cellular prion protein (PrPC) to Fyn kinase, which disrupts synapses and contributes to Alzheimer's disease when activated by Aβo-PrPC. The researchers found that only the metabotropic glutamate receptor mGluR5 allowed Aβo-PrPC to activate intracellular Fyn. They further showed a physical interaction between PrPC and mGluR5, and that Fyn is found in complex with mGluR5. In Xenopus oocytes and neurons, Aβo-PrPC caused an mGluR5-dependent increase in intracellular calcium. Furthermore, the Aβo-PrPC-mGluR5 complex resulted in dendritic spine loss. Pointing to a possible therapeutic, an mGluR5 antagonist given to a mouse model of inherited Alzheimer's disease reversed the loss of synapse density and rescued deficits in learning and memory.

 

Keep playing those video games!

Anguera et al. investigated whether multitasking abilities can be improved in aging individuals, as these skills have become increasingly necessary in today's world. The scientists developed a video game called NeuroRacer to test multitasking performance in individuals aged 20 to 79, and observed an initial decline in this ability with age. However, after playing a version of NeuroRacer in a multitasking training mode, individuals aged 60-85 achieved levels higher than those of 20-year-olds who had not used the training mode, and these gains, along with improvements in cognitive control, attention and memory, persisted 6 months later. The results from playing this video game indicate that the cognitive control system in the aging brain can be improved with relatively simple training.

Want to stay in the game? Create your feeds and stay current with what’s sizzling.