A radiation-free approach to imaging molecules in the brain

Scientists hoping to get a glimpse of molecules that control brain activity have devised a new probe that allows them to image these molecules without using any chemical or radioactive labels.

Currently the gold standard approach to imaging molecules in the brain is to tag them with radioactive probes. However, these probes offer low resolution and they can’t easily be used to watch dynamic events, says Alan Jasanoff, an MIT professor of biological engineering.

Jasanoff and his colleagues have developed new sensors consisting of proteins designed to detect a particular target, which causes them to dilate blood vessels in the immediate area. This produces a change in blood flow that can be imaged with magnetic resonance imaging (MRI) or other imaging techniques.

“This is an idea that enables us to detect molecules that are in the brain at biologically low levels, and to do that with these imaging agents or contrast agents that can ultimately be used in humans,” Jasanoff says. “We can also turn them on and off, and that’s really key to trying to detect dynamic processes in the brain.”

In a paper appearing in the Dec. 2 issue of Nature Communications, Jasanoff and his colleagues used these probes to detect enzymes called proteases, but their ultimate goal is to use them to monitor the activity of neurotransmitters, which act as chemical messengers between brain cells.

The paper’s lead authors are postdoc Mitul Desai and former MIT graduate student Adrian Slusarczyk. Recent MIT graduate Ashley Chapin and postdoc Mariya Barch are also authors of the paper.

Indirect imaging

To make their probes, the researchers modified a naturally occurring peptide called calcitonin gene-related peptide (CGRP), which is active primarily during migraines or inflammation. The researchers engineered the peptides so that they are trapped within a protein cage that keeps them from interacting with blood vessels. When the peptides encounter proteases in the brain, the proteases cut the cages open and the CGRP causes nearby blood vessels to dilate. Imaging this dilation with MRI allows the researchers to determine where the proteases were detected.
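The sensor described above works as a simple signal chain: protease activity frees CGRP, CGRP dilates vessels, and the dilation appears in the MRI signal. The following Python sketch is purely illustrative (no real binding or hemodynamic kinetics are modeled, and all names are invented here):

```python
# Toy sketch of the CGRP sensor logic: CGRP stays caged (inactive)
# until a protease cleaves the cage; freed CGRP dilates nearby
# vessels, which changes the MRI-visible blood flow.
def mri_signal(protease_present: bool) -> str:
    caged = not protease_present        # protease cleaves the protein cage
    cgrp_active = not caged             # freed CGRP can reach receptors
    vessels_dilated = cgrp_active       # CGRP dilates nearby vessels
    return "hyperintense" if vessels_dilated else "baseline"

print(mri_signal(True))   # protease detected -> blood-flow change
print(mri_signal(False))  # no protease -> baseline image
```

The key design point this captures is indirection: the protease is never imaged directly, only its downstream hemodynamic consequence.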

“These are molecules that aren’t visualized directly, but instead produce changes in the body that can then be visualized very effectively by imaging,” Jasanoff says.

Proteases are sometimes used as biomarkers to diagnose diseases such as cancer and Alzheimer’s disease. However, Jasanoff’s lab used them in this study mainly to demonstrate the validity of their approach. Now, they are working on adapting these imaging agents to monitor neurotransmitters, such as dopamine and serotonin, that are critical to cognition and processing emotions.

To do that, the researchers plan to modify the cages surrounding the CGRP so that they can be removed by interaction with a particular neurotransmitter.

“What we want to be able to do is detect levels of neurotransmitter that are 100-fold lower than what we’ve seen so far. We also want to be able to use far less of these molecular imaging agents in organisms. That’s one of the key hurdles to trying to bring this approach into people,” Jasanoff says.

Jeff Bulte, a professor of radiology and radiological science at the Johns Hopkins School of Medicine, described the technique as “original and innovative,” while adding that its safety and long-term physiological effects will require more study.

“It’s interesting that they have designed a reporter without using any kind of metal probe or contrast agent,” says Bulte, who was not involved in the research. “An MRI reporter that works really well is the holy grail in the field of molecular and cellular imaging.”

Tracking genes

Another possible application for this type of imaging is to engineer cells so that the gene for CGRP is turned on at the same time that a gene of interest is turned on. That way, scientists could use the CGRP-induced changes in blood flow to track which cells are expressing the target gene, which could help them determine the roles of those cells and genes in different behaviors. Jasanoff’s team demonstrated the feasibility of this approach by showing that implanted cells expressing CGRP could be recognized by imaging.

“Many behaviors involve turning on genes, and you could use this kind of approach to measure where and when the genes are turned on in different parts of the brain,” Jasanoff says.

His lab is also working on ways to deliver the peptides without injecting them, which would require finding a way to get them to pass through the blood-brain barrier. This barrier separates the brain from circulating blood and prevents large molecules from entering the brain.

The research was funded by the National Institutes of Health BRAIN Initiative, the MIT Simons Center for the Social Brain, and fellowships from the Boehringer Ingelheim Fonds and the Friends of the McGovern Institute.

Researchers create synthetic cells to isolate genetic circuits

Synthetic biology allows scientists to design genetic circuits that can be placed in cells, giving them new functions such as producing drugs or other useful molecules. However, as these circuits become more complex, the genetic components can interfere with each other, making it difficult to achieve more complicated functions.

MIT researchers have now demonstrated that these circuits can be isolated within individual synthetic “cells,” preventing them from disrupting each other. The researchers can also control communication between these cells, allowing for circuits or their products to be combined at specific times.

“It’s a way of having the power of multicomponent genetic cascades, along with the ability to build walls between them so they won’t have cross-talk. They won’t interfere with each other in the way they would if they were all put into a single cell or into a beaker,” says Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT. Boyden is also a member of MIT’s Media Lab and McGovern Institute for Brain Research, and an HHMI-Simons Faculty Scholar.

This approach could allow researchers to design circuits that manufacture complex products or act as sensors that respond to changes in their environment, among other applications.

Boyden is the senior author of a paper describing this technique in the Nov. 14 issue of Nature Chemistry. The paper’s lead authors are former MIT postdoc Kate Adamala, who is now an assistant professor at the University of Minnesota, and former MIT grad student Daniel Martin-Alarcon. Katriona Guthrie-Honea, a former MIT research assistant, is also an author of the paper.

Circuit control

The MIT team encapsulated their genetic circuits in droplets known as liposomes, which have a fatty membrane similar to cell membranes. These synthetic cells are not alive but are equipped with much of the cellular machinery necessary to read DNA and manufacture proteins.

By segregating circuits within their own liposomes, the researchers can create separate circuit subroutines that could not coexist in a single container, but that can run in parallel with each other, communicating in controlled ways. This approach also allows scientists to repurpose the same genetic tools, including genes and transcription factors (proteins that turn genes on or off), to do different tasks within a network.

“If you separate circuits into two different liposomes, you could have one tool doing one job in one liposome, and the same tool doing a different job in the other liposome,” Martin-Alarcon says. “It expands the number of things that you can do with the same building blocks.”

This approach also enables communication between circuits from different types of organisms, such as bacteria and mammals.

As a demonstration, the researchers created a circuit that uses bacterial genetic parts to respond to a molecule known as theophylline, a drug similar to caffeine. When this molecule is present, it triggers another molecule known as doxycycline to leave the liposome and enter another set of liposomes containing a mammalian genetic circuit. In those liposomes, doxycycline activates a genetic cascade that produces luciferase, a protein that generates light.
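The demonstration circuit composes like two modular functions: a bacterial sensor stage whose output (doxycycline release) feeds a mammalian reporter stage (luciferase production). This Python sketch is only a logic-level caricature of that composition; nothing about the actual molecular kinetics is modeled:

```python
# Logic-level sketch of the two-liposome cascade: a bacterial "sensor"
# liposome releases doxycycline when theophylline is present, and a
# mammalian "reporter" liposome produces luciferase (light) when it
# receives doxycycline. Purely illustrative.
def bacterial_sensor(theophylline_present: bool) -> bool:
    """True means doxycycline is released to neighboring liposomes."""
    return theophylline_present

def mammalian_reporter(doxycycline_received: bool) -> bool:
    """True means the luciferase cascade fires (light output)."""
    return doxycycline_received

def hybrid_circuit(theophylline_present: bool) -> bool:
    # The "computer program" stage feeds the "factory" stage.
    return mammalian_reporter(bacterial_sensor(theophylline_present))

print(hybrid_circuit(True))   # light
print(hybrid_circuit(False))  # no light
```

The point of the modularity is that either stage could be swapped independently, just as the liposome walls let each genetic circuit be developed and reused in isolation.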

Using a modified version of this approach, scientists could create circuits that work together to produce biological therapeutics such as antibodies, after sensing a particular molecule emitted by a brain cell or other cell.

“If you think of the bacterial circuit as encoding a computer program, and the mammalian circuit is encoding the factory, you could combine the computer code of the bacterial circuit and the factory of the mammalian circuit into a unique hybrid system,” Boyden says.

The researchers also designed liposomes that can fuse with each other in a controlled way. To do that, they programmed the cells with proteins called SNAREs, which insert themselves into the cell membrane. There, they bind to corresponding SNAREs found on surfaces of other liposomes, causing the synthetic cells to fuse. The timing of this fusion can be controlled to bring together liposomes that produce different molecules. When the cells fuse, these molecules are combined to generate a final product.

More modularity

The researchers believe this approach could be used for nearly any application that synthetic biologists are already working on. It could also allow scientists to pursue potentially useful applications that have been tried before but abandoned because the genetic circuits interfered with each other too much.

“The way that we wrote this paper was not oriented toward just one application,” Boyden says. “The basic question is: Can you make these circuits more modular? If you have everything mishmashed together in the cell, but you find out that the circuits are incompatible or toxic, then putting walls between those reactions and giving them the ability to communicate with each other could be very useful.”

Vincent Noireaux, an associate professor of physics at the University of Minnesota, described the MIT approach as “a rather novel method to learn how biological systems work.”

“Using cell-free expression has several advantages: Technically the work is reduced to cloning (nowadays fast and easy), we can link information processing to biological function like living cells do, and we work in isolation with no other gene expression occurring in the background,” says Noireaux, who was not involved in the research.

Another possible application for this approach is to help scientists explore how the earliest cells may have evolved billions of years ago. By engineering simple circuits into liposomes, researchers could study how cells might have evolved the ability to sense their environment, respond to stimuli, and reproduce.

“This system can be used to model the behavior and properties of the earliest organisms on Earth, as well as help establish the physical boundaries of Earth-type life for the search of life elsewhere in the solar system and beyond,” Adamala says.

A new player in appetite control

MIT neuroscientists have discovered that brain cells called glial cells play a critical role in controlling appetite and feeding behavior. In a study of mice, the researchers found that activating these cells stimulates overeating, and that when the cells are suppressed, appetite is also suppressed.

The findings could offer scientists a new target for developing drugs against obesity and other appetite-related disorders, the researchers say. The study is also the latest in recent years to implicate glial cells in important brain functions. Until about 10 years ago, glial cells were believed to play more of a supporting role for neurons.

“In the last few years, abnormal glial cell activities have been strongly implicated in neurodegenerative disorders. There is more and more evidence to point to the importance of glial cells in modulating neuronal function and in mediating brain disorders,” says Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience. Feng is also a member of MIT’s McGovern Institute for Brain Research and the Stanley Center for Psychiatric Research at the Broad Institute.

Feng is one of the senior authors of the study, which appears in the Oct. 18 edition of the journal eLife. The other senior author is Weiping Han, head of the Laboratory of Metabolic Medicine at the Singapore Bioimaging Consortium in Singapore. Naiyan Chen, a postdoc at the Singapore Bioimaging Consortium and the McGovern Institute, is the lead author.

Turning on appetite

It has long been known that the hypothalamus, an almond-sized structure located deep within the brain, controls appetite as well as energy expenditure, body temperature, and circadian rhythms including sleep cycles. While performing studies on glial cells in other parts of the brain, Chen noticed that the hypothalamus also appeared to have a lot of glial cell activity.

“I was very curious at that point what glial cells would be doing in the hypothalamus, since glial cells have been shown in other brain areas to have an influence on regulation of neuronal function,” she says.

Within the hypothalamus, scientists have identified two key groups of neurons that regulate appetite, known as AgRP neurons and POMC neurons. AgRP neurons stimulate feeding, while POMC neurons suppress appetite.

Until recently it has been difficult to study the role of glial cells in controlling appetite or any other brain function, because scientists haven’t developed many techniques for silencing or stimulating these cells, as they have for neurons. Glial cells, which make up about half of the cells in the brain, have many supporting roles, including cushioning neurons and helping them form connections with one another.

In this study, the research team used a new technique developed at the University of North Carolina to study a type of glial cell known as an astrocyte. Using this strategy, researchers can engineer specific cells to produce a surface receptor that binds to a chemical compound known as CNO, a derivative of clozapine. Then, when CNO is given, it activates the glial cells.

The MIT team found that turning on astrocyte activity with just a single dose of CNO had a significant effect on feeding behavior.

“When we gave the compound that specifically activated the receptors, we saw a robust increase in feeding,” Chen says. “Mice are not known to eat very much in the daytime, but when we gave drugs to these animals that express a particular receptor, they were eating a lot.”

The researchers also found that in the short term (three days), the mice did not gain extra weight, even though they were eating more.

“This raises the possibility that glial cells may also be modulating neurons that control energy expenditures, to compensate for the increased food intake,” Chen says. “They might have multiple neuronal partners and modulate multiple energy homeostasis functions all at the same time.”

When the researchers silenced activity in the astrocytes, they found that the mice ate less than normal.

Suzanne Dickson, a professor of neuroendocrinology at the University of Gothenburg in Sweden, described the study as part of a “paradigm shift” toward the idea that glial cells have a less passive role than previously believed.

“We tend to think of glial cells as providing a support network for neuronal processes and that their activation is also important in certain forms of brain trauma or inflammation,” says Dickson, who was not involved in the research. “This study adds to the emerging evidence base that glial cells may also exert specific effects to control nerve cell function in normal physiology.”

Unknown interactions

Still unknown is how the astrocytes exert their effects on neurons. Some recent studies have suggested that glial cells can secrete chemical messengers such as glutamate and ATP; if so, these “gliotransmitters” could influence neuron activity.

Another hypothesis is that instead of secreting chemicals, astrocytes exert their effects by controlling the uptake of neurotransmitters from the space surrounding neurons, thereby affecting neuron activity indirectly.

Feng now plans to develop new research tools that could help scientists learn more about astrocyte-neuron interactions and how astrocytes contribute to modulation of appetite and feeding. He also hopes to learn more about whether there are different types of astrocytes that may contribute differently to feeding behavior, especially abnormal behavior.

“We really know very little about how astrocytes contribute to the modulation of appetite, eating, and metabolism,” he says. “In the future, dissecting out these functional differences will be critical for our understanding of these disorders.”

Finding a way in

Our perception of the world arises within the brain, based on sensory information that is sometimes ambiguous, allowing more than one interpretation. Familiar demonstrations of this point include the famous Necker cube and the “duck-rabbit” drawing, in which two different interpretations flip back and forth over time.

Another example is binocular rivalry, in which the two eyes are presented with different images that are perceived in alternation. Several years ago, this phenomenon caught the eye of Caroline Robertson, who is now a Harvard Fellow working in the lab of McGovern Investigator Nancy Kanwisher. Back when she was a graduate student at Cambridge University, Robertson realized that binocular rivalry might be used to probe the basis of autism, among the most mysterious of all brain disorders.

Robertson’s idea was based on the hypothesis that autism involves an imbalance between excitation and inhibition within the brain. Although widely supported by indirect evidence, this has been very difficult to test directly in human patients. Robertson realized that binocular rivalry might provide a way to perform such a test. The perceptual switches that occur during rivalry are thought to involve competition between different groups of neurons in the visual cortex, each group reinforcing its own interpretation via excitatory connections while suppressing the alternative interpretation through inhibitory connections. Thus, if the balance is altered in the brains of people with autism, the frequency of switching might also be different, providing a simple and easily measurable marker of the disease state.

To test this idea, Robertson recruited adults with and without autism, and presented them with two distinct and differently colored images in each eye. As expected, their perceptions switched back and forth between the two images, with short periods of mixed perception in between. This was true for both groups, but when she measured the timing of these switches, Robertson found that individuals with autism do indeed see the world in a measurably different way than people without the disorder. Individuals with autism cycle between the left and right images more slowly, with the intervening periods of mixed perception lasting longer than in people without autism. The more severe their autistic symptoms, as determined by a standard clinical behavioral evaluation, the greater the difference.
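The behavioral measure described above reduces to comparing dominance and mixed-percept durations between groups. The following Python sketch shows the shape of such an analysis; the trial data and all numbers are invented for illustration and are not results from the study:

```python
# Hypothetical sketch: comparing binocular-rivalry switch dynamics,
# assuming each trial is logged as (percept, duration_seconds) events.
from statistics import mean

def summarize(trial):
    """Return mean dominant-percept and mixed-percept durations for one trial."""
    dominant = [d for p, d in trial if p in ("left", "right")]
    mixed = [d for p, d in trial if p == "mixed"]
    return mean(dominant), mean(mixed)

# Invented example trials: slower cycling shows up as longer dominance
# periods and longer mixed periods.
control_trial = [("left", 2.1), ("mixed", 0.4), ("right", 1.9), ("mixed", 0.5), ("left", 2.3)]
autism_trial  = [("left", 3.6), ("mixed", 1.2), ("right", 3.8), ("mixed", 1.1), ("left", 3.5)]

ctrl_dom, ctrl_mix = summarize(control_trial)
asd_dom, asd_mix = summarize(autism_trial)

print(f"control: dominant {ctrl_dom:.2f}s, mixed {ctrl_mix:.2f}s")
print(f"autism:  dominant {asd_dom:.2f}s, mixed {asd_mix:.2f}s")
```

In the study's actual finding, both quantities were longer in the autism group, and the size of the difference tracked symptom severity.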

Robertson had found a marker for autism that is more objective than current methods that involve one person assessing the behavior of another. The measure is immediate and relies on brain activity that happens automatically, without people thinking about it. “Sensation is a very simple place to probe,” she says.

A top-down approach

When she arrived in Kanwisher’s lab, Robertson wanted to use brain imaging to probe the basis for the perceptual phenomenon that she had discovered. With Kanwisher’s encouragement, she began by repeating the behavioral experiment with a new group of subjects, to check that her previous results were not a fluke. Having confirmed that the finding was real, she then scanned the subjects using an imaging method called Magnetic Resonance Spectroscopy (MRS), in which an MRI scanner is reprogrammed to measure concentrations of neurotransmitters and other chemicals in the brain. Kanwisher had never used MRS before, but when Robertson proposed the experiment, she was happy to try it. “Nancy’s the kind of mentor who could support the idea of using a new technique and guide me to approach it rigorously,” says Robertson.

For each of her subjects, Robertson scanned their brains to measure the amounts of two key neurotransmitters, glutamate, which is the main excitatory transmitter in the brain, and GABA, which is the main source of inhibition. When she compared the brain chemistry to the behavioral results in the binocular rivalry task, she saw something intriguing and unexpected. In people without autism, the amount of GABA in the visual cortex was correlated with the strength of the suppression, consistent with the idea that GABA enables signals from one eye to inhibit those from the other eye. But surprisingly, there was no such correlation in the autistic individuals—suggesting that GABA was somehow unable to exert its normal suppressive effect. It isn’t yet clear exactly what is going wrong in the brains of these subjects, but it’s an early flag, says Robertson. “The next step is figuring out which part of the pathway is disrupted.”
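The intriguing result above is a difference in correlation: GABA tracks suppression in one group but not the other. A minimal Python sketch of that comparison, using a hand-rolled Pearson correlation and entirely invented per-subject numbers:

```python
# Minimal sketch of the correlation analysis: per-subject visual-cortex
# GABA levels vs. a perceptual-suppression index from the rivalry task.
# All values are invented for illustration.
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# Hypothetical control group: suppression rises with GABA concentration.
gaba_control = [1.0, 1.2, 1.4, 1.6, 1.8]
supp_control = [0.30, 0.38, 0.45, 0.50, 0.61]

# Hypothetical autism group: suppression is decoupled from GABA.
gaba_autism = [1.0, 1.2, 1.4, 1.6, 1.8]
supp_autism = [0.45, 0.32, 0.50, 0.36, 0.41]

print(f"control r = {pearson_r(gaba_control, supp_control):.2f}")  # strong
print(f"autism  r = {pearson_r(gaba_autism, supp_autism):.2f}")    # near zero
```

A near-zero correlation in the second group is the pattern consistent with GABA being present but failing to exert its normal suppressive effect.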

A bottom-up approach

Robertson’s approach starts from the top-down, working backward from a measurable behavior to look for brain differences, but it isn’t the only way in. Another approach is to start with genes that are linked to autism in humans, and to understand how they affect neurons and brain circuits. This is the bottom-up approach of McGovern Investigator Guoping Feng, who studies a gene called Shank3 that codes for a protein that helps build synapses, the connections through which neurons send signals to each other. Several years ago Feng knocked out Shank3 in mice, and found that the mice exhibited behaviors reminiscent of human autism, including repetitive grooming, anxiety, and impaired social interaction and motor control.

These earlier studies involved a variety of different mutations that disabled the Shank3 gene. But when postdoc Yang Zhou joined Feng’s lab, he brought a new perspective. Zhou had come from a medical background and wanted to do an experiment more directly connected to human disease. So he suggested making a mouse version of a Shank3 mutation seen in human patients, and testing its effects.

Zhou’s experiment would require precise editing of the mouse Shank3 gene, previously a difficult and time-consuming task. But help was at hand, in the form of a collaboration with McGovern Investigator Feng Zhang, a pioneer in the development of genome-editing methods.

Using Zhang’s techniques, Zhou was able to generate mice with two different mutations: one that had been linked to human autism, and another that had been discovered in a few patients with schizophrenia.

The researchers found that mice with the autism-related mutation exhibited behavioral changes at a young age that paralleled behaviors seen in children with autism. They also found early changes in synapses within a brain region called the striatum. In contrast, mice with the schizophrenia-related gene appeared normal until adolescence, and then began to exhibit changes in behavior and also changes in the prefrontal cortex, a brain region that is implicated in human schizophrenia. “The consequences of the two different Shank3 mutations were quite different in certain aspects, which was very surprising to us,” says Zhou.

The fact that different mutations in just one gene can produce such different results illustrates exactly how complex these neuropsychiatric disorders can be. “Not only do we need to study different genes, but we also have to understand different mutations and which brain regions have what defects,” says Feng, who received funding from the Poitras Center for Affective Disorders Research and the Simons Center for the Social Brain. Robertson and Kanwisher were also supported by the Simons Center.

Surprising plasticity

The brain alterations that lead to autism are thought to arise early in development, long before the condition is diagnosed, raising concerns that it may be difficult to reverse the effects once the damage is done. With the Shank3 knockout mice, Feng and his team were able to approach this question in a new way, asking what would happen if the missing gene were to be restored in adulthood.

To find the answer, lab members Yuan Mei and Patricia Monteiro, along with Zhou, studied another strain of mice, in which the Shank3 gene was switched off but could be reactivated at any time by adding a drug to their diet. When adult mice were tested six weeks after the gene was switched back on, they no longer showed repetitive grooming behaviors, and they also showed normal levels of social interaction with other mice, despite having grown up without a functioning Shank3 gene. Examination of their brains confirmed that many of the synaptic alterations were also rescued when the gene was restored.

Not every symptom was reversed by this treatment; even after six weeks or more of restored Shank3 expression, the mice continued to show heightened anxiety and impaired motor control. But even these deficits could be prevented if the Shank3 gene was restored earlier in life, soon after birth.

The results are encouraging because they indicate a surprising degree of brain plasticity, persisting into adulthood. If the results can be extrapolated to human patients, they suggest that even in adulthood, autism may be at least partially reversible if the right treatment can be found. “This shows us the possibility,” says Zhou. “If we could somehow put back the gene in patients who are missing it, it could help improve their life quality.”

Converging paths

Robertson and Feng are approaching the challenge of autism from different starting points, but already there are signs of convergence. Feng is finding early signs that his Shank3 mutant mice may have an altered balance of inhibitory and excitatory circuits, consistent with what Robertson and Kanwisher have found in humans.

Feng is continuing to study these mice, and he also hopes to study the effects of a similar mutation in non-human primates, whose brains and behaviors are more similar to those of humans than rodents. Robertson, meanwhile, is planning to establish a version of the binocular rivalry test in animal models, where it is possible to alter the balance between inhibition and excitation experimentally (for example, via a genetic mutation or a drug treatment). If this leads to changes in binocular rivalry, it would strongly support the link to the perceptual changes seen in humans.

One challenge, says Robertson, will be to develop new methods to measure the perceptions of mice and other animals. “The mice can’t tell us what they are seeing,” she says. “But it would also be useful in humans, because it would allow us to study young children and patients who are non-verbal.”

A multi-pronged approach

The imbalance hypothesis is a promising lead, but no single explanation is likely to encompass all of autism, according to McGovern director Bob Desimone. “Autism is a notoriously heterogeneous condition,” he explains. “We need to try multiple approaches in order to maximize the chance of success.”

McGovern researchers are doing exactly that, with projects underway that range from scanning children to developing new molecular and microscopic methods for examining brain changes in animal disease models. Although genetic studies provide some of the strongest clues, Desimone notes that there is also evidence for environmental contributions to autism and other brain disorders. “One that’s especially interesting to us is maternal infection and inflammation, which in mice at least can affect brain development in ways we’re only beginning to understand.”

The ultimate goal, says Desimone, is to connect the dots and to understand how these diverse human risk factors affect brain function. “Ultimately, we want to know what these different pathways have in common,” he says. “Then we can come up with rational strategies for the development of new treatments.”

Divide and conquer

Cell populations are remarkably diverse—even within the same tissue or cell type. Each cell, no matter how similar it appears to its neighbor, behaves and responds to its environment in its own way depending on which of its genes are expressed and to what degree. How genes are expressed in each cell—how RNA is “read” and turned into proteins—determines what jobs the cell performs in the body.

Traditionally, researchers have taken an en masse approach to studying gene expression, extracting an averaged measurement derived from an entire cell population. But over the past few years, single cell sequencing has emerged as a transformative tool, enabling scientists to look at gene expression within cells at an unprecedented resolution. With single-cell technologies, researchers have been able to examine the heterogeneity within cell populations; identify rare cells; observe interactions between diverse cell types; and better understand how these interactions influence health and disease.

This week in Science, researchers from the labs of Broad core institute members Aviv Regev and Feng Zhang, of MIT and MIT’s McGovern Institute respectively, report on their newest contribution to this field: Div-Seq, a method that enables the study of previously intractable and rare cell types in the brain. The study’s first authors, Naomi Habib, a postdoctoral fellow in the Regev and Zhang labs, and Yinqing Li, also a postdoc in the Zhang lab, sat down to answer questions about this groundbreaking approach.

Why is it so important to study neurons at the single cell level?

Li: Neuropsychiatric diseases are often too complex to treat effectively, partly because the neurons that underlie the disease are heterogeneous. Only when we have a full atlas of every neuron type at single-cell resolution—and figure out which ones are the cause of the pathology—can we develop a targeted and effective therapy. With this goal in mind, we developed sNuc-Seq and Div-Seq to make it technologically possible to profile neurons from the adult brain at significantly improved resolution, fidelity, and sensitivity.

Scientifically, what was the need that you were trying to address when you started this study?

Habib: Going into this study we were specifically interested in studying so-called “newborn” neurons, which are rare and hard to find. We think of our brain as being non-regenerative, but in fact there are rare, neuronal stem cells in specific areas of the brain that divide and create new neurons throughout our lives. We wanted to understand how gene expression changed as these cells developed. Typically when people studied gene expression in the brain they just mashed up tissue and took average measurements from that mixture. Such “bulk” measurements are hard to interpret and we lose the gene expression signals that come from individual cell types.
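The dilution problem with “bulk” measurements is easy to demonstrate numerically. This toy Python sketch, with invented expression values for a hypothetical newborn-neuron marker gene, shows how a strong signal in two rare cells all but vanishes in the tissue-wide average:

```python
# Toy illustration of why bulk measurements hide rare cell types.
# 2 of 100 cells express a hypothetical marker strongly; the bulk
# average makes the signal look negligible. All numbers are invented.
from statistics import mean

n_cells = 100
marker_expression = [50.0, 50.0] + [0.5] * (n_cells - 2)  # 2 rare cells, 98 others

bulk = mean(marker_expression)                   # what "mashed up tissue" reports
rare = [x for x in marker_expression if x > 10]  # what single-cell resolution reveals

print(f"bulk average: {bulk:.2f}")               # low: the rare signal is diluted
print(f"rare cells detected: {len(rare)}, each at {rare[0]:.1f}")
```

The averaged readout gives no hint that two cells express the marker a hundredfold above background, which is exactly the signal a study of rare newborn neurons needs.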

When I joined the Zhang and Regev labs, some of the first single cell papers were coming out, and it seemed like the perfect approach for advancing the way we do neuroscience research; we could measure RNA at the single cell level and really understand what different cell types were there, including rare cells, and what they contribute to different brain functions. But there was a problem. Neurons do not look like regular cells: they are intricately connected. In the process of separating them, the cells do not stay intact and their RNA gets damaged, and this problem increases with age.

So what was your solution?

Habib: Isolating single neurons is problematic, but the nucleus is nice and round and relatively easy to isolate. That led us to ask, “Why not try single nucleus RNA sequencing instead of single cell sequencing?” We called it “sNuc-Seq.”

It worked well. We get a lot of information from the RNA in the nucleus; we can learn what cell type we’re looking at, what state of development it’s in, and what kind of processes are going on in the cell—all of the key information we would want to get from RNA sequencing.

Then, to make it possible to find the rare newborn neurons, we developed Div-Seq. It’s based on sNuc-Seq, but we introduce a compound that incorporates into DNA and labels the DNA while it’s replicating, so it’s specific for newly divided cells. Because we already isolated the nuclei, it’s fairly simple from there to fluorescently tag the labeled cells, sort them, and get RNA for sequencing.

You tested this method while preparing your paper. What did you find?

Habib: We studied “newborn” neurons from the brain across multiple time-points. We could see the changes in gene expression that occur throughout adult neurogenesis; the cells transition from state-to-state—from stem cells to mature neurons—and during these transitions, we found a coordinated change in the expression of hundreds of genes. It was beautiful to see these signatures, and they enabled us to pinpoint regulatory genes expressed during specific points of the cell differentiation process.

We were also able to look at where regeneration occurs. We decided to look in the spinal cord because there is a lot of interest in understanding the potential of regeneration to help with spinal cord injury. Div-Seq enabled us to scan millions of neurons and isolate the small percentage that were dividing and characterize each by its RNA signature. We found that within the spinal cord there is ongoing regeneration of a specific type of neuron—GABAergic neurons. That was an exciting finding that also showed the utility of our method.

Are the data you get from this method compatible with data from previous single-cell techniques?

Li: Because this method is specifically designed to address the particular challenges of profiling neurons, the data it produces is distinct from that obtained with previous single-cell techniques. Because the data was new to this approach, we developed a novel computational tool in this project to fully reveal the rich information it contains; that tool is now available to the scientific community.

Are there other benefits of using this method?

Habib: Single nucleus RNA-seq enables the study of the adult and aging brain at the single cell level, which is now being applied to study cellular diversity across the brain during health and disease. Our approach also makes it easier to explore any complex tissue where single cells are hard to obtain for technical reasons. One important aspect is that it works on frozen and fixed tissue, which opens up opportunities to study human samples, such as biopsies, that may be collected overseas or frozen for days or even years.

Additionally, Div-Seq opens new ways to look at the rare process of adult neurogenesis and other regenerative processes that might have been challenging to study before. Because Div-Seq specifically labels dividing cells, it is a great tool for seeing which cells are dividing in a given tissue and for tracking gene expression changes over time.

What is the endgame of studying these processes? Can you put this work in context of human health and disease?

Li: We hope that the methods in this study will provide a starting point and method for future work on neuropsychiatric diseases. As we expand our understanding of cell types and their signatures, we can start to ask questions like: Which cells express disease associated genes? Where are these cells located in the brain? What other genes are expressed in these cells, and which might serve as potential drug targets? This approach could help bridge human genetic association studies and molecular neurobiology and open new windows into disease pathology and potential treatments.

Habib: These two methods together enable many applications, which were either very hard or impossible to do before. For example, we characterized the cellular diversity of a region of the brain important for learning and memory—the first region affected in Alzheimer’s disease. Having that understanding—knowing what the normal state of cells is at the molecular level and what went wrong in each individual cell type—can advance our understanding of the disease and perhaps aid in the search for a treatment. We are also excited by the prospect of finding naturally-occurring regeneration in the brain and spine, which could have implications for the field of regenerative medicine in treating, for example, neuronal degeneration or spinal injury.

Paper cited:

Habib N, Li Y, et al. Div-Seq: Single nucleus RNA-Seq reveals dynamics of rare adult newborn neurons. Science. Online July 28, 2016.

Seeing RNA at the nanoscale

Cells contain thousands of messenger RNA molecules, which carry copies of DNA’s genetic instructions to the rest of the cell. MIT engineers have now developed a way to visualize these molecules in higher resolution than previously possible in intact tissues, allowing researchers to precisely map the location of RNA throughout cells.

Key to the new technique is expanding the tissue before imaging it. By making the sample physically larger, it can be imaged with very high resolution using ordinary microscopes commonly found in research labs.

“Now we can image RNA with great spatial precision, thanks to the expansion process, and we also can do it more easily in large intact tissues,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, a member of MIT’s Media Lab and McGovern Institute for Brain Research, and the senior author of a paper describing the technique in the July 4 issue of Nature Methods.

Studying the distribution of RNA inside cells could help scientists learn more about how cells control their gene expression and could also allow them to investigate diseases thought to be caused by failure of RNA to move to the correct location.

Boyden and colleagues first described the underlying technique, known as expansion microscopy (ExM), last year, when they used it to image proteins inside large samples of brain tissue. In a paper appearing in Nature Biotechnology on July 4, the MIT team has now presented a new version of the technology that employs off-the-shelf chemicals, making it easier for researchers to use.

MIT graduate students Fei Chen and Asmamaw Wassie are the lead authors of the Nature Methods paper, and Chen and graduate student Paul Tillberg are the lead authors of the Nature Biotechnology paper.

A simpler process

The original expansion microscopy technique is based on embedding tissue samples in a polymer that swells when water is added. This tissue enlargement allows researchers to obtain images with a resolution of around 70 nanometers, which was previously possible only with very specialized and expensive microscopes.

However, that method posed some challenges because it required generating a complicated chemical tag consisting of an antibody that targets a specific protein, linked to both a fluorescent dye and a chemical anchor that attaches the whole complex to a highly absorbent polymer known as polyacrylate. Once the targets are labeled, the researchers break down the proteins that hold the tissue sample together, allowing it to expand uniformly as the polyacrylate gel swells.

In their new studies, to eliminate the need for custom-designed labels, the researchers used a different molecule to anchor the targets to the gel before digestion. This molecule, which the researchers dubbed AcX, is commercially available and therefore makes the process much simpler.

AcX can be modified to anchor either proteins or RNA to the gel. In the Nature Biotechnology study, the researchers used it to anchor proteins, and they also showed that the technique works on tissue that has been previously labeled with either fluorescent antibodies or proteins such as green fluorescent protein (GFP).

“This lets you use completely off-the-shelf parts, which means that it can integrate very easily into existing workflows,” Tillberg says. “We think that it’s going to lower the barrier significantly for people to use the technique compared to the original ExM.”

Using this approach, it takes about an hour to scan a piece of tissue 500 by 500 by 200 microns, using a light sheet fluorescence microscope. The researchers showed that this technique works for many types of tissues, including brain, pancreas, lung, and spleen.

Imaging RNA

In the Nature Methods paper, the researchers used the same kind of anchoring molecule but modified it to target RNA instead. All of the RNAs in the sample are anchored to the gel, so they stay in their original locations throughout the digestion and expansion process.

After the tissue is expanded, the researchers label specific RNA molecules using a process known as fluorescence in situ hybridization (FISH), which was originally developed in the early 1980s and is widely used. This allows researchers to visualize the location of specific RNA molecules at high resolution, in three dimensions, in large tissue samples.

This enhanced spatial precision could allow scientists to explore many questions about how RNA contributes to cellular function. For example, a longstanding question in neuroscience is how neurons rapidly change the strength of their connections to store new memories or skills. One hypothesis is that RNA molecules encoding proteins necessary for plasticity are stored in cell compartments close to the synapses, poised to be translated into proteins when needed.

With the new system, it should be possible to determine exactly which RNA molecules are located near the synapses, waiting to be translated.

“People have found hundreds of these locally translated RNAs, but it’s hard to know where exactly they are and what they’re doing,” Chen says. “This technique would be useful to study that.”

Boyden’s lab is also interested in using this technology to trace the connections between neurons and to classify different subtypes of neurons based on which genes they are expressing.

The research was funded by the Open Philanthropy Project, the New York Stem Cell Foundation Robertson Award, the National Institutes of Health, the National Science Foundation, and Jeremy and Joyce Wertheimer.

From cancer to brain research: learning from worms

In Bob Horvitz’s lab, students watch tiny worms as they wriggle under the microscope. Their tracks twist and turn in every direction, and to a casual observer the movements appear random. There is a pattern, however, and the animals’ movements change depending on their environment and recent experiences.

“A hungry worm is different from a well-fed worm,” says Horvitz, David H. Koch Professor of Biology and a McGovern Investigator. “If you consider worm psychology, it seems that the thing in life worms care most about is food.”

Horvitz’s work with the nematode worm Caenorhabditis elegans extends back to the mid-1970s. He was among the first to recognize the value of this microscopic organism as a model species for asking fundamental questions about biology and human disease.

The leap from worm to human might seem great and perilous, but in fact they share many fundamental biological mechanisms, one of which is programmed cell death, also known as apoptosis. Horvitz shared the Nobel Prize in Physiology or Medicine in 2002 for his studies of cell death, which is central to a wide variety of human diseases, including cancer and neurodegenerative disorders. He has continued to study the worm ever since, contributing to many areas of biology but with a particular emphasis on the nervous system and the control of behavior.

In a recently published study, the Horvitz lab has found another fundamental mechanism that likely is shared with mice and humans. The discovery began with an observation by former graduate student Beth Sawin as she watched worms searching for food. When a hungry worm detects a food source, it slows almost to a standstill, allowing it to remain close to the food.

Postdoctoral scientist Nick Paquin analyzed how a mutation in a gene called vps-50 causes worms to slow similarly even when they are well fed. It seemed that these mutant worms were failing to transition normally between the hungry and the well-fed state.

Paquin decided to study the gene further, in worms and also in mouse neurons, the latter in collaboration with Yasunobu Murata, a former research scientist in Martha Constantine-Paton’s lab at the McGovern Institute. The team, later joined by postdoctoral fellow Fernando Bustos in the Constantine-Paton lab, found that the VPS-50 protein controls the activity of synapses, the junctions between nerve cells. VPS-50 is involved in a process that acidifies synaptic vesicles, microscopic bubbles filled with neurotransmitters that are released from nerve terminals, sending signals to other nearby neurons.

If VPS-50 is missing, the vesicles do not mature properly and the signaling from neurons is abnormal. VPS-50 has remained relatively unchanged during evolution, and the mouse version can
substitute for the missing worm gene, indicating the worm and mouse proteins are similar not only in sequence but also in function. This might seem surprising given the wide gap between the tiny nervous system of the worm and the complex brains of mammals. But it is not surprising to Horvitz, who has committed about half of his lab resources to studying the worm’s nervous system and behavior.

“Our finding underscores something that I think is crucially important,” he says. “A lot of biology is conserved among organisms that appear superficially very different, which means that the
understanding and treatment of human diseases can be advanced by studies of simple organisms like worms.”

Human connections

In addition to its significance for normal synaptic function, the vps-50 gene might be important in autism spectrum disorder. Several autism patients have been described with deletions that include vps-50, and other lines of evidence also suggest a link to autism. “We think this is going to be a very important molecule in mammals,” says Constantine-Paton. “We’re now in a position to look into the function of vps-50 more deeply.”

Horvitz and Constantine-Paton are married, and they had chatted about vps-50 long before her lab began to study it. When it became clear that the mutation was affecting worm neurons in a novel way, it was a natural decision to collaborate and study the gene in mice. They are currently working to understand the role of VPS-50 in mammalian brain function, and to explore further the possible link to autism.

The day the worm turned

A latecomer to biology, Horvitz studied mathematics and economics as an undergraduate at MIT in the mid-1960s. During his last year, he took a few biology classes and then went on to earn
a doctoral degree in the field at Harvard University, working in the lab of James Watson (of double helix fame) and Walter Gilbert. In 1974, Horvitz moved to Cambridge, England, where he worked with Sydney Brenner and began his studies of the worm.

“Remarkably, all of my advisors, even my undergraduate advisor in economics here at MIT, Bob Solow, now have Nobel Prizes,” he notes.

The comment is matter-of-fact, and Horvitz is anything but pretentious. He thinks about both big questions and small experimental details and is always on the lookout for links between the
worm and human health.

“When someone in the lab finds something new, Bob is quick to ask if it relates to human disease,” says former graduate student Nikhil Bhatla. “We’re not thinking about that. We’re deep in
the nitty-gritty, but he’s directing us to potential collaborators who might help us make that link.”

This kind of mentoring, says Horvitz, has been his primary role since he joined the MIT faculty in 1978. He has trained many of the current leaders in the worm field, including Gary Ruvkun and Victor Ambros, who shared the 2008 Lasker Award; Michael Hengartner, now president of the University of Zurich; and Cori Bargmann, who recently won the McGovern Institute’s 2016 Scolnick Prize in Neuroscience.

“If the science we’ve done has been successful, it’s because I’ve been lucky to have outstanding young researchers as colleagues,” Horvitz says.

Before becoming a mentor, Horvitz had to become a scientist himself. At Harvard, he studied bacterial viruses and learned that even the simplest organisms could provide valuable insights about fundamental biological processes.

The move to Brenner’s lab in Cambridge was a natural step. A pioneer in the field of molecular biology, Brenner was also the driving force behind the adoption of C. elegans as a genetic model organism, which he advocated for its simplicity (adults have fewer than 1000 cells, and only 302 neurons) and short generation time (only three days). Working in Brenner’s lab, Horvitz
and his collaborator John Sulston traced the lineage of every body cell from fertilization to adulthood, showing that the sequence of cell divisions was the same in each individual animal. Their landmark study provided a foundation for the entire field. “They know all the cells in the worm. Every single one,” says Constantine-Paton. “So when they make a mutation and something is weird, they can determine precisely which cell or set of cells are affected. We can only dream of having such an understanding of a mammal.”

It is now known that the worm has about 20,000 genes, many of which are conserved in mammals including humans. In fact, in many cases, a cloned human gene can stand in for a missing
worm gene, as is the case for vps-50. As a result, the worm has been a powerful discovery machine for human biology. In the early years, though, many doubted whether worms would be relevant. Horvitz persisted undeterred, and in 1992 his conviction paid off, with the discovery of ced-9, a worm gene that regulates programmed cell death. A graduate student in Horvitz’s lab cloned ced-9 and saw that it resembled a human cancer gene called Bcl-2. The researchers also showed that human Bcl-2 could substitute for a mutant ced-9 gene in the worm and concluded that the two genes have similar functions: ced-9 in worms protects healthy cells from death, and Bcl-2 in cancer patients protects cancerous cells from death, allowing them to multiply. “This was the moment we knew that the studies we’d been doing with C. elegans were going to be relevant to understanding human biology and disease,” says Horvitz.

Ten years later, in 2002, he was in the French Alps with Constantine-Paton and their daughter Alex attending a wedding, when they heard the news on the radio: He’d won a Nobel Prize, along with Brenner and Sulston. On the return trip, Alex, then 9 years old but never shy, asked for first-class upgrades at the airport; the agent compromised and gave them all upgrades to business class instead.

Discovery machine at work

Since the Nobel Prize, Horvitz has studied the nervous system using the same strategy that had been so successful in deciphering the mechanism of programmed cell death. His approach, he says, begins with traditional genetics. Researchers expose worms to mutagens and observe their behavior. When they see an interesting change, they identify the mutation and try to link the gene to the nervous system to understand how it affects behavior.

“We make no assumptions,” he says. “We let the animal tell us the answer.”

While Horvitz continues to demonstrate that basic research using simple organisms produces invaluable insights about human biology and health, there are other forces at work in his lab. Horvitz maintains a sense of wonder about life and is undaunted by big questions.

For instance, when Bhatla came to him wanting to look for evidence of consciousness in worms, Horvitz blinked but didn’t say no. The science Bhatla proposed was novel, and the question
was intriguing. Bhatla pursued it. But, he says, “It didn’t work.”

So Bhatla went back to the drawing board. During his earlier experiments, he had observed that worms would avoid light, a previously known behavior. But he also noticed that they immediately stopped feeding. The animals had provided a clue. Bhatla went on to discover that worms respond to light by producing hydrogen peroxide, which activates a taste receptor.

In a sense, worms taste light, a wonder of biology no one could have predicted.

Some years ago, the Horvitz lab made t-shirts displaying a quote from the philosopher Friedrich Nietzsche: “You have made your way from worm to man, and much within you is still worm.”
The words have become an informal lab motto, “truer than Nietzsche could ever have imagined,” says Horvitz. “There’s still so much mystery, particularly about the brain, and we are still learning from the worm.”

Controlling RNA in living cells

MIT researchers have devised a new set of proteins that can be customized to bind arbitrary RNA sequences, making it possible to image RNA inside living cells, monitor what a particular RNA strand is doing, and even control RNA activity.

The new strategy is based on human RNA-binding proteins that normally help guide embryonic development. The research team adapted the proteins so that they can be easily targeted to desired RNA sequences.

“You could use these proteins to do measurements of RNA generation, for example, or of the translation of RNA to proteins,” says Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences at the MIT Media Lab. “This could have broad utility throughout biology and bioengineering.”

Unlike previous efforts to control RNA with proteins, the new MIT system consists of modular components, which the researchers believe will make it easier to perform a wide variety of RNA manipulations.

“Modularity is one of the core design principles of engineering. If you can make things out of repeatable parts, you don’t have to agonize over the design. You simply build things out of predictable, linkable units,” says Boyden, who is also a member of MIT’s McGovern Institute for Brain Research.

Boyden is the senior author of a paper describing the new system in the Proceedings of the National Academy of Sciences. The paper’s lead authors are postdoc Katarzyna Adamala and grad student Daniel Martin-Alarcon.

Modular code

Living cells contain many types of RNA that perform different roles. One of the best known varieties is messenger RNA (mRNA), which is copied from DNA and carries protein-coding information to cell structures called ribosomes, where mRNA directs protein assembly in a process called translation. Monitoring mRNA could tell scientists a great deal about which genes are being expressed in a cell, and tweaking the translation of mRNA would allow them to alter gene expression without having to modify the cell’s DNA.

To achieve this, the MIT team set out to adapt naturally occurring proteins called Pumilio homology domains. These RNA-binding proteins include sequences of amino acids that bind to one of the ribonucleotide bases or “letters” that make up RNA sequences — adenine (A), cytosine (C), uracil (U), and guanine (G).

In recent years, scientists have been working on developing these proteins for experimental use, but until now it was more of a trial-and-error process to create proteins that would bind to a particular RNA sequence.

“It was not a truly modular code,” Boyden says, referring to the protein’s amino acid sequences. “You still had to tweak it on a case-by-case basis. Whereas now, given an RNA sequence, you can specify on paper a protein to target it.”

To create their code, the researchers tested out many amino acid combinations and found a particular set of amino acids that will bind each of the four bases at any position in the target sequence. Using this system, which they call Pumby (for Pumilio-based assembly), the researchers effectively targeted RNA sequences varying in length from six to 18 bases.

“I think it’s a breakthrough technology that they’ve developed here,” says Robert Singer, a professor of anatomy and structural biology, cell biology, and neuroscience at Albert Einstein College of Medicine, who was not involved in the research. “Everything that’s been done to target RNA so far requires modifying the RNA you want to target by attaching a sequence that binds to a specific protein. With this technique you just design the protein alone, so there’s no need to modify the RNA, which means you could target any RNA in any cell.”

RNA manipulation

In experiments in human cells grown in a lab dish, the researchers showed that they could accurately label mRNA molecules and determine how frequently they are being translated. First, they designed two Pumby proteins that would bind to adjacent RNA sequences. Each protein is also attached to half of a green fluorescent protein (GFP) molecule. When both proteins find their target sequence, the GFP molecules join and become fluorescent — a signal to the researchers that the target RNA is present.

Furthermore, the team discovered that each time an mRNA molecule is translated, the GFP gets knocked off, and when translation is finished, another GFP binds to it, enhancing the overall fluorescent signal. This allows the researchers to calculate how often the mRNA is being read.

This system can also be used to stimulate translation of a target mRNA. To achieve that, the researchers attached a protein called a translation initiator to the Pumby protein. This allowed them to dramatically increase translation of an mRNA molecule that normally wouldn’t be read frequently.

“We can turn up the translation of arbitrary genes in the cell without having to modify the genome at all,” Martin-Alarcon says.

The researchers are now working toward using this system to label different mRNA molecules inside neurons, allowing them to test the idea that mRNAs for different genes are stored in different parts of the neuron, helping the cell to remain poised to perform functions such as storing new memories. “Until now it’s been very difficult to watch what’s happening with those mRNAs, or to control them,” Boyden says.

These RNA-binding proteins could also be used to build molecular assembly lines that would bring together enzymes needed to perform a series of reactions that produce a drug or another molecule of interest.

Study reveals a basis for attention deficits

More than 3 million Americans suffer from attention deficit hyperactivity disorder (ADHD), a condition that usually emerges in childhood and can lead to difficulties at school or work.

A new study from MIT and New York University links ADHD and other attention difficulties to the brain’s thalamic reticular nucleus (TRN), which is responsible for blocking out distracting sensory input. In a study of mice, the researchers discovered that a gene mutation found in some patients with ADHD produces a defect in the TRN that leads to attention impairments.

The findings suggest that drugs boosting TRN activity could improve ADHD symptoms and possibly help treat other disorders that affect attention, including autism.

“Understanding these circuits may help explain the converging mechanisms across these disorders. For autism, schizophrenia, and other neurodevelopmental disorders, it seems like TRN dysfunction may be involved in some patients,” says Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience and a member of MIT’s McGovern Institute for Brain Research and the Stanley Center for Psychiatric Research at the Broad Institute.

Feng and Michael Halassa, an assistant professor of psychiatry, neuroscience, and physiology at New York University, are the senior authors of the study, which appears in the March 23 online edition of Nature. The paper’s lead authors are MIT graduate student Michael Wells and NYU postdoc Ralf Wimmer.

Paying attention

Feng, Halassa, and their colleagues set out to study a gene called Ptchd1, whose loss can produce attention deficits, hyperactivity, intellectual disability, aggression, and autism spectrum disorders. Because the gene is carried on the X chromosome, most individuals with these Ptchd1-related effects are male.

In mice, the researchers found that the part of the brain most affected by the loss of Ptchd1 is the TRN, which is a group of inhibitory nerve cells in the thalamus. It essentially acts as a gatekeeper, preventing unnecessary information from being relayed to the brain’s cortex, where higher cognitive functions such as thought and planning occur.

“We receive all kinds of information from different sensory regions, and it all goes into the thalamus,” Feng says. “All this information has to be filtered. Not everything we sense goes through.”

If this gatekeeper is not functioning properly, too much information gets through, allowing the person to become easily distracted or overwhelmed. This can lead to problems with attention and difficulty in learning.

The researchers found that when the Ptchd1 gene was knocked out in mice, the animals showed many of the same behavioral defects seen in human patients, including aggression, hyperactivity, attention deficit, and motor impairments. When the Ptchd1 gene was knocked out only in the TRN, the mice showed only hyperactivity and attention deficits.

Toward new treatments

At the cellular level, the researchers found that the Ptchd1 mutation disrupts channels that carry potassium ions, which prevents TRN neurons from sufficiently inhibiting thalamic output to the cortex. The researchers were also able to restore the neurons’ normal function with a compound that boosts activity of the potassium channel. This intervention reversed the TRN-related symptoms but not the symptoms that appear to arise from deficits in other circuits.

“The authors convincingly demonstrate that specific behavioral consequences of the Ptchd1 mutation — attention and sleep — arise from an alteration of a specific protein in a specific brain region, the thalamic reticular nucleus. These findings provide a clear and straightforward pathway from gene to behavior and suggest a pathway toward novel treatments for neurodevelopmental disorders such as autism,” says Joshua Gordon, an associate professor of psychiatry at Columbia University, who was not involved in the research.

Most people with ADHD are now treated with psychostimulants such as Ritalin, which are effective in about 70 percent of patients. Feng and Halassa are now working on identifying genes that are specifically expressed in the TRN in hopes of developing drug targets that would modulate TRN activity. Such drugs may also help patients who don’t have the Ptchd1 mutation, because their symptoms are also likely caused by TRN impairments, Feng says.

The researchers are also investigating when Ptchd1-related problems in the TRN arise and at what point they can be reversed. And, they hope to discover how and where in the brain Ptchd1 mutations produce other abnormalities, such as aggression.

The research was funded by the Simons Foundation Autism Research Initiative, the National Institutes of Health, the Poitras Center for Affective Disorders Research, and the Stanley Center for Psychiatric Research at the Broad Institute.

Neuroscientists discover a gene that controls worms’ behavioral state

In a study of worms, MIT neuroscientists have discovered a gene that plays a critical role in controlling the switch between alternative behavioral states, which for humans include hunger and fullness, or sleep and wakefulness.

This gene, which the researchers dubbed vps-50, helps to regulate neuropeptides — tiny proteins that carry messages between neurons or from neurons to other cells. This kind of signaling is important for controlling physiology and behavior in animals, including humans. Deletions of the human counterpart of the vps-50 gene have been found in some people with autism.

“Given what is reported in this paper about how the gene works, coupled with findings by others concerning the genetics of autism, we suggest that the disruption of the function of this gene could promote autism,” says H. Robert Horvitz, the David H. Koch Professor of Biology and a member of MIT’s McGovern Institute for Brain Research.

Horvitz and Martha Constantine-Paton, an MIT professor of brain and cognitive sciences and member of the McGovern Institute, are the senior authors of the study, which appears in the March 3 issue of the journal Current Biology. The paper’s lead authors are former MIT postdocs Nicolas Paquin and Yasunobu Murata.

Influencing behavior

Neuropeptides, which are involved in brain functions such as reward, metabolism, and learning and memory, are released from cellular structures called dense-core vesicles.

In the new study, the researchers found that the vps-50 gene encodes a protein that is important in the generation of such vesicles and in the release of neuropeptides from them.

They discovered the protein in the worm Caenorhabditis elegans, where it is found primarily in nerve cells. In those cells, VPS-50 associates both with dense-core vesicles and with synaptic vesicles, which release neurotransmitters such as dopamine and serotonin. The researchers showed that VPS-50 is required for maturation of the dense-core vesicles and also regulates the activity of a proton pump that acidifies the vesicles. Without the proper acidity level, the vesicles’ production of neuropeptides is impaired.

The researchers also found distinctive behavioral effects in C. elegans worms, which normally change their speed depending on food availability and whether they have recently eaten.

“Worms are the fastest when food (bacteria) is absent, presumably because they are looking for food,” Paquin says. “When they reach food, they slow down, but when you make them hungry for 30 minutes before putting them on food, they slow down even more.”

Worms lacking vps-50 behaved as if they were hungry, moving slowly through a food-rich area even when they were well fed, the researchers found. This suggests that worms without vps-50 are unable to signal that they are full and so continue to behave as if they are hungry. The researchers also identified an equivalent gene in mice and showed that it can compensate for loss of the worm version of vps-50, indicating that the two genes share the same function.

Human link

One important question raised by the study is how the mouse and human versions of vps-50 affect behavior in those animals, Horvitz says. Although this study focused on switching between hunger and fullness, neuropeptide signaling has previously been shown to control other alternative behavioral states, such as sleep and wakefulness, as well as social behaviors and anxiety.

The researchers suggest that studies of vps-50 might shed light on aspects of autism, because the human version of the gene is missing in some people with autism. Furthermore, a protein known as UNC-31, which is also located in dense-core vesicles, has been linked with autism in humans and mice. When mutated in worms, UNC-31 produces behavioral effects similar to those caused by vps-50 mutations.

“For these reasons, we hope that our studies of vps-50 will provide insights into human neuropsychiatric disorders,” Horvitz says.

The research was funded by the National Institutes of Health and the Simons Center for the Social Brain at MIT.