Precise technique tracks dopamine in the brain

MIT researchers have devised a way to measure dopamine in the brain much more precisely than previously possible, which should allow scientists to gain insight into dopamine’s roles in learning, memory, and emotion.

Dopamine is one of the many neurotransmitters that neurons in the brain use to communicate with each other. Previous systems for measuring these neurotransmitters have been limited in how long they provide accurate readings and how much of the brain they can cover. The new MIT device, an array of tiny carbon electrodes, overcomes both of those obstacles.

“Nobody has really measured neurotransmitter behavior at this spatial scale and timescale. Having a tool like this will allow us to explore potentially any neurotransmitter-related disease,” says Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering, a member of MIT’s Koch Institute for Integrative Cancer Research, and the senior author of the study.

Furthermore, because the array is so tiny, it has the potential to eventually be adapted for use in humans, to monitor whether therapies aimed at boosting dopamine levels are succeeding. Many human brain disorders, most notably Parkinson’s disease, are linked to dysregulation of dopamine.

“Right now deep brain stimulation is being used to treat Parkinson’s disease, and we assume that that stimulation is somehow resupplying the brain with dopamine, but no one’s really measured that,” says Helen Schwerdt, a Koch Institute postdoc and the lead author of the paper, which appears in the journal Lab on a Chip.

Studying the striatum

For this project, Cima’s lab teamed up with David H. Koch Institute Professor Robert Langer, who has a long history of drug delivery research, and Institute Professor Ann Graybiel, who has been studying dopamine’s role in the brain for decades with a particular focus on a brain region called the striatum. Dopamine-producing cells within the striatum are critical for habit formation and reward-reinforced learning.

Until now, neuroscientists have used carbon electrodes with a shaft diameter of about 100 microns to measure dopamine in the brain. However, these can be used reliably for only about a day, because scar tissue forms around them and interferes with the electrodes’ ability to interact with dopamine, and other types of interfering films can also build up on the electrode surface over time. Furthermore, there is only about a 50 percent chance that a single electrode will end up in a spot where there is any measurable dopamine, Schwerdt says.

The MIT team designed electrodes that are only 10 microns in diameter and combined them into arrays of eight electrodes. These delicate electrodes are wrapped in a rigid polymer called polyethylene glycol (PEG), which protects them and keeps them from deflecting as they enter the brain tissue. The PEG dissolves during insertion, however, so it does not enter the brain.

These tiny electrodes measure dopamine the same way the larger versions do, using a technique known as fast-scan cyclic voltammetry. The researchers apply an oscillating voltage through the electrodes, and when the voltage reaches a certain point, any dopamine in the vicinity undergoes an electrochemical reaction that produces a measurable electric current. Using this technique, dopamine’s presence can be monitored at millisecond timescales.
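Conceptually, the readout is a peak detector: sweep the voltage, record the current, and look for the extra current that appears at dopamine’s characteristic oxidation potential. The toy Python simulation below sketches that idea; it is not the team’s instrumentation code, and the sweep range, peak position (+0.6 V), and units are illustrative assumptions.

```python
import math

def sweep_voltages(v_min=-0.4, v_max=1.3, n=100):
    """One triangular voltage sweep (ramp up, then back down)."""
    up = [v_min + (v_max - v_min) * i / (n - 1) for i in range(n)]
    return up + up[-2::-1]

def electrode_current(v, dopamine=0.0):
    """Toy electrode model: a linear background (charging) current plus a
    Gaussian oxidation peak near +0.6 V whose height scales with the
    dopamine concentration (arbitrary units)."""
    background = 0.5 * v
    oxidation = dopamine * math.exp(-((v - 0.6) ** 2) / (2 * 0.05 ** 2))
    return background + oxidation

def dopamine_signal(dopamine):
    """Background-subtracted readout: subtract a blank scan from the
    sample scan and report the largest current difference."""
    vs = sweep_voltages()
    blank = [electrode_current(v, 0.0) for v in vs]
    sample = [electrode_current(v, dopamine) for v in vs]
    return max(s - b for s, b in zip(sample, blank))
```

Because the oxidation current appears only near one characteristic potential, subtracting a blank scan isolates the dopamine contribution from the much larger background, which is what lets real instruments resolve dopamine transients on millisecond timescales.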

Using these arrays, the researchers demonstrated that they could monitor dopamine levels in many parts of the striatum at once.

“What motivated us to pursue this high-density array was the fact that now we have a better chance to measure dopamine in the striatum, because now we have eight or 16 probes in the striatum, rather than just one,” Schwerdt says.

The researchers found that dopamine levels vary greatly across the striatum. This was not surprising, since the entire region would not be expected to be continuously bathed in dopamine, but the variation had been difficult to demonstrate because previous methods measured only one spot at a time.

How learning happens

The researchers are now conducting tests to see how long these electrodes can continue giving a measurable signal, and so far the device has kept working for up to two months. With this kind of long-term sensing, scientists should be able to track dopamine changes over long periods of time, as habits are formed or new skills are learned.

“We and other people have struggled with getting good long-term readings,” says Graybiel, who is a member of MIT’s McGovern Institute for Brain Research. “We need to be able to find out what happens to dopamine in mouse models of brain disorders, for example, or what happens to dopamine when animals learn something.”

She also hopes to learn more about the roles of structures in the striatum known as striosomes. These clusters of cells, discovered by Graybiel many years ago, are distributed throughout the striatum. Recent work from her lab suggests that striosomes are involved in making decisions that induce anxiety.

This study is part of a larger collaboration between Cima’s and Graybiel’s labs that also includes efforts to develop injectable drug-delivery devices to treat brain disorders.

“What links all these studies together is we’re trying to find a way to chemically interface with the brain,” Schwerdt says. “If we can communicate chemically with the brain, it makes our treatment or our measurement a lot more focused and selective, and we can better understand what’s going on.”

Other authors of the paper are McGovern Institute research scientists Minjung Kim, Satoko Amemori, and Hideki Shimazu; McGovern Institute postdoc Daigo Homma; McGovern Institute technical associate Tomoko Yoshida; and undergraduates Harshita Yerramreddy and Ekin Karasan.

The research was funded by the National Institutes of Health, the National Institute of Biomedical Imaging and Bioengineering, and the National Institute of Neurological Disorders and Stroke.

Researchers create synthetic cells to isolate genetic circuits

Synthetic biology allows scientists to design genetic circuits that can be placed in cells, giving them new functions such as producing drugs or other useful molecules. However, as these circuits become more complex, the genetic components can interfere with each other, making it difficult to achieve more complicated functions.

MIT researchers have now demonstrated that these circuits can be isolated within individual synthetic “cells,” preventing them from disrupting each other. The researchers can also control communication between these cells, allowing for circuits or their products to be combined at specific times.

“It’s a way of having the power of multicomponent genetic cascades, along with the ability to build walls between them so they won’t have cross-talk. They won’t interfere with each other in the way they would if they were all put into a single cell or into a beaker,” says Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT. Boyden is also a member of MIT’s Media Lab and McGovern Institute for Brain Research, and an HHMI-Simons Faculty Scholar.

This approach could allow researchers to design circuits that manufacture complex products or act as sensors that respond to changes in their environment, among other applications.

Boyden is the senior author of a paper describing this technique in the Nov. 14 issue of Nature Chemistry. The paper’s lead authors are former MIT postdoc Kate Adamala, who is now an assistant professor at the University of Minnesota, and former MIT grad student Daniel Martin-Alarcon. Katriona Guthrie-Honea, a former MIT research assistant, is also an author of the paper.

Circuit control

The MIT team encapsulated their genetic circuits in droplets known as liposomes, which have a fatty membrane similar to cell membranes. These synthetic cells are not alive but are equipped with much of the cellular machinery necessary to read DNA and manufacture proteins.

By segregating circuits within their own liposomes, the researchers are able to create separate circuit subroutines that could not run in the same container at the same time, but can run in parallel to each other, communicating in controlled ways. This approach also allows scientists to repurpose the same genetic tools, including genes and transcription factors (proteins that turn genes on or off), to do different tasks within a network.

“If you separate circuits into two different liposomes, you could have one tool doing one job in one liposome, and the same tool doing a different job in the other liposome,” Martin-Alarcon says. “It expands the number of things that you can do with the same building blocks.”

This approach also enables communication between circuits from different types of organisms, such as bacteria and mammals.

As a demonstration, the researchers created a circuit that uses bacterial genetic parts to respond to a molecule known as theophylline, a drug similar to caffeine. When this molecule is present, it triggers another molecule known as doxycycline to leave the liposome and enter another set of liposomes containing a mammalian genetic circuit. In those liposomes, doxycycline activates a genetic cascade that produces luciferase, a protein that generates light.
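The logic of that demonstration is a two-stage cascade: a bacterial sensing circuit in one set of liposomes feeds a mammalian reporter circuit in another. It can be caricatured as a pair of composed functions; this is only a toy sketch of the signal flow, with function names and string-valued "signals" invented for illustration.

```python
def bacterial_sensor(theophylline_present: bool):
    """Liposome 1 (bacterial genetic parts): when the theophylline
    sensor is triggered, doxycycline is released to neighbors."""
    return "doxycycline" if theophylline_present else None

def mammalian_reporter(incoming_signal):
    """Liposome 2 (mammalian genetic parts): doxycycline switches on a
    cascade ending in luciferase, i.e. light output."""
    return "light" if incoming_signal == "doxycycline" else "dark"

def hybrid_circuit(theophylline_present: bool):
    """Compose the two isolated circuits: bacterial sensing drives the
    mammalian reporter, with no cross-talk between the stages."""
    return mammalian_reporter(bacterial_sensor(theophylline_present))

hybrid_circuit(True)   # -> "light"
hybrid_circuit(False)  # -> "dark"
```

The point of the composition is modularity: each stage runs in its own compartment, so either function could be swapped out without rewriting the other.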

Using a modified version of this approach, scientists could create circuits that work together to produce biological therapeutics such as antibodies, after sensing a particular molecule emitted by a brain cell or other cell.

“If you think of the bacterial circuit as encoding a computer program, and the mammalian circuit is encoding the factory, you could combine the computer code of the bacterial circuit and the factory of the mammalian circuit into a unique hybrid system,” Boyden says.

The researchers also designed liposomes that can fuse with each other in a controlled way. To do that, they programmed the cells with proteins called SNAREs, which insert themselves into the cell membrane. There, they bind to corresponding SNAREs found on surfaces of other liposomes, causing the synthetic cells to fuse. The timing of this fusion can be controlled to bring together liposomes that produce different molecules. When the cells fuse, these molecules are combined to generate a final product.

More modularity

The researchers believe this approach could be used for nearly any application that synthetic biologists are already working on. It could also allow scientists to pursue potentially useful applications that have been tried before but abandoned because the genetic circuits interfered with each other too much.

“The way that we wrote this paper was not oriented toward just one application,” Boyden says. “The basic question is: Can you make these circuits more modular? If you have everything mishmashed together in the cell, but you find out that the circuits are incompatible or toxic, then putting walls between those reactions and giving them the ability to communicate with each other could be very useful.”

Vincent Noireaux, an associate professor of physics at the University of Minnesota, described the MIT approach as “a rather novel method to learn how biological systems work.”

“Using cell-free expression has several advantages: Technically the work is reduced to cloning (nowadays fast and easy), we can link information processing to biological function like living cells do, and we work in isolation with no other gene expression occurring in the background,” says Noireaux, who was not involved in the research.

Another possible application for this approach is to help scientists explore how the earliest cells may have evolved billions of years ago. By engineering simple circuits into liposomes, researchers could study how cells might have evolved the ability to sense their environment, respond to stimuli, and reproduce.

“This system can be used to model the behavior and properties of the earliest organisms on Earth, as well as help establish the physical boundaries of Earth-type life for the search of life elsewhere in the solar system and beyond,” Adamala says.

A new player in appetite control

MIT neuroscientists have discovered that brain cells called glial cells play a critical role in controlling appetite and feeding behavior. In a study of mice, the researchers found that activating these cells stimulates overeating, and that when the cells are suppressed, appetite is also suppressed.

The findings could offer scientists a new target for developing drugs against obesity and other appetite-related disorders, the researchers say. The study is also the latest in recent years to implicate glial cells in important brain functions. Until about 10 years ago, glial cells were believed to play more of a supporting role for neurons.

“In the last few years, abnormal glial cell activities have been strongly implicated in neurodegenerative disorders. There is more and more evidence to point to the importance of glial cells in modulating neuronal function and in mediating brain disorders,” says Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience. Feng is also a member of MIT’s McGovern Institute for Brain Research and the Stanley Center for Psychiatric Research at the Broad Institute.

Feng is one of the senior authors of the study, which appears in the Oct. 18 edition of the journal eLife. The other senior author is Weiping Han, head of the Laboratory of Metabolic Medicine at the Singapore Bioimaging Consortium in Singapore. Naiyan Chen, a postdoc at the Singapore Bioimaging Consortium and the McGovern Institute, is the lead author.

Turning on appetite

It has long been known that the hypothalamus, an almond-sized structure located deep within the brain, controls appetite as well as energy expenditure, body temperature, and circadian rhythms including sleep cycles. While performing studies on glial cells in other parts of the brain, Chen noticed that the hypothalamus also appeared to have a lot of glial cell activity.

“I was very curious at that point what glial cells would be doing in the hypothalamus, since glial cells have been shown in other brain areas to have an influence on regulation of neuronal function,” she says.

Within the hypothalamus, scientists have identified two key groups of neurons that regulate appetite, known as AgRP neurons and POMC neurons. AgRP neurons stimulate feeding, while POMC neurons suppress appetite.

Until recently it has been difficult to study the role of glial cells in controlling appetite or any other brain function, because scientists haven’t developed many techniques for silencing or stimulating these cells, as they have for neurons. Glial cells, which make up about half of the cells in the brain, have many supporting roles, including cushioning neurons and helping them form connections with one another.

In this study, the research team used a new technique developed at the University of North Carolina to study a type of glial cell known as an astrocyte. Using this strategy, researchers can engineer specific cells to produce a surface receptor that binds to a chemical compound called clozapine-N-oxide (CNO), a derivative of the drug clozapine. Then, when CNO is given, it activates the glial cells.

The MIT team found that turning on astrocyte activity with just a single dose of CNO had a significant effect on feeding behavior.

“When we gave the compound that specifically activated the receptors, we saw a robust increase in feeding,” Chen says. “Mice are not known to eat very much in the daytime, but when we gave drugs to these animals that express a particular receptor, they were eating a lot.”

The researchers also found that in the short term (three days), the mice did not gain extra weight, even though they were eating more.

“This raises the possibility that glial cells may also be modulating neurons that control energy expenditures, to compensate for the increased food intake,” Chen says. “They might have multiple neuronal partners and modulate multiple energy homeostasis functions all at the same time.”

When the researchers silenced activity in the astrocytes, they found that the mice ate less than normal.

Suzanne Dickson, a professor of neuroendocrinology at the University of Gothenburg in Sweden, described the study as part of a “paradigm shift” toward the idea that glial cells have a less passive role than previously believed.

“We tend to think of glial cells as providing a support network for neuronal processes and that their activation is also important in certain forms of brain trauma or inflammation,” says Dickson, who was not involved in the research. “This study adds to the emerging evidence base that glial cells may also exert specific effects to control nerve cell function in normal physiology.”

Unknown interactions

Still unknown is how the astrocytes exert their effects on neurons. Some recent studies have suggested that glial cells can secrete chemical messengers such as glutamate and ATP; if so, these “gliotransmitters” could influence neuron activity.

Another hypothesis is that instead of secreting chemicals, astrocytes exert their effects by controlling the uptake of neurotransmitters from the space surrounding neurons, thereby affecting neuron activity indirectly.

Feng now plans to develop new research tools that could help scientists learn more about astrocyte-neuron interactions and how astrocytes contribute to modulation of appetite and feeding. He also hopes to learn more about whether there are different types of astrocytes that may contribute differently to feeding behavior, especially abnormal behavior.

“We really know very little about how astrocytes contribute to the modulation of appetite, eating, and metabolism,” he says. “In the future, dissecting out these functional differences will be critical for our understanding of these disorders.”

Newly discovered neural connections may be linked to emotional decision-making

MIT neuroscientists have discovered connections deep within the brain that appear to form a communication pathway between areas that control emotion, decision-making, and movement. The researchers suspect that these connections, which they call striosome-dendron bouquets, may be involved in controlling how the brain makes decisions that are influenced by emotion or anxiety.

This circuit may also be one of the targets of the neural degeneration seen in Parkinson’s disease, says Ann Graybiel, an Institute Professor at MIT, member of the McGovern Institute for Brain Research, and the senior author of the study.

Graybiel and her colleagues were able to find these connections using a technique developed at MIT known as expansion microscopy, which enables scientists to expand brain tissue before imaging it. This produces much higher-resolution images than would otherwise be possible with conventional microscopes.

That technique was developed in the lab of Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences at the MIT Media Lab, who is also an author of this study. Jill Crittenden, a research scientist at the McGovern Institute, is the lead author of the paper, which appears in the Proceedings of the National Academy of Sciences the week of Sept. 19.

Tracing a circuit

In this study, the researchers focused on a small region of the brain known as the striatum, which is part of the basal ganglia — a cluster of brain centers associated with habit formation, control of voluntary movement, emotion, and addiction. Malfunctions of the basal ganglia have been associated with Parkinson’s and Huntington’s diseases, as well as autism, obsessive-compulsive disorder, and Tourette’s syndrome.

Much of the striatum is uncharted territory, but Graybiel’s lab has previously identified clusters of cells there known as striosomes. She also found that these clusters receive very specific input from parts of the brain’s prefrontal cortex involved in processing emotions, and showed that this communication pathway is necessary for making decisions that require an anxiety-provoking cost-benefit analysis, such as choosing whether to take a job that pays more but forces a move away from family and friends.

Her studies also suggested that striosomes relay information to cells within a region called the substantia nigra, one of the brain’s main dopamine-producing centers. Dopamine has many functions in the brain, including roles in initiating movement and regulating mood.

To figure out how these regions might be communicating, Graybiel, Crittenden, and their colleagues used expansion microscopy to image the striosomes and discovered extensive connections between those clusters of cells and dopamine-producing cells of the substantia nigra. The dopamine-producing cells send down many tiny extensions known as dendrites that become entwined with axons that come up to meet them from the striosomes, forming a bouquet-like structure.

“With expansion microscopy, we could finally see direct connections between these cells by unraveling their unusual rope-like bundles of axons and dendrites,” Crittenden says. “What’s really exciting to us is we can see that it’s small discrete clusters of dopamine cells with bundles that are being targeted.”

Hard decisions

This finding expands the known decision-making circuit so that it encompasses the prefrontal cortex, striosomes, and a subset of dopamine-producing cells. Together, the striosomes may be acting as a gatekeeper that absorbs sensory and emotional information coming from the cortex and integrates it to produce a decision on how to react, which is initiated by the dopamine-producing cells, the researchers say.

To explore that possibility, the researchers plan to study mice in which they can selectively activate or shut down the striosome-dendron bouquet as the mice are prompted to make decisions requiring a cost-benefit analysis.

The researchers also plan to investigate whether these connections are disrupted in mouse models of Parkinson’s disease. MRI studies and postmortem analysis of brains of Parkinson’s patients have shown that death of dopamine cells in the substantia nigra is strongly correlated with the disease, but more work is needed to determine if this subset overlaps with the dopamine cells that form the striosome-dendron bouquets.

From cancer to brain research: learning from worms

In Bob Horvitz’s lab, students watch tiny worms as they wriggle under the microscope. Their tracks twist and turn in every direction, and to a casual observer the movements appear random. There is a pattern, however, and the animals’ movements change depending on their environment and recent experiences.

“A hungry worm is different from a well-fed worm,” says Horvitz, David H. Koch Professor of Biology and a McGovern Investigator. “If you consider worm psychology, it seems that the thing in life worms care most about is food.”

Horvitz’s work with the nematode worm Caenorhabditis elegans extends back to the mid-1970s. He was among the first to recognize the value of this microscopic organism as a model species for asking fundamental questions about biology and human disease.

The leap from worm to human might seem great and perilous, but in fact they share many fundamental biological mechanisms, one of which is programmed cell death, also known as apoptosis. Horvitz shared the Nobel Prize in Physiology or Medicine in 2002 for his studies of cell death, which is central to a wide variety of human diseases, including cancer and neurodegenerative disorders. He has continued to study the worm ever since, contributing to many areas of biology but with a particular emphasis on the nervous system and the control of behavior.

In a recently published study, the Horvitz lab has found another fundamental mechanism that likely is shared with mice and humans. The discovery began with an observation by former graduate student Beth Sawin as she watched worms searching for food. When a hungry worm detects a food source, it slows almost to a standstill, allowing it to remain close to the food.
Postdoctoral scientist Nick Paquin analyzed how a mutation in a gene called vps-50 causes worms to slow similarly even when they are well fed. It seemed that these mutant worms were failing to transition normally between the hungry and the well-fed state.

Paquin decided to study the gene further, in worms and also in mouse neurons, the latter in collaboration with Yasunobu Murata, a former research scientist in Martha Constantine-Paton’s lab at the McGovern Institute. The team, later joined by postdoctoral fellow Fernando Bustos in the Constantine-Paton lab, found that the VPS-50 protein controls the activity of synapses, the junctions between nerve cells. VPS-50 is involved in a process that acidifies synaptic vesicles, microscopic bubbles filled with neurotransmitters that are released from nerve terminals, sending signals to other nearby neurons.

If VPS-50 is missing, the vesicles do not mature properly and the signaling from neurons is abnormal. VPS-50 has remained relatively unchanged during evolution, and the mouse version can substitute for the missing worm gene, indicating the worm and mouse proteins are similar not only in sequence but also in function. This might seem surprising given the wide gap between the tiny nervous system of the worm and the complex brains of mammals. But it is not surprising to Horvitz, who has committed about half of his lab resources to studying the worm’s nervous system and behavior.

“Our finding underscores something that I think is crucially important,” he says. “A lot of biology is conserved among organisms that appear superficially very different, which means that the understanding and treatment of human diseases can be advanced by studies of simple organisms like worms.”

Human connections

In addition to its significance for normal synaptic function, the vps-50 gene might be important in autism spectrum disorder. Several autism patients have been described with deletions that include vps-50, and other lines of evidence also suggest a link to autism. “We think this is going to be a very important molecule in mammals,” says Constantine-Paton. “We’re now in a position to look into the function of vps-50 more deeply.”

Horvitz and Constantine-Paton are married, and they had chatted about vps-50 long before her lab began to study it. When it became clear that the mutation was affecting worm neurons in a novel way, it was a natural decision to collaborate and study the gene in mice. They are currently working to understand the role of VPS-50 in mammalian brain function, and to explore further the possible link to autism.

The day the worm turned

A latecomer to biology, Horvitz studied mathematics and economics as an undergraduate at MIT in the mid-1960s. During his last year, he took a few biology classes and then went on to earn a doctoral degree in the field at Harvard University, working in the lab of James Watson (of double helix fame) and Walter Gilbert. In 1974, Horvitz moved to Cambridge, England, where he worked with Sydney Brenner and began his studies of the worm.

“Remarkably, all of my advisors, even my undergraduate advisor in economics here at MIT, Bob Solow, now have Nobel Prizes,” he notes.

The comment is matter-of-fact, and Horvitz is anything but pretentious. He thinks about both big questions and small experimental details and is always on the lookout for links between the worm and human health.

“When someone in the lab finds something new, Bob is quick to ask if it relates to human disease,” says former graduate student Nikhil Bhatla. “We’re not thinking about that. We’re deep in the nitty-gritty, but he’s directing us to potential collaborators who might help us make that link.”

This kind of mentoring, says Horvitz, has been his primary role since he joined the MIT faculty in 1978. He has trained many of the current leaders in the worm field, including Gary Ruvkun and Victor Ambros, who shared the 2008 Lasker Award; Michael Hengartner, now president of the University of Zurich; and Cori Bargmann, who recently won the McGovern Institute’s 2016 Scolnick Prize in Neuroscience.

“If the science we’ve done has been successful, it’s because I’ve been lucky to have outstanding young researchers as colleagues,” Horvitz says.

Before becoming a mentor, Horvitz had to become a scientist himself. At Harvard, he studied bacterial viruses and learned that even the simplest organisms could provide valuable insights about fundamental biological processes.

The move to Brenner’s lab in Cambridge was a natural step. A pioneer in the field of molecular biology, Brenner was also the driving force behind the adoption of C. elegans as a genetic model organism, which he advocated for its simplicity (adults have fewer than 1,000 cells, and only 302 neurons) and short generation time (only three days). Working in Brenner’s lab, Horvitz and his collaborator John Sulston traced the lineage of every body cell from fertilization to adulthood, showing that the sequence of cell divisions was the same in each individual animal. Their landmark study provided a foundation for the entire field. “They know all the cells in the worm. Every single one,” says Constantine-Paton. “So when they make a mutation and something is weird, they can determine precisely which cell or set of cells are affected. We can only dream of having such an understanding of a mammal.”

It is now known that the worm has about 20,000 genes, many of which are conserved in mammals, including humans. In fact, in many cases a cloned human gene can stand in for a missing worm gene, as is the case for vps-50. As a result, the worm has been a powerful discovery machine for human biology. In the early years, though, many doubted whether worms would be relevant. Horvitz persisted undeterred, and in 1992 his conviction paid off, with the discovery of ced-9, a worm gene that regulates programmed cell death. A graduate student in Horvitz’s lab cloned ced-9 and saw that it resembled a human cancer gene called Bcl-2. The researchers also showed that human Bcl-2 could substitute for a mutant ced-9 gene in the worm and concluded that the two genes have similar functions: ced-9 in worms protects healthy cells from death, and Bcl-2 in cancer patients protects cancerous cells from death, allowing them to multiply. “This was the moment we knew that the studies we’d been doing with C. elegans were going to be relevant to understanding human biology and disease,” says Horvitz.

Ten years later, in 2002, he was in the French Alps with Constantine-Paton and their daughter Alex attending a wedding, when they heard the news on the radio: He’d won a Nobel Prize, along with Brenner and Sulston. On the return trip, Alex, then 9 years old but never shy, asked for first-class upgrades at the airport; the agent compromised and gave them all upgrades to business class instead.

Discovery machine at work

Since the Nobel Prize, Horvitz has studied the nervous system using the same strategy that had been so successful in deciphering the mechanism of programmed cell death. His approach, he says, begins with traditional genetics. Researchers expose worms to mutagens and observe their behavior. When they see an interesting change, they identify the mutation and try to link the gene to the nervous system to understand how it affects behavior.

“We make no assumptions,” he says. “We let the animal tell us the answer.”

While Horvitz continues to demonstrate that basic research using simple organisms produces invaluable insights about human biology and health, there are other forces at work in his lab. Horvitz maintains a sense of wonder about life and is undaunted by big questions.

For instance, when Bhatla came to him wanting to look for evidence of consciousness in worms, Horvitz blinked but didn’t say no. The science Bhatla proposed was novel, and the question was intriguing. Bhatla pursued it. But, he says, “It didn’t work.”

So Bhatla went back to the drawing board. During his earlier experiments, he had observed that worms would avoid light, a previously known behavior. But he also noticed that they immediately stopped feeding. The animals had provided a clue. Bhatla went on to discover that worms respond to light by producing hydrogen peroxide, which activates a taste receptor.

In a sense, worms taste light, a wonder of biology no one could have predicted.

Some years ago, the Horvitz lab made T-shirts displaying a quote from the philosopher Friedrich Nietzsche: “You have made your way from worm to man, and much within you is still worm.”
The words have become an informal lab motto, “truer than Nietzsche could ever have imagined,” says Horvitz. “There’s still so much mystery, particularly about the brain, and we are still learning from the worm.”

Study reveals a basis for attention deficits

More than 3 million Americans suffer from attention deficit hyperactivity disorder (ADHD), a condition that usually emerges in childhood and can lead to difficulties at school or work.

A new study from MIT and New York University links ADHD and other attention difficulties to the brain’s thalamic reticular nucleus (TRN), which is responsible for blocking out distracting sensory input. In a study of mice, the researchers discovered that a gene mutation found in some patients with ADHD produces a defect in the TRN that leads to attention impairments.

The findings suggest that drugs boosting TRN activity could improve ADHD symptoms and possibly help treat other disorders that affect attention, including autism.

“Understanding these circuits may help explain the converging mechanisms across these disorders. For autism, schizophrenia, and other neurodevelopmental disorders, it seems like TRN dysfunction may be involved in some patients,” says Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience and a member of MIT’s McGovern Institute for Brain Research and the Stanley Center for Psychiatric Research at the Broad Institute.

Feng and Michael Halassa, an assistant professor of psychiatry, neuroscience, and physiology at New York University, are the senior authors of the study, which appears in the March 23 online edition of Nature. The paper’s lead authors are MIT graduate student Michael Wells and NYU postdoc Ralf Wimmer.

Paying attention

Feng, Halassa, and their colleagues set out to study a gene called Ptchd1, whose loss can produce attention deficits, hyperactivity, intellectual disability, aggression, and autism spectrum disorders. Because the gene is carried on the X chromosome, most individuals with these Ptchd1-related effects are male.

In mice, the researchers found that the part of the brain most affected by the loss of Ptchd1 is the TRN, which is a group of inhibitory nerve cells in the thalamus. It essentially acts as a gatekeeper, preventing unnecessary information from being relayed to the brain’s cortex, where higher cognitive functions such as thought and planning occur.

“We receive all kinds of information from different sensory regions, and it all goes into the thalamus,” Feng says. “All this information has to be filtered. Not everything we sense goes through.”

If this gatekeeper is not functioning properly, too much information gets through, allowing the person to become easily distracted or overwhelmed. This can lead to problems with attention and difficulty in learning.

The researchers found that when the Ptchd1 gene was knocked out in mice, the animals showed many of the same behavioral defects seen in human patients, including aggression, hyperactivity, attention deficit, and motor impairments. When the Ptchd1 gene was knocked out only in the TRN, the mice showed only hyperactivity and attention deficits.

Toward new treatments

At the cellular level, the researchers found that the Ptchd1 mutation disrupts channels that carry potassium ions, which prevents TRN neurons from sufficiently inhibiting thalamic output to the cortex. The researchers were also able to restore the neurons’ normal function with a compound that boosts activity of the potassium channel. This intervention reversed the TRN-related symptoms but not the symptoms that appear to arise from deficits in other circuits.

“The authors convincingly demonstrate that specific behavioral consequences of the Ptchd1 mutation — attention and sleep — arise from an alteration of a specific protein in a specific brain region, the thalamic reticular nucleus. These findings provide a clear and straightforward pathway from gene to behavior and suggest a pathway toward novel treatments for neurodevelopmental disorders such as autism,” says Joshua Gordon, an associate professor of psychiatry at Columbia University, who was not involved in the research.

Most people with ADHD are now treated with psychostimulants such as Ritalin, which are effective in about 70 percent of patients. Feng and Halassa are now working on identifying genes that are specifically expressed in the TRN in hopes of developing drug targets that would modulate TRN activity. Such drugs may also help patients who don’t have the Ptchd1 mutation, because their symptoms are also likely caused by TRN impairments, Feng says.

The researchers are also investigating when Ptchd1-related problems in the TRN arise and at what point they can be reversed. And, they hope to discover how and where in the brain Ptchd1 mutations produce other abnormalities, such as aggression.

The research was funded by the Simons Foundation Autism Research Initiative, the National Institutes of Health, the Poitras Center for Affective Disorders Research, and the Stanley Center for Psychiatric Research at the Broad Institute.

McGovern neuroscientists reverse autism symptoms

Autism has diverse genetic causes, most of which are still unknown. About 1 percent of people with autism are missing a gene called Shank3, which is critical for brain development. Without this gene, individuals develop typical autism symptoms including repetitive behavior and avoidance of social interactions.

In a study of mice, MIT researchers have now shown that they can reverse some of those behavioral symptoms by turning the gene back on later in life, allowing the brain to properly rewire itself.

“This suggests that even in the adult brain we have profound plasticity to some degree,” says Guoping Feng, an MIT professor of brain and cognitive sciences. “There is more and more evidence showing that some of the defects are indeed reversible, giving hope that we can develop treatment for autistic patients in the future.”

Feng, who is the James W. and Patricia Poitras Professor of Neuroscience and a member of MIT’s McGovern Institute for Brain Research and the Stanley Center for Psychiatric Research at the Broad Institute, is the senior author of the study, which appears in the Feb. 17 issue of Nature. The paper’s lead authors are former MIT graduate student Yuan Mei and former Broad Institute visiting graduate student Patricia Monteiro, now at the University of Coimbra in Portugal.

Boosting communication

The Shank3 protein is found in synapses — the connections that allow neurons to communicate with each other. As a scaffold protein, Shank3 helps to organize the hundreds of other proteins that are necessary to coordinate a neuron’s response to incoming signals.

Studying rare cases of defective Shank3 can help scientists gain insight into the neurobiological mechanisms of autism. Missing or defective Shank3 leads to synaptic disruptions that can produce autism-like symptoms in mice, including compulsive behavior, avoidance of social interaction, and anxiety, Feng has previously found. He has also shown that some synapses in these mice, especially in a part of the brain called the striatum, have a greatly reduced density of dendritic spines — small buds on neurons’ surfaces that help with the transmission of synaptic signals.

In the new study, Feng and colleagues genetically engineered mice so that their Shank3 gene was turned off during embryonic development but could be turned back on by adding tamoxifen to the mice’s diet.
When the researchers turned on Shank3 in young adult mice (two to four and a half months after birth), they were able to eliminate the mice’s repetitive behavior and their tendency to avoid social interaction. At the cellular level, the team found that the density of dendritic spines dramatically increased in the striatum of treated mice, demonstrating the structural plasticity in the adult brain.

However, the mice’s anxiety and some motor coordination symptoms did not disappear. Feng suspects that these behaviors probably rely on circuits that were irreversibly formed during early development.
When the researchers turned on Shank3 earlier in life, only 20 days after birth, the mice’s anxiety and motor coordination did improve. The researchers are now working on defining the critical periods for the formation of these circuits, which could help them determine the best time to try to intervene.

“Some circuits are more plastic than others,” Feng says. “Once we understand which circuits control each behavior and understand what exactly changed at the structural level, we can study what leads to these permanent defects, and how we can prevent them from happening.”

Gordon Fishell, a professor of neuroscience at New York University School of Medicine, praises the study’s “elegant approach” and says it represents a major advance in understanding the circuitry and cellular physiology that underlie autism. “The combination of behavior, circuits, physiology, and genetics is state-of-the art,” says Fishell, who was not involved in the research. “Moreover, Dr. Feng’s demonstration that restoration of Shank3 function reverses autism symptoms in adult mice suggests that gene therapy may ultimately prove an effective therapy for this disease.”

Early intervention

For the small population of people with Shank3 mutations, the findings suggest that new genome-editing techniques could in theory be used to repair the defective Shank3 gene and improve these individuals’ symptoms, even later in life. These techniques are not yet ready for use in humans, however.

Feng believes that scientists may also be able to develop more general approaches that would apply to a larger population. For example, if the researchers can identify defective circuits that are specific for certain behavioral abnormalities in some autism patients, and figure out how to modulate those circuits’ activity, that could also help other people who may have defects in the same circuits even though the problem arose from a different genetic mutation.

“That’s why it’s important in the future to identify what subtype of neurons are defective and what genes are expressed in these neurons, so we can use them as a target without affecting the whole brain,” Feng says.

How maternal inflammation might lead to autism-like behavior

In 2010, a large study in Denmark found that women who suffered an infection severe enough to require hospitalization while pregnant were much more likely to have a child with autism (even though the overall risk of delivering a child with autism remained low).

Now research from MIT, the University of Massachusetts Medical School, the University of Colorado, and New York University Langone Medical Center reveals a possible mechanism for how this occurs. In a study of mice, the researchers found that immune cells activated in the mother during severe inflammation produce an immune effector molecule called IL-17 that appears to interfere with brain development.

The researchers also found that blocking this signal could restore normal behavior and brain structure.

“In the mice, we could treat the mother with antibodies that block IL-17 after inflammation had set in, and that could ameliorate some of the behavioral symptoms that were observed in the offspring. However, we don’t know yet how much of that could be translated into humans,” says Gloria Choi, an assistant professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the lead author of the study, which appears in the Jan. 28 online edition of Science.

Finding the link

In the 2010 study, which included all children born in Denmark between 1980 and 2005, severe infections (requiring hospitalization) that correlated with autism risk included influenza, viral gastroenteritis, and urinary tract infections. Severe viral infections during the first trimester translated to a threefold risk for autism, and serious bacterial infections during the second trimester were linked with a 1.5-fold increase in risk.

Choi and her husband, Jun Huh, were graduate students at Caltech when they first heard about this study during a lecture by Caltech professor emeritus Paul Patterson, who had discovered that an immune signaling molecule called IL-6 plays a role in the link between infection and autism-like behaviors in rodents.

Huh, now an assistant professor at the University of Massachusetts Medical School and one of the paper’s senior authors, was studying immune cells called Th17 cells, which are well known for contributing to autoimmune disorders such as multiple sclerosis, inflammatory bowel diseases, and rheumatoid arthritis. He knew that Th17 cells are activated by IL-6, so he wondered whether these cells might also be involved in animal models of autism associated with maternal infection.

“We wanted to find the link,” Choi says. “How do you go all the way from the immune system in the mother to the child’s brain?”

Choi and Huh launched the study as postdocs at Columbia University and New York University School of Medicine, respectively. Working with Dan Littman, a professor of molecular immunology at NYU and one of the paper’s senior authors, they began by injecting pregnant mice with a synthetic analog of double-stranded RNA, which activates the immune system in a similar way to viruses.

Confirming the results of previous studies in mice, the researchers found behavioral abnormalities in the offspring of the infected mothers, including deficits in sociability, repetitive behaviors, and abnormal communication. They then disabled Th17 cells in the mothers before inducing inflammation and found that the offspring mice did not show those behavioral abnormalities. The abnormalities also disappeared when the researchers gave the infected mothers an antibody that blocks IL-17, which is produced by Th17 cells.

The researchers next asked how IL-17 might affect the developing fetus. They found that brain cells in the fetuses of mothers experiencing inflammation express receptors for IL-17, and they believe that exposure to the molecule provokes those cells to produce even more IL-17 receptors, amplifying its effects.

In the developing mice, the researchers found irregularities in the normally well-defined layers of cells in the brain’s cortex, where most cognition and sensory processing take place. These patches of irregular structure appeared in approximately the same cortical regions in all of the affected offspring, but they did not occur when the mothers’ Th17 cells were blocked.

Disorganized cortical layers have also been found in studies of human patients with autism.

Preventing autism

The researchers are now investigating whether and how these cortical patches produce the behavioral abnormalities seen in the offspring.

“We’ve shown correlation between these cortical patches and behavioral abnormalities, but we don’t know whether the cortical patches actually are responsible for the behavioral abnormalities,” Choi says. “And if it is responsible, what is being dysregulated within this patch to produce this behavior?”

The researchers hope their work may lead to a way to reduce the chances of autism developing in the children of women who experience severe infections during pregnancy. They also plan to investigate whether genetic makeup influences mice’s susceptibility to maternal inflammation, because autism is known to have a very strong genetic component.

Charles Hoeffer, a professor of integrative physiology at the University of Colorado, is a senior author of the paper, and other authors include MIT postdoc Yeong Yim, NYU graduate student Helen Wong, UMass Medical School visiting scholars Sangdoo Kim and Hyunju Kim, and NYU postdoc Sangwon Kim.

Study finds altered brain chemistry in people with autism

MIT and Harvard University neuroscientists have found a link between a behavioral symptom of autism and reduced activity of a neurotransmitter whose job is to dampen neuron excitation. The findings suggest that drugs that boost the action of this neurotransmitter, known as GABA, may improve some of the symptoms of autism, the researchers say.

Brain activity is controlled by a constant interplay of inhibition and excitation, which is mediated by different neurotransmitters. GABA is one of the most important inhibitory neurotransmitters, and studies of animals with autism-like symptoms have found reduced GABA activity in the brain. However, until now, there has been no direct evidence for such a link in humans.

“This is the first connection in humans between a neurotransmitter in the brain and an autistic behavioral symptom,” says Caroline Robertson, a postdoc at MIT’s McGovern Institute for Brain Research and a junior fellow of the Harvard Society of Fellows. “It’s possible that increasing GABA would help to ameliorate some of the symptoms of autism, but more work needs to be done.”

Robertson is the lead author of the study, which appears in the Dec. 17 online edition of Current Biology. The paper’s senior author is Nancy Kanwisher, the Walter A. Rosenblith Professor of Brain and Cognitive Sciences and a member of the McGovern Institute. Eva-Maria Ratai, an assistant professor of radiology at Massachusetts General Hospital, also contributed to the research.

Too little inhibition

Many symptoms of autism arise from hypersensitivity to sensory input. For example, children with autism are often very sensitive to things that wouldn’t bother other children as much, such as someone talking elsewhere in the room, or a scratchy sweater. Scientists have speculated that reduced brain inhibition might underlie this hypersensitivity by making it harder to tune out distracting sensations.

In this study, the researchers explored a visual task known as binocular rivalry, which requires brain inhibition and has been shown to be more difficult for people with autism. During the task, researchers show each participant two different images, one to each eye. To see the images, the brain must switch back and forth between input from the right and left eyes.

For the participant, it looks as though the two images are fading in and out, as input from each eye takes its turn inhibiting the input coming in from the other eye.

“Everybody has a different rate at which the brain naturally oscillates between these two images, and that rate is thought to map onto the strength of the inhibitory circuitry between these two populations of cells,” Robertson says.

She found that nonautistic adults switched back and forth between the images nine times per minute, on average, and one of the images fully suppressed the other about 70 percent of the time. However, autistic adults switched back and forth only half as often as nonautistic subjects, and one of the images fully suppressed the other only about 50 percent of the time.

Performance on this task was also linked to patients’ scores on a clinical evaluation of communication and social interaction used to diagnose autism: Worse symptoms correlated with weaker inhibition during the visual task.

The researchers then measured GABA activity using a technique known as magnetic resonance spectroscopy, as autistic and typical subjects performed the binocular rivalry task. In nonautistic participants, higher levels of GABA correlated with a better ability to suppress the nondominant image. But in autistic subjects, there was no relationship between performance and GABA levels. This suggests that GABA is present in the brain but is not performing its usual function in autistic individuals, Robertson says.

“GABA is not reduced in the autistic brain, but the action of this inhibitory pathway is reduced,” she says. “The next step is figuring out which part of the pathway is disrupted.”

“This is a really great piece of work,” says Richard Edden, an associate professor of radiology at the Johns Hopkins University School of Medicine. “The role of inhibitory dysfunction in autism is strongly debated, with different camps arguing for elevated and reduced inhibition. This kind of study, which seeks to relate measures of inhibition directly to quantitative measures of function, is what we really need to tease things out.”

Early diagnosis

In addition to offering a possible new drug target, the new finding may also help researchers develop better diagnostic tools for autism, which is now diagnosed by evaluating children’s social interactions. To that end, Robertson is investigating the possibility of using EEG scans to measure brain responses during the binocular rivalry task.

“If autism does trace back on some level to circuitry differences that affect the visual cortex, you can measure those things in a kid who’s even nonverbal, as long as he can see,” she says. “We’d like it to move toward being useful for early diagnostic screenings.”

Singing in the brain

Male zebra finches, small songbirds native to central Australia, learn their songs by copying what they hear from their fathers. These songs, often used as mating calls, develop early in life as juvenile birds experiment with mimicking the sounds they hear.

MIT neuroscientists have now uncovered the brain activity that supports this learning process. Sequences of neural activity that encode the birds’ first song syllable are duplicated and altered slightly, allowing the birds to produce several variations on the original syllable. Eventually these syllables are strung together into the bird’s signature song, which remains constant for life.

“The advantage here is that in order to learn new syllables, you don’t have to learn them from scratch. You can reuse what you’ve learned and modify it slightly. We think it’s an efficient way to learn various types of syllables,” says Tatsuo Okubo, a former MIT graduate student and lead author of the study, which appears in the Nov. 30 online edition of Nature.

Okubo and his colleagues believe that this type of neural sequence duplication may also underlie other types of motor learning. For example, the sequence used to swing a tennis racket might be repurposed for a similar motion such as playing Ping-Pong. “This seems like a way that sequences might be learned and reused for anything that involves timing,” says Emily Mackevicius, an MIT graduate student who is also an author of the paper.

The paper’s senior author is Michale Fee, a professor of brain and cognitive sciences at MIT and a member of the McGovern Institute for Brain Research.

Bursting into song

Previous studies from Fee’s lab have found that a part of the brain’s cortex known as the HVC is critical for song production.

Typically, each song lasts for about one second and consists of multiple syllables. Fee’s lab has found that in adult birds, individual HVC neurons show a very brief burst of activity — about 10 milliseconds or less — at one moment during the song. Different sets of neurons are active at different times, and collectively the song is represented by this sequence of bursts.

In the new Nature study, the researchers wanted to figure out how those neural patterns develop in newly hatched zebra finches. To do that, they recorded electrical activity in HVC neurons for up to three months after the birds hatched.

When zebra finches begin to sing, about 30 days after hatching, they produce only nonsense syllables known as subsong, similar to the babble of human babies. At first, the duration of these syllables is highly variable, but after a week or so they turn into more consistent sounds called protosyllables, which last about 100 milliseconds. Each bird learns one protosyllable that forms a scaffold for subsequent syllables.

The researchers found that within the HVC, neurons fire in a sequence of short bursts corresponding to the first protosyllable that each bird learns. Most of the neurons in the HVC participate in this original sequence, but as time goes by, some of these neurons are extracted from the original sequence and produce a new, very similar sequence. This chain of neural sequences can be repurposed to produce different syllables.

“From that short sequence it splits into new sequences for the next new syllables,” Mackevicius says. “It starts with that short chain that has a lot of redundancy in it, and splits off some neurons for syllable A and some neurons for syllable B.”

This splitting of neural sequences happens repeatedly until the birds can produce between three and seven different syllables, the researchers found. This entire process takes about two months, at which point each bird has settled on its final song.

Evolution by duplication

The researchers note that this process is similar to what is believed to drive the production of new genes and traits during evolution.

“If you duplicate a gene, then you could have separate mutations in both copies of the gene and they could eventually do different functions,” Okubo says. “It’s similar with motor programs. You can duplicate the sequence and then independently modify the two daughter motor programs so that they can now each do slightly different things.”

Mackevicius is now studying how input from sound-processing parts of the brain to the HVC contributes to the formation of these neural sequences.