Distinctive brain pattern helps habits form

Our daily lives include hundreds of routine habits. Brushing our teeth, driving to work, or putting away the dishes are just a few of the tasks that our brains have automated to the point that we hardly need to think about them.

Although we may think of each of these routines as a single task, they are usually made up of many smaller actions, such as picking up our toothbrush, squeezing toothpaste onto it, and then lifting the brush to our mouth. This process of grouping behaviors together into a single routine is known as “chunking,” but little is known about how the brain groups these behaviors together.

MIT neuroscientists have now found that certain neurons in the brain are responsible for marking the beginning and end of these chunked units of behavior. These neurons, located in a brain region highly involved in habit formation, fire at the outset of a learned routine, go quiet while it is carried out, then fire again once the routine has ended.

This task-bracketing appears to be important for initiating a routine and then notifying the brain once it is complete, says Ann Graybiel, an Institute Professor at MIT, a member of the McGovern Institute for Brain Research, and the senior author of the study.

Nuné Martiros, a recent MIT PhD recipient who is now a postdoc at Harvard University, is the lead author of the paper, which appears in the Feb. 8 issue of Current Biology. Alexandra Burgess, a recent MIT graduate and technical associate at the McGovern Institute, is also an author of the paper.

Routine activation

Graybiel has previously shown that a part of the brain called the striatum, which is found in the basal ganglia, plays a major role in habit formation. Several years ago, she and her group found that neuron firing patterns in the striatum change as animals learn a new habit, such as turning to the right or left in a maze upon hearing a certain tone.

When the animal is just starting to learn the maze, these neurons fire continuously throughout the task. However, as the animal becomes better at making the correct turn to receive a reward, the firing becomes clustered at the very beginning of the task and at the very end. Once these patterns form, it becomes extremely difficult to break the habit.
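
To make the idea concrete, here is a toy sketch of how such a pattern might be quantified from spike data (a hypothetical illustration with made-up numbers, not the analysis used in the study):

```python
import numpy as np

def bracketing_index(rates, edge_frac=0.2):
    """Compare firing at the start and end of a trial with firing in the middle.
    Values near +1 indicate activity concentrated at the edges (a bracketing
    pattern); values near -1 indicate activity concentrated mid-trial."""
    n = len(rates)
    k = max(1, int(edge_frac * n))
    edges = np.r_[rates[:k], rates[-k:]].mean()
    middle = rates[k:n - k].mean()
    return (edges - middle) / (edges + middle + 1e-12)

# Hypothetical firing rates across ten time bins of one maze run:
early_learning = np.array([5, 5, 5, 5, 5, 5, 5, 5, 5, 5], dtype=float)
well_learned = np.array([9, 8, 1, 1, 1, 1, 1, 1, 8, 9], dtype=float)
print(bracketing_index(early_learning))  # ~0: firing spread across the task
print(bracketing_index(well_learned))    # strongly positive: bracketing pattern
```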

However, these previous studies did not rule out other explanations for the pattern, including the possibility that it might be related to the motor commands required for the maze-running behavior. In the new study, Martiros and Graybiel set out to determine whether this firing pattern could be conclusively linked with the chunking of habitual behavior.

The researchers trained rats to press two levers in a particular sequence, for example, 1-2-2 or 2-1-2. The rats had to figure out what the correct sequence was, and if they did, they received a chocolate milk reward. It took several weeks for them to learn the task, and as they became more accurate, the researchers saw the same beginning-and-end firing patterns develop in the striatum that they had seen in their previous habit studies.

Because each rat learned a different sequence, the researchers could rule out the possibility that the patterns correspond to the motor commands required to perform a particular series of movements. This offers strong evidence that the firing pattern corresponds specifically to the initiation and termination of a learned routine, the researchers say.

“I think this more or less proves that the development of bracketing patterns serves to package up a behavior that the brain — and the animals — consider valuable and worth keeping in their repertoire. It really is a high-level signal that helps to release that habit, and we think the end signal says the routine has been done,” Graybiel says.

Distinctive patterns

The researchers also discovered a distinct pattern in a set of inhibitory neurons in the striatum. Activity in these neurons, known as interneurons, displayed a strong inverse relationship with the activity of the excitatory neurons that produce the bracketing pattern.

“The interneurons were activated during the time when the rats were in the middle of performing the learned sequence, and could possibly be preventing the principal neurons from initiating another routine until the current one was finished. The discovery of this opposite activity by the interneurons also gets us one step closer to understanding how brain circuits can actually produce this pattern of activity,” Martiros says.

Graybiel’s lab is now investigating further how the interaction between these two groups of neurons helps to encode habitual behavior in the striatum.

The research was funded by the National Institutes of Health/National Institute of Mental Health, the Office of Naval Research, and a McGovern Institute Mark Gorenberg Fellowship.

Ultrathin needle can deliver drugs directly to the brain

MIT researchers have devised a miniaturized system that can deliver tiny quantities of medicine to brain regions as small as 1 cubic millimeter. This type of targeted dosing could make it possible to treat diseases that affect very specific brain circuits, without interfering with the normal function of the rest of the brain, the researchers say.

Using this device, which consists of several tubes contained within a needle about as thin as a human hair, the researchers can deliver one or more drugs deep within the brain, with very precise control over how much drug is given and where it goes. In a study of rats, they found that they could deliver targeted doses of a drug that affects the animals’ motor function.

“We can infuse very small amounts of multiple drugs compared to what we can do intravenously or orally, and also manipulate behavioral changes through drug infusion,” says Canan Dagdeviren, the LG Electronics Career Development Assistant Professor of Media Arts and Sciences and the lead author of the paper, which appears in the Jan. 24 issue of Science Translational Medicine.

“We believe this tiny microfabricated device could have tremendous impact in understanding brain diseases, as well as providing new ways of delivering biopharmaceuticals and performing biosensing in the brain,” says Robert Langer, the David H. Koch Institute Professor at MIT and one of the paper’s senior authors.

Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research, is also a senior author of the paper.

Targeted action

Drugs used to treat brain disorders often interact with brain chemicals called neurotransmitters or the cell receptors that interact with neurotransmitters. Examples include l-dopa, a dopamine precursor used to treat Parkinson’s disease, and Prozac, used to boost serotonin levels in patients with depression. However, these drugs can have side effects because they act throughout the brain.

“One of the problems with central nervous system drugs is that they’re not specific, and if you’re taking them orally they go everywhere. The only way we can limit the exposure is to just deliver to a cubic millimeter of the brain, and in order to do that, you have to have extremely small cannulas,” Cima says.

The MIT team set out to develop a miniaturized cannula (a thin tube used to deliver medicine) that could target very small areas. Using microfabrication techniques, the researchers constructed tubes with diameters of about 30 micrometers and lengths up to 10 centimeters. These tubes are contained within a stainless steel needle with a diameter of about 150 microns. “The device is very stable and robust, and you can place it anywhere that you are interested,” Dagdeviren says.

The researchers connected the cannulas to small pumps that can be implanted under the skin. Using these pumps, the researchers showed that they could deliver tiny doses (hundreds of nanoliters) into the brains of rats. In one experiment, they delivered a drug called muscimol to a brain region called the substantia nigra, which is located deep within the brain and helps to control movement.

Previous studies have shown that muscimol induces symptoms similar to those seen in Parkinson’s disease. The researchers were able to generate those effects, which include stimulating the rats to continually turn in a clockwise direction, using their miniaturized delivery needle. They also showed that they could halt the Parkinsonian behavior by delivering a dose of saline through a different channel, to wash the drug away.

“Since the device can be customizable, in the future we can have different channels for different chemicals, or for light, to target tumors or neurological disorders such as Parkinson’s disease or Alzheimer’s,” Dagdeviren says.

This device could also make it easier to deliver potential new treatments for behavioral neurological disorders such as addiction or obsessive compulsive disorder, which may be caused by specific disruptions in how different parts of the brain communicate with each other.

“Even if scientists and clinicians can identify a therapeutic molecule to treat neural disorders, there remains the formidable problem of how to deliver the therapy to the right cells — those most affected in the disorder. Because the brain is so structurally complex, new accurate ways to deliver drugs or related therapeutic agents locally are urgently needed,” says Ann Graybiel, an MIT Institute Professor and a member of MIT’s McGovern Institute for Brain Research, who is also an author of the paper.

Measuring drug response

The researchers also showed that they could incorporate an electrode into the tip of the cannula, which can be used to monitor how neurons’ electrical activity changes after drug treatment. They are now working on adapting the device so it can also be used to measure chemical or mechanical changes that occur in the brain following drug treatment.

The cannulas can be fabricated in nearly any length or thickness, making it possible to adapt them for use in brains of different sizes, including the human brain, the researchers say.

“This study provides proof-of-concept experiments, in large animal models, that a small, miniaturized device can be safely implanted in the brain and provide miniaturized control of the electrical activity and function of single neurons or small groups of neurons. The impact of this could be significant in focal diseases of the brain, such as Parkinson’s disease,” says Antonio Chiocca, neurosurgeon-in-chief and chairman of the Department of Neurosurgery at Brigham and Women’s Hospital, who was not involved in the research.

The research was funded by the National Institutes of Health and the National Institute of Biomedical Imaging and Bioengineering.

How the brain keeps time

Timing is critical for playing a musical instrument, swinging a baseball bat, and many other activities. Neuroscientists have come up with several models of how the brain achieves its exquisite control over timing, the most prominent being that there is a centralized clock, or pacemaker, somewhere in the brain that keeps time for the entire brain.

However, a new study from MIT researchers provides evidence for an alternative timekeeping system that relies on the neurons responsible for producing a specific action. Depending on the time interval required, these neurons compress or stretch out the steps they take to generate the behavior at a specific time.

“What we found is that it’s a very active process. The brain is not passively waiting for a clock to reach a particular point,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

MIT postdoc Jing Wang and former postdoc Devika Narain are the lead authors of the paper, which appears in the Dec. 4 issue of Nature Neuroscience. Graduate student Eghbal Hosseini is also an author of the paper.

Flexible control

One of the earliest models of timing control, known as the clock accumulator model, suggested that the brain has an internal clock or pacemaker that keeps time for the rest of the brain. A later variation of this model suggested that instead of using a central pacemaker, the brain measures time by tracking the synchronization between different brain wave frequencies.
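
Here is a minimal sketch of the pacemaker-accumulator idea described above (a textbook abstraction with made-up numbers, not code from any study): ticks from a fixed-rate internal clock are counted until the count reaches a threshold set for the target interval.

```python
import random

def accumulator_interval(threshold_ticks, tick_hz=100.0, jitter=0.05):
    """Pacemaker-accumulator toy model: count noisy clock ticks until a
    threshold is crossed; the elapsed time is the produced interval."""
    elapsed = 0.0
    for _ in range(threshold_ticks):
        elapsed += (1.0 / tick_hz) * (1.0 + random.gauss(0.0, jitter))
    return elapsed

# A threshold of 85 ticks on a 100 Hz clock yields roughly an 850 ms interval.
print(round(accumulator_interval(85), 3))
```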

Although these clock models are intuitively appealing, Jazayeri says, “they don’t match well with what the brain does.”

No one has found evidence for a centralized clock, and Jazayeri and others wondered if parts of the brain that control behaviors that require precise timing might perform the timing function themselves. “People now question why would the brain want to spend the time and energy to generate a clock when it’s not always needed. For certain behaviors you need to do timing, so perhaps the parts of the brain that subserve these functions can also do timing,” he says.

To explore this possibility, the researchers recorded neuron activity from three brain regions in animals as they performed a task at two different time intervals — 850 milliseconds or 1,500 milliseconds.

The researchers found a complicated pattern of neural activity during these intervals. Some neurons fired faster, some fired slower, and some that had been oscillating began to oscillate faster or slower. However, the researchers’ key discovery was that no matter the neurons’ response, the rate at which they adjusted their activity depended on the time interval required.

At any point in time, a collection of neurons is in a particular “neural state,” which changes over time as each individual neuron alters its activity in a different way. To execute a particular behavior, the entire system must reach a defined end state. The researchers found that the neurons always traveled the same trajectory from their initial state to this end state, no matter the interval. The only thing that changed was the rate at which the neurons traveled this trajectory.

When the interval required was longer, this trajectory was “stretched,” meaning the neurons took more time to evolve to the final state. When the interval was shorter, the trajectory was compressed.

“What we found is that the brain doesn’t change the trajectory when the interval changes, it just changes the speed with which it goes from the initial internal state to the final state,” Jazayeri says.
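
A toy sketch of that idea (purely illustrative, not the paper’s analysis): the same path through neural state space is traversed at a speed set by the target interval, so longer intervals stretch the trajectory in time and shorter intervals compress it.

```python
import numpy as np

def trajectory(phase):
    """A fixed path through a two-dimensional 'neural state space',
    parameterized by a phase running from 0 (initial state) to 1 (end state)."""
    return np.c_[np.sin(np.pi * phase), 1.0 - np.cos(np.pi * phase)]

def run_interval(duration_ms, dt_ms=10.0):
    """Traverse the identical trajectory, rescaled in time to fit the interval."""
    t = np.arange(0.0, duration_ms + dt_ms, dt_ms)
    return trajectory(t / duration_ms)      # traversal speed scales as 1/duration

short = run_interval(850)    # compressed: the end state is reached sooner
long_ = run_interval(1500)   # stretched: same path, traversed more slowly
print(short[-1], long_[-1])  # both runs end in the same final state
```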

Dean Buonomano, a professor of behavioral neuroscience at the University of California at Los Angeles, says that the study “provides beautiful evidence that timing is a distributed process in the brain — that is, there is no single master clock.”

“This work also supports the notion that the brain does not tell time using a clock-like mechanism, but rather relies on the dynamics inherent to neural circuits, and that as these dynamics increase and decrease in speed, animals move more quickly or slowly,” adds Buonomano, who was not involved in the research.

Neural networks

The researchers focused their study on a brain loop that connects three regions: the dorsomedial frontal cortex, the caudate, and the thalamus. They found this distinctive neural pattern in the dorsomedial frontal cortex, which is involved in many cognitive processes, and the caudate, which is involved in motor control, inhibition, and some types of learning. However, in the thalamus, which relays motor and sensory signals, they found a different pattern: Instead of altering the speed of their trajectory, many of the neurons simply increased or decreased their firing rate, depending on the interval required.

Jazayeri says this finding is consistent with the possibility that the thalamus is instructing the cortex on how to adjust its activity to generate a certain interval.

The researchers also created a computer model to help them further understand this phenomenon. They began with a model of hundreds of neurons connected together in random ways, and then trained it to perform the same interval-producing task they had used to train animals, offering no guidance on how the model should perform the task.

They found that these neural networks ended up using the same strategy that they observed in the animal brain data. A key discovery was that this strategy only works if some of the neurons have nonlinear activity — that is, their output does not grow in direct proportion to their input. Instead, as they receive more input, their output increases at a progressively slower rate.
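
For illustration, here is a saturating input-output function of the kind described (a generic example, not the specific nonlinearity used in the model):

```python
import numpy as np

def saturating_rate(inp, max_rate=100.0):
    """Firing rate grows with input but levels off: each extra unit of input
    adds less output than the last."""
    return max_rate * np.tanh(inp / max_rate)

for drive in (10, 50, 100, 200):
    print(drive, round(float(saturating_rate(drive)), 1))
# Doubling the input from 100 to 200 increases the output by far less than double.
```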

Jazayeri now hopes to explore further how the brain generates the neural patterns seen during varying time intervals, and also how our expectations influence our ability to produce different intervals.

The research was funded by the Rubicon Grant from the Netherlands Scientific Organization, the National Institutes of Health, the Sloan Foundation, the Klingenstein Foundation, the Simons Foundation, the Center for Sensorimotor Neural Engineering, and the McGovern Institute.

Stress can lead to risky decisions

Making decisions is not always easy, especially when choosing between two options that have both positive and negative elements, such as deciding between a job with a high salary but long hours, and a lower-paying job that allows for more leisure time.

MIT neuroscientists have now discovered that making decisions in this type of situation, known as a cost-benefit conflict, is dramatically affected by chronic stress. In a study of mice, they found that stressed animals were far likelier to choose high-risk, high-payoff options.

The researchers also found that impairments of a specific brain circuit underlie this abnormal decision making, and they showed that they could restore normal behavior by manipulating this circuit. If a method for tuning this circuit in humans were developed, it could help patients with disorders such as depression, addiction, and anxiety, which often feature poor decision-making.

“One exciting thing is that by doing this very basic science, we found a microcircuit of neurons in the striatum that we could manipulate to reverse the effects of stress on this type of decision making. This to us is extremely promising, but we are aware that so far these experiments are in rats and mice,” says Ann Graybiel, an Institute Professor at MIT and member of the McGovern Institute for Brain Research.

Graybiel is the senior author of the paper, which appears in Cell on Nov. 16. The paper’s lead author is Alexander Friedman, a McGovern Institute research scientist.

Hard decisions

In 2015, Graybiel, Friedman, and their colleagues first identified the brain circuit involved in decision making that involves cost-benefit conflict. The circuit begins in the medial prefrontal cortex, which is responsible for mood control, and extends into clusters of neurons called striosomes, which are located in the striatum, a region associated with habit formation, motivation, and reward reinforcement.

In that study, the researchers trained rodents to run a maze in which they had to choose between two options: one paired highly concentrated chocolate milk, which they like, with bright light, which they don’t, while the other offered weaker chocolate milk under dimmer light. By inhibiting the connection between cortical neurons and striosomes, using a technique known as optogenetics, they found that they could transform the rodents’ preference for lower-risk, lower-payoff choices into a preference for bigger payoffs despite their bigger costs.

In the new study, the researchers performed a similar experiment without optogenetic manipulations. Instead, they exposed the rodents to a short period of stress every day for two weeks.

Before experiencing stress, normal rats and mice would choose to run toward the maze arm with dimmer light and weaker chocolate milk about half the time. The researchers gradually increased the concentration of chocolate milk found in the dimmer side, and as they did so, the animals began choosing that side more frequently.

However, when chronically stressed rats and mice were put in the same situation, they continued to choose the bright light/better chocolate milk side even as the chocolate milk concentration greatly increased on the dimmer side. This was the same behavior the researchers saw in rodents that had the prefrontal cortex-striosome circuit disrupted optogenetically.

“The result is that the animal ignores the high cost and chooses the high reward,” Friedman says.

The findings help to explain how stress contributes to substance abuse and may worsen mental disorders, says Amy Arnsten, a professor of neuroscience and psychology at the Yale University School of Medicine, who was not involved in the research.

“Stress is ubiquitous, for both humans and animals, and its effects on brain and behavior are of central importance to the understanding of both normal function and neuropsychiatric disease. It is both pernicious and ironic that chronic stress can lead to impulsive action; in many clinical cases, such as drug addiction, impulsivity is likely to worsen patterns of behavior that produce the stress in the first place, inducing a vicious cycle,” Arnsten wrote in a commentary accompanying the Cell paper, co-authored by Daeyeol Lee and Christopher Pittenger of the Yale University School of Medicine.

Circuit dynamics

The researchers believe that this circuit integrates information about the good and bad aspects of possible choices, helping the brain to produce a decision. Normally, when the circuit is turned on, neurons of the prefrontal cortex activate certain neurons called high-firing interneurons, which then suppress striosome activity.

When the animals are stressed, these circuit dynamics shift and the cortical neurons fire too late to inhibit the striosomes, which then become overexcited. This results in abnormal decision making.
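
A toy timing model of that shift (illustrative numbers only, not the study’s model): if the cortical signal arrives early enough, the fast-spiking interneurons suppress the striosomes; if it arrives too late, the striosomes fire unchecked.

```python
def striosome_state(cortical_delay_ms, suppression_window_ms=10.0):
    """Toy model: cortical input recruits the high-firing interneurons, which
    suppress the striosomes only if the input arrives within the window."""
    return "suppressed" if cortical_delay_ms <= suppression_window_ms else "overexcited"

print(striosome_state(cortical_delay_ms=5.0))   # normal: interneurons act in time
print(striosome_state(cortical_delay_ms=25.0))  # after chronic stress: input arrives too late
```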

“Somehow this prior exposure to chronic stress controls the integration of good and bad,” Graybiel says. “It’s as though the animals had lost their ability to balance excitation and inhibition in order to settle on reasonable behavior.”

Once this shift occurs, it remains in effect for months, the researchers found. However, they were able to restore normal decision making in the stressed mice by using optogenetics to stimulate the high-firing interneurons, thereby suppressing the striosomes. This suggests that the prefronto-striosome circuit remains intact following chronic stress and could potentially be susceptible to manipulations that would restore normal behavior in human patients whose disorders lead to abnormal decision making.

“This state change could be reversible, and it’s possible in the future that you could target these interneurons and restore the excitation-inhibition balance,” Friedman says.

The research was funded by the National Institutes of Health/National Institute of Mental Health, the CHDI Foundation, the Defense Advanced Research Projects Agency and the U.S. Army Research Office, the Bachmann-Strauss Dystonia and Parkinson Foundation, the William N. and Bernice E. Bumpus Foundation, Michael Stiefel, the Saks Kavanaugh Foundation, and John Wasserlein and Lucille Braun.

Next-generation optogenetic molecules control single neurons

Researchers at MIT and Paris Descartes University have developed a new optogenetic technique that sculpts light to target individual cells bearing engineered light-sensitive molecules, so that individual neurons can be precisely stimulated.

Until now, it has been challenging to use optogenetics to target single cells with such precise control over both the timing and location of the activation. This new advance paves the way for studies of how individual cells, and connections among those cells, generate specific behaviors such as initiating a movement or learning a new skill.

“Ideally what you would like to do is play the brain like a piano. You would want to control neurons independently, rather than having them all march in lockstep the way traditional optogenetics works, but which normally the brain doesn’t do,” says Ed Boyden, an associate professor of brain and cognitive sciences and biological engineering at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research.

The new technique relies on a new type of light-sensitive protein that can be embedded in neuron cell bodies, combined with holographic light-shaping that can focus light on a single cell.

Boyden and Valentina Emiliani, a research director at France’s National Center for Scientific Research (CNRS) and director of the Neurophotonics Laboratory at Paris Descartes University, are the senior authors of the study, which appears in the Nov. 13 issue of Nature Neuroscience. The lead authors are MIT postdoc Or Shemesh and CNRS postdocs Dimitrii Tanese and Valeria Zampini.

Precise control

More than 10 years ago, Boyden and his collaborators first pioneered the use of light-sensitive proteins known as microbial opsins to manipulate neuron electrical activity. These opsins can be embedded into the membranes of neurons, and when they are exposed to certain wavelengths of light, they silence or stimulate the cells.

Over the past decade, scientists have used this technique to study how populations of neurons behave during brain tasks such as memory recall or habit formation. Traditionally, many cells are targeted simultaneously because the light shining into the brain strikes a relatively large area. However, as Boyden points out, neurons may have different functions even when they are near each other.

“Two adjacent cells can have completely different neural codes. They can do completely different things, respond to different stimuli, and play different activity patterns during different tasks,” he says.

To achieve independent control of single cells, the researchers combined two new advances: a localized, more powerful opsin and an optimized holographic light-shaping microscope.

For the opsin, the researchers used a protein called CoChR, which the Boyden lab discovered in 2014. They chose this molecule because it generates a very strong electric current in response to light (about 10 times stronger than that produced by channelrhodopsin-2, the first protein used for optogenetics).

They fused CoChR to a small protein that directs the opsin into the cell bodies of neurons and away from axons and dendrites, which extend from the neuron body. This helps to prevent crosstalk between neurons, since light that activates one neuron can also strike axons and dendrites of other neurons that intertwine with the target neuron.

Boyden then worked with Emiliani to combine this approach with a light-stimulation technique that she had previously developed, known as two-photon computer-generated holography (CGH). This can be used to create three-dimensional sculptures of light that envelop a target cell.

Traditional holography is based on reproducing, with light, the shape of a specific object in the absence of that original object. This is achieved by creating an “interferogram” that contains the information needed to reconstruct an object that was previously illuminated by a reference beam. In computer-generated holography, the interferogram is calculated by a computer, with no need for an original object. Years ago, Emiliani’s research group demonstrated that, combined with two-photon excitation, CGH can be used to refocus laser light to precisely illuminate a cell or a defined group of cells in the brain.
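
One common way to compute such a hologram is iterative phase retrieval; the sketch below uses the generic Gerchberg-Saxton algorithm with a made-up target and is meant only as an illustration, not as the specific method used in this work.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, n_iter=50, seed=0):
    """Compute a phase-only hologram whose far-field intensity approximates
    a target pattern (generic Gerchberg-Saxton iteration)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amplitude.shape)
    for _ in range(n_iter):
        far = np.fft.fft2(np.exp(1j * phase))                 # propagate to the focal plane
        far = target_amplitude * np.exp(1j * np.angle(far))   # impose the desired amplitude there
        near = np.fft.ifft2(far)                              # propagate back to the hologram plane
        phase = np.angle(near)                                # keep only the phase (phase-only device)
    return phase

# Hypothetical target: a single bright spot standing in for one neuron's cell body.
target = np.zeros((64, 64))
target[20, 40] = 1.0
hologram_phase = gerchberg_saxton(target)
```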

In the new study, by combining this approach with new opsins that cluster in the cell body, the researchers showed they could stimulate individual neurons with not only precise spatial control but also great control over the timing of the stimulation. When they target a specific neuron, it responds consistently every time, with variability that is less than one millisecond, even when the cell is stimulated many times in a row.

“For the first time ever, we can bring the precision of single-cell control toward the natural timescales of neural computation,” Boyden says.

Mapping connections

Using this technique, the researchers were able to stimulate single neurons in brain slices and then measure the responses from cells that are connected to that cell. This paves the way for possible diagramming of the connections of the brain, and analyzing how those connections change in real time as the brain performs a task or learns a new skill.

One possible experiment, Boyden says, would be to stimulate neurons connected to each other to try to figure out if one is controlling the others or if they are all receiving input from a far-off controller.

“It’s an open question,” he says. “Is a given function being driven from afar, or is there a local circuit that governs the dynamics and spells out the exact chain of command within a circuit? If you can catch that chain of command in action and then use this technology to prove that that’s actually a causal link of events, that could help you explain how a sensation, or movement, or decision occurs.”

As a step toward that type of study, the researchers now plan to extend this approach into living animals. They are also working on improving their targeting molecules and developing high-current opsins that can silence neuron activity.

Kirill Volynski, a professor at the Institute of Neurology at University College London, who was not involved in the research, plans to use the new technology in his studies of diseases caused by mutations of proteins involved in synaptic communication between neurons.

“This gives us a very nice tool to study those mutations and those disorders,” Volynski says. “We expect this to enable a major improvement in the specificity of stimulating neurons that have mutated synaptic proteins.”

The research was funded by the National Institutes of Health, France’s National Research Agency, the Simons Foundation for the Social Brain, the Human Frontiers Science Program, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, and the Defense Advanced Research Projects Agency.

Studies help explain link between autism, severe infection during pregnancy

Mothers who experience an infection severe enough to require hospitalization during pregnancy are at higher risk of having a child with autism. Two new studies from MIT and the University of Massachusetts Medical School shed more light on this phenomenon and identify possible approaches to preventing it.

In research on mice, the researchers found that the composition of bacterial populations in the mother’s digestive tract can influence whether maternal infection leads to autistic-like behaviors in offspring. They also discovered the specific brain changes that produce these behaviors.

“We identified a very discrete brain region that seems to be modulating all the behaviors associated with this particular model of neurodevelopmental disorder,” says Gloria Choi, the Samuel A. Goldblith Career Development Assistant Professor of Brain and Cognitive Sciences and a member of MIT’s McGovern Institute for Brain Research.

If further validated in human studies, the findings could offer a possible way to reduce the risk of autism, which would involve blocking the function of certain strains of bacteria found in the maternal gut, the researchers say.

Choi and Jun Huh, formerly an assistant professor at UMass Medical School who is now a faculty member at Harvard Medical School, are the senior authors of both papers, which appear in Nature on Sept. 13. MIT postdoc Yeong Shin Yim is the first author of one paper, and UMass Medical School visiting scholars Sangdoo Kim and Hyunju Kim are the lead authors of the other.

Reversing symptoms

A 2010 study that included all children born in Denmark between 1980 and 2005 found that severe viral infections during the first trimester of pregnancy were associated with a threefold increase in autism risk, and serious bacterial infections during the second trimester with a 1.42-fold increase. These infections included influenza, viral gastroenteritis, and severe urinary tract infections.

Similar effects have been described in mouse models of maternal inflammation, and in a 2016 Science paper, Choi and Huh found that a type of immune cells known as Th17 cells, and their effector molecule, called IL-17, are responsible for this effect in mice. IL-17 then interacts with receptors found on brain cells in the developing fetus, leading to irregularities that the researchers call “patches” in certain parts of the cortex.

In one of the new papers, the researchers set out to learn more about these patches and to determine if they were responsible for the behavioral abnormalities seen in those mice, which include repetitive behavior and impaired sociability.

The researchers found that the patches are most common in a part of the brain known as S1DZ. Part of the somatosensory cortex, this region is believed to be responsible for proprioception, or sensing where the body is in space. In these patches, populations of cells called interneurons, which express a protein called parvalbumin, are reduced. Interneurons are responsible for controlling the balance of excitation and inhibition in the brain, and the researchers found that the changes they observed in the cortical patches were associated with overexcitement in S1DZ.

When the researchers restored normal levels of brain activity in this area, they were able to reverse the behavioral abnormalities. They were also able to induce the behaviors in otherwise normal mice by overstimulating neurons in S1DZ.

The researchers also discovered that S1DZ sends messages to two other brain regions: the temporal association area of the cortex and the striatum. When the researchers inhibited the neurons connected to the temporal association area, they were able to reverse the sociability deficits. When they inhibited the neurons connected to the striatum, they were able to halt the repetitive behaviors.

Microbial factors

In the second Nature paper, the researchers delved into some of the additional factors that influence whether or not a severe infection leads to autism. Not all mothers who experience severe infection end up having a child with autism, and similarly, not all of the mice in the maternal inflammation model develop behavioral abnormalities.

“This suggests that inflammation during pregnancy is just one of the factors. It needs to work with additional factors to lead all the way to that outcome,” Choi says.

A key clue was that when immune systems in some of the pregnant mice were stimulated, they began producing IL-17 within a day. “Normally it takes three to five days, because IL-17 is produced by specialized immune cells and they require time to differentiate,” Huh says. “We thought that perhaps this cytokine is being produced not from differentiating immune cells, but rather from pre-existing immune cells.”

Previous studies in mice and humans have found populations of Th17 cells in the intestines of healthy individuals. These cells, which help to protect the host from harmful microbes, are thought to be produced after exposure to particular types of harmless bacteria that associate with the epithelium.

The researchers found that only the offspring of mice with one specific type of harmless bacteria, known as segmented filamentous bacteria, had behavioral abnormalities and cortical patches. When the researchers killed those bacteria with antibiotics, the mice produced normal offspring.

“This data strongly suggests that perhaps certain mothers who happen to carry these types of Th17 cell-inducing bacteria in their gut may be susceptible to this inflammation-induced condition,” Huh says.

Humans can also carry strains of gut bacteria known to drive production of Th17 cells, and the researchers plan to investigate whether the presence of these bacteria is associated with autism.

Sarah Gaffen, a professor of rheumatology and clinical immunology at the University of Pittsburgh, says the study clearly demonstrates the link between IL-17 and the neurological effects seen in the mouse offspring. “It’s rare for things to fit into such a clear model, where you can identify a single molecule that does what you predicted,” says Gaffen, who was not involved in the study.

The research was funded by the Simons Foundation Autism Research Initiative, the Simons Center for the Social Brain at MIT, the Howard Hughes Medical Institute, Robert Buxton, the National Research Foundation of Korea, the Searle Scholars Program, a Pew Scholarship for Biomedical Sciences, the Kenneth Rainin Foundation, the National Institutes of Health, and the Hock E. Tan and K. Lisa Yang Center for Autism Research.

A noninvasive method for deep brain stimulation

Delivering an electrical current to a part of the brain involved in movement control has proven successful in treating many Parkinson’s disease patients. This approach, known as deep brain stimulation, requires implanting electrodes in the brain — a complex procedure that carries some risk to the patient.

Now, MIT researchers, collaborating with investigators at Beth Israel Deaconess Medical Center (BIDMC) and the IT’IS Foundation, have come up with a way to stimulate regions deep within the brain using electrodes placed on the scalp. This approach could make deep brain stimulation noninvasive, less risky, less expensive, and more accessible to patients.

“Traditional deep brain stimulation requires opening the skull and implanting an electrode, which can have complications. Secondly, only a small number of people can do this kind of neurosurgery,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, and the senior author of the study, which appears in the June 1 issue of Cell.

Doctors also use deep brain stimulation to treat some patients with obsessive compulsive disorder, epilepsy, and depression, and are exploring the possibility of using it to treat other conditions such as autism. The new, noninvasive approach could make it easier to adapt deep brain stimulation to treat additional disorders, the researchers say.

“With the ability to stimulate brain structures noninvasively, we hope that we may help discover new targets for treating brain disorders,” says the paper’s lead author, Nir Grossman, a former Wellcome Trust-MIT postdoc working at MIT and BIDMC, who is now a research fellow at Imperial College London.

Deep locations

Electrodes for treating Parkinson’s disease are usually placed in the subthalamic nucleus, a lens-shaped structure located below the thalamus, deep within the brain. For many Parkinson’s patients, delivering electrical impulses in this brain region can improve symptoms, but the surgery to implant the electrodes carries risks, including brain hemorrhage and infection.

Other researchers have tried to noninvasively stimulate the brain using techniques such as transcranial magnetic stimulation (TMS), which is FDA-approved for treating depression. Since TMS is noninvasive, it has also been used in normal human subjects to study the basic science of cognition, emotion, sensation, and movement. However, using TMS to stimulate deep brain structures can also strongly stimulate overlying surface regions, modulating multiple brain networks.

The MIT team devised a way to deliver electrical stimulation deep within the brain, via electrodes placed on the scalp, by taking advantage of a phenomenon known as temporal interference.

This strategy requires generating two high-frequency electrical currents using electrodes placed outside the brain. These fields are too fast to drive neurons. However, these currents interfere with one another in such a way that where they intersect, deep in the brain, a small region of low-frequency current is generated inside neurons. This low-frequency current can be used to drive neurons’ electrical activity, while the high-frequency current passes through surrounding tissue with no effect.
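
A numerical sketch of that interference effect (with arbitrary frequencies chosen only to illustrate the principle): two fast oscillations that differ slightly in frequency sum to a field whose envelope rises and falls at the much slower difference frequency.

```python
import numpy as np

fs = 100_000                        # samples per second
t = np.arange(0.0, 0.5, 1.0 / fs)   # half a second of time
f1, f2 = 2000.0, 2010.0             # two fast oscillations (arbitrary example values), 10 Hz apart

field = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# sin(a) + sin(b) = 2 sin((a+b)/2) cos((a-b)/2), so the summed field is a fast
# oscillation whose amplitude envelope varies at the 10 Hz difference frequency.
envelope = 2.0 * np.abs(np.cos(np.pi * (f2 - f1) * t))
print(bool(np.all(np.abs(field) <= envelope + 1e-9)))   # the envelope bounds the field
```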

By tuning the frequency of these currents and changing the number and location of the electrodes, the researchers can control the size and location of the brain tissue that receives the low-frequency stimulation. They can target locations deep within the brain without affecting any of the surrounding brain structures. They can also steer the location of stimulation, without moving the electrodes, by altering the currents. In this way, deep targets could be stimulated, both for therapeutic use and basic science investigations.

“You can go for deep targets and spare the overlying neurons, although the spatial resolution is not yet as good as that of deep brain stimulation,” says Boyden, who is a member of MIT’s Media Lab and McGovern Institute for Brain Research.

Targeted stimulation

Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory, and researchers in her lab tested this technique in mice and found that they could stimulate small regions deep within the brain, including the hippocampus. They were also able to shift the site of stimulation, allowing them to activate different parts of the motor cortex and prompt the mice to move their limbs, ears, or whiskers.

“We showed that we can very precisely target a brain region to elicit not just neuronal activation but behavioral responses,” says Tsai, who is an author of the paper. “I think it’s very exciting because Parkinson’s disease and other movement disorders seem to originate from a very particular region of the brain, and if you can target that, you have the potential to reverse it.”

Significantly, in the hippocampus experiments, the technique did not activate the neurons in the cortex, the region lying between the electrodes on the skull and the target deep inside the brain. The researchers also found no harmful effects in any part of the brain.

Last year, Tsai showed that using light to visually induce brain waves of a particular frequency could substantially reduce the beta amyloid plaques seen in Alzheimer’s disease, in the brains of mice. She now plans to explore whether this type of electrical stimulation could offer a new way to generate the same type of beneficial brain waves.

Other authors of the paper are MIT research scientist David Bono; former MIT postdocs Suhasa Kodandaramaiah and Andrii Rudenko; MIT postdoc Nina Dedic; MIT grad student Ho-Jun Suk; Beth Israel Deaconess Medical Center and Harvard Medical School Professor Alvaro Pascual-Leone; and IT’IS Foundation researchers Antonino Cassara, Esra Neufeld, and Niels Kuster.

The research was funded in part by the Wellcome Trust, a National Institutes of Health Director’s Pioneer Award, an NIH Director’s Transformative Research Award, the New York Stem Cell Foundation Robertson Investigator Award, the MIT Center for Brains, Minds, and Machines, Jeremy and Joyce Wertheimer, Google, a National Science Foundation Career Award, the MIT Synthetic Intelligence Project, and Harvard Catalyst: The Harvard Clinical and Translational Science Center.

Making brain implants smaller could prolong their lifespan

Many diseases, including Parkinson’s disease, can be treated with electrical stimulation from an electrode implanted in the brain. However, the electrodes can produce scarring, which diminishes their effectiveness and can necessitate additional surgeries to replace them.

MIT researchers have now demonstrated that making these electrodes much smaller can essentially eliminate this scarring, potentially allowing the devices to remain in the brain for much longer.

“What we’re doing is changing the scale and making the procedure less invasive,” says Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering, a member of MIT’s Koch Institute for Integrative Cancer Research, and the senior author of the study, which appears in the May 16 issue of Scientific Reports.

Cima and his colleagues are now designing brain implants that can not only deliver electrical stimulation but also record brain activity or deliver drugs to very targeted locations.

The paper’s lead author is former MIT graduate student Kevin Spencer. Other authors are former postdoc Jay Sy, graduate student Khalil Ramadi, Institute Professor Ann Graybiel, and David H. Koch Institute Professor Robert Langer.

Effects of size

Many Parkinson’s patients have benefited from treatment with electrical current delivered to a part of the brain involved in movement control. The electrodes used for this deep brain stimulation are a few millimeters in diameter. After being implanted, they gradually generate scar tissue through the constant rubbing of the electrode against the surrounding brain tissue. This process, known as gliosis, contributes to the high failure rate of such devices: About half stop working within the first six months.

Previous studies have suggested that making the implants smaller or softer could reduce the amount of scarring, so the MIT team set out to measure the effects of both reducing the size of the implants and coating them with a soft polyethylene glycol (PEG) hydrogel.

The hydrogel coating was designed to have an elasticity very similar to that of the brain. The researchers could also control the thickness of the coating. They found that when coated electrodes were pushed into the brain, the soft coating would fall off, so they devised a way to apply the hydrogel and then dry it, so that it becomes a hard, thin film. After the electrode is inserted, the film soaks up water and becomes soft again.

In mice, the researchers tested both coated and uncoated glass fibers with varying diameters and found that there is a tradeoff between size and softness. Coated fibers produced much less scarring than uncoated fibers of the same diameter. However, as the electrode fibers became smaller, down to about 30 microns (0.03 millimeters) in diameter, the uncoated versions produced less scarring, because the coatings increase the diameter.

This suggests that a 30-micron, uncoated fiber is the optimal design for implantable devices in the brain.

“Before this paper, no one really knew the effects of size,” Cima says. “Softer is better, but not if it makes the electrode larger.”

New devices

The question now is whether fibers that are only 30 microns in diameter can be adapted for electrical stimulation, drug delivery, and recording electrical activity in the brain. Cima and his colleagues have had some initial success developing such devices.

“It’s one of those things that at first glance seems impossible. If you have 30-micron glass fibers, that’s slightly thicker than a piece of hair. But it is possible to do,” Cima says.

Such devices could potentially be useful for treating Parkinson’s disease or other neurological disorders. They could also be used to remove fluid from the brain to monitor whether treatments are having the intended effect, or to measure brain activity that might indicate when an epileptic seizure is about to occur.

The research was funded by the National Institutes of Health and MIT’s Institute for Soldier Nanotechnologies.

Precise technique tracks dopamine in the brain

MIT researchers have devised a way to measure dopamine in the brain much more precisely than previously possible, which should allow scientists to gain insight into dopamine’s roles in learning, memory, and emotion.

Dopamine is one of the many neurotransmitters that neurons in the brain use to communicate with each other. Previous systems for measuring these neurotransmitters have been limited in how long they provide accurate readings and how much of the brain they can cover. The new MIT device, an array of tiny carbon electrodes, overcomes both of those obstacles.

“Nobody has really measured neurotransmitter behavior at this spatial scale and timescale. Having a tool like this will allow us to explore potentially any neurotransmitter-related disease,” says Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering, a member of MIT’s Koch Institute for Integrative Cancer Research, and the senior author of the study.

Furthermore, because the array is so tiny, it has the potential to eventually be adapted for use in humans, to monitor whether therapies aimed at boosting dopamine levels are succeeding. Many human brain disorders, most notably Parkinson’s disease, are linked to dysregulation of dopamine.

“Right now deep brain stimulation is being used to treat Parkinson’s disease, and we assume that that stimulation is somehow resupplying the brain with dopamine, but no one’s really measured that,” says Helen Schwerdt, a Koch Institute postdoc and the lead author of the paper, which appears in the journal Lab on a Chip.

Studying the striatum

For this project, Cima’s lab teamed up with David H. Koch Institute Professor Robert Langer, who has a long history of drug delivery research, and Institute Professor Ann Graybiel, who has been studying dopamine’s role in the brain for decades with a particular focus on a brain region called the striatum. Dopamine-producing cells within the striatum are critical for habit formation and reward-reinforced learning.

Until now, neuroscientists have used carbon electrodes with a shaft diameter of about 100 microns to measure dopamine in the brain. However, these can only be used reliably for about a day because they produce scar tissue that interferes with the electrodes’ ability to interact with dopamine, and other types of interfering films can also form on the electrode surface over time. Furthermore, there is only about a 50 percent chance that a single electrode will end up in a spot where there is any measurable dopamine, Schwerdt says.

The MIT team designed electrodes that are only 10 microns in diameter and combined them into arrays of eight electrodes. These delicate electrodes are then wrapped in a rigid polymer called PEG, which protects them and keeps them from deflecting as they enter the brain tissue. However, the PEG is dissolved during the insertion so it does not enter the brain.

These tiny electrodes measure dopamine in the same way that the larger versions do. The researchers apply an oscillating voltage through the electrodes, and when the voltage is at a certain point, any dopamine in the vicinity undergoes an electrochemical reaction that produces a measurable electric current. Using this technique, dopamine’s presence can be monitored at millisecond timescales.
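
A toy model of that readout (illustrative numbers, not a calibrated measurement): sweep the voltage, record the current, and subtract a dopamine-free background scan so that only the dopamine-dependent peak remains.

```python
import numpy as np

def voltage_sweep(v_min=-0.4, v_max=1.3, n=200):
    """One oscillating sweep: the voltage ramps up and back down."""
    up = np.linspace(v_min, v_max, n // 2)
    return np.r_[up, up[::-1]]

def measured_current(voltage, dopamine, peak_v=0.6, width=0.1):
    """Toy model: a background current plus an oxidation peak near a particular
    voltage, with peak size scaling with dopamine concentration (values are
    illustrative, not calibrated)."""
    background = 0.02 * voltage
    return background + dopamine * np.exp(-((voltage - peak_v) / width) ** 2)

v = voltage_sweep()
baseline = measured_current(v, dopamine=0.0)
with_dopamine = measured_current(v, dopamine=1.0)
print(round(float(np.max(with_dopamine - baseline)), 3))   # the dopamine-dependent signal
```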

Using these arrays, the researchers demonstrated that they could monitor dopamine levels in many parts of the striatum at once.

“What motivated us to pursue this high-density array was the fact that now we have a better chance to measure dopamine in the striatum, because now we have eight or 16 probes in the striatum, rather than just one,” Schwerdt says.

The researchers found that dopamine levels vary greatly across the striatum. This was not surprising, because they did not expect the entire region to be continuously bathed in dopamine, but this variation has been difficult to demonstrate because previous methods measured only one area at a time.

How learning happens

The researchers are now conducting tests to see how long these electrodes can continue giving a measurable signal, and so far the device has kept working for up to two months. With this kind of long-term sensing, scientists should be able to track dopamine changes over long periods of time, as habits are formed or new skills are learned.

“We and other people have struggled with getting good long-term readings,” says Graybiel, who is a member of MIT’s McGovern Institute for Brain Research. “We need to be able to find out what happens to dopamine in mouse models of brain disorders, for example, or what happens to dopamine when animals learn something.”

She also hopes to learn more about the roles of structures in the striatum known as striosomes. These clusters of cells, discovered by Graybiel many years ago, are distributed throughout the striatum. Recent work from her lab suggests that striosomes are involved in making decisions that induce anxiety.

This study is part of a larger collaboration between Cima’s and Graybiel’s labs that also includes efforts to develop injectable drug-delivery devices to treat brain disorders.

“What links all these studies together is we’re trying to find a way to chemically interface with the brain,” Schwerdt says. “If we can communicate chemically with the brain, it makes our treatment or our measurement a lot more focused and selective, and we can better understand what’s going on.”

Other authors of the paper are McGovern Institute research scientists Minjung Kim, Satoko Amemori, and Hideki Shimazu; McGovern Institute postdoc Daigo Homma; McGovern Institute technical associate Tomoko Yoshida; and undergraduates Harshita Yerramreddy and Ekin Karasan.

The research was funded by the National Institutes of Health, the National Institute of Biomedical Imaging and Bioengineering, and the National Institute of Neurological Disorders and Stroke.

Researchers create synthetic cells to isolate genetic circuits

Synthetic biology allows scientists to design genetic circuits that can be placed in cells, giving them new functions such as producing drugs or other useful molecules. However, as these circuits become more complex, the genetic components can interfere with each other, making it difficult to achieve more complicated functions.

MIT researchers have now demonstrated that these circuits can be isolated within individual synthetic “cells,” preventing them from disrupting each other. The researchers can also control communication between these cells, allowing for circuits or their products to be combined at specific times.

“It’s a way of having the power of multicomponent genetic cascades, along with the ability to build walls between them so they won’t have cross-talk. They won’t interfere with each other in the way they would if they were all put into a single cell or into a beaker,” says Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT. Boyden is also a member of MIT’s Media Lab and McGovern Institute for Brain Research, and an HHMI-Simons Faculty Scholar.

This approach could allow researchers to design circuits that manufacture complex products or act as sensors that respond to changes in their environment, among other applications.

Boyden is the senior author of a paper describing this technique in the Nov. 14 issue of Nature Chemistry. The paper’s lead authors are former MIT postdoc Kate Adamala, who is now an assistant professor at the University of Minnesota, and former MIT grad student Daniel Martin-Alarcon. Katriona Guthrie-Honea, a former MIT research assistant, is also an author of the paper.

Circuit control

The MIT team encapsulated their genetic circuits in droplets known as liposomes, which have a fatty membrane similar to cell membranes. These synthetic cells are not alive but are equipped with much of the cellular machinery necessary to read DNA and manufacture proteins.

By segregating circuits within their own liposomes, the researchers are able to create separate circuit subroutines that could not run in the same container at the same time, but can run in parallel to each other, communicating in controlled ways. This approach also allows scientists to repurpose the same genetic tools, including genes and transcription factors (proteins that turn genes on or off), to do different tasks within a network.

“If you separate circuits into two different liposomes, you could have one tool doing one job in one liposome, and the same tool doing a different job in the other liposome,” Martin-Alarcon says. “It expands the number of things that you can do with the same building blocks.”

This approach also enables communication between circuits from different types of organisms, such as bacteria and mammals.

As a demonstration, the researchers created a circuit that uses bacterial genetic parts to respond to a molecule known as theophylline, a drug similar to caffeine. When this molecule is present, it triggers another molecule known as doxycycline to leave the liposome and enter another set of liposomes containing a mammalian genetic circuit. In those liposomes, doxycycline activates a genetic cascade that produces luciferase, a protein that generates light.
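
As a toy logic model of that two-compartment cascade (purely illustrative; the real circuits are biochemical, not Boolean):

```python
def bacterial_liposome(theophylline_present: bool) -> bool:
    """Sensor compartment: bacterial parts detect theophylline and, when it is
    present, let doxycycline leave the liposome."""
    return theophylline_present              # doxycycline released?

def mammalian_liposome(doxycycline_received: bool) -> bool:
    """Reporter compartment: incoming doxycycline triggers the cascade that
    produces light-generating luciferase."""
    return doxycycline_received              # luciferase made?

for theophylline in (False, True):
    light = mammalian_liposome(bacterial_liposome(theophylline))
    print(f"theophylline present: {theophylline} -> light produced: {light}")
```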

Using a modified version of this approach, scientists could create circuits that work together to produce biological therapeutics such as antibodies, after sensing a particular molecule emitted by a brain cell or other cell.

“If you think of the bacterial circuit as encoding a computer program, and the mammalian circuit is encoding the factory, you could combine the computer code of the bacterial circuit and the factory of the mammalian circuit into a unique hybrid system,” Boyden says.

The researchers also designed liposomes that can fuse with each other in a controlled way. To do that, they programmed the cells with proteins called SNAREs, which insert themselves into the cell membrane. There, they bind to corresponding SNAREs found on surfaces of other liposomes, causing the synthetic cells to fuse. The timing of this fusion can be controlled to bring together liposomes that produce different molecules. When the cells fuse, these molecules are combined to generate a final product.

More modularity

The researchers believe this approach could be used for nearly any application that synthetic biologists are already working on. It could also allow scientists to pursue potentially useful applications that have been tried before but abandoned because the genetic circuits interfered with each other too much.

“The way that we wrote this paper was not oriented toward just one application,” Boyden says. “The basic question is: Can you make these circuits more modular? If you have everything mishmashed together in the cell, but you find out that the circuits are incompatible or toxic, then putting walls between those reactions and giving them the ability to communicate with each other could be very useful.”

Vincent Noireaux, an associate professor of physics at the University of Minnesota, described the MIT approach as “a rather novel method to learn how biological systems work.”

“Using cell-free expression has several advantages: Technically the work is reduced to cloning (nowadays fast and easy), we can link information processing to biological function like living cells do, and we work in isolation with no other gene expression occurring in the background,” says Noireaux, who was not involved in the research.

Another possible application for this approach is to help scientists explore how the earliest cells may have evolved billions of years ago. By engineering simple circuits into liposomes, researchers could study how cells might have evolved the ability to sense their environment, respond to stimuli, and reproduce.

“This system can be used to model the behavior and properties of the earliest organisms on Earth, as well as help establish the physical boundaries of Earth-type life for the search of life elsewhere in the solar system and beyond,” Adamala says.