Chronic neural implants modulate microstructures in the brain with pinpoint accuracy

Post by Windy Pham

Research is increasingly revealing the diversity of structures and functions in the brain. Key structures in the brain regulate emotion, anxiety, happiness, memory, and mobility. These structures come in a huge variety of shapes and sizes, and many lie physically close to one another. Dysfunction of these structures, and of the circuits linking them, is a common cause of many neurologic and neuropsychiatric diseases. For example, the substantia nigra is only a few millimeters in size yet is crucial for movement and coordination. Destruction of substantia nigra neurons is what causes motor symptoms in Parkinson’s disease.

New technologies such as optogenetics have allowed us to identify similar microstructures in the brain. However, these techniques rely on liquid infusions into the brain, which render the regions under study responsive to light. These infusions are done with large needles, which lack the fine control to target specific regions. Clinical therapy has also lagged behind. New drug therapies aimed at treating these conditions are delivered orally, which distributes the drug throughout the brain, or through large needle-cannulas, which lack the fine control to accurately dose specific regions. As a result, patients with neurologic and psychiatric disorders frequently fail to respond to therapies because drugs are poorly delivered to the diseased regions.

A new study addressing this problem has been published in Proceedings of the National Academy of Sciences. The lead author is Khalil Ramadi, a medical engineering and medical physics (MEMP) PhD candidate in the Harvard-MIT Program in Health Sciences and Technology (HST). For this study, Khalil and his thesis advisor, Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering and the Koch Institute for Integrative Cancer Research and associate dean of innovation in the School of Engineering, collaborated with Institute Professors Robert Langer and Ann Graybiel, an investigator at the McGovern Institute for Brain Research, to tackle this issue.

The team developed tools to enable targeted delivery of nanoliters of drugs to deep brain structures through chronically implanted microprobes. They also developed nuclear imaging techniques using positron emission tomography (PET) to measure the volume of the brain region targeted with each infusion. “Drugs for disorders of the central nervous system are nonspecific and get distributed throughout the brain,” Cima says. “Our animal studies show that volume is a critical factor when delivering drugs to the brain, as important as the total dose delivered. Using microcannulas and microPET imaging, we can control the area of brain exposed to these drugs, improving targeting accuracy twofold compared with the traditional methods used today.”

The researchers also designed cannulas that are MRI-compatible and can remain implanted for up to one year in rats. Pairing these cannulas with micropumps allowed the researchers to remotely control the behavior of animals. Significantly, they found that varying the infused volume alone had a profound effect on the behavior induced, even when the total drug dose delivered stayed constant. These results show that regulating the volume delivered to a brain region is extremely important in influencing brain activity. This technology could enable precise investigation of neurological disease pathology in preclinical models, and more effective treatment in human patients.


How the brain performs flexible computations

Humans can perform a vast array of mental operations and adjust their behavioral responses based on external instructions and internal beliefs. For example, to tap your feet to a musical beat, your brain has to process the incoming sound and also use your internal knowledge of how the song goes.

MIT neuroscientists have now identified a strategy that the brain uses to rapidly select and flexibly perform different mental operations. To make this discovery, they applied a mathematical framework known as dynamical systems analysis to understand the logic that governs the evolution of neural activity across large populations of neurons.

“The brain can combine internal and external cues to perform novel computations on the fly,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study. “What makes this remarkable is that we can make adjustments to our behavior at a much faster time scale than the brain’s hardware can change. As it turns out, the same hardware can assume many different states, and the brain uses instructions and beliefs to select between those states.”

Previous work from Jazayeri’s group has found that the brain can control when it will initiate a movement by altering the speed at which patterns of neural activity evolve over time. Here, they found that the brain controls this speed flexibly based on two factors: external sensory inputs and adjustment of internal states, which correspond to knowledge about the rules of the task being performed.

Evan Remington, a McGovern Institute postdoc, is the lead author of the paper, which appears in the June 6 edition of Neuron. Other authors are former postdoc Devika Narain and MIT graduate student Eghbal Hosseini.

Ready, set, go

Neuroscientists believe that “cognitive flexibility,” or the ability to rapidly adapt to new information, resides in the brain’s higher cortical areas, but little is known about how the brain achieves this kind of flexibility.

To understand the new findings, it is useful to think of how switches and dials can be used to change the output of an electrical circuit. For example, in an amplifier, a switch may select the sound source by controlling the input to the circuit, and a dial may adjust the volume by controlling internal parameters such as a variable resistance. The MIT team theorized that the brain similarly transforms instructions and beliefs to inputs and internal states that control the behavior of neural circuits.

To test this, the researchers recorded neural activity in the frontal cortex of animals trained to perform a flexible timing task called “ready, set, go.” In this task, the animal sees two visual flashes — “ready” and “set” — that are separated by an interval anywhere between 0.5 and 1 second, and initiates a movement — “go” — some time after “set.” The animal has to initiate the movement such that the “set-go” interval is either the same as or 1.5 times the “ready-set” interval. The instruction for whether to use a multiplier of 1 or 1.5 is provided in each trial.
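The task’s timing rule can be stated as a one-line computation. As a minimal sketch (the function and variable names are ours, not from the study):

```python
def target_set_go_interval(ready_set_interval, multiplier):
    """Target 'set-go' interval under the task rule: the produced
    interval must equal the measured 'ready-set' interval scaled
    by the cued multiplier (1 or 1.5)."""
    if multiplier not in (1.0, 1.5):
        raise ValueError("the task uses a multiplier of 1 or 1.5")
    return multiplier * ready_set_interval

# e.g., a 0.8 s ready-set interval with the 1.5x cue calls for
# movement about 1.2 s after "set"
```

The animal must hold both quantities at once, which is why the neural representations combine information about the multiplier and the measured interval.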

Neural signals recorded during the “set-go” interval clearly carried information about both the multiplier and the measured length of the “ready-set” interval, but the nature of these representations seemed bewilderingly complex. To decode the logic behind these representations, the researchers used the dynamical systems analysis framework. This analysis is used in the study of a wide range of physical systems, from simple electrical circuits to space shuttles.

The application of this approach to neural data in the “ready, set, go” task enabled Jazayeri and his colleagues to discover how the brain adjusts the inputs to and initial conditions of frontal cortex to control movement times flexibly. A switch-like operation sets the input associated with the correct multiplier, and a dial-like operation adjusts the state of neurons based on the “ready-set” interval. These two complementary control strategies allow the same hardware to produce different behaviors.

David Sussillo, a research scientist at Google Brain and an adjunct professor at Stanford University, says a key to the study was the research team’s development of new mathematical tools to analyze huge amounts of data from neuron recordings, allowing the researchers to uncover how a large population of neurons can work together to perform mental operations related to timing and rhythm.

“They have very rigorously brought the dynamical systems approach to the problem of timing,” says Sussillo, who was not involved in the research.

“A bridge between behavior and neurobiology”

Many unanswered questions remain about how the brain achieves this flexibility, the researchers say. They are now trying to find out what part of the brain sends information about the multiplier to the frontal cortex, and they also hope to study what happens in these neurons as they first learn tasks that require them to respond flexibly.

“We haven’t connected all the dots from behavioral flexibility to neurobiological details. But what we have done is to establish an algorithmic understanding based on the mathematics of dynamical systems that serves as a bridge between behavior and neurobiology,” Jazayeri says.

The researchers also hope to explore whether this type of model could help to explain behavior of other parts of the brain that have to perform computations flexibly.

The research was funded by the National Institutes of Health, the Sloan Foundation, the Klingenstein Foundation, the Simons Foundation, the McKnight Foundation, the Center for Sensorimotor Neural Engineering, and the McGovern Institute.

Ann Graybiel wins 2018 Gruber Neuroscience Prize

Institute Professor Ann Graybiel, a professor in the Department of Brain and Cognitive Sciences and member of MIT’s McGovern Institute for Brain Research, is being recognized by the Gruber Foundation for her work on the structure, organization, and function of the once-mysterious basal ganglia. She was awarded the prize alongside Okihide Hikosaka of the National Institute of Health’s National Eye Institute and Wolfram Schultz of the University of Cambridge in the U.K.

The basal ganglia have long been known to play a role in movement, and the work of Graybiel and others helped to extend their roles to cognition and emotion. Dysfunction in the basal ganglia has been linked to a host of disorders including Parkinson’s disease, Huntington’s disease, obsessive-compulsive disorder and attention-deficit hyperactivity disorder, and to depression and anxiety disorders. Graybiel’s research focuses on the circuits thought to underlie these disorders, and on how these circuits act to help us form habits in everyday life.

“We are delighted that Ann has been honored with the Gruber Neuroscience Prize,” says Robert Desimone, director of the McGovern Institute. “Ann’s work has truly elucidated the complexity and functional importance of these forebrain structures. Her work has driven the field forward in a fundamental fashion, and continues to do so.”

Graybiel’s research focuses broadly on the striatum, a hub in basal ganglia-based circuits that is linked to goal-directed actions and habits. Prior to her work, the striatum was considered to be a primitive forebrain region. Graybiel found that the striatum instead has a complex architecture consisting of specialized zones: striosomes and the surrounding matrix. Her group went on to relate these zones to function, finding that striosomes and matrix differentially influence behavior. Among other important findings, Graybiel has shown that striosomes are focal points in circuits that link mood-related cortical regions with the dopamine-containing neurons of the midbrain, which are implicated in learning and motivation and which undergo degeneration in Parkinson’s disease and other clinical conditions. She and her group have shown that these regions are activated by drugs of abuse, and that they influence decision-making, including decisions that require weighing of costs and benefits.

Graybiel continues to drive the field forward, finding that striatal neurons spike in an accentuated fashion and ‘bookend’ the beginning and end of behavioral sequences in rodents and primates. This activity pattern suggests that the striatum demarcates useful behavioral sequences such as, in the case of rodents, pressing levers or running down mazes to receive a reward. Additionally, she and her group have worked on miniaturized tools for chemical sensing and delivery as part of a continued drive toward therapeutic intervention, in collaboration with the laboratories of Robert Langer in the Department of Chemical Engineering and Michael Cima in the Department of Materials Science and Engineering.

“My first thought was of our lab, and how fortunate I am to work with such talented and wonderful people,” says Graybiel.  “I am deeply honored to be recognized by this prestigious award on behalf of our lab.”

The Gruber Foundation’s international prize program recognizes researchers in the areas of cosmology, neuroscience and genetics, and includes a cash award of $500,000 in each field. The medal given to award recipients also outlines the general mission of the foundation, “for the fundamental expansion of human knowledge,” and the prizes specifically honor those whose groundbreaking work fits into this paradigm.

Graybiel, a member of the MIT Class of 1971, has previously been honored with the National Medal of Science, the Kavli Prize, the James R. Killian Faculty Achievement Award at MIT, and the Woman Leader of Parkinson’s Science award from the Parkinson’s Disease Foundation, and has been recognized by the National Parkinson Foundation for her contributions to the understanding and treatment of Parkinson’s disease. Graybiel is a member of the National Academy of Sciences, the National Academy of Medicine, and the American Academy of Arts and Sciences.

The Gruber Neuroscience Prize will be presented in a ceremony at the annual meeting of the Society for Neuroscience in San Diego this coming November.

Study reveals how the brain tracks objects in motion

Catching a bouncing ball or hitting a ball with a racket requires estimating when the ball will arrive. Neuroscientists have long thought that the brain does this by calculating the speed of the moving object. However, a new study from MIT shows that the brain’s approach is more complex.

The new findings suggest that in addition to tracking speed, the brain incorporates information about the rhythmic patterns of an object’s movement: for example, how long it takes a ball to complete one bounce. In their new study, the researchers found that people make much more accurate estimates when they have access to information about both the speed of a moving object and the timing of its rhythmic patterns.

“People get really good at this when they have both types of information available,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences and a member of MIT’s McGovern Institute for Brain Research. “It’s like having input from multiple senses. The statistical knowledge that we have about the world we’re interacting with is richer when we use multiple senses.”

Jazayeri is the senior author of the study, which appears in the Proceedings of the National Academy of Sciences the week of March 5. The paper’s lead author is MIT graduate student Chia-Jung Chang.

Objects in motion

Much of the information we process about objects moving around us comes from visual tracking of the objects. Our brains can use information about an object’s speed and the distance it has to cover to calculate when it will reach a certain point. Jazayeri, who studies how the brain keeps time, was intrigued by the fact that much of the movement we see also has a rhythmic element, such as the bouncing of a ball.

“It occurred to us to ask, how can it be that the brain doesn’t use this information? It would seem very strange if all this richness of additional temporal structure is not part of the way we evaluate where things are around us and how things are going to happen,” Jazayeri says.

There are many other sensory processing tasks for which the brain uses multiple sources of input. For example, to interpret language, we use both the sound we hear and the movement of the speaker’s lips, if we can see them. When we touch an object, we estimate its size based on both what we see and what we feel with our fingers.

In the case of perceiving object motion, teasing out the role of rhythmic timing, as opposed to speed, can be difficult. “I can ask someone to do a task, but then how do I know if they’re using speed or they’re using time, if both of them are always available?” Jazayeri says.

To overcome that, the researchers devised a task in which they could control how much timing information was available, and measured how well human volunteers performed it.

During the task, the study participants watched a ball as it moved in a straight line. After traveling some distance, the ball went behind an obstacle, so the participants could no longer see it. They were asked to press a button at the time when they expected the ball to reappear.

Performance varied greatly depending on how much of the ball’s path was visible before it went behind the obstacle. If the participants saw the ball travel a very short distance before disappearing, they did not do well. As the distance before disappearance became longer, they were better able to calculate the ball’s speed, so their performance improved but eventually plateaued.

After that plateau, performance jumped significantly once the distance traveled before disappearance exactly matched the width of the obstacle. In that case, because the visible path equaled the path the ball traveled behind the obstacle, the participants improved dramatically: they knew that the time spent behind the obstacle would be the same as the time it took to reach it.

When the distance traveled to reach the obstacle became longer than the width of the obstacle, performance dropped again.

“It’s so important to have this extra information available, and when we have it, we use it,” Jazayeri says. “Temporal structure is so important that when you lose it, even at the expense of getting better visual information, people’s performance gets worse.”

Integrating information

The researchers also tested several computer models of how the brain performs this task, and found that the only model that could accurately replicate their experimental results was one in which the brain measures speed and timing in two different areas and then combines them.
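The combination step can be illustrated with a standard reliability-weighted (inverse-variance) cue-combination sketch. This is a generic textbook model of fusing two noisy estimates, offered only as an illustration, not the specific model fitted in the paper:

```python
def combine_estimates(speed_estimate, speed_var, timing_estimate, timing_var):
    """Fuse two independent, noisy estimates of arrival time by
    weighting each by its reliability (inverse variance).
    Returns the combined estimate and its variance."""
    w_speed = 1.0 / speed_var
    w_timing = 1.0 / timing_var
    combined = (w_speed * speed_estimate + w_timing * timing_estimate) / (w_speed + w_timing)
    combined_var = 1.0 / (w_speed + w_timing)
    return combined, combined_var
```

A useful property of this rule is that the combined variance is never larger than either input variance, consistent with the observation that people estimate arrival times best when both speed and timing cues are available.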

Previous studies suggest that the brain performs timing estimates in premotor areas of the cortex, which play a role in planning movement; speed, which usually requires visual input, is calculated in the visual cortex. These inputs are likely combined in parts of the brain responsible for spatial attention and tracking objects in space, such as the parietal cortex, Jazayeri says.

In future studies, Jazayeri hopes to measure brain activity in animals trained to perform the same task that human subjects did in this study. This could shed further light on where this processing takes place and could also reveal what happens in the brain when it makes incorrect estimates.

The research was funded by the McGovern Institute for Brain Research.

Study reveals molecular mechanisms of memory formation

MIT neuroscientists have uncovered a cellular pathway that allows specific synapses to become stronger during memory formation. The findings provide the first glimpse of the molecular mechanism by which long-term memories are encoded in a region of the hippocampus called CA3.

The researchers found that a protein called Npas4, previously identified as a master controller of gene expression triggered by neuronal activity, controls the strength of connections between neurons in the CA3 and those in another part of the hippocampus called the dentate gyrus. Without Npas4, long-term memories cannot form.

“Our study identifies an experience-dependent synaptic mechanism for memory encoding in CA3, and provides the first evidence for a molecular pathway that selectively controls it,” says Yingxi Lin, an associate professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

Lin is the senior author of the study, which appears in the Feb. 8 issue of Neuron. The paper’s lead author is McGovern Institute research scientist Feng-Ju (Eddie) Weng.

Synaptic strength

Neuroscientists have long known that the brain encodes memories by altering the strength of synapses, or connections between neurons. This requires interactions of many proteins found in both presynaptic neurons, which send information about an event, and postsynaptic neurons, which receive the information.

Neurons in the CA3 region play a critical role in the formation of contextual memories, which are memories that link an event with the location where it took place, or with other contextual information such as timing or emotions. These neurons receive synaptic inputs from three different pathways, and scientists have hypothesized that one of these inputs, from the dentate gyrus, is critical for encoding new contextual memories. However, the mechanism of how this information is encoded was not known.

In a study published in 2011, Lin and colleagues found that Npas4, a gene that is turned on immediately following new experiences, appears to act as a master controller of the program of gene expression required for long-term memory formation. They also found that Npas4 is most active in the CA3 region of the hippocampus during learning. This activity was already known to be required for fast contextual learning, such as is required during a type of task known as contextual fear conditioning. During this conditioning, mice receive a mild electric shock when they enter and explore a specific chamber. Within minutes, the mice learn to fear the chamber, and the next time they enter it, they freeze.

When the researchers knocked out the Npas4 gene, they found that mice could not remember the fearful event. They also found the same effect when they knocked out the gene just in the CA3 region of the hippocampus. Knocking it out in other parts of the hippocampus, however, had no effect on memory.

In the new study, the researchers explored in further detail how Npas4 exerts its effects. Lin’s lab had previously developed a method that makes it possible to fluorescently label CA3 neurons that are activated during this fear conditioning. Using the same fear conditioning process, the researchers showed that during learning, certain synaptic inputs to CA3 neurons are strengthened, but not others. Furthermore, this strengthening requires Npas4.

The inputs that are selectively strengthened come from another part of the hippocampus called the dentate gyrus. These signals convey information about the location where the fearful experience took place.

Without Npas4, synapses coming from the dentate gyrus to CA3 failed to strengthen, and the mice could not form memories of the event. Further experiments revealed that this strengthening is required specifically for memory encoding, not for retrieving memories already formed. The researchers also found that Npas4 loss did not affect synaptic inputs that CA3 neurons receive from other sources.

Kimberly Raab-Graham, an associate professor of physiology and pharmacology at Wake Forest University School of Medicine, says the researchers used an impressive variety of techniques to unequivocally show that contextual memory formation is tightly controlled by Npas4.

“The major finding of the study is that contextual memory is driven by a single circuit and comes down to a single transcription factor,” says Raab-Graham, who was not involved in the study. “When they knocked out the transcription factor, they removed contextual memory formation, and they could restore it by adding the transcription factor.”

Synapse maintenance

The researchers also identified one of the genes that Npas4 controls to exert this effect on synapse strength. This gene, known as plk2, is involved in shrinking postsynaptic structures. Npas4 turns on plk2, thereby reducing synapse size and strength. This suggests that Npas4 itself does not strengthen synapses, but instead keeps them in a state that allows them to be strengthened when necessary. Without Npas4, synapses become too strong and can no longer be strengthened further to encode memories.

“When you take out Npas4, the synaptic strength is almost saturated,” Lin says. “And then when learning takes place, although the memory-encoding cells can be fluorescently labeled, you no longer see the strengthening of those connections.”

In future work, Lin hopes to study how the circuit connecting the dentate gyrus to CA3 interacts with other pathways required for memory retrieval. “Somehow there’s some crosstalk between different pathways so that once the information is stored, it can be retrieved by the other inputs,” she says.

The research was funded by the National Institutes of Health, the James H. Ferry Fund, and a Swedish Brain Foundation Research Fellowship.

Distinctive brain pattern helps habits form

Our daily lives include hundreds of routine habits. Brushing our teeth, driving to work, or putting away the dishes are just a few of the tasks that our brains have automated to the point that we hardly need to think about them.

Although we may think of each of these routines as a single task, they are usually made up of many smaller actions, such as picking up our toothbrush, squeezing toothpaste onto it, and then lifting the brush to our mouth. This process of grouping behaviors together into a single routine is known as “chunking,” but little is known about how the brain groups these behaviors together.

MIT neuroscientists have now found that certain neurons in the brain are responsible for marking the beginning and end of these chunked units of behavior. These neurons, located in a brain region highly involved in habit formation, fire at the outset of a learned routine, go quiet while it is carried out, then fire again once the routine has ended.

This task-bracketing appears to be important for initiating a routine and then notifying the brain once it is complete, says Ann Graybiel, an Institute Professor at MIT, a member of the McGovern Institute for Brain Research, and the senior author of the study.

Nuné Martiros, a recent MIT PhD recipient who is now a postdoc at Harvard University, is the lead author of the paper, which appears in the Feb. 8 issue of Current Biology. Alexandra Burgess, a recent MIT graduate and technical associate at the McGovern Institute, is also an author of the paper.

Routine activation

Graybiel has previously shown that a part of the brain called the striatum, which is found in the basal ganglia, plays a major role in habit formation. Several years ago, she and her group found that neuron firing patterns in the striatum change as animals learn a new habit, such as turning to the right or left in a maze upon hearing a certain tone.

When the animal is just starting to learn the maze, these neurons fire continuously throughout the task. However, as the animal becomes better at making the correct turn to receive a reward, the firing becomes clustered at the very beginning of the task and at the very end. Once these patterns form, it becomes extremely difficult to break the habit.

However, these previous studies did not rule out other explanations for the pattern, including the possibility that it might be related to the motor commands required for the maze-running behavior. In the new study, Martiros and Graybiel set out to determine whether this firing pattern could be conclusively linked with the chunking of habitual behavior.

The researchers trained rats to press two levers in a particular sequence, for example, 1-2-2 or 2-1-2. The rats had to figure out what the correct sequence was, and if they did, they received a chocolate milk reward. It took several weeks for them to learn the task, and as they became more accurate, the researchers saw the same beginning-and-end firing patterns develop in the striatum that they had seen in their previous habit studies.

Because each rat learned a different sequence, the researchers could rule out the possibility that the patterns correspond to the motor input required to perform a particular series of movements. This offers strong evidence that the firing pattern corresponds specifically to the initiation and termination of a learned routine, the researchers say.

“I think this more or less proves that the development of bracketing patterns serves to package up a behavior that the brain — and the animals — consider valuable and worth keeping in their repertoire. It really is a high-level signal that helps to release that habit, and we think the end signal says the routine has been done,” Graybiel says.

Distinctive patterns

The researchers also discovered a distinct pattern in a set of inhibitory neurons in the striatum. Activity in these neurons, known as interneurons, displayed a strong inverse relationship with the activity of the excitatory neurons that produce the bracketing pattern.

“The interneurons were activated during the time when the rats were in the middle of performing the learned sequence, and could possibly be preventing the principal neurons from initiating another routine until the current one was finished. The discovery of this opposite activity by the interneurons also gets us one step closer to understanding how brain circuits can actually produce this pattern of activity,” Martiros says.

Graybiel’s lab is now investigating further how the interaction between these two groups of neurons helps to encode habitual behavior in the striatum.

The research was funded by the National Institutes of Health/National Institute of Mental Health, the Office of Naval Research, and a McGovern Institute Mark Gorenberg Fellowship.

Ultrathin needle can deliver drugs directly to the brain

MIT researchers have devised a miniaturized system that can deliver tiny quantities of medicine to brain regions as small as 1 cubic millimeter. This type of targeted dosing could make it possible to treat diseases that affect very specific brain circuits, without interfering with the normal function of the rest of the brain, the researchers say.

Using this device, which consists of several tubes contained within a needle about as thin as a human hair, the researchers can deliver one or more drugs deep within the brain, with very precise control over how much drug is given and where it goes. In a study of rats, they found that they could deliver targeted doses of a drug that affects the animals’ motor function.

“We can infuse very small amounts of multiple drugs compared to what we can do intravenously or orally, and also manipulate behavioral changes through drug infusion,” says Canan Dagdeviren, the LG Electronics Career Development Assistant Professor of Media Arts and Sciences and the lead author of the paper, which appears in the Jan. 24 issue of Science Translational Medicine.

“We believe this tiny microfabricated device could have tremendous impact in understanding brain diseases, as well as providing new ways of delivering biopharmaceuticals and performing biosensing in the brain,” says Robert Langer, the David H. Koch Institute Professor at MIT and one of the paper’s senior authors.

Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research, is also a senior author of the paper.

Targeted action

Drugs used to treat brain disorders often interact with brain chemicals called neurotransmitters or the cell receptors that interact with neurotransmitters. Examples include l-dopa, a dopamine precursor used to treat Parkinson’s disease, and Prozac, used to boost serotonin levels in patients with depression. However, these drugs can have side effects because they act throughout the brain.

“One of the problems with central nervous system drugs is that they’re not specific, and if you’re taking them orally they go everywhere. The only way we can limit the exposure is to just deliver to a cubic millimeter of the brain, and in order to do that, you have to have extremely small cannulas,” Cima says.

The MIT team set out to develop a miniaturized cannula (a thin tube used to deliver medicine) that could target very small areas. Using microfabrication techniques, the researchers constructed tubes with diameters of about 30 micrometers and lengths up to 10 centimeters. These tubes are contained within a stainless steel needle with a diameter of about 150 micrometers. “The device is very stable and robust, and you can place it anywhere that you are interested,” Dagdeviren says.

The researchers connected the cannulas to small pumps that can be implanted under the skin. Using these pumps, the researchers showed that they could deliver tiny doses (hundreds of nanoliters) into the brains of rats. In one experiment, they delivered a drug called muscimol to a brain region called the substantia nigra, which is located deep within the brain and helps to control movement.

Previous studies have shown that muscimol induces symptoms similar to those seen in Parkinson’s disease. The researchers were able to generate those effects, which include stimulating the rats to continually turn in a clockwise direction, using their miniaturized delivery needle. They also showed that they could halt the Parkinsonian behavior by delivering a dose of saline through a different channel, to wash the drug away.

“Since the device can be customizable, in the future we can have different channels for different chemicals, or for light, to target tumors or neurological disorders such as Parkinson’s disease or Alzheimer’s,” Dagdeviren says.

This device could also make it easier to deliver potential new treatments for behavioral neurological disorders such as addiction or obsessive compulsive disorder, which may be caused by specific disruptions in how different parts of the brain communicate with each other.

“Even if scientists and clinicians can identify a therapeutic molecule to treat neural disorders, there remains the formidable problem of how to deliver the therapy to the right cells — those most affected in the disorder. Because the brain is so structurally complex, new accurate ways to deliver drugs or related therapeutic agents locally are urgently needed,” says Ann Graybiel, an MIT Institute Professor and a member of MIT’s McGovern Institute for Brain Research, who is also an author of the paper.

Measuring drug response

The researchers also showed that they could incorporate an electrode into the tip of the cannula, which can be used to monitor how neurons’ electrical activity changes after drug treatment. They are now working on adapting the device so it can also be used to measure chemical or mechanical changes that occur in the brain following drug treatment.

The cannulas can be fabricated in nearly any length or thickness, making it possible to adapt them for use in brains of different sizes, including the human brain, the researchers say.

“This study provides proof-of-concept experiments, in large animal models, that a small, miniaturized device can be safely implanted in the brain and provide miniaturized control of the electrical activity and function of single neurons or small groups of neurons. The impact of this could be significant in focal diseases of the brain, such as Parkinson’s disease,” says Antonio Chiocca, neurosurgeon-in-chief and chairman of the Department of Neurosurgery at Brigham and Women’s Hospital, who was not involved in the research.

The research was funded by the National Institutes of Health and the National Institute of Biomedical Imaging and Bioengineering.

How the brain keeps time

Timing is critical for playing a musical instrument, swinging a baseball bat, and many other activities. Neuroscientists have come up with several models of how the brain achieves its exquisite control over timing, the most prominent being that there is a centralized clock, or pacemaker, somewhere in the brain that keeps time for the entire brain.

However, a new study from MIT researchers provides evidence for an alternative timekeeping system that relies on the neurons responsible for producing a specific action. Depending on the time interval required, these neurons compress or stretch out the steps they take to generate the behavior at a specific time.

“What we found is that it’s a very active process. The brain is not passively waiting for a clock to reach a particular point,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

MIT postdoc Jing Wang and former postdoc Devika Narain are the lead authors of the paper, which appears in the Dec. 4 issue of Nature Neuroscience. Graduate student Eghbal Hosseini is also an author of the paper.

Flexible control

One of the earliest models of timing control, known as the clock accumulator model, suggested that the brain has an internal clock or pacemaker that keeps time for the rest of the brain. A later variation of this model suggested that instead of using a central pacemaker, the brain measures time by tracking the synchronization between different brain wave frequencies.
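As a rough illustration (not drawn from the study), the pacemaker-accumulator idea can be sketched in a few lines: a central clock emits ticks, a counter accumulates them, and elapsed time is read out from the count. The tick rate and noise level below are arbitrary stand-ins.

```python
import random

def clock_accumulator(duration_ms, tick_rate_hz=100, seed=0):
    """Pacemaker-accumulator sketch: a central pacemaker emits noisy ticks,
    an accumulator counts them, and elapsed time is read out from the count.
    Tick rate and noise are illustrative values, not from the study."""
    rng = random.Random(seed)
    period_ms = 1000.0 / tick_rate_hz
    ticks = 0
    elapsed = 0.0
    while elapsed < duration_ms:
        elapsed += rng.gauss(period_ms, 1.0)  # noisy inter-tick interval
        ticks += 1
    return ticks * period_ms  # time estimate decoded from the tick count

estimate = clock_accumulator(850)  # close to 850 ms, up to tick noise
```

In clock models of this kind, accumulated tick noise makes the estimate more variable as the timed interval grows, one of the behavioral predictions such models have been tested against.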

Although these clock models are intuitively appealing, Jazayeri says, “they don’t match well with what the brain does.”

No one has found evidence for a centralized clock, and Jazayeri and others wondered if parts of the brain that control behaviors that require precise timing might perform the timing function themselves. “People now question why would the brain want to spend the time and energy to generate a clock when it’s not always needed. For certain behaviors you need to do timing, so perhaps the parts of the brain that subserve these functions can also do timing,” he says.

To explore this possibility, the researchers recorded neuron activity from three brain regions in animals as they performed a task at two different time intervals — 850 milliseconds or 1,500 milliseconds.

The researchers found a complicated pattern of neural activity during these intervals. Some neurons fired faster, some fired slower, and some that had been oscillating began to oscillate faster or slower. The key discovery, however, was that regardless of how a given neuron responded, the rate at which it adjusted its activity depended on the time interval required.

At any point in time, a collection of neurons is in a particular “neural state,” which changes over time as each individual neuron alters its activity in a different way. To execute a particular behavior, the entire system must reach a defined end state. The researchers found that the neurons always traveled the same trajectory from their initial state to this end state, no matter the interval. The only thing that changed was the rate at which the neurons traveled this trajectory.

When the interval required was longer, this trajectory was “stretched,” meaning the neurons took more time to evolve to the final state. When the interval was shorter, the trajectory was compressed.

“What we found is that the brain doesn’t change the trajectory when the interval changes, it just changes the speed with which it goes from the initial internal state to the final state,” Jazayeri says.
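The temporal-scaling idea can be made concrete in a short sketch, assuming a toy three-neuron state (the real recordings are high-dimensional): the trajectory is a fixed function of the fraction of the interval elapsed, so changing the interval changes only the traversal speed.

```python
import numpy as np

def trajectory(phase):
    """A fixed neural trajectory parameterized by the fraction of the
    interval elapsed (an invented 3-neuron state, for illustration only)."""
    return np.array([np.sin(np.pi * phase),
                     phase ** 2,
                     1.0 - np.cos(np.pi * phase)])

def neural_state(t_ms, interval_ms):
    """State at time t when producing a given interval: the same trajectory,
    traversed at a speed inversely proportional to the interval."""
    return trajectory(t_ms / interval_ms)

# Halfway through an 850 ms interval and halfway through a 1,500 ms interval,
# the network sits at the same point on the trajectory; only the speed differs.
assert np.allclose(neural_state(425, 850), neural_state(750, 1500))
```

Compared at matching clock times rather than matching fractions of the interval, the two states differ, which is the stretching and compression the study reports.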

Dean Buonomano, a professor of behavioral neuroscience at the University of California at Los Angeles, says that the study “provides beautiful evidence that timing is a distributed process in the brain — that is, there is no single master clock.”

“This work also supports the notion that the brain does not tell time using a clock-like mechanism, but rather relies on the dynamics inherent to neural circuits, and that as these dynamics increase and decrease in speed, animals move more quickly or slowly,” adds Buonomano, who was not involved in the research.

Neural networks

The researchers focused their study on a brain loop that connects three regions: the dorsomedial frontal cortex, the caudate, and the thalamus. They found this distinctive neural pattern in the dorsomedial frontal cortex, which is involved in many cognitive processes, and the caudate, which is involved in motor control, inhibition, and some types of learning. However, in the thalamus, which relays motor and sensory signals, they found a different pattern: Instead of altering the speed of their trajectory, many of the neurons simply increased or decreased their firing rate, depending on the interval required.

Jazayeri says this finding is consistent with the possibility that the thalamus is instructing the cortex on how to adjust its activity to generate a certain interval.

The researchers also created a computer model to help them further understand this phenomenon. They began with a model of hundreds of neurons connected together in random ways, and then trained it to perform the same interval-producing task they had used to train animals, offering no guidance on how the model should perform the task.

They found that these neural networks ended up using the same strategy that they observed in the animal brain data. A key discovery was that this strategy only works if some of the neurons have nonlinear activity — that is, their output does not grow in direct proportion to their input. Instead, as they receive more input, their output increases at a progressively slower rate.
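A minimal sketch of a saturating nonlinearity of this kind, using tanh as an illustrative stand-in (the study does not specify this particular function):

```python
import numpy as np

def saturating_response(drive):
    """Saturating input-output function: output keeps rising with input,
    but at an ever slower rate. tanh is an illustrative stand-in; the
    study's model neurons are not specified here."""
    return np.tanh(drive)

# Equal steps of input produce shrinking steps of output.
gains = np.diff(saturating_response(np.array([0.0, 1.0, 2.0, 3.0])))
assert gains[0] > gains[1] > gains[2]
```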

Jazayeri now hopes to explore further how the brain generates the neural patterns seen during varying time intervals, and also how our expectations influence our ability to produce different intervals.

The research was funded by the Rubicon Grant from the Netherlands Scientific Organization, the National Institutes of Health, the Sloan Foundation, the Klingenstein Foundation, the Simons Foundation, the Center for Sensorimotor Neural Engineering, and the McGovern Institute.

Stress can lead to risky decisions

Making decisions is not always easy, especially when choosing between two options that have both positive and negative elements, such as deciding between a job with a high salary but long hours, and a lower-paying job that allows for more leisure time.

MIT neuroscientists have now discovered that making decisions in this type of situation, known as a cost-benefit conflict, is dramatically affected by chronic stress. In a study of mice, they found that stressed animals were far likelier to choose high-risk, high-payoff options.

The researchers also found that impairments of a specific brain circuit underlie this abnormal decision making, and they showed that they could restore normal behavior by manipulating this circuit. If a method for tuning this circuit in humans were developed, it could help patients with disorders such as depression, addiction, and anxiety, which often feature poor decision making.

“One exciting thing is that by doing this very basic science, we found a microcircuit of neurons in the striatum that we could manipulate to reverse the effects of stress on this type of decision making. This to us is extremely promising, but we are aware that so far these experiments are in rats and mice,” says Ann Graybiel, an Institute Professor at MIT and member of the McGovern Institute for Brain Research.

Graybiel is the senior author of the paper, which appears in Cell on Nov. 16. The paper’s lead author is Alexander Friedman, a McGovern Institute research scientist.

Hard decisions

In 2015, Graybiel, Friedman, and their colleagues first identified the brain circuit involved in decision making that involves cost-benefit conflict. The circuit begins in the medial prefrontal cortex, which is responsible for mood control, and extends into clusters of neurons called striosomes, which are located in the striatum, a region associated with habit formation, motivation, and reward reinforcement.

In that study, the researchers trained rodents to run a maze in which they had to choose between two options: one that paired highly concentrated chocolate milk, which they like, with bright light, which they don’t, and another that paired dimmer light with weaker chocolate milk. By inhibiting the connection between cortical neurons and striosomes, using a technique known as optogenetics, they found that they could transform the rodents’ preference for lower-risk, lower-payoff choices into a preference for bigger payoffs despite their bigger costs.

In the new study, the researchers performed a similar experiment without optogenetic manipulations. Instead, they exposed the rodents to a short period of stress every day for two weeks.

Before experiencing stress, normal rats and mice would choose to run toward the maze arm with dimmer light and weaker chocolate milk about half the time. The researchers gradually increased the concentration of chocolate milk found in the dimmer side, and as they did so, the animals began choosing that side more frequently.

However, when chronically stressed rats and mice were put in the same situation, they continued to choose the bright light/better chocolate milk side even as the chocolate milk concentration greatly increased on the dimmer side. This was the same behavior the researchers saw in rodents that had the prefrontal cortex-striosome circuit disrupted optogenetically.

“The result is that the animal ignores the high cost and chooses the high reward,” Friedman says.

The findings help to explain how stress contributes to substance abuse and may worsen mental disorders, says Amy Arnsten, a professor of neuroscience and psychology at the Yale University School of Medicine, who was not involved in the research.

“Stress is ubiquitous, for both humans and animals, and its effects on brain and behavior are of central importance to the understanding of both normal function and neuropsychiatric disease. It is both pernicious and ironic that chronic stress can lead to impulsive action; in many clinical cases, such as drug addiction, impulsivity is likely to worsen patterns of behavior that produce the stress in the first place, inducing a vicious cycle,” Arnsten wrote in a commentary accompanying the Cell paper, co-authored by Daeyeol Lee and Christopher Pittenger of the Yale University School of Medicine.

Circuit dynamics

The researchers believe that this circuit integrates information about the good and bad aspects of possible choices, helping the brain to produce a decision. Normally, when the circuit is turned on, neurons of the prefrontal cortex activate certain neurons called high-firing interneurons, which then suppress striosome activity.

When the animals are stressed, these circuit dynamics shift and the cortical neurons fire too late to inhibit the striosomes, which then become overexcited. This results in abnormal decision making.
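As a caricature of the circuit dynamics just described (the 5-millisecond timing window and all other numbers are invented for illustration), timely cortical input lets the high-firing interneurons suppress the striosomes, while delayed input leaves them unchecked:

```python
def striosome_output(cortical_delay_ms, striosome_drive=1.0):
    """Toy model of the described microcircuit: prefrontal input excites
    high-firing interneurons, which inhibit striosomes. If the cortical
    signal arrives too late, inhibition misses the striosome response
    and the striosomes fire unchecked. The 5 ms window is invented."""
    INHIBITION_WINDOW_MS = 5.0  # assumed window in which inhibition lands in time
    inhibited = cortical_delay_ms <= INHIBITION_WINDOW_MS
    return 0.0 if inhibited else striosome_drive

assert striosome_output(2.0) == 0.0    # timely input: striosomes suppressed
assert striosome_output(12.0) == 1.0   # late (stressed) input: overexcited
```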

“Somehow this prior exposure to chronic stress controls the integration of good and bad,” Graybiel says. “It’s as though the animals had lost their ability to balance excitation and inhibition in order to settle on reasonable behavior.”

Once this shift occurs, it remains in effect for months, the researchers found. However, they were able to restore normal decision making in the stressed mice by using optogenetics to stimulate the high-firing interneurons, thereby suppressing the striosomes. This suggests that the prefronto-striosome circuit remains intact following chronic stress and could potentially be susceptible to manipulations that would restore normal behavior in human patients whose disorders lead to abnormal decision making.

“This state change could be reversible, and it’s possible in the future that you could target these interneurons and restore the excitation-inhibition balance,” Friedman says.

The research was funded by the National Institutes of Health/National Institute for Mental Health, the CHDI Foundation, the Defense Advanced Research Projects Agency and the U.S. Army Research Office, the Bachmann-Strauss Dystonia and Parkinson Foundation, the William N. and Bernice E. Bumpus Foundation, Michael Stiefel, the Saks Kavanaugh Foundation, and John Wasserlein and Lucille Braun.

Next-generation optogenetic molecules control single neurons

Researchers at MIT and Paris Descartes University have developed a new optogenetic technique that sculpts light to target individual cells bearing engineered light-sensitive molecules, so that individual neurons can be precisely stimulated.

Until now, it has been challenging to use optogenetics to target single cells with such precise control over both the timing and location of the activation. This new advance paves the way for studies of how individual cells, and connections among those cells, generate specific behaviors such as initiating a movement or learning a new skill.

“Ideally what you would like to do is play the brain like a piano. You would want to control neurons independently, rather than having them all march in lockstep the way traditional optogenetics works, but which normally the brain doesn’t do,” says Ed Boyden, an associate professor of brain and cognitive sciences and biological engineering at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research.

The new technique relies on a new type of light-sensitive protein that can be embedded in neuron cell bodies, combined with holographic light-shaping that can focus light on a single cell.

Boyden and Valentina Emiliani, a research director at France’s National Center for Scientific Research (CNRS) and director of the Neurophotonics Laboratory at Paris Descartes University, are the senior authors of the study, which appears in the Nov. 13 issue of Nature Neuroscience. The lead authors are MIT postdoc Or Shemesh and CNRS postdocs Dimitrii Tanese and Valeria Zampini.

Precise control

More than 10 years ago, Boyden and his collaborators pioneered the use of light-sensitive proteins known as microbial opsins to manipulate neuron electrical activity. These opsins can be embedded into the membranes of neurons, and when they are exposed to certain wavelengths of light, they silence or stimulate the cells.

Over the past decade, scientists have used this technique to study how populations of neurons behave during brain tasks such as memory recall or habit formation. Traditionally, many cells are targeted simultaneously because the light shining into the brain strikes a relatively large area. However, as Boyden points out, neurons may have different functions even when they are near each other.

“Two adjacent cells can have completely different neural codes. They can do completely different things, respond to different stimuli, and play different activity patterns during different tasks,” he says.

To achieve independent control of single cells, the researchers combined two new advances: a localized, more powerful opsin and an optimized holographic light-shaping microscope.

For the opsin, the researchers used a protein called CoChR, which the Boyden lab discovered in 2014. They chose this molecule because it generates a very strong electric current in response to light (about 10 times stronger than that produced by channelrhodopsin-2, the first protein used for optogenetics).

They fused CoChR to a small protein that directs the opsin into the cell bodies of neurons and away from axons and dendrites, which extend from the neuron body. This helps to prevent crosstalk between neurons, since light that activates one neuron can also strike axons and dendrites of other neurons that intertwine with the target neuron.

Boyden then worked with Emiliani to combine this approach with a light-stimulation technique that she had previously developed, known as two-photon computer-generated holography (CGH). This can be used to create three-dimensional sculptures of light that envelop a target cell.

Traditional holography is based on reproducing, with light, the shape of a specific object in the absence of that original object. This is achieved by creating an “interferogram” that contains the information needed to reconstruct an object that was previously illuminated by a reference beam. In computer-generated holography, the interferogram is calculated by a computer, with no need for an original object. Years ago, Emiliani’s research group demonstrated that, combined with two-photon excitation, CGH can be used to refocus laser light to precisely illuminate a cell or a defined group of cells in the brain.
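The article does not detail the computation, but the Gerchberg-Saxton algorithm is one standard way such a phase pattern is calculated for CGH. The sketch below, with an invented 32x32 target, iterates between image and hologram planes, modeled as 2-D Fourier transforms of each other:

```python
import numpy as np

def gerchberg_saxton(target_intensity, iterations=50):
    """Phase-retrieval sketch: find a phase-only hologram whose far-field
    intensity approximates target_intensity. The far field is modeled as
    the 2-D FFT of the hologram plane (a common simplification)."""
    target_amp = np.sqrt(target_intensity)
    rng = np.random.default_rng(0)
    image_phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    holo_phase = np.zeros_like(image_phase)
    for _ in range(iterations):
        # Impose the desired amplitude in the image plane...
        field = target_amp * np.exp(1j * image_phase)
        # ...propagate back and keep only the phase (a spatial light
        # modulator shapes phase, not amplitude)...
        holo_phase = np.angle(np.fft.ifft2(field))
        # ...then propagate forward to update the image-plane phase.
        image_phase = np.angle(np.fft.fft2(np.exp(1j * holo_phase)))
    return holo_phase

# Invented target: concentrate light on one "cell" in a 32x32 image plane.
target = np.zeros((32, 32))
target[10, 20] = 1.0
mask = gerchberg_saxton(target)
image = np.abs(np.fft.fft2(np.exp(1j * mask))) ** 2
assert image.argmax() == 10 * 32 + 20  # brightest spot lands on the target
```

With several nonzero target pixels the same loop addresses multiple cells at once, though the reconstruction is then only approximate.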

In the new study, by combining this approach with new opsins that cluster in the cell body, the researchers showed they could stimulate individual neurons with not only precise spatial control but also great control over the timing of the stimulation. When they target a specific neuron, it responds consistently every time, with variability that is less than one millisecond, even when the cell is stimulated many times in a row.

“For the first time ever, we can bring the precision of single-cell control toward the natural timescales of neural computation,” Boyden says.

Mapping connections

Using this technique, the researchers were able to stimulate single neurons in brain slices and then measure the responses of the cells connected to them. This paves the way for possible diagramming of the connections of the brain, and for analyzing how those connections change in real time as the brain performs a task or learns a new skill.

One possible experiment, Boyden says, would be to stimulate neurons connected to each other to try to figure out if one is controlling the others or if they are all receiving input from a far-off controller.

“It’s an open question,” he says. “Is a given function being driven from afar, or is there a local circuit that governs the dynamics and spells out the exact chain of command within a circuit? If you can catch that chain of command in action and then use this technology to prove that that’s actually a causal link of events, that could help you explain how a sensation, or movement, or decision occurs.”

As a step toward that type of study, the researchers now plan to extend this approach into living animals. They are also working on improving their targeting molecules and developing high-current opsins that can silence neuron activity.

Kirill Volynski, a professor at the Institute of Neurology at University College London, who was not involved in the research, plans to use the new technology in his studies of diseases caused by mutations of proteins involved in synaptic communication between neurons.

“This gives us a very nice tool to study those mutations and those disorders,” Volynski says. “We expect this to enable a major improvement in the specificity of stimulating neurons that have mutated synaptic proteins.”

The research was funded by the National Institutes of Health, France’s National Research Agency, the Simons Foundation for the Social Brain, the Human Frontiers Science Program, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, and the Defense Advanced Research Projects Agency.