Stress can lead to risky decisions

Making decisions is not always easy, especially when choosing between two options that have both positive and negative elements, such as deciding between a job with a high salary but long hours, and a lower-paying job that allows for more leisure time.

MIT neuroscientists have now discovered that making decisions in this type of situation, known as a cost-benefit conflict, is dramatically affected by chronic stress. In a study of rats and mice, they found that stressed animals were far likelier to choose high-risk, high-payoff options.

The researchers also found that impairments of a specific brain circuit underlie this abnormal decision making, and they showed that they could restore normal behavior by manipulating this circuit. If a method for tuning this circuit in humans were developed, it could help patients with disorders such as depression, addiction, and anxiety, which often feature poor decision-making.

“One exciting thing is that by doing this very basic science, we found a microcircuit of neurons in the striatum that we could manipulate to reverse the effects of stress on this type of decision making. This to us is extremely promising, but we are aware that so far these experiments are in rats and mice,” says Ann Graybiel, an Institute Professor at MIT and member of the McGovern Institute for Brain Research.

Graybiel is the senior author of the paper, which appears in Cell on Nov. 16. The paper’s lead author is Alexander Friedman, a McGovern Institute research scientist.

Hard decisions

In 2015, Graybiel, Friedman, and their colleagues first identified the brain circuit involved in decision making that involves cost-benefit conflict. The circuit begins in the medial prefrontal cortex, which is responsible for mood control, and extends into clusters of neurons called striosomes, which are located in the striatum, a region associated with habit formation, motivation, and reward reinforcement.

In that study, the researchers trained rodents to run a maze in which they had to choose between one option that included highly concentrated chocolate milk, which they like, along with bright light, which they don’t like, and an option with dimmer light but weaker chocolate milk. By inhibiting the connection between cortical neurons and striosomes, using a technique known as optogenetics, they found that they could transform the rodents’ preference for lower-risk, lower-payoff choices to a preference for bigger payoffs despite their bigger costs.

In the new study, the researchers performed a similar experiment without optogenetic manipulations. Instead, they exposed the rodents to a short period of stress every day for two weeks.

Before experiencing stress, normal rats and mice would choose to run toward the maze arm with dimmer light and weaker chocolate milk about half the time. The researchers gradually increased the concentration of chocolate milk found in the dimmer side, and as they did so, the animals began choosing that side more frequently.

However, when chronically stressed rats and mice were put in the same situation, they continued to choose the bright light/better chocolate milk side even as the chocolate milk concentration greatly increased on the dimmer side. This was the same behavior the researchers saw in rodents that had the prefrontal cortex-striosome circuit disrupted optogenetically.

“The result is that the animal ignores the high cost and chooses the high reward,” Friedman says.

The findings help to explain how stress contributes to substance abuse and may worsen mental disorders, says Amy Arnsten, a professor of neuroscience and psychology at the Yale University School of Medicine, who was not involved in the research.

“Stress is ubiquitous, for both humans and animals, and its effects on brain and behavior are of central importance to the understanding of both normal function and neuropsychiatric disease. It is both pernicious and ironic that chronic stress can lead to impulsive action; in many clinical cases, such as drug addiction, impulsivity is likely to worsen patterns of behavior that produce the stress in the first place, inducing a vicious cycle,” Arnsten wrote in a commentary accompanying the Cell paper, co-authored by Daeyeol Lee and Christopher Pittenger of the Yale University School of Medicine.

Circuit dynamics

The researchers believe that this circuit integrates information about the good and bad aspects of possible choices, helping the brain to produce a decision. Normally, when the circuit is turned on, neurons of the prefrontal cortex activate certain neurons called high-firing interneurons, which then suppress striosome activity.

When the animals are stressed, these circuit dynamics shift and the cortical neurons fire too late to inhibit the striosomes, which then become overexcited. This results in abnormal decision making.
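To make that timing argument concrete, here is a minimal toy rate model in Python. It is purely illustrative and not the authors' model: cortical input excites a striosome directly and, via an interneuron, inhibits it after a delay, and the only thing varied is how late that inhibition arrives. The weights, time constants, and delay values are invented for the example.

```python
import numpy as np

def striosome_response(inhib_delay_steps, n_steps=100, dt=1.0):
    """Toy striosome firing rate driven by a pulse of cortical input.

    Cortex excites the striosome directly and also drives an interneuron
    that inhibits it after `inhib_delay_steps` time steps. All parameters
    are arbitrary illustration values, not fitted to data.
    """
    cortex = np.zeros(n_steps)
    cortex[20:60] = 1.0                      # a pulse of cortical drive
    striosome = np.zeros(n_steps)
    tau = 5.0                                # striosome time constant (a.u.)
    w_exc, w_inh = 1.0, 2.0                  # assumed synaptic weights
    for t in range(1, n_steps):
        inh = cortex[t - inhib_delay_steps] if t >= inhib_delay_steps else 0.0
        drive = w_exc * cortex[t] - w_inh * inh
        striosome[t] = striosome[t - 1] + dt / tau * (-striosome[t - 1] + max(drive, 0.0))
    return striosome

healthy = striosome_response(inhib_delay_steps=2)    # inhibition arrives promptly
stressed = striosome_response(inhib_delay_steps=20)  # inhibition arrives too late
print(f"peak striosome activity, healthy:  {healthy.max():.2f}")
print(f"peak striosome activity, stressed: {stressed.max():.2f}")
```

With prompt inhibition the striosome response stays small; with late-arriving inhibition it grows several-fold, mirroring the overexcitation described above.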

“Somehow this prior exposure to chronic stress controls the integration of good and bad,” Graybiel says. “It’s as though the animals had lost their ability to balance excitation and inhibition in order to settle on reasonable behavior.”

Once this shift occurs, it remains in effect for months, the researchers found. However, they were able to restore normal decision making in the stressed mice by using optogenetics to stimulate the high-firing interneurons, thereby suppressing the striosomes. This suggests that the prefronto-striosome circuit remains intact following chronic stress and could potentially be susceptible to manipulations that would restore normal behavior in human patients whose disorders lead to abnormal decision making.

“This state change could be reversible, and it’s possible in the future that you could target these interneurons and restore the excitation-inhibition balance,” Friedman says.

The research was funded by the National Institutes of Health/National Institute of Mental Health, the CHDI Foundation, the Defense Advanced Research Projects Agency and the U.S. Army Research Office, the Bachmann-Strauss Dystonia and Parkinson Foundation, the William N. and Bernice E. Bumpus Foundation, Michael Stiefel, the Saks Kavanaugh Foundation, and John Wasserlein and Lucille Braun.

Making brain implants smaller could prolong their lifespan

Many diseases, including Parkinson’s disease, can be treated with electrical stimulation from an electrode implanted in the brain. However, the electrodes can produce scarring, which diminishes their effectiveness and can necessitate additional surgeries to replace them.

MIT researchers have now demonstrated that making these electrodes much smaller can essentially eliminate this scarring, potentially allowing the devices to remain in the brain for much longer.

“What we’re doing is changing the scale and making the procedure less invasive,” says Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering, a member of MIT’s Koch Institute for Integrative Cancer Research, and the senior author of the study, which appears in the May 16 issue of Scientific Reports.

Cima and his colleagues are now designing brain implants that can not only deliver electrical stimulation but also record brain activity or deliver drugs to very targeted locations.

The paper’s lead author is former MIT graduate student Kevin Spencer. Other authors are former postdoc Jay Sy, graduate student Khalil Ramadi, Institute Professor Ann Graybiel, and David H. Koch Institute Professor Robert Langer.

Effects of size

Many Parkinson’s patients have benefited from treatment with low-frequency electrical current delivered to a part of the brain involved in movement control. The electrodes used for this deep brain stimulation are a few millimeters in diameter. After being implanted, they gradually generate scar tissue through the constant rubbing of the electrode against the surrounding brain tissue. This process, known as gliosis, contributes to the high failure rate of such devices: About half stop working within the first six months.

Previous studies have suggested that making the implants smaller or softer could reduce the amount of scarring, so the MIT team set out to measure the effects of both reducing the size of the implants and coating them with a soft polyethylene glycol (PEG) hydrogel.

The hydrogel coating was designed to have an elasticity very similar to that of the brain. The researchers could also control the thickness of the coating. They found that when coated electrodes were pushed into the brain, the soft coating would fall off, so they devised a way to apply the hydrogel and then dry it, so that it becomes a hard, thin film. After the electrode is inserted, the film soaks up water and becomes soft again.

In mice, the researchers tested both coated and uncoated glass fibers with varying diameters and found that there is a tradeoff between size and softness. Coated fibers produced much less scarring than uncoated fibers of the same diameter. However, as the electrode fibers became smaller, down to about 30 microns (0.03 millimeters) in diameter, the uncoated versions produced less scarring, because the coatings increase the diameter.

This suggests that a 30-micron, uncoated fiber is the optimal design for implantable devices in the brain.

“Before this paper, no one really knew the effects of size,” Cima says. “Softer is better, but not if it makes the electrode larger.”

New devices

The question now is whether fibers that are only 30 microns in diameter can be adapted for electrical stimulation, drug delivery, and recording electrical activity in the brain. Cima and his colleagues have had some initial success developing such devices.

“It’s one of those things that at first glance seems impossible. If you have 30-micron glass fibers, that’s slightly thicker than a piece of hair. But it is possible to do,” Cima says.

Such devices could potentially be useful for treating Parkinson’s disease or other neurological disorders. They could also be used to remove fluid from the brain to monitor whether treatments are having the intended effect, or to measure brain activity that might indicate when an epileptic seizure is about to occur.

The research was funded by the National Institutes of Health and MIT’s Institute for Soldier Nanotechnologies.

Precise technique tracks dopamine in the brain

MIT researchers have devised a way to measure dopamine in the brain much more precisely than previously possible, which should allow scientists to gain insight into dopamine’s roles in learning, memory, and emotion.

Dopamine is one of the many neurotransmitters that neurons in the brain use to communicate with each other. Previous systems for measuring these neurotransmitters have been limited in how long they provide accurate readings and how much of the brain they can cover. The new MIT device, an array of tiny carbon electrodes, overcomes both of those obstacles.

“Nobody has really measured neurotransmitter behavior at this spatial scale and timescale. Having a tool like this will allow us to explore potentially any neurotransmitter-related disease,” says Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering, a member of MIT’s Koch Institute for Integrative Cancer Research, and the senior author of the study.

Furthermore, because the array is so tiny, it has the potential to eventually be adapted for use in humans, to monitor whether therapies aimed at boosting dopamine levels are succeeding. Many human brain disorders, most notably Parkinson’s disease, are linked to dysregulation of dopamine.

“Right now deep brain stimulation is being used to treat Parkinson’s disease, and we assume that that stimulation is somehow resupplying the brain with dopamine, but no one’s really measured that,” says Helen Schwerdt, a Koch Institute postdoc and the lead author of the paper, which appears in the journal Lab on a Chip.

Studying the striatum

For this project, Cima’s lab teamed up with David H. Koch Institute Professor Robert Langer, who has a long history of drug delivery research, and Institute Professor Ann Graybiel, who has been studying dopamine’s role in the brain for decades with a particular focus on a brain region called the striatum. Dopamine-producing cells within the striatum are critical for habit formation and reward-reinforced learning.

Until now, neuroscientists have used carbon electrodes with a shaft diameter of about 100 microns to measure dopamine in the brain. However, these can only be used reliably for about a day because they produce scar tissue that interferes with the electrodes’ ability to interact with dopamine, and other types of interfering films can also form on the electrode surface over time. Furthermore, there is only about a 50 percent chance that a single electrode will end up in a spot where there is any measurable dopamine, Schwerdt says.

The MIT team designed electrodes that are only 10 microns in diameter and combined them into arrays of eight electrodes. These delicate electrodes are then wrapped in a rigid polymer called PEG, which protects them and keeps them from deflecting as they enter the brain tissue. However, the PEG is dissolved during the insertion so it does not enter the brain.

These tiny electrodes measure dopamine in the same way that the larger versions do. The researchers apply an oscillating voltage through the electrodes, and when the voltage is at a certain point, any dopamine in the vicinity undergoes an electrochemical reaction that produces a measurable electric current. Using this technique, dopamine’s presence can be monitored at millisecond timescales.
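To make that readout concrete, the sketch below (illustrative only) sweeps a voltage, models the extra current that appears near dopamine's oxidation potential, and converts the peak current back into a concentration. The sweep shape, the 0.6 V peak potential, and the calibration gain are placeholder values, not parameters from the study.

```python
import numpy as np

def triangular_sweep(v_min=-0.4, v_max=1.3, n=200):
    """One up-and-down voltage ramp, in volts (placeholder waveform)."""
    up = np.linspace(v_min, v_max, n // 2)
    return np.concatenate([up, up[::-1]])

def faradaic_current(voltage, dopamine_um, peak_v=0.6, width=0.15, gain=2.0):
    """Toy oxidation current: a bump at the oxidation potential whose
    amplitude is proportional to dopamine concentration (micromolar)."""
    return gain * dopamine_um * np.exp(-((voltage - peak_v) / width) ** 2)

def estimate_concentration(current, voltage, peak_v=0.6, gain=2.0):
    """Invert the toy model: read the current at the oxidation potential."""
    idx = np.argmin(np.abs(voltage - peak_v))
    return current[idx] / gain

v = triangular_sweep()
for true_um in (0.1, 0.5, 1.0):              # micromolar dopamine
    i = faradaic_current(v, true_um)
    print(f"true {true_um:.1f} uM -> estimated {estimate_concentration(i, v):.1f} uM")
```

Repeating such voltage sweeps in rapid succession is what allows dopamine to be tracked on the fast timescales mentioned above.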

Using these arrays, the researchers demonstrated that they could monitor dopamine levels in many parts of the striatum at once.

“What motivated us to pursue this high-density array was the fact that now we have a better chance to measure dopamine in the striatum, because now we have eight or 16 probes in the striatum, rather than just one,” Schwerdt says.

The researchers found that dopamine levels vary greatly across the striatum. This was not surprising, because they did not expect the entire region to be continuously bathed in dopamine, but this variation has been difficult to demonstrate because previous methods measured only one area at a time.

How learning happens

The researchers are now conducting tests to see how long these electrodes can continue giving a measurable signal, and so far the device has kept working for up to two months. With this kind of long-term sensing, scientists should be able to track dopamine changes over long periods of time, as habits are formed or new skills are learned.

“We and other people have struggled with getting good long-term readings,” says Graybiel, who is a member of MIT’s McGovern Institute for Brain Research. “We need to be able to find out what happens to dopamine in mouse models of brain disorders, for example, or what happens to dopamine when animals learn something.”

She also hopes to learn more about the roles of structures in the striatum known as striosomes. These clusters of cells, discovered by Graybiel many years ago, are distributed throughout the striatum. Recent work from her lab suggests that striosomes are involved in making decisions that induce anxiety.

This study is part of a larger collaboration between Cima’s and Graybiel’s labs that also includes efforts to develop injectable drug-delivery devices to treat brain disorders.

“What links all these studies together is we’re trying to find a way to chemically interface with the brain,” Schwerdt says. “If we can communicate chemically with the brain, it makes our treatment or our measurement a lot more focused and selective, and we can better understand what’s going on.”

Other authors of the paper are McGovern Institute research scientists Minjung Kim, Satoko Amemori, and Hideki Shimazu; McGovern Institute postdoc Daigo Homma; McGovern Institute technical associate Tomoko Yoshida; and undergraduates Harshita Yerramreddy and Ekin Karasan.

The research was funded by the National Institutes of Health, the National Institute of Biomedical Imaging and Bioengineering, and the National Institute of Neurological Disorders and Stroke.

Newly discovered neural connections may be linked to emotional decision-making

MIT neuroscientists have discovered connections deep within the brain that appear to form a communication pathway between areas that control emotion, decision-making, and movement. The researchers suspect that these connections, which they call striosome-dendron bouquets, may be involved in controlling how the brain makes decisions that are influenced by emotion or anxiety.

This circuit may also be one of the targets of the neural degeneration seen in Parkinson’s disease, says Ann Graybiel, an Institute Professor at MIT, member of the McGovern Institute for Brain Research, and the senior author of the study.

Graybiel and her colleagues were able to find these connections using a technique developed at MIT known as expansion microscopy, which enables scientists to expand brain tissue before imaging it. This produces much higher-resolution images than would otherwise be possible with conventional microscopes.

That technique was developed in the lab of Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences at the MIT Media Lab, who is also an author of this study. Jill Crittenden, a research scientist at the McGovern Institute, is the lead author of the paper, which appears in the Proceedings of the National Academy of Sciences the week of Sept. 19.

Tracing a circuit

In this study, the researchers focused on a small region of the brain known as the striatum, which is part of the basal ganglia — a cluster of brain centers associated with habit formation, control of voluntary movement, emotion, and addiction. Malfunctions of the basal ganglia have been associated with Parkinson’s and Huntington’s diseases, as well as autism, obsessive-compulsive disorder, and Tourette’s syndrome.

Much of the striatum is uncharted territory, but Graybiel’s lab has previously identified clusters of cells there known as striosomes. She also found that these clusters receive very specific input from parts of the brain’s prefrontal cortex involved in processing emotions, and showed that this communication pathway is necessary for making decisions that require an anxiety-provoking cost-benefit analysis, such as choosing whether to take a job that pays more but forces a move away from family and friends.

Her studies also suggested that striosomes relay information to cells within a region called the substantia nigra, one of the brain’s main dopamine-producing centers. Dopamine has many functions in the brain, including roles in initiating movement and regulating mood.

To figure out how these regions might be communicating, Graybiel, Crittenden, and their colleagues used expansion microscopy to image the striosomes and discovered extensive connections between those clusters of cells and dopamine-producing cells of the substantia nigra. The dopamine-producing cells send down many tiny extensions known as dendrites that become entwined with axons that come up to meet them from the striosomes, forming a bouquet-like structure.

“With expansion microscopy, we could finally see direct connections between these cells by unraveling their unusual rope-like bundles of axons and dendrites,” Crittenden says. “What’s really exciting to us is we can see that it’s small discrete clusters of dopamine cells with bundles that are being targeted.”

Hard decisions

This finding expands the known decision-making circuit so that it encompasses the prefrontal cortex, striosomes, and a subset of dopamine-producing cells. Together, the striosomes may be acting as a gatekeeper that absorbs sensory and emotional information coming from the cortex and integrates it to produce a decision on how to react, which is initiated by the dopamine-producing cells, the researchers say.

To explore that possibility, the researchers plan to study mice in which they can selectively activate or shut down the striosome-dendron bouquet as the mice are prompted to make decisions requiring a cost-benefit analysis.

The researchers also plan to investigate whether these connections are disrupted in mouse models of Parkinson’s disease. MRI studies and postmortem analysis of brains of Parkinson’s patients have shown that death of dopamine cells in the substantia nigra is strongly correlated with the disease, but more work is needed to determine if this subset overlaps with the dopamine cells that form the striosome-dendron bouquets.

How we make emotional decisions

Some decisions arouse far more anxiety than others. Among the most anxiety-provoking are those that involve options with both positive and negative elements, such as choosing to take a higher-paying job in a city far from family and friends, versus choosing to stay put with less pay.

MIT researchers have now identified a neural circuit that appears to underlie decision-making in this type of situation, which is known as approach-avoidance conflict. The findings could help researchers to discover new ways to treat psychiatric disorders that feature impaired decision-making, such as depression, schizophrenia, and borderline personality disorder.

“In order to create a treatment for these types of disorders, we need to understand how the decision-making process is working,” says Alexander Friedman, a research scientist at MIT’s McGovern Institute for Brain Research and the lead author of a paper describing the findings in the May 28 issue of Cell.

Friedman and colleagues also demonstrated the first step toward developing possible therapies for these disorders: By manipulating this circuit in rodents, they were able to transform a preference for lower-risk, lower-payoff choices to a preference for bigger payoffs despite their bigger costs.

The paper’s senior author is Ann Graybiel, an MIT Institute Professor and member of the McGovern Institute. Other authors are postdoc Daigo Homma, research scientists Leif Gibb and Ken-ichi Amemori, undergraduates Samuel Rubin and Adam Hood, and technical assistant Michael Riad.

Making hard choices

The new study grew out of an effort to figure out the role of striosomes — clusters of cells distributed throughout the striatum, a large brain region involved in coordinating movement and emotion and implicated in some human disorders. Graybiel discovered striosomes many years ago, but their function had remained mysterious, in part because they are so small and deep within the brain that it is difficult to image them with functional magnetic resonance imaging (fMRI).

Previous studies from Graybiel’s lab identified regions of the brain’s prefrontal cortex that project to striosomes. These regions have been implicated in processing emotions, so the researchers suspected that this circuit might also be related to emotion.

To test this idea, the researchers studied rats as they performed five different types of behavioral tasks, including an approach-avoidance scenario. In that situation, rats running a maze had to choose between one option that included strong chocolate milk, which they like, and bright light, which they don’t, and an option with dimmer light but weaker chocolate milk.

When humans are forced to make these kinds of cost-benefit decisions, they usually experience anxiety, which influences the choices they make. “This type of task is potentially very relevant to anxiety disorders,” Gibb says. “If we could learn more about this circuitry, maybe we could help people with those disorders.”

The researchers also tested rats in four other scenarios in which the choices were easier and less fraught with anxiety.

“By comparing performance in these five tasks, we could look at cost-benefit decision-making versus other types of decision-making, allowing us to reach the conclusion that cost-benefit decision-making is unique,” Friedman says.

Using optogenetics, which allowed them to turn cortical input to the striosomes on or off by shining light on the cortical cells, the researchers found that the circuit connecting the cortex to the striosomes plays a causal role in influencing decisions in the approach-avoidance task, but none at all in other types of decision-making.

When the researchers shut off input to the striosomes from the cortex, they found that the rats began choosing the high-risk, high-reward option as much as 20 percent more often than they had previously chosen it. If the researchers stimulated input to the striosomes, the rats began choosing the high-cost, high-reward option less often.

Paul Glimcher, a professor of physiology and neuroscience at New York University, describes the study as a “masterpiece” and says he is particularly impressed by the use of a new technology, optogenetics, to solve a longstanding mystery. The study also opens up the possibility of studying striosome function in other types of decision-making, he adds.

“This cracks the 20-year puzzle that [Graybiel] wrote — what do the striosomes do?” says Glimcher, who was not part of the research team. “In 10 years we will have a much more complete picture, of which this paper is the foundational stone. She has demonstrated that we can answer this question, and answered it in one area. A lot of labs will now take this up and resolve it in other areas.”

Emotional gatekeeper

The findings suggest that the striatum, and the striosomes in particular, may act as a gatekeeper that absorbs sensory and emotional information coming from the cortex and integrates it to produce a decision on how to react, the researchers say.

That gatekeeper circuit also appears to include a part of the midbrain called the substantia nigra, which has dopamine-containing cells that play an important role in motivation and movement. The researchers believe that when activated by input from the striosomes, these substantia nigra cells produce a long-term effect on an animal or human patient’s decision-making attitudes.

“We would so like to find a way to use these findings to relieve anxiety disorder, and other disorders in which mood and emotion are affected,” Graybiel says. “That kind of work has a real priority to it.”

In addition to pursuing possible treatments for anxiety disorders, the researchers are now trying to better understand the role of the dopamine-containing substantia nigra cells in this circuit, which plays a critical role in Parkinson’s disease and may also be involved in related disorders.

The research was funded by the National Institute of Mental Health, the CHDI Foundation, the Defense Advanced Research Projects Agency, the U.S. Army Research Office, the Bachmann-Strauss Dystonia and Parkinson Foundation, and the William N. and Bernice E. Bumpus Foundation.

Bold new microscopies for the brain

McGovern researchers create unexpected new approaches to microscopy that are changing the way scientists look at the brain.

Ask McGovern Investigator Ed Boyden about his ten-year plan and you’ll get an immediate and straight-faced answer: “We would like to understand the brain.”

He means it. Boyden intends to map all of the cells in a brain, all of their connections, and even all of the molecules that form those connections and determine their strengths. He also plans to study how information flows through the brain and to use this to generate a working model. “I’d love to be able to load a map of an entire brain into a computer and see if we can simulate the brain,” he says.

Boyden likens the process to reverse-engineering a computer by opening it up and looking inside. The analogy, though not perfect, provides a sense of the enormity of the task ahead. As complicated as computers are, brains are far more complex, and they are also much harder to visualize, given the need to see features at multiple scales. For example, signals travel from cell to cell through synaptic connections that are measured in nanometers, but the signals are then propagated along nerve fibers that may span several centimeters—a difference of more than a million-fold. Modern microscopes make it possible to study features at one scale or the other, but not both together. Similarly, there are methods for visualizing electrical activity in single neurons or in whole brains, but there is no way to see both at once. So Boyden is building his own tools, and in the process is pushing the limits of imagination. “Our group is often trying to do the opposite of what other people do,” Boyden says.

Boyden’s new methods are part of a broader push to understand the brain’s connectivity, an objective that gained impetus two years ago with the President’s BRAIN Initiative, and with allied efforts such as the NIH-funded Human Connectome Project. Hundreds of researchers have already downloaded Boyden’s recently published protocols, including colleagues at the McGovern Institute who are using them to advance their studies of brain function and disease.

Just add water

Under the microscope, the brain section prepared by Jill Crittenden looks like a tight bundle of threads. The nerve fibers are from a mouse brain, from a region known to degenerate in humans with Parkinson’s disease. The loss of the tiny synaptic connections between these fibers may be the earliest sign of degeneration, so Crittenden, a research scientist who has been studying this disease for several years in the lab of McGovern Investigator Ann Graybiel, wants to be able to see them.

But she can’t. They are far too small, smaller than a wavelength of light, meaning they are beyond the resolution limit of optical microscopy. To bring these structures into view, one of Boyden’s technologies, called expansion microscopy (ExM), simply makes the specimen bigger, allowing it to be viewed on a conventional laboratory microscope.

The idea is at once obvious and fantastical. “Expansion microscopy is the kind of thing scientists daydream about,” says Paul Tillberg, a graduate student in Boyden’s lab. “You either shrink the scientist or expand the specimen.”

Leaving Crittenden’s sample in place, Tillberg adds water. Minutes later, the tissue has expanded and become transparent, a ghostly and larger version of its former self.

Crittenden takes another look through the scope. “It’s like someone has loosened up all the fibers. I can see each one independently, and see them interconnecting,” she says. “ExM will add a lot of power to the tools we’ve developed for visualizing the connections we think are degenerating.”

It took Tillberg and his fellow graduate student Fei Chen several months of brainstorming to find a plausible way to make ExM a reality. They had found inspiration in the work of MIT physicist Toyoichi Tanaka, who in the 1970s had studied smart gels, polymers that rapidly expand in response to a change in environment. One familiar example is the absorbent material in baby diapers, and Boyden’s team turned to this substance for the expansion technique.

The process they devised involves several steps. The tissue is first labeled using fluorescent antibodies that bind to molecules of interest, and then it is impregnated with the gel-forming material. Once the gel has set, the fluorescent markers are anchored to the gel, and the original tissue sample is digested, allowing the gel to stretch evenly in all directions.

When water is added, the gel expands and the fluorescent markers spread out like a picture on a balloon. Remarkably, the 3D shapes of even the finest structures are faithfully preserved during the expansion, making it possible to see them using a conventional microscope. By labeling molecules with different colors, the researchers can even distinguish pre-synaptic from post-synaptic structures. Boyden plans eventually to use hundreds, possibly thousands, of colors, and to increase the expansion factor to 10 times original size, equivalent to a 1000-fold increase in volume.
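The arithmetic behind those expansion figures is simple and worth spelling out, because it explains why physical expansion helps at all: a 10-fold linear expansion is a 1,000-fold increase in volume, and it makes features that were 10 times too small for a conventional microscope appear large enough to resolve. The ~300 nm diffraction limit used below is a typical figure for conventional optics, not a number from the article.

```python
# Back-of-the-envelope numbers for expansion microscopy (illustrative only).
linear_expansion = 10                       # planned linear expansion factor
volume_expansion = linear_expansion ** 3    # 10 x 10 x 10 = 1000-fold volume
diffraction_limit_nm = 300                  # rough resolution of a standard light microscope
effective_resolution_nm = diffraction_limit_nm / linear_expansion

print(volume_expansion)                     # 1000
print(effective_resolution_nm)              # features ~30 nm apart become distinguishable
```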

ExM is not the only way to see fine structures such as synapses; they can also be visualized by electron microscopy, or by recently developed ‘super-resolution’ optical methods that garnered a 2014 Nobel Prize. These techniques, however, require expensive equipment, and the images are very time-consuming to produce.

“With ExM, because the sample is physically bigger, you can scan it very quickly using just a regular microscope,” says Boyden.

Boyden is already talking to other leading researchers in the field, including Kwanghun Chung at MIT and George Church at Harvard, about ways to further enhance the ExM method. Within the McGovern Institute, among those who expect to benefit from these advances is Guoping Feng, who is developing mouse models of autism, schizophrenia and other disorders by introducing some of the same genetic changes seen in humans with these disorders. Many of the genes associated with autism and schizophrenia play a role in the formation of synapses, but even with the mouse models at his disposal, Feng isn’t sure what goes wrong with them because they are so hard to see. “If we can make parts of the brain bigger, we might be able to see how the assembly of this synaptic machinery changes in different disorders,” he says.

3D Movies Without Special Glasses

Another challenge facing Feng and many other researchers is that many brain functions, and many brain diseases, are not confined to one area, but are widely distributed across the brain. Trying to understand these processes by looking through a small microscopic window has been compared to watching a soccer game by observing just a single square foot of the playing field.

No current technology can capture millisecond-by-millisecond electrical events across the entire living brain, so Boyden and collaborators in Vienna, Austria, decided to develop one. They turned to a method called light field microscopy (LFM) as a way to capture 3D movies of an animal’s thoughts as they flash through the entire nervous system.

The idea is mind-boggling to imagine, but the hardware is quite simple. The instrument records images in depth the same way humans do, using multiple ‘eyes’ to send slightly offset 2D images to a computer that can reconstruct a 3D image of the world. (The idea had been developed in the 1990s by Boyden’s MIT colleague Ted Adelson, and a similar method was used to create Google Street View.) Boyden and his collaborators started with a microscope of standard design, attached a video camera, and inserted between them a six-by-six array of miniature lenses, designed in Austria, that projects a grid of offset images into the camera and the computer.

The rest is math. “We take the multiple, superimposed flat images projected through the lens array and combine them into a volume,” says Young-Gyu Yoon, a graduate student in the Boyden lab who designed and wrote the software.
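As a conceptual stand-in for that step (the reconstruction software described here solves a more sophisticated problem), a shift-and-sum sketch shows how a grid of slightly offset sub-images can be combined into depth slices: undo each lenslet's parallax for one candidate depth, average the results, and stack the slices into a volume. The 6-by-6 array size matches the article; the image dimensions, shifts, and data are invented for the example.

```python
import numpy as np

def refocus(subimages, lens_coords, disparity):
    """Average sub-images after undoing the parallax for one depth plane.

    subimages   : dict mapping (row, col) lenslet index -> 2D image
    lens_coords : iterable of (row, col) lenslet indices
    disparity   : pixels of shift per unit lenslet offset at this depth
    """
    acc = None
    for (r, c) in lens_coords:
        dy = int(round((r - 2.5) * disparity))   # offset from array center
        dx = int(round((c - 2.5) * disparity))
        shifted = np.roll(np.roll(subimages[(r, c)], -dy, axis=0), -dx, axis=1)
        acc = shifted if acc is None else acc + shifted
    return acc / len(subimages)

# Fake data: a 6x6 grid of 64x64 sub-images (random values, shapes only).
rng = np.random.default_rng(0)
coords = [(r, c) for r in range(6) for c in range(6)]
subs = {rc: rng.random((64, 64)) for rc in coords}

# One refocused slice per candidate depth; stacking the slices gives a volume.
volume = np.stack([refocus(subs, coords, d) for d in (-2, -1, 0, 1, 2)])
print(volume.shape)   # (5, 64, 64): depth x height x width
```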

Another graduate student, Nikita Pak, used the new method to measure neural activity in C. elegans, a tiny worm whose entire nervous system consists of just 302 neurons. By using a worm that had been genetically engineered so that its neurons light up when they become electrically active, Pak was able to make 3D movies of the activity in the entire nervous system. “The setup is just so simple,” he says. “Every time I use it, I think it’s cool.”

The team then tested their method on a larger brain, that of the larval zebrafish. They presented the larvae with a noxious odor, and found that it triggered activity in around 5000 neurons, over a period of about three minutes. Even with this relatively simple example, activity is distributed widely throughout the brain, and would be difficult to detect with previous techniques. Boyden is now working towards recording activity over much longer timespans, and he also envisions scaling the method up to image the much more complex brains of mammals.

He hopes to start with the smallest known mammal, the Etruscan shrew. This animal resembles a mouse, but it is ten times smaller, no bigger than a thimble. Its brain is also much smaller, with only a few million neurons, compared to 100 million in a mouse.

Whole brain imaging in this tiny creature could provide an unprecedented view of mammalian brain activity, including its disruption in disease states. Feng cites sensory overload in autism as an example. “If we can see how sensory activity spreads through the brain, we can start to understand how overload starts and how it spills over to other brain areas,” he says.

Visions of Convergence

While Boyden’s microscopy technologies are providing his colleagues with new ways to study brain disorders, Boyden himself hopes to use them to understand the brain as a whole. He plans to use ExM to map connections and identify which molecules are where; 3D whole-brain imaging to trace brain activity as it unfolds in real time; and optogenetic techniques to stimulate the brain and directly record the resulting activity. By combining all three tools, he hopes to pin stimuli and activity to the molecules and connections on the map and then use that to build a computational model that simulates brain activity.

The plan is grandiose, and the tools aren’t all ready yet, but to make the scheme plausible in the proposed timeframe, Boyden is adhering to a few principles. His methods are fast, capturing information-dense images rapidly rather than scanning over days, and inclusive, imaging whole brains rather than chunks that need to be assembled. They are also accessible, so researchers don’t need to spend large sums to acquire specialized equipment or expertise in-house.

The challenges ahead might appear insurmountable at times, but Boyden is undeterred. He moves forward, his mind open to even the most far-fetched ideas, because they just might work.

Are we there yet?

“Are we there yet?”

As anyone who has traveled with young children knows, maintaining focus on distant goals can be a challenge. A new study from MIT suggests how the brain achieves this task, and indicates that the neurotransmitter dopamine may signal the value of long-term rewards. The findings may also explain why patients with Parkinson’s disease — in which dopamine signaling is impaired — often have difficulty in sustaining motivation to finish tasks.

The work is described this week in the journal Nature.

Previous studies have linked dopamine to rewards, and have shown that dopamine neurons show brief bursts of activity when animals receive an unexpected reward. These dopamine signals are believed to be important for reinforcement learning, the process by which an animal learns to perform actions that lead to reward.

Taking the long view

In most studies, that reward has been delivered within a few seconds. In real life, though, gratification is not always immediate: Animals must often travel in search of food, and must maintain motivation for a distant goal while also responding to more immediate cues. The same is true for humans: A driver on a long road trip must remain focused on reaching a final destination while also reacting to traffic, stopping for snacks, and entertaining children in the back seat.

The MIT team, led by Institute Professor Ann Graybiel — who is also an investigator at MIT’s McGovern Institute for Brain Research — decided to study how dopamine changes during a maze task that approximates working toward delayed gratification. The researchers trained rats to navigate a maze to reach a reward. During each trial, a rat would hear a tone instructing it to turn either right or left at an intersection to find a chocolate milk reward.

Rather than simply measuring the activity of dopamine-containing neurons, the MIT researchers wanted to measure how much dopamine was released in the striatum, a brain structure known to be important in reinforcement learning. They teamed up with Paul Phillips of the University of Washington, who has developed a technology called fast-scan cyclic voltammetry (FSCV) in which tiny, implanted, carbon-fiber electrodes allow continuous measurements of dopamine concentration based on its electrochemical fingerprint.

“We adapted the FSCV method so that we could measure dopamine at up to four different sites in the brain simultaneously, as animals moved freely through the maze,” explains first author Mark Howe, a former graduate student with Graybiel who is now a postdoc in the Department of Neurobiology at Northwestern University. “Each probe measures the concentration of extracellular dopamine within a tiny volume of brain tissue, and probably reflects the activity of thousands of nerve terminals.”

Gradual increase in dopamine

From previous work, the researchers expected that they might see pulses of dopamine released at different times in the trial, “but in fact we found something much more surprising,” Graybiel says: The level of dopamine increased steadily throughout each trial, peaking as the animal approached its goal — as if in anticipation of a reward.

The rats’ behavior varied from trial to trial — some runs were faster than others, and sometimes the animals would stop briefly — but the dopamine signal did not vary with running speed or trial duration. Nor did it depend on the probability of getting a reward, something that had been suggested by previous studies.

“Instead, the dopamine signal seems to reflect how far away the rat is from its goal,” Graybiel explains. “The closer it gets, the stronger the signal becomes.” The researchers also found that the size of the signal was related to the size of the expected reward: When rats were trained to anticipate a larger gulp of chocolate milk, the dopamine signal rose more steeply to a higher final concentration.

In some trials the T-shaped maze was extended to a more complex shape, requiring animals to run further and to make extra turns before reaching a reward. During these trials, the dopamine signal ramped up more gradually, eventually reaching the same level as in the shorter maze. “It’s as if the animal were adjusting its expectations, knowing that it had further to go,” Graybiel says.
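One simple way to summarize these observations, offered here only as an illustration rather than as the authors' model, is to treat the signal as a discounted value of the upcoming reward: it depends on the remaining distance to the goal, scales with reward size, and in a longer maze ramps more gradually from a lower starting point while reaching the same final level. A few lines of Python make the pattern explicit; the discount factor and reward units are arbitrary.

```python
def value_ramp(path_length, reward, gamma=0.95):
    """Toy 'value' at each step: the reward discounted by remaining distance."""
    return [reward * gamma ** (path_length - step) for step in range(path_length + 1)]

short_maze = value_ramp(path_length=20, reward=1.0)   # standard maze
long_maze  = value_ramp(path_length=40, reward=1.0)   # extended maze, same reward
big_reward = value_ramp(path_length=20, reward=2.0)   # larger chocolate milk reward

print(f"short maze: start {short_maze[0]:.2f} -> goal {short_maze[-1]:.2f}")
print(f"long maze:  start {long_maze[0]:.2f} -> goal {long_maze[-1]:.2f}")
print(f"big reward: start {big_reward[0]:.2f} -> goal {big_reward[-1]:.2f}")
```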

The traces represent brain activity in rats as they navigate through different mazes to receive a chocolate milk reward.

An ‘internal guidance system’

“This means that dopamine levels could be used to help an animal make choices on the way to the goal and to estimate the distance to the goal,” says Terrence Sejnowski of the Salk Institute, a computational neuroscientist who is familiar with the findings but who was not involved with the study. “This ‘internal guidance system’ could also be useful for humans, who also have to make choices along the way to what may be a distant goal.”

One question that Graybiel hopes to examine in future research is how the signal arises within the brain. Rats and other animals form cognitive maps of their spatial environment, with so-called “place cells” that are active when the animal is in a specific location. “As our rats run the maze repeatedly,” she says, “we suspect they learn to associate each point in the maze with its distance from the reward that they experienced on previous runs.”

As for the relevance of this research to humans, Graybiel says, “I’d be shocked if something similar were not happening in our own brains.” It’s known that Parkinson’s patients, in whom dopamine signaling is impaired, often appear to be apathetic, and have difficulty in sustaining motivation to complete a long task. “Maybe that’s because they can’t produce this slow ramping dopamine signal,” Graybiel says.

Patrick Tierney at MIT and Stefan Sandberg at the University of Washington also contributed to the study, which was funded by the National Institutes of Health, the National Parkinson Foundation, the CHDI Foundation, the Sydney family and Mark Gorenberg.

Breaking habits before they start

Our daily routines can become so ingrained that we perform them automatically, such as taking the same route to work every day. Some behaviors, such as smoking or biting your fingernails, become so habitual that we can’t stop even if we want to.

Although breaking habits can be hard, MIT neuroscientists have now shown that they can prevent them from taking root in the first place, in rats learning to run a maze to earn a reward. The researchers first demonstrated that activity in two distinct brain regions is necessary in order for habits to crystallize. Then, they were able to block habits from forming by interfering with activity in one of the brain regions — the infralimbic (IL) cortex, which is located in the prefrontal cortex.

The MIT researchers, led by Institute Professor Ann Graybiel, used a technique called optogenetics to block activity in the IL cortex, allowing them to switch cells in that region off with light. When the cells were turned off during every maze training run, the rats still learned to run the maze correctly, but when the reward was later made to taste bad, they stopped running, showing that a habit had not formed. If a habit had formed, they would have kept running the maze anyway.

“It’s usually so difficult to break a habit,” Graybiel says. “It’s also difficult to have a habit not form when you get a reward for what you’re doing. But with this manipulation, it’s absolutely easy. You just turn the light on, and bingo.”

Graybiel, a member of MIT’s McGovern Institute for Brain Research, is the senior author of a paper describing the findings in the June 27 issue of the journal Neuron. Kyle Smith, a former MIT postdoc who is now an assistant professor at Dartmouth College, is the paper’s lead author.

Patterns of habitual behavior

Previous studies of how habits are formed and controlled have implicated the IL cortex as well as the striatum, a part of the brain related to addiction and repetitive behavioral problems, as well as normal functions such as decision-making, planning and response to reward. It is believed that the motor patterns needed to execute a habitual behavior are stored in the striatum and its circuits.

Recent studies from Graybiel’s lab have shown that disrupting activity in the IL cortex can block the expression of habits that have already been learned and stored in the striatum. Last year, Smith and Graybiel found that the IL cortex appears to decide which of two previously learned habits will be expressed.

“We have evidence that these two areas are important for habits, but they’re not connected at all, and no one has much of an idea of what the cells are doing as a habit is formed, as the habit is lost, and as a new habit takes over,” Smith says.

To investigate that, Smith recorded activity in cells of the IL cortex as rats learned to run a maze. He found activity patterns very similar to those that appear in the striatum during habit formation. Several years ago, Graybiel found that a distinctive “task-bracketing” pattern develops when habits are formed. This means that the cells are very active when the animal begins its run through the maze, are quiet during the run, and then fire up again when the task is finished.

This kind of pattern “chunks” habits into a large unit that the brain can simply turn on when the habitual behavior is triggered, without having to think about each individual action that goes into the habitual behavior.

The researchers found that this pattern took longer to appear in the IL cortex than in the striatum, and it was also less permanent. Unlike the pattern in the striatum, which remains stored even when a habit is broken, the IL cortex pattern appears and disappears as habits are formed and broken. This was the clue that the IL cortex, not the striatum, was tracking the development of the habit.

Multiple layers of control

The researchers’ ability to optogenetically block the formation of new habits suggests that the IL cortex not only exerts real-time control over habits and compulsions, but is also needed for habits to form in the first place.

“The previous idea was that the habits were stored in the sensorimotor system and this cortical area was just selecting the habit to be expressed. Now we think it’s a more fundamental contribution to habits, that the IL cortex is more actively making this happen,” Smith says.

This arrangement offers multiple layers of control over habitual behavior, which could be advantageous in reining in automatic behavior, Graybiel says. It is also possible that the IL cortex is contributing specific pieces of the habitual behavior, in addition to exerting control over whether it occurs, according to the researchers. They are now trying to determine whether the IL cortex and the striatum are communicating with and influencing each other, or simply acting in parallel.

The study suggests a new way to look for abnormal activity that might cause disorders of repetitive behavior, Smith says. Now that the researchers have identified the neural signature of a normal habit, they can look for signs of habitual behavior that is learned too quickly or becomes too rigid. Finding such a signature could allow scientists to develop new ways to treat disorders of repetitive behavior by using deep brain stimulation, which uses electronic impulses delivered by a pacemaker to suppress abnormal brain activity.

The research was funded by the National Institutes of Health, the Office of Naval Research, the Stanley H. and Sheila G. Sydney Fund and funding from R. Pourian and Julia Madadi.

Compulsive no more

By activating a brain circuit that controls compulsive behavior, McGovern neuroscientists have shown that they can block a compulsive behavior in mice — a result that could help researchers develop new treatments for diseases such as obsessive-compulsive disorder (OCD) and Tourette’s syndrome.

About 1 percent of U.S. adults suffer from OCD, and patients usually receive antianxiety drugs or antidepressants, behavioral therapy, or a combination of therapy and medication. For those who do not respond to those treatments, a new alternative is deep brain stimulation, which delivers electrical impulses via a pacemaker implanted in the brain.

For this study, the MIT team used optogenetics to control neuron activity with light. This technique is not yet ready for use in human patients, but studies such as this one could help researchers identify brain activity patterns that signal the onset of compulsive behavior, allowing them to more precisely time the delivery of deep brain stimulation.

“You don’t have to stimulate all the time. You can do it in a very nuanced way,” says Ann Graybiel, an Institute Professor at MIT, a member of MIT’s McGovern Institute for Brain Research and the senior author of a Science paper describing the study.

The paper’s lead author is Eric Burguière, a former postdoc in Graybiel’s lab who is now at the Brain and Spine Institute in Paris. Other authors are Patricia Monteiro, a research affiliate at the McGovern Institute, and Guoping Feng, the James W. and Patricia T. Poitras Professor of Brain and Cognitive Sciences and a member of the McGovern Institute.

Controlling compulsion

In earlier studies, Graybiel has focused on how to break normal habits; in the current work, she turned to a mouse model developed by Feng to try to block a compulsive behavior. The model mice lack a particular gene, known as Sapap3, that codes for a protein found in the synapses of neurons in the striatum — a part of the brain related to addiction and repetitive behavioral problems, as well as normal functions such as decision-making, planning and response to reward.

For this study, the researchers trained mice whose Sapap3 gene was knocked out to groom compulsively at a specific time, allowing the researchers to try to interrupt the compulsion. To do this, they used a Pavlovian conditioning strategy in which a neutral event (a tone) is paired with a stimulus that provokes the desired behavior — in this case, a drop of water on the mouse’s nose, which triggers the mouse to groom. This strategy was based on therapeutic work with OCD patients, which uses this kind of conditioning.

After several hundred trials, both normal and knockout mice became conditioned to groom upon hearing the tone, which always occurred just over a second before the water drop fell. However, after a certain point their behaviors diverged: The normal mice began waiting until just before the water drop fell to begin grooming. This type of behavior is known as optimization, because it prevents the mice from wasting unnecessary effort.

This behavior optimization never appeared in the knockout mice, which continued to groom as soon as they heard the tone, suggesting that their ability to suppress compulsive behavior was impaired.

The researchers suspected that failed communication between the striatum, which is related to habits, and the neocortex, the seat of higher functions that can override simpler behaviors, might be to blame for the mice’s compulsive behavior. To test this idea, they used optogenetics, which allows them to control cell activity with light by engineering cells to express light-sensitive proteins.

When the researchers stimulated light-sensitive cortical cells that send messages to the striatum at the same time that the tone went off, the knockout mice stopped their compulsive grooming almost totally, yet they could still groom when the water drop came. The researchers suggest that this cure resulted from signals sent from the cortical neurons to a very small group of inhibitory neurons in the striatum, which silence the activity of neighboring striatal cells and cut off the compulsive behavior.

“Through the activation of this pathway, we could elicit behavior inhibition, which appears to be dysfunctional in our animals,” Burguière says.

The researchers also tested the optogenetic intervention in mice as they groomed in their cages, with no conditioning cues. During three-minute periods of light stimulation, the knockout mice groomed much less than they did without the stimulation.

Scott Rauch, president and psychiatrist-in-chief of McLean Hospital in Belmont, Mass., says the MIT study “opens the door to a universe of new possibilities by identifying a cellular and circuitry target for future interventions.”

“This represents a major leap forward, both in terms of delineating the brain basis of pathological compulsive behavior and in offering potential avenues for new treatment approaches,” adds Rauch, who was not involved in this study.

Graybiel and Burguière are now seeking markers of brain activity that could reveal when a compulsive behavior is about to start, to help guide the further development of deep brain stimulation treatments for OCD patients.

The research was funded by the Simons Initiative on Autism and the Brain at MIT, the National Institute of Child Health and Human Development, the National Institute of Mental Health, and the Simons Foundation Autism Research Initiative.

Obama hosts Dresselhaus, Graybiel and Luu in Oval Office

President Barack Obama met Thursday, March 28, in the Oval Office with the six U.S. recipients of the 2012 Kavli Prizes — including MIT’s Mildred S. Dresselhaus, Ann M. Graybiel and Jane X. Luu. Obama and his science and technology advisor, John P. Holdren, received the scientists to recognize their landmark contributions in nanoscience, neuroscience and astrophysics, respectively.

“American scientists, engineers and innovators strengthen our nation every day and in countless ways, but the all-stars honored by the Kavli Foundation deserve special praise for the scale of their advances in some of the most important and exciting research disciplines today,” said Holdren, who also serves as director of the White House Office of Science and Technology Policy. “I am grateful not only for their profound accomplishments, but for the inspiration they are providing to a new generation of doers, makers and discoverers.”

The researchers received their Kavli Prizes for making fundamental contributions to our understanding of the outer solar system; of the differences in material properties at nano- and larger scales; and of how the brain receives and responds to sensations such as sight, sound and touch.

The 2012 Kavli Prize in Astrophysics was awarded to Luu, David C. Jewitt of the University of California at Los Angeles, and Michael E. Brown of the California Institute of Technology for discovering and characterizing the Kuiper Belt and its largest members, work that led to a major advance in the understanding of the history of our planetary system. The Kuiper Belt lies beyond the orbit of Neptune and is a disk of more than 70,000 small bodies made of rock and ice that orbit the sun. Jewitt and Luu discovered the Kuiper Belt, and Brown discovered and characterized many of its largest members.

The 2012 Kavli Prize in Nanoscience was awarded to Dresselhaus for her work explaining why the properties of materials structured at the nanoscale can vary so much from those of the same materials at larger dimensions. Her early work provided the foundation for later discoveries concerning the famous C60 buckyball, carbon nanotubes and graphene. Dresselhaus received the Kavli Prize for her research into uniform oscillations of elastic arrangements of atoms or molecules called phonons; phonon-electron interactions; and heat conductivity in nanostructures.

The 2012 Kavli Prize in Neuroscience was awarded to Graybiel, Cornelia Isabella Bargmann of Rockefeller University, and Winfried Denk of the Max Planck Institute for Medical Research, who have pioneered the study of how sensory signals pass from the point of sensation — whether the eye, the foot or the nose — to the brain, and how decisions are made to respond. Each working on different parts of the brain, and using different techniques and models, they have combined precise neuroanatomy with sophisticated functional studies to gain understanding of their chosen systems.