Is it worth the risk?

During the Klondike Gold Rush, thousands of prospectors climbed Alaska’s dangerous Chilkoot Pass in search of riches. McGovern researchers are exploring how a once-overlooked part of the brain might be at the root of cost-benefit decisions like these—how the brain balances risk and reward to make a choice.

Is it worth speeding up on the highway to save a few minutes’ time? How about accepting a job that pays more, but requires longer hours in the office?

Scientists call these types of real-life situations cost-benefit conflicts. Choosing well is an essential survival ability—consider the animal that must decide when to expose itself to predation to gather more food.

Now, McGovern researchers are discovering that this fundamental capacity to make decisions may originate in the basal ganglia—a brain region once considered unimportant to the human
experience—and that circuits associated with this structure may play a critical role in determining our state of mind.

Anatomy of decision-making

A few years back, McGovern investigator Ann Graybiel noticed that in the brain imaging literature, a specific part of the cortex, called the pregenual anterior cingulate cortex or pACC, was implicated in certain psychiatric disorders as well as in tasks involving cost-benefit decisions. Thanks to her now classic neuroanatomical work defining the complex anatomy and function of the basal ganglia, Graybiel knew that the pACC projected back into the basal ganglia—including its largest cluster of neurons, the striatum.

The striatum sits beneath the cortex, with a mouse-like main body and curving tail. It seems to serve as a critical way-station, communicating with both the brain’s sensory and motor areas above, and the limbic system (linked to emotion and memory) below. Running through the striatum are striosomes, column-like neurochemical compartments. They wire down to a small but important part of the brain called the substantia nigra, which houses the vast majority of the brain’s dopamine neurons. Dopamine is a key neurochemical heavily involved, much like the basal ganglia as a whole, in reward, learning, and movement. The pACC region related to mood control targeted these striosomes, setting up a communication line from the neocortex to the dopamine neurons.

Graybiel discovered these striosomes early in her career, and understood them to have distinct wiring from other compartments in the striatum, but picking out these small, hard-to-find striosomes posed a technological challenge—so it was exciting to have this intriguing link to the pACC and mood disorders.

Working with Ken-ichi Amemori, then a research scientist in her lab, she adapted a common human cost-benefit conflict test for macaque monkeys. The monkeys could elect to receive a food treat, but the treat would always be accompanied by an annoying puff of air to the eyes. Before they decided, a visual cue told them exactly how much treat they could get, and exactly how strong the air puff would be, so they could choose if the treat was worth it.

Normal monkeys varied their choices in a fairly rational manner, rejecting the treat whenever the air puff seemed too strong, or the treat too small to be worth it—and this corresponded with activity in the pACC neurons. Interestingly, the team found that some pACC neurons responded more when the animals approached the combined offers, while other pACC neurons fired more when the animals avoided them. “It is as though there are two opposing armies. And the one that wins controls the state of the animal,” Graybiel says. Moreover, when her team electrically stimulated these pACC neurons, the animals began to avoid the offers, even offers they normally would have approached. “It is as though when the stimulation is on, they think the future is worse than it really is,” she says.

Intriguingly, this effect only worked in situations where the animal had to weigh the value of a cost against a benefit. It had no effect on a decision between two negatives or two positives, like two different sizes of treats. The anxiety drug diazepam also reversed the stimulatory effect, but again, only on cost-benefit choices. “This particular kind of mood-influenced cost-benefit decision-making occurs not only under conflict conditions but in our regular day-to-day lives,” Graybiel says. “For example: I know that if I eat too much chocolate, I might get fat, but I love it, I want it.”

Glass half empty

Over the next few years, Graybiel, with another research scientist in her lab, Alexander Friedman, unraveled the circuit behind the macaques’ choices. They adapted the test for rats and mice,
so that they could more easily combine the cellular and molecular technologies needed to study striosomes, such as optogenetics and mouse engineering.

They found that the cortex (specifically, the prelimbic region of the prefrontal cortex in rodents) wires onto both striosomes and fast-acting interneurons that also target the striosomes. In a healthy circuit, these interneurons keep the striosomes in check by firing off fast inhibitory signals, hitting the brakes before the striosome can get started. But if the researchers broke that cortico-striatal connection with optogenetics or chronic stress, the animals became reckless, going for the high-risk, high-reward arm of the maze like a gambler throwing caution to the wind. If they amplified this inhibitory interneuron activity, they saw the opposite effect. With these techniques, they could block the effects of prior chronic stress.

This summer, Graybiel and Amemori published another paper furthering the story and returning to macaques. It was still too difficult to target the striosomes precisely, so the researchers could only stimulate the striatum more generally. Even so, they replicated the effects seen in the earlier studies.

Many electrodes had no effect, and a small number made the monkeys choose the reward more often. Nearly a quarter, though, made the monkeys more avoidant—and this effect correlated with a change in the macaques’ brainwaves in a manner reminiscent of patients with depression.

But the surprise came when the avoidance-producing stimulation was turned off: the effects lasted unexpectedly long, only returning to normal on the third day.

Graybiel was stunned. “This is very important, because changes in the brain can get set off and have a life of their own,” she says. “This is true for some individuals who have had a terrible experience, and then live with the aftermath, even to the point of suffering from post-traumatic stress disorder.”

She suspects that this persistent state may actually be a form of affect, or mood. “When we change this decision boundary, we’re changing the mood, such that the animal overestimates cost, relative to benefit,” she explains. “This might be like a proxy state for pessimistic decision-making experienced during anxiety and depression, but may also occur, in a milder form, in you and me.”

Graybiel theorizes that this may tie back into the dopamine neurons that the striosomes project to: if this avoidance behavior is akin to avoidance observed in rodents, then they are stimulating a circuit that ultimately projects to dopamine neurons of the substantia nigra. There, she believes, they could act to suppress these dopamine neurons, which in turn project to the rest of the brain, creating some sort of long-term change in their neural activity. Or, put more simply, stimulation of these circuits creates a depressive funk.

Bottom up

Three floors below the Graybiel lab, postdoc Will Menegas is in the early stages of his own work untangling the role of dopamine and the striatum in decision-making. He joined Guoping Feng’s lab this summer after exploring the understudied “tail of the striatum” at Harvard University.

While dopamine pathways influence many parts of the brain, examination of connections to the striatum has largely focused on the frontmost part of the striatum, which is associated with valuation.

But as Menegas showed while at Harvard, dopamine neurons that project to the rear of the striatum are different. Those neurons get their input from parts of the brain associated with general arousal and sensation—and instead of responding to rewards, they respond to novelty and intense stimuli, like air puffs and loud noises.

In a new study published in Nature Neuroscience, Menegas used a neurotoxin to disrupt the dopamine projection from the substantia nigra to the posterior striatum to see how this circuit influences behavior. Normal mice approach novel items cautiously and back away after sniffing at them, but the mice in Menegas’ study failed to back away. They stopped avoiding a port that gave an air puff to the face, and they didn’t behave like normal mice when Menegas dropped a strange or new object—say, a Lego brick—into their cage. Disrupting the nigral-posterior striatal circuit seemed to turn off their avoidance habit.

“These neurons reinforce avoidance the same way that canonical dopamine neurons reinforce approach,” Menegas explains. It’s a new role for dopamine, suggesting that there may be two different and distinct systems of reinforcement, led by the same neuromodulator in different parts of the striatum.

This research, and Graybiel’s discoveries on cost-benefit decision circuits, share clear parallels, though the precise links between the two phenomena are yet to be fully determined. Menegas plans to extend this line of research into social behavior and related disorders like autism in marmoset monkeys.

“Will wants to learn the methods that we use in our lab to work on marmosets,” Graybiel says. “I think that working together, this could become a wonderful story, because it would involve social interactions.”

“This is a very new angle, and it could really change our views of how the reward system works,” Feng says. “And we have very little understanding of social circuits so far, especially in higher organisms, so I think this would be very exciting. Whatever we learn, it’s going to be new.”

Human choices

Based on their preexisting work, Graybiel’s and Menegas’ projects are well-developed—but they are far from the only McGovern-based explorations into ways this brain region taps into our behaviors. Maiya Geddes, a visiting scientist in John Gabrieli’s lab, has recently published a paper exploring the little-known ways that aging affects the dopamine-based nigral-striatum-hippocampus learning and memory systems.

In Rebecca Saxe’s lab, postdoc Livia Tomova just kicked off a new pilot project using brain imaging to uncover dopamine-striatal circuitry behind social craving in humans and the urge to rejoin peers. “Could there be a craving response similar to hunger?” Tomova wonders. “No one has looked yet at the neural mechanisms of this.”

Graybiel also hopes to translate her findings into humans, beginning with collaborations at the Pizzagalli lab at McLean Hospital in Belmont. They are using fMRI to study whether patients
with anxiety and depression show some of the same dysfunctions in the cortico-striatal circuitry that she discovered in her macaques.

If she’s right about tapping into mood states and affect, it would be an expanded role for the striatum—and one with significant potential therapeutic benefits. “Affect state” colors many psychological functions and disorders, from memory and perception, to depression, chronic stress, obsessive-compulsive disorder, and PTSD.

For a region of the brain once dismissed as inconsequential, McGovern researchers have shown the basal ganglia to influence not only our choices but our state of mind—suggesting that this “primitive” brain region may actually be at the heart of the human experience.

Monitoring electromagnetic signals in the brain with MRI

Researchers commonly study brain function by monitoring two types of electromagnetism — electric fields and light. However, most methods for measuring these phenomena in the brain are very invasive.

MIT engineers have now devised a new technique to detect either electrical activity or optical signals in the brain using a minimally invasive sensor for magnetic resonance imaging (MRI).

MRI is often used to measure changes in blood flow that indirectly represent brain activity, but the MIT team has devised a new type of MRI sensor that can detect tiny electrical currents, as well as light produced by luminescent proteins. (Electrical impulses arise from the brain’s internal communications, and optical signals can be produced by a variety of molecules developed by chemists and bioengineers.)

“MRI offers a way to sense things from the outside of the body in a minimally invasive fashion,” says Aviad Hai, an MIT postdoc and the lead author of the study. “It does not require a wired connection into the brain. We can implant the sensor and just leave it there.”

This kind of sensor could give neuroscientists a spatially accurate way to pinpoint electrical activity in the brain. It can also be used to measure light, and could be adapted to measure chemicals such as glucose, the researchers say.

Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering, and an associate member of MIT’s McGovern Institute for Brain Research, is the senior author of the paper, which appears in the Oct. 22 issue of Nature Biomedical Engineering. Postdocs Virginia Spanoudaki and Benjamin Bartelle are also authors of the paper.

Detecting electric fields

Jasanoff’s lab has previously developed MRI sensors that can detect calcium and neurotransmitters such as serotonin and dopamine. In this paper, they wanted to expand their approach to detecting biophysical phenomena such as electricity and light. Currently, the most accurate way to monitor electrical activity in the brain is by inserting an electrode, which is very invasive and can cause tissue damage. Electroencephalography (EEG) is a noninvasive way to measure electrical activity in the brain, but this method cannot pinpoint the origin of the activity.

To create a sensor that could detect electromagnetic fields with spatial precision, the researchers realized they could use an electronic device — specifically, a tiny radio antenna.

MRI works by detecting radio waves emitted by the nuclei of hydrogen atoms in water. These signals are usually detected by a large radio antenna within an MRI scanner. For this study, the MIT team shrank the radio antenna down to just a few millimeters in size so that it could be implanted directly into the brain to receive the radio waves generated by water in the brain tissue.

The sensor is initially tuned to the same frequency as the radio waves emitted by the hydrogen atoms. When the sensor picks up an electromagnetic signal from the tissue, its tuning changes and the sensor no longer matches the frequency of the hydrogen atoms. When this happens, a weaker image arises when the sensor is scanned by an external MRI machine.
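The detection scheme can be pictured as a resonance shift. Below is a minimal toy sketch, assuming a simple Lorentzian response for the sensor; the bandwidth and voltage-to-detuning numbers are invented for illustration, not values from the paper:

```python
# Toy model of a detuning-based MRI sensor (illustrative only; the
# real device physics is more involved). The implanted sensor is a
# resonant circuit tuned to the hydrogen (Larmor) frequency, and its
# response falls off with detuning in a Lorentzian fashion. A voltage
# picked up from the tissue shifts the resonance, so the sensor
# passes the Larmor signal more weakly and the local image darkens.

BANDWIDTH_MHZ = 0.5   # assumed width of the sensor's resonance
SHIFT_PER_MV = 0.05   # assumed detuning (MHz) per millivolt of input

def sensor_gain(detuning_mhz):
    """Lorentzian response: 1.0 on resonance, falling off with detuning."""
    return 1.0 / (1.0 + (2.0 * detuning_mhz / BANDWIDTH_MHZ) ** 2)

def image_brightness(signal_mv):
    """Relative MRI brightness near the sensor for a given tissue voltage."""
    return sensor_gain(SHIFT_PER_MV * signal_mv)

print(image_brightness(0.0))  # on resonance: full brightness
print(image_brightness(5.0))  # a few millivolts detune the sensor and dim the image
```

The key design point this captures is that the external scanner never measures the tissue voltage directly; it only sees how bright the water signal around the sensor appears, which the sensor's tuning converts into a readout.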

The researchers demonstrated that the sensors can pick up electrical signals similar to those produced by action potentials (the electrical impulses fired by single neurons), or local field potentials (the sum of electrical currents produced by a group of neurons).

“We showed that these devices are sensitive to biological-scale potentials, on the order of millivolts, which are comparable to what biological tissue generates, especially in the brain,” Jasanoff says.

The researchers performed additional tests in rats to study whether the sensors could pick up signals in living brain tissue. For those experiments, they designed the sensors to detect light emitted by cells engineered to express the protein luciferase.

Normally, luciferase’s exact location cannot be determined when it is deep within the brain or other tissues, so the new sensor offers a way to expand the usefulness of luciferase and more precisely pinpoint the cells that are emitting light, the researchers say. Luciferase is commonly engineered into cells along with another gene of interest, allowing researchers to determine whether the genes have been successfully incorporated by measuring the light produced.

Smaller sensors

One major advantage of this sensor is that it does not need to carry any kind of power supply, because the radio signals that the external MRI scanner emits are enough to power the sensor.

Hai, who will be joining the faculty at the University of Wisconsin at Madison in January, plans to further miniaturize the sensors so that more of them can be injected, enabling the imaging of light or electrical fields over a larger brain area. In this paper, the researchers performed modeling showing that a 250-micron sensor (a few tenths of a millimeter) should be able to detect electrical activity on the order of 100 millivolts—comparable in magnitude to a neural action potential.

Jasanoff’s lab is interested in using this type of sensor to detect neural signals in the brain, and they envision that it could also be used to monitor electromagnetic phenomena elsewhere in the body, including muscle contractions or cardiac activity.

“If the sensors were on the order of hundreds of microns, which is what the modeling suggests is in the future for this technology, then you could imagine taking a syringe and distributing a whole bunch of them and just leaving them there,” Jasanoff says. “What this would do is provide many local readouts by having sensors distributed all over the tissue.”

The research was funded by the National Institutes of Health.

New sensors track dopamine in the brain for more than a year

Dopamine, a signaling molecule used throughout the brain, plays a major role in regulating our mood, as well as controlling movement. Many disorders, including Parkinson’s disease, depression, and schizophrenia, are linked to dopamine deficiencies.

MIT neuroscientists have now devised a way to measure dopamine in the brain for more than a year, which they believe will help them to learn much more about its role in both healthy and diseased brains.

“Despite all that is known about dopamine as a crucial signaling molecule in the brain, implicated in neurologic and neuropsychiatric conditions as well as our ability to learn, it has been impossible to monitor changes in the online release of dopamine over time periods long enough to relate these to clinical conditions,” says Ann Graybiel, an MIT Institute Professor, a member of MIT’s McGovern Institute for Brain Research, and one of the senior authors of the study.

Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research, and Robert Langer, the David H. Koch Institute Professor and a member of the Koch Institute, are also senior authors of the study. MIT postdoc Helen Schwerdt is the lead author of the paper, which appears in the Sept. 12 issue of Communications Biology.

Long-term sensing

Dopamine is one of many neurotransmitters that neurons in the brain use to communicate with each other. Traditional systems for measuring dopamine — carbon electrodes with a shaft diameter of about 100 microns — can only be used reliably for about a day because they produce scar tissue that interferes with the electrodes’ ability to interact with dopamine.

In 2015, the MIT team demonstrated that tiny microfabricated sensors could be used to measure dopamine levels in a part of the brain called the striatum, which contains dopamine-producing cells that are critical for habit formation and reward-reinforced learning.

Because these probes are so small (about 10 microns in diameter), the researchers could implant up to 16 of them to measure dopamine levels in different parts of the striatum. In the new study, the researchers wanted to test whether they could use these sensors for long-term dopamine tracking.

“Our fundamental goal from the very beginning was to make the sensors work over a long period of time and produce accurate readings from day to day,” Schwerdt says. “This is necessary if you want to understand how these signals mediate specific diseases or conditions.”

To develop a sensor that can be accurate over long periods of time, the researchers had to make sure that it would not provoke an immune reaction, to avoid the scar tissue that interferes with the accuracy of the readings.

The MIT team found that their tiny sensors were nearly invisible to the immune system, even over extended periods of time. After the sensors were implanted, populations of microglia (immune cells that respond to short-term damage), and astrocytes, which respond over longer periods, were the same as those in brain tissue that did not have the probes inserted.

In this study, the researchers implanted three to five sensors per animal, about 5 millimeters deep, in the striatum. They took readings every few weeks, after stimulating dopamine release from the brainstem, which travels to the striatum. They found that the measurements remained consistent for up to 393 days.

“This is the first time that anyone’s shown that these sensors work for more than a few months. That gives us a lot of confidence that these kinds of sensors might be feasible for human use someday,” Schwerdt says.

Paul Glimcher, a professor of physiology and neuroscience at New York University, says the new sensors should enable more researchers to perform long-term studies of dopamine—essential for studying phenomena such as learning, which unfolds over long time periods.

“This is a really solid engineering accomplishment that moves the field forward,” says Glimcher, who was not involved in the research. “This dramatically improves the technology in a way that makes it accessible to a lot of labs.”

Monitoring Parkinson’s

If developed for use in humans, these sensors could be useful for monitoring Parkinson’s patients who receive deep brain stimulation, the researchers say. This treatment involves implanting an electrode that delivers electrical impulses to a structure deep within the brain. Using a sensor to monitor dopamine levels could help doctors deliver the stimulation more selectively, only when it is needed.

The researchers are now looking into adapting the sensors to measure other neurotransmitters in the brain, and to measure electrical signals, which can also be disrupted in Parkinson’s and other diseases.

“Understanding those relationships between chemical and electrical activity will be really important to understanding all of the issues that you see in Parkinson’s,” Schwerdt says.

The research was funded by the National Institute of Biomedical Imaging and Bioengineering, the National Institute of Neurological Disorders and Stroke, the Army Research Office, the Saks Kavanaugh Foundation, the Nancy Lurie Marks Family Foundation, and Dr. Tenley Albright.

Why do I talk with my hands?

This is a very interesting question sent to us by Gabriel Castellanos (thank you!). Many of us gesture with our hands when we speak (and even when we do not) as a form of non-verbal communication. How hand gestures are coordinated with speech remains unclear. In part, this is because it is difficult to monitor natural hand gestures in fMRI-based brain imaging studies, where subjects have to stay still.

“Performing hand movements when stuck in the bore of a scanner is really tough beyond simple signing and keypresses,” explains McGovern Principal Research Scientist Satrajit Ghosh. “Thus ecological experiments of co-speech with motor gestures have not been carried out in the context of a magnetic resonance scanner, and therefore little is known about language and motor integration within this context.”

There have been studies that use proxies such as co-verbal pushing of buttons, and also studies using other imaging techniques, such as electroencephalography (EEG) and magnetoencephalography (MEG), to monitor brain activity during gesturing, but it would be difficult to precisely spatially localize the regions involved in natural co-speech hand gesticulation using such approaches. Another possible avenue for addressing this question would be to look at patients with conditions that might implicate particular brain regions in coordinating hand gestures, but such approaches have not really pinpointed a pathway for coordinating speech and hand movements.

That said, co-speech hand gesturing plays an important role in communication. “More generally, co-speech hand gestures are seen as a mechanism for emphasis and disambiguation of the semantics of a sentence, in addition to prosody and facial cues,” says Ghosh. “In fact, one may consider the act of speaking as one large orchestral score involving vocal tract movement, respiration, voicing, facial expression, hand gestures, and even whole-body postures acting as different instruments coordinated dynamically by the brain. Based on our current understanding of language production, co-speech or gestural events would likely be planned at a higher level than articulation and therefore would likely activate the inferior frontal gyrus, the supplementary motor area (SMA), and other regions.”

How this orchestra is coordinated and conducted thus remains to be unraveled, but certainly the question is one that gets to the heart of human social interactions.

Do you have a question for The Brain? Ask it here.

A social side to face recognition by infants

When interacting with an infant, you have likely noticed that the human face holds a special draw from a very young age. But how does this relate to face recognition by adults, which is known to map to specific cortical regions? Rebecca Saxe, Associate Investigator at MIT’s McGovern Institute and John W. Jarve (1978) Professor in Brain and Cognitive Sciences, and her team have now considered two emerging theories regarding early face recognition and come up with a third proposition: that when a baby looks at a face, the response is also social, and that the resulting contingent interactions are key to the subsequent development of organized face-recognition areas in the brain.

By adulthood, we are highly skilled at recognizing and responding to faces, and this correlates with activation of a number of face-selective regions of the cortex. This skill is incredibly important for reading the identities and intentions of other people, and selective categorical representation of faces in cortical areas is a feature shared by our primate cousins. While brain imaging tells us where face-responsive regions are in the adult cortex, how and when they emerge remains unclear.

In 2017, functional magnetic resonance imaging (fMRI) studies of human and macaque infants provided the first glimpse of how the youngest brains respond to faces. The scans showed that in 4-6-month-old human infants and equivalently aged macaques, regions known to be face-responsive in the adult brain are activated by movies of faces, but not in a selective fashion. Essentially, the fMRI data show that these specific cortical regions are activated by faces, but a chair will do just as well. With further experience of faces over time, the specific cortical regions in macaques became face-selective, no longer responding to other objects.

There are two prevailing ideas in the field about how face preference, and eventually selectivity, arise through experience. Saxe and her team consider these ideas in turn in an opinion piece in the September issue of Trends in Cognitive Sciences, and then propose a third, new theory. The first idea centers on the way we dote over babies, centering our own faces right in their field of vision. The idea is that such frequent exposure to low-level face features (curvilinear shape, and so on) will eventually lead to co-activation of neurons that are responsive to all of the different aspects of facial features. If these neurons stimulated by different features are co-activated, and there is a brain region where they are found together, that area will be stimulated repeatedly, eventually reinforcing the emergence of a face category-specific area.
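This co-activation account is essentially Hebbian: features that appear together strengthen their connections to a shared downstream area. Here is a toy sketch of that logic; the feature names, exposure counts, and learning rate are all invented for illustration:

```python
# Toy Hebbian sketch of the co-activation idea: low-level features
# that repeatedly appear together (as they do in faces) strengthen
# their connections to a shared downstream unit, which ends up
# responding strongly to the face pattern as a whole. Feature names,
# exposure counts, and the learning rate are invented for illustration.

FEATURES = ["curvilinear", "eye_spot", "symmetry", "contrast_band"]
weights = {f: 0.0 for f in FEATURES}
LEARNING_RATE = 0.1

def present(stimulus_features, n_times):
    """Hebbian update: active inputs strengthen their weights."""
    for _ in range(n_times):
        for f in stimulus_features:
            weights[f] += LEARNING_RATE

def response(stimulus_features):
    """Downstream unit's response: summed weight of the active features."""
    return sum(weights[f] for f in stimulus_features)

# A caregiver's face activates all the features at once, and often.
present(FEATURES, n_times=50)
# A chair shares only one low-level feature, and is seen less often.
present(["contrast_band"], n_times=10)

print(response(FEATURES))           # strong response to the full face pattern
print(response(["contrast_band"]))  # much weaker response to the non-face
```

Note that this sketch, like the theory itself, is purely passive: exposure alone builds the category, which is exactly the gap the second idea and Saxe's social account try to fill.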

A second idea is that babies already have an innate “face template,” just as a duckling or chick already knows to follow its mother after hatching. So far there is little evidence for the second proposition, and the first fails to explain why babies actively seek out faces, rather than passively looking upon and eventually “learning” the overlapping features that represent “face.”

Saxe, along with postdoc Lindsey Powell and graduate student Heather Kosakowski, instead now argue that the role a face plays in positive social interactions comes to drive organization of face-selective cortical regions. Taking the next step, the researchers propose that a prime suspect for linking social interactions to the development of face-selective areas is the medial prefrontal cortex (mPFC), a region linked to social cognition and behavior.

“I was asked to give a talk at a conference, and I wanted to talk about both the development of cortical face areas and the social role of the medial prefrontal cortex in young infants,” says Saxe. “I was puzzling over whether these two ideas were related, when I suddenly saw that they could be very fundamentally related.”

The authors argue that this relationship is supported by existing data showing that babies prefer dynamic faces and are more interested in faces that engage in back-and-forth interaction. Regions of the mPFC are also known to be activated in infants both during social interactions and during exposure to dynamic faces.

Powell is now using functional near infrared spectroscopy (fNIRS), a brain imaging technique that measures changes in blood flow to the brain, to test this hypothesis in infants. “This will allow us to see whether mPFC responses to social cues are linked to the development of face-responsive areas.”

In Daniel Deronda, the novel by George Eliot, the protagonist says “I think my life began with waking up and loving my mother’s face: it was so near to me, and her arms were round me, and she sang to me.” Perhaps this type of positively valenced social interaction, reinforced by the mPFC, is exactly what leads to the particular importance of faces and their selective categorical representation in the human brain. Further testing of the hypothesis proposed by Powell, Kosakowski, and Saxe will tell.

Neuroscientists get at the roots of pessimism

Many patients with neuropsychiatric disorders such as anxiety or depression experience negative moods that lead them to focus on the possible downside of a given situation more than the potential benefit.

MIT neuroscientists have now pinpointed a brain region that can generate this type of pessimistic mood. In tests in animals, they showed that stimulating this region, known as the caudate nucleus, induced animals to make more negative decisions: They gave far more weight to the anticipated drawback of a situation than its benefit, compared to when the region was not stimulated. This pessimistic decision-making could continue through the day after the original stimulation.

The findings could help scientists better understand how some of the crippling effects of depression and anxiety arise, and guide them in developing new treatments.

“We feel we were seeing a proxy for anxiety, or depression, or some mix of the two,” says Ann Graybiel, an MIT Institute Professor, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study, which appears in the Aug. 9 issue of Neuron. “These psychiatric problems are still so very difficult to treat for many individuals suffering from them.”

The paper’s lead authors are McGovern Institute research affiliates Ken-ichi Amemori and Satoko Amemori, who perfected the tasks and have been studying emotion and how it is controlled by the brain. McGovern Institute researcher Daniel Gibson, an expert in data analysis, is also an author of the paper.

Emotional decisions

Graybiel’s laboratory has previously identified a neural circuit that underlies a specific kind of decision-making known as approach-avoidance conflict. These types of decisions, which require weighing options with both positive and negative elements, tend to provoke a great deal of anxiety. Her lab has also shown that chronic stress dramatically affects this kind of decision-making: More stress usually leads animals to choose high-risk, high-payoff options.

In the new study, the researchers wanted to see if they could reproduce an effect that is often seen in people with depression, anxiety, or obsessive-compulsive disorder. These patients tend to engage in ritualistic behaviors designed to combat negative thoughts, and to place more weight on the potential negative outcome of a given situation. This kind of negative thinking, the researchers suspected, could influence approach-avoidance decision-making.

To test this hypothesis, the researchers stimulated the caudate nucleus, a brain region linked to emotional decision-making, with a small electrical current as animals were offered a reward (juice) paired with an unpleasant stimulus (a puff of air to the face). In each trial, the ratio of reward to aversive stimuli was different, and the animals could choose whether to accept or not.

This kind of decision-making requires cost-benefit analysis. If the reward is high enough to balance out the puff of air, the animals will choose to accept it, but when that ratio is too low, they reject it. When the researchers stimulated the caudate nucleus, the cost-benefit calculation became skewed, and the animals began to avoid combinations that they previously would have accepted. This continued even after the stimulation ended, and could also be seen the following day, after which point it gradually disappeared.
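The acceptance rule described here can be sketched as a toy utility model: accept the offer when the subjective value of the reward outweighs the weighted cost, with stimulation modeled as an inflated cost weight. The weights and values below are illustrative, not quantities from the study.

```python
# Toy model of approach-avoidance choice under cost-benefit conflict.
# All numbers are invented for illustration, not fitted to the study's data.

def choose(reward, airpuff, cost_weight=1.0):
    """Accept the offer if its subjective value is positive."""
    value = reward - cost_weight * airpuff
    return value > 0

# Unstimulated animal: accepts when the reward outweighs the air puff.
accept_baseline = choose(reward=3.0, airpuff=2.0)

# Caudate stimulation modeled as overweighting the cost:
# the very same offer is now rejected.
accept_stimulated = choose(reward=3.0, airpuff=2.0, cost_weight=2.0)
```

In this sketch the offer itself never changes; only the cost weight does, which mirrors the paper's interpretation that stimulation shifts the animal's internal cost-benefit calculation rather than its perception of the stimuli.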

This result suggests that the animals began to devalue the reward that they previously wanted, and focused more on the cost of the aversive stimulus. “This state we’ve mimicked has an overestimation of cost relative to benefit,” Graybiel says.

The study provides valuable insight into the role of the basal ganglia (a region that includes the caudate nucleus) in this type of decision-making, says Scott Grafton, a professor of neuroscience at the University of California at Santa Barbara, who was not involved in the research.

“We know that the frontal cortex and the basal ganglia are involved, but the relative contributions of the basal ganglia have not been well understood,” Grafton says. “This is a nice paper because it puts some of the decision-making process in the basal ganglia as well.”

A delicate balance

The researchers also found that brainwave activity in the caudate nucleus was altered when decision-making patterns changed. This change, discovered by Amemori, is in the beta frequency and might serve as a biomarker to monitor whether animals or patients respond to drug treatment, Graybiel says.
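Beta-band (roughly 13-30 Hz) power of a recorded signal is the kind of quantity such a biomarker would track. A minimal way to estimate it is with a simple periodogram; the sampling rate and synthetic signal below are invented for the example, and a real analysis would use a more robust estimator such as Welch's method.

```python
import numpy as np

def band_power(signal, fs, lo=13.0, hi=30.0):
    """Fraction of total spectral power falling in [lo, hi] Hz.
    A crude single-window periodogram, for illustration only."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return power[band].sum() / power.sum()

fs = 1000  # Hz, invented sampling rate
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
# A 20 Hz (beta-range) oscillation buried in noise.
lfp = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.standard_normal(t.size)
beta_fraction = band_power(lfp, fs)
```

Tracking `beta_fraction` across sessions is the sort of simple readout that could, in principle, indicate whether a treatment is shifting activity back toward baseline.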

Graybiel is now working with psychiatrists at McLean Hospital to study patients who suffer from depression and anxiety, to see if their brains show abnormal activity in the neocortex and caudate nucleus during approach-avoidance decision-making. Magnetic resonance imaging (MRI) studies have shown abnormal activity in two regions of the medial prefrontal cortex that connect with the caudate nucleus.

The caudate nucleus has within it regions that are connected with the limbic system, which regulates mood, and it sends input to motor areas of the brain as well as dopamine-producing regions. Graybiel and Amemori believe that the abnormal activity seen in the caudate nucleus in this study could be somehow disrupting dopamine activity.

“There must be many circuits involved,” she says. “But apparently we are so delicately balanced that just throwing the system off a little bit can rapidly change behavior.”

The research was funded by the National Institutes of Health, the CHDI Foundation, the U.S. Office of Naval Research, the U.S. Army Research Office, MEXT KAKENHI, the Simons Center for the Social Brain, the Naito Foundation, the Uehara Memorial Foundation, Robert Buxton, Amy Sommer, and Judy Goldberg.

Charting the cerebellum

Small and tucked away under the cerebral hemispheres toward the back of the brain, the human cerebellum is still immediately obvious due to its distinct structure. From Galen’s second-century anatomical description to Cajal’s systematic analysis of its projections, the cerebellum has long drawn the eyes of researchers studying the brain. Two parallel studies from MIT’s McGovern Institute have recently converged to support an unexpectedly complex level of non-motor cerebellar organization that would not have been predicted from known motor representation regions.

Historically, the cerebellum has been considered important mainly for motor control and coordination. Think of this view as the cerebellum being the chain on a bicycle, registering what is happening up front in the cortex and relaying the information so that the back wheel moves at a coordinated pace. This simple view has been questioned as cerebellar circuits have been traced to the basal ganglia and to neocortical regions via the thalamus. The emerging view suggests the cerebellum is a hub in a complex network, with potentially higher and non-motor functions including cognition and reward-based learning.

A collaboration between the labs of John Gabrieli, an investigator at the McGovern Institute for Brain Research, and Jeremy Schmahmann, of the Ataxia Unit at Massachusetts General Hospital and Harvard Medical School, has now used functional brain imaging to give new insight into the cerebellar organization of non-motor roles, including working memory, language, and social and emotional processing. In a complementary paper, a collaboration between Sheeba Anteraper of MIT’s Martinos Imaging Center and Gagan Joshi of the Alan and Lorraine Bressler Clinical and Research Program at Massachusetts General Hospital has found changes in connectivity that occur in the cerebellum in autism spectrum disorder (ASD).

A more complex map of the cerebellum

Published in NeuroImage, and featured on the cover, the first study was led by first author Xavier Guell, a postdoc in the Gabrieli and Schmahmann labs. The authors used fMRI data from the Human Connectome Project to examine activity in different regions of the cerebellum during specific tasks and at rest. The tasks used extended beyond motor activity to functions recently linked to the cerebellum, including working memory, language, and social and emotional processing. As expected, the authors saw that two regions assigned by other methods to motor activity were clearly modulated during motor tasks.

“Neuroscientists in the 1940s and 1950s described a double representation of motor function in the cerebellum, meaning that two regions in each hemisphere of the cerebellum are engaged in motor control,” explains Guell. “That there are two areas of motor representation in the cerebellum remains one of the most well-established facts of cerebellar macroscale physiology.”

When it came to assigning non-motor tasks, to their surprise, the authors identified three representations that localized to different regions of the cerebellum, pointing to an unexpectedly complex level of organization.

Guell explains the implications further. “Our study supports the intriguing idea that while two parts of the cerebellum are simultaneously engaged in motor tasks, three other parts of the cerebellum are simultaneously engaged in non-motor tasks. Our predecessors coined the term ‘double motor representation,’ and we may now have to add ‘triple non-motor representation’ to the dictionary of cerebellar neuroscience.”

A serendipitous discussion

What happened next illustrates how independent strands of research can meet and reinforce one another to give a fuller scientific picture: a discussion of data between Xavier Guell and Sheeba Arnold Anteraper of the McGovern Institute for Brain Research culminated in a paper led by Anteraper.

The findings by Guell and colleagues made the cover of NeuroImage.

Anteraper and colleagues examined brain images from high-functioning ASD patients and looked for statistically significant patterns, letting the data speak rather than focusing on specific ‘candidate’ regions of the brain. To her surprise, the analysis highlighted networks related to language as well as the cerebellum, regions that had not been linked to ASD and that seemed at first sight not to be relevant. Scientists interested in language processing immediately pointed her to Guell.

“When I went to meet him,” says Anteraper, “I saw immediately that he had the same research paper that I’d been reading on his desk. As soon as I showed him my results, the data fell into place and made sense.”

After talking with Guell, they realized that the same non-motor cerebellar representations he had seen, were independently being highlighted by the ASD study.

“When we study brain function in neurological or psychiatric diseases, we sometimes have a very clear notion of what parts of the brain we should study,” explained Guell. “We instead asked which parts of the brain have the most abnormal patterns of functional connectivity to other brain areas. This analysis gave us a simple, powerful result. Only the cerebellum survived our strict statistical thresholds.”

The authors found decreased connectivity within the cerebellum in the ASD group, but also decreased strength in connectivity between the cerebellum and the social, emotional and language processing regions in the cerebral cortex.

“Our analysis showed that regions of disrupted functional connectivity mapped to each of the three areas of non-motor representation in the cerebellum. It thus seems that the notion of two motor and three non-motor areas of representation in the cerebellum is not only important for understanding how the cerebellum works, but also important for understanding how the cerebellum becomes dysfunctional in neurology and psychiatry.”

Guell says that many questions remain to be answered. Are these abnormalities in the cerebellum reproducible in other datasets of patients diagnosed with ASD? Why is cerebellar function (and dysfunction) organized in a pattern of multiple representations? What is different between each of these representations, and what is their distinct contribution to diseases such as ASD? Future work is now aimed at unraveling these questions.

The Learning Brain

“There’s a slogan in education,” says McGovern Investigator John Gabrieli. “The first three years are learning to read, and after that you read to learn.”

For John Gabrieli, learning to read represents one of the most important milestones in a child’s life. Except, that is, when a child can’t. Children who cannot learn to read adequately by the first grade have a 90 percent chance of still reading poorly in the fourth grade, and 75 percent odds of struggling in high school. For the estimated 10 percent of schoolchildren with a reading disability, that struggle often comes with a host of other social and emotional challenges: anxiety, damaged self-esteem, increased risk for poverty and eventually, encounters with the criminal justice system.

Most reading interventions focus on classical dyslexia, which is essentially a coding problem—trouble moving letters into sound patterns in the brain. But other factors, such as inadequate vocabulary and lack of practice opportunities, hinder reading too. The diagnosis can be subjective, and for those who are diagnosed, the standard treatments help only some students. “Every teacher knows half to two-thirds have a good response, the other third don’t,” Gabrieli says. “It’s a mystery. And amazingly there’s been almost no progress on that.”

For the last two decades, Gabrieli has sought to unravel the neuroscience behind learning and reading disabilities and, ultimately, convert that understanding into new and better education
interventions—a sort of translational medicine for the classroom.

The Home Effect

In 2011, when Julia Leonard was a research assistant in Gabrieli’s lab, she planned to go into pediatrics. But she became drawn to the lab’s education projects and decided to join the lab as
a graduate student to learn more. By 2015, she had helped coauthor a landmark study with postdoc Allyson Mackey that sought neural markers for the academic “achievement gap,” which separates higher socioeconomic status (SES) children from their disadvantaged peers. It was the first study to make a connection between SES-linked differences in brain structure and educational markers. Specifically, they found that children from wealthier backgrounds had thicker cortical brain regions, which correlated with better academic achievement.

“Being a doctor is a really awesome and powerful career,” she says. “But I was more curious about the research that could cause bigger changes in children’s lives.”

Leonard collaborated with Rachel Romeo, another graduate student in the Gabrieli lab who wanted to understand the powerful effect of SES on the developing brain. Romeo had a distinctive background in speech pathology and literacy, where she’d observed wealthier students progressing more quickly compared to their disadvantaged peers.

Their research is revealing a fascinating picture. In a 2017 study, Romeo compared how reading-disabled children from low and high SES backgrounds fared after an intensive summer reading intervention. Low SES children in the intervention improved most in their reading, and MRI scans revealed their brains also underwent greater structural changes in response to the intervention. Higher SES children did not appear to change much, either in skill or brain structure.

“In the few studies that have looked at SES effects on treatment outcomes,” Romeo says, “the research suggests that higher SES kids would show the most improvement. We were surprised to find that this wasn’t true.” She suspects that the midsummer timing of the intervention may account for this. Lower SES kids’ performance often suffers most during a “summer slump,” so these students would have the greatest potential to improve from interventions at this time.

However, in another study this year, Leonard uncovered unique brain differences in lower-SES children. Only among lower-SES children was better reasoning ability associated with thicker
cortex in a key part of the brain. Same behavior, different neural signatures.

“So this becomes a really interesting basic science question,” Leonard says. “Does the brain support cognition the same way across everyone, or does it differ based on how you grow up?”

Not a One-Size-Fits-All

Critics of such “educational neuroscience” have highlighted the lack of useful interventions produced by this research. Gabrieli agrees that so far, little has emerged. “The painful thing is the slowness of this work. It’s mind-boggling,” Gabrieli admits. Every intervention requires all the usual human research requirements, plus coordinating with schools, parents, teachers, and so on. “It’s a huge process to do even the smallest intervention,” he explains. Partly because of that, the field is still relatively new.

But he disagrees with the idea that nothing will come from this research. Gabrieli’s lab previously identified neural markers in children who will go on to develop reading disabilities. These markers could even predict who would or would not respond to standard treatments that focus on phonetic letter-sound coding.

Romeo and Leonard’s work suggests that varied etiologies underlie reading disabilities, which may be the key. “For so long people have thought that reading disorders were just a unitary construct: kids are bad at reading, so let’s fix that with a one-size-fits-all treatment,” Romeo says.

Such findings may ultimately help resource-strapped schools target existing phonetic training to the students it can actually help, rather than enrolling all struggling readers in the same program only to see some still fail.

Think Spaces

At the Oliver Hazard Perry School, a public K-8 school located on the South Boston waterfront, teachers like Colleen Labbe have begun to independently navigate similar problems as they try
to reach their own struggling students.

“A lot of times we look at assessments and put students in intervention groups like phonics,” Labbe says. “But it’s important to also ask what is happening for these students on their way to school and at home.”

For Labbe and Perry Principal Geoffrey Rose, brain science has proven transformative. They’ve embraced literature on neuroplasticity—the idea that brains can change if teachers find the right combination of intervention and circumstances, like the low-SES students who benefited in Romeo and Leonard’s study.

“A big myth is that the brain can’t grow and change, and if you can’t reach that student, you pass them off,” Labbe says.

The science has also been empowering to her students, validating their own powers of self-change. “I tell the kids, we’re going to build the goop!” she says, referring to the brain’s ability to make new connections.

“All kids can learn,” Rose agrees. “But the flip of that is, can all kids do school?” His job, he says, is to make sure they can.

The classrooms at Perry are a mix of students from different cultures and socioeconomic backgrounds, so he and Labbe have focused on helping teachers find ways to connect with these children and help them manage their stresses and thus be ready to learn. Teachers here are armed with “scaffolds”—digestible neuro- and cognitive science aids culled from Rose’s postdoctoral studies at Boston College’s Professional School Administrator Program for school leaders. These encourage teachers to be more aware of cultural differences and tendencies in themselves and their students, to better connect.

There are also “Think Spaces” tucked into classroom corners. “Take a deep breath and be calm,” read posters at these soothing stations, which are equipped with de-stressing tools, like squeezable balls, play-dough, and meditation-inspiring sparkle wands. It sounds trivial, yet studies have shown that poverty-linked stressors like food and home insecurity take a toll on emotion and memory-linked brain areas like the amygdala and hippocampus.

In fact, a new study by Clemens Bauer, a postdoc in Gabrieli’s lab, argues that mindfulness training can help calm amygdala hyperactivity, help lower self-perceived stress, and boost attention. His study was conducted with children enrolled in a Boston charter school.

Taking these combined approaches, Labbe says, she’s seen one of her students rise from struggling at the lowest levels of instruction, to thriving by year end. Labbe’s focus on understanding the girl’s stressors, her family environment, and what social and emotional support she really needed was key. “Now she knows she can do it,” Labbe says.

Rose and Labbe only wish they could better bridge the gap between educators like themselves and brain scientists like Gabrieli. To help forge these connections, Rose recently visited Gabrieli’s lab and looks forward to future collaborations. Brain research will provide critical insights into teaching strategy, he says, but the gap is still wide.

From Lab to Classroom

“I’m hugely impressed by principals and teachers who are passionately interested in understanding the brain,” Gabrieli says. Fortunately, new efforts are bridging educators and scientists.

This March, Gabrieli and the MIT Integrated Learning Initiative—MITili, which he also directs—announced a $30 million grant from the Chan Zuckerberg Initiative for a collaboration
between MIT, the Harvard Graduate School of Education, and Florida State University.

The grant aims to translate some of Gabrieli’s work into more classrooms. Specifically, he hopes to produce better diagnostics that can identify children at risk for dyslexia and other learning
disabilities before they even learn to read.

He hopes to also provide rudimentary diagnostics that identify the source of struggle, be it classic dyslexia, lack of home support, stress, or maybe a combination of factors. That in turn,
could guide treatment—standard phonetic care for some children, versus alternatives: social support akin to Labbe’s efforts, reading practice, or maybe just vocabulary-boosting conversation time with adults.

“We want to get every kid to be an adequate reader by the end of the third grade,” Gabrieli says. “That’s the ultimate goal for me: to help all children become learners.”

How music lessons can improve language skills

Many studies have shown that musical training can enhance language skills. However, it was unknown whether music lessons improve general cognitive ability, leading to better language proficiency, or if the effect of music is more specific to language processing.

A new study from MIT has found that piano lessons have a very specific effect on kindergartners’ ability to distinguish different pitches, which translates into an improvement in discriminating between spoken words. However, the piano lessons did not appear to confer any benefit for overall cognitive ability, as measured by IQ, attention span, and working memory.

“The children didn’t differ in the more broad cognitive measures, but they did show some improvements in word discrimination, particularly for consonants. The piano group showed the best improvement there,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research and the senior author of the paper.

The study, performed in Beijing, suggests that musical training is at least as beneficial as offering children extra reading lessons in improving language skills, and possibly more so. The school where the study was performed has continued to offer piano lessons to students, and the researchers hope their findings could encourage other schools to keep or enhance their music offerings.

Yun Nan, an associate professor at Beijing Normal University, is the lead author of the study, which appears in the Proceedings of the National Academy of Sciences the week of June 25.

Other authors include Li Liu, Hua Shu, and Qi Dong, all of Beijing Normal University; Eveline Geiser, a former MIT research scientist; Chen-Chen Gong, an MIT research associate; and John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

Benefits of music

Previous studies have shown that on average, musicians perform better than nonmusicians on tasks such as reading comprehension, distinguishing speech from background noise, and rapid auditory processing. However, most of these studies have been done by asking people about their past musical training. The MIT researchers wanted to perform a more controlled study in which they could randomly assign children to receive music lessons or not, and then measure the effects.

They decided to perform the study at a school in Beijing, along with researchers from the IDG/McGovern Institute at Beijing Normal University, in part because education officials there were interested in studying the value of music education versus additional reading instruction.

“If children who received music training did as well or better than children who received additional academic instruction, that could be a justification for why schools might want to continue to fund music,” Desimone says.

The 74 children participating in the study were divided into three groups: one that received 45-minute piano lessons three times a week; one that received extra reading instruction for the same period of time; and one that received neither intervention. All children were 4 or 5 years old and spoke Mandarin as their native language.

After six months, the researchers tested the children on their ability to discriminate words based on differences in vowels, consonants, or tone (many Mandarin words differ only in tone). Better word discrimination usually corresponds with better phonological awareness — the awareness of the sound structure of words, which is a key component of learning to read.

Children who had piano lessons showed a significant advantage over children in the extra reading group in discriminating between words that differ by one consonant. Children in both the piano group and extra reading group performed better than children who received neither intervention when it came to discriminating words based on vowel differences.

The researchers also used electroencephalography (EEG) to measure brain activity and found that children in the piano group had stronger responses than the other children when they listened to a series of tones of different pitch. This suggests that a greater sensitivity to pitch differences is what helped the children who took piano lessons to better distinguish different words, Desimone says.

“That’s a big thing for kids in learning language: being able to hear the differences between words,” he says. “They really did benefit from that.”

In tests of IQ, attention, and working memory, the researchers did not find any significant differences among the three groups of children, suggesting that the piano lessons did not confer any improvement on overall cognitive function.

Aniruddh Patel, a professor of psychology at Tufts University, says the findings also address the important question of whether purely instrumental musical training can enhance speech processing.

“This study answers the question in the affirmative, with an elegant design that directly compares the effect of music and language instruction on young children. The work specifically relates behavioral improvements in speech perception to the neural impact of musical training, which has both theoretical and real-world significance,” says Patel, who was not involved in the research.

Educational payoff

Desimone says he hopes the findings will help to convince education officials who are considering abandoning music classes in schools not to do so.

“There are positive benefits to piano education in young kids, and it looks like for recognizing differences between sounds including speech sounds, it’s better than extra reading. That means schools could invest in music and there will be generalization to speech sounds,” Desimone says. “It’s not worse than giving extra reading to the kids, which is probably what many schools are tempted to do — get rid of the arts education and just have more reading.”

Desimone now hopes to delve further into the neurological changes caused by music training. One way to do that is to perform EEG tests before and after a single intense music lesson to see how the brain’s activity has been altered.

The research was funded by the National Natural Science Foundation of China, the Beijing Municipal Science and Technology Commission, the Interdiscipline Research Funds of Beijing Normal University, and the Fundamental Research Funds for the Central Universities.

Calcium-based MRI sensor enables more sensitive brain imaging

MIT neuroscientists have developed a new magnetic resonance imaging (MRI) sensor that allows them to monitor neural activity deep within the brain by tracking calcium ions.

Because calcium ions are directly linked to neuronal firing — unlike the changes in blood flow detected by other types of MRI, which provide an indirect signal — this new type of sensing could allow researchers to link specific brain functions to their pattern of neuron activity, and to determine how distant brain regions communicate with each other during particular tasks.

“Concentrations of calcium ions are closely correlated with signaling events in the nervous system,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering, an associate member of MIT’s McGovern Institute for Brain Research, and the senior author of the study. “We designed a probe with a molecular architecture that can sense relatively subtle changes in extracellular calcium that are correlated with neural activity.”

In tests in rats, the researchers showed that their calcium sensor can accurately detect changes in neural activity induced by chemical or electrical stimulation, deep within a part of the brain called the striatum.

MIT research associates Satoshi Okada and Benjamin Bartelle are the lead authors of the study, which appears in the April 30 issue of Nature Nanotechnology. Other authors include professor of brain and cognitive sciences and Picower Institute for Learning and Memory member Mriganka Sur, Research Associate Nan Li, postdoc Vincent Breton-Provencher, former postdoc Elisenda Rodriguez, Wellesley College undergraduate Jiyoung Lee, and high school student James Melican.

Tracking calcium

A mainstay of neuroscience research, MRI allows scientists to identify parts of the brain that are active during particular tasks. The most commonly used type, known as functional MRI, measures blood flow in the brain as an indirect marker of neural activity. Jasanoff and his colleagues wanted to devise a way to map patterns of neural activity with specificity and resolution that blood-flow-based MRI techniques can’t achieve.

“Methods that are able to map brain activity in deep tissue rely on changes in blood flow, and those are coupled to neural activity through many different physiological pathways,” Jasanoff says. “As a result, the signal you see in the end is often difficult to attribute to any particular underlying cause.”

Calcium ion flow, on the other hand, can be directly linked with neuron activity. When a neuron fires an electrical impulse, calcium ions rush into the cell. For about a decade, neuroscientists have been using fluorescent molecules to label calcium in the brain and image it with traditional microscopy. This technique allows them to precisely track neuron activity, but its use is limited to small areas of the brain.

The MIT team set out to find a way to image calcium using MRI, which enables much larger tissue volumes to be analyzed. To do that, they designed a new sensor that can detect subtle changes in calcium concentrations outside of cells and respond in a way that can be detected with MRI.

The new sensor consists of two types of particles that cluster together in the presence of calcium. One is a naturally occurring calcium-binding protein called synaptotagmin, and the other is a magnetic iron oxide nanoparticle coated in a lipid that can also bind to synaptotagmin, but only when calcium is present.

Calcium binding induces these particles to clump together, making them appear darker in an MRI image. High levels of calcium outside the neurons correlate with low neuron activity; when calcium concentrations drop, it means neurons in that area are firing electrical impulses.
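The clustering mechanism can be caricatured with a Hill-style binding curve, in which rising extracellular calcium drives particle clustering and darkens the image. The dissociation constant and cooperativity below are illustrative, not the sensor's measured parameters.

```python
# Toy Hill-equation model of the calcium sensor: particles cluster as
# extracellular calcium rises, and clustered particles darken the MRI image.
# Kd and the Hill coefficient are invented for illustration.

def clustered_fraction(ca_mM, kd=1.0, n=2.0):
    """Fraction of sensor particles clustered at a given [Ca2+]."""
    return ca_mM**n / (kd**n + ca_mM**n)

def mri_signal(ca_mM):
    """Brighter when particles are dispersed, darker when clustered."""
    return 1.0 - clustered_fraction(ca_mM)

resting = mri_signal(1.2)  # high extracellular calcium: neurons quiet
firing = mri_signal(0.8)   # calcium drops as it rushes into active cells
```

The inversion is the key point: because firing pulls calcium out of the extracellular space, active brain regions disperse the particles and appear brighter, so the sensor reports activity as a signal increase.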

Detecting brain activity

To test the sensors, the researchers injected them into the striatum of rats, a region that is involved in planning movement and learning new behaviors. They then gave the rats a chemical stimulus that induces short bouts of neural activity, and found that the calcium sensor reflected this activity.

They also found that the sensor picked up activity induced by electrical stimulation in a part of the brain involved in reward.

This approach provides a novel way to examine brain function, says Xin Yu, a research group leader at the Max Planck Institute for Biological Cybernetics in Tuebingen, Germany, who was not involved in the research.

“Although we have accumulated sufficient knowledge on intracellular calcium signaling in the past half-century, it has seldom been studied exactly how the dynamic changes in extracellular calcium contribute to brain function, or serve as an indicator of brain function,” Yu says. “When we are deciphering such a complicated and self-adapted system like the brain, every piece of information matters.”

The current version of the sensor responds within a few seconds of the initial brain stimulation, but the researchers are working on speeding that up. They are also trying to modify the sensor so that it can spread throughout a larger region of the brain and pass through the blood-brain barrier, which would make it possible to deliver the particles without injecting them directly into the test site.

With this kind of sensor, Jasanoff hopes to map patterns of neural activity with greater precision than is now possible. “You could imagine measuring calcium activity in different parts of the brain and trying to determine, for instance, how different types of sensory stimuli are encoded in different ways by the spatial pattern of neural activity that they induce,” he says.

The research was funded by the National Institutes of Health and the MIT Simons Center for the Social Brain.