Nancy Kanwisher

Architecture of the Mind

What is the nature of the human mind? Philosophers have debated this question for centuries, but Nancy Kanwisher approaches it empirically, using brain imaging to look for components of the human mind that reside in particular regions of the brain. Her lab has identified cortical regions that are selectively engaged in the perception of faces, places, and bodies, and other regions specialized for uniquely human functions including music, language, and thinking about other people’s thoughts. More recently, her lab has begun using artificial neural networks to unpack these findings and examine why, from a computational standpoint, the brain exhibits functional specialization in the first place.

John Gabrieli

Images of Mind

John Gabrieli’s goal is to understand the organization of memory, thought, and emotion in the human brain. In collaboration with clinical colleagues, Gabrieli uses brain imaging to better understand, diagnose, and select treatments for neurological and psychiatric diseases.

A major focus of the Gabrieli lab is the neural basis of learning in children. His team found structural differences in the brains of young children who are at risk for reading difficulties, even before they start learning to read. By studying these differences in children, Gabrieli hopes to identify ways to improve learning in the classroom and inform effective educational policies and practices.

Gabrieli is also interested in using the tools of neuroscience to personalize medicine. His team showed that brain scans can identify children who are vulnerable to depression before symptoms even appear, opening the possibility of earlier interventions to prevent episodes of depression. Brain scans may also help predict which individuals with social anxiety disorder are most likely to benefit from a particular therapeutic intervention. Gabrieli’s team continues to explore the role of neuroimaging in other brain disorders, including schizophrenia, addiction, and bipolar disorder.

His team also studies a range of other research topics, including new strategies to cope with emotional stress, the benefits of mindfulness for academic performance and mental health, and the value of embracing neurodiversity to better understand autism.

James DiCarlo

Rapid Recognition

DiCarlo’s research goal is to reverse engineer the brain mechanisms that underlie human visual intelligence. He and his collaborators have revealed how the population image transformations carried out by a deep stack of interconnected neocortical brain areas — called the primate ventral visual stream — effortlessly extract object identity from visual images. His team uses a combination of large-scale neurophysiology, brain imaging, direct neural perturbation methods, and machine learning to build and test neurally mechanistic computational models of the ventral visual stream and its support of cognition and behavior. Such an engineering-based understanding is likely to lead to new approaches in artificial vision and artificial intelligence, new brain-machine interfaces to restore or augment lost senses, and a new foundation for ameliorating disorders of the mind.

Brain activity pattern may be early sign of schizophrenia

Schizophrenia, a brain disorder that produces hallucinations, delusions, and cognitive impairments, usually strikes during adolescence or young adulthood. While some signs can suggest that a person is at high risk for developing the disorder, there is no way to definitively diagnose it until the first psychotic episode occurs.

MIT neuroscientists working with researchers at Beth Israel Deaconess Medical Center, Brigham and Women’s Hospital, and the Shanghai Mental Health Center have now identified a pattern of brain activity correlated with development of schizophrenia, which they say could be used as a marker to diagnose the disease earlier.

“You can consider this pattern to be a risk factor. If we use these types of brain measurements, then maybe we can predict a little bit better who will end up developing psychosis, and that may also help tailor interventions,” says Guusje Collin, a visiting scientist at MIT’s McGovern Institute for Brain Research and the lead author of the paper.

The study, which appeared in the journal Molecular Psychiatry on Nov. 8, was performed at the Shanghai Mental Health Center. Susan Whitfield-Gabrieli, a visiting scientist at the McGovern Institute and a professor of psychology at Northeastern University, is one of the principal investigators for the study, along with Jijun Wang of the Shanghai Mental Health Center, William Stone of Beth Israel Deaconess Medical Center, the late Larry Seidman of Beth Israel Deaconess Medical Center, and Martha Shenton of Brigham and Women’s Hospital.

Abnormal connections

Before they experience a psychotic episode, characterized by sudden changes in behavior and a loss of touch with reality, patients can experience milder symptoms such as disordered thinking. This kind of thinking can lead to behaviors such as jumping from topic to topic at random, or giving answers unrelated to the original question. Previous studies have shown that about 25 percent of people who experience these early symptoms go on to develop schizophrenia.

The research team performed the study at the Shanghai Mental Health Center because the huge volume of patients who visit the hospital annually gave them a large enough sample of people at high risk of developing schizophrenia.

The researchers followed 158 people between the ages of 13 and 34 who were identified as high-risk because they had experienced early symptoms. The team also included 93 control subjects, who did not have any risk factors. At the beginning of the study, the researchers used functional magnetic resonance imaging (fMRI) to measure a type of brain activity involving “resting state networks.” Resting state networks consist of brain regions that preferentially connect with and communicate with each other when the brain is not performing any particular cognitive task.
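The kind of resting-state connectivity measured here is typically quantified as the correlation between the fMRI time series of pairs of brain regions. The following Python sketch is illustrative only (toy data, not the study's actual analysis pipeline):

```python
import numpy as np

# Toy example: 4 regions of interest (ROIs), 200 fMRI time points each.
# In a real analysis these would be preprocessed BOLD signals.
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)             # common network fluctuation
roi_signals = np.array(
    [shared + 0.5 * rng.standard_normal(200)  # three ROIs coupled to the network
     for _ in range(3)]
    + [rng.standard_normal(200)]              # one independent ROI
)

# Functional connectivity matrix: pairwise Pearson correlations
# between ROI time series (rows are variables for np.corrcoef).
connectivity = np.corrcoef(roi_signals)

# ROIs belonging to the same resting-state network correlate strongly;
# the independent ROI does not.
print(np.round(connectivity, 2))
```

In studies like this one, differences in such correlation matrices between groups (e.g., converters vs. non-converters) are what constitute an "aberrant connectivity" pattern.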

“We were interested in looking at the intrinsic functional architecture of the brain to see if we could detect early aberrant brain connectivity or networks in individuals who are in the clinically high-risk phase of the disorder,” Whitfield-Gabrieli says.

One year after the initial scans, 23 of the high-risk patients had experienced a psychotic episode and were diagnosed with schizophrenia. In those patients’ scans, taken before their diagnosis, the researchers found a distinctive pattern of activity that was different from the healthy control subjects and the at-risk subjects who had not developed psychosis.

For example, in most people, a part of the brain known as the superior temporal gyrus, which is involved in auditory processing, is highly connected to brain regions involved in sensory perception and motor control. However, in patients who developed psychosis, the superior temporal gyrus became more connected to limbic regions, which are involved in processing emotions. This could help explain why patients with schizophrenia usually experience auditory hallucinations, the researchers say.

Meanwhile, the high-risk subjects who did not develop psychosis showed network connectivity nearly identical to that of the healthy subjects.

Early intervention

This type of distinctive brain activity could be useful as an early indicator of schizophrenia, especially since it is possible that it could be seen in even younger patients. The researchers are now performing similar studies with younger at-risk populations, including children with a family history of schizophrenia.

“That really gets at the heart of how we can translate this clinically, because we can get in earlier and earlier to identify aberrant networks in the hopes that we can do earlier interventions, and possibly even prevent psychiatric disorders,” Whitfield-Gabrieli says.

She and her colleagues are now testing early interventions that could help to combat the symptoms of schizophrenia, including cognitive behavioral therapy and neural feedback. The neural feedback approach involves training patients to use mindfulness meditation to reduce activity in the superior temporal gyrus, which tends to increase before and during auditory hallucinations.

The researchers also plan to continue following the patients in the current study, and they are now analyzing some additional data on the white matter connections in the brains of these patients, to see if those connections might yield additional differences that could also serve as early indicators of disease.

The research was funded by the National Institutes of Health, the Ministry of Science and Technology of China, and the Poitras Center for Psychiatric Disorders Research at MIT. Collin was supported by a Marie Curie Global Fellowship grant from the European Commission.

Is it worth the risk?

During the Klondike Gold Rush, thousands of prospectors climbed Alaska’s dangerous Chilkoot Pass in search of riches. McGovern scientists are exploring how a once-overlooked part of the brain that balances risk and reward might be at the root of cost-benefit decisions like these.

Is it worth speeding up on the highway to save a few minutes’ time? How about accepting a job that pays more, but requires longer hours in the office?

Scientists call these types of real-life situations cost-benefit conflicts. Choosing well is an essential survival ability—consider the animal that must decide when to expose itself to predation to gather more food.

Now, McGovern researchers are discovering that this fundamental capacity to make decisions may originate in the basal ganglia—a brain region once considered unimportant to the human
experience—and that circuits associated with this structure may play a critical role in determining our state of mind.

Anatomy of decision-making

A few years back, McGovern investigator Ann Graybiel noticed that in the brain imaging literature, a specific part of the cortex, called the pregenual anterior cingulate cortex or pACC, was implicated in certain psychiatric disorders as well as in tasks involving cost-benefit decisions. Thanks to her now-classic neuroanatomical work defining the complex anatomy and function of the basal ganglia, Graybiel knew that the pACC projected back into the basal ganglia—including its largest cluster of neurons, the striatum.

The striatum sits beneath the cortex, with a mouse-like main body and curving tail. It seems to serve as a critical way-station, communicating with both the brain’s sensory and motor areas above, and the limbic system (linked to emotion and memory) below. Running through the striatum are striosomes, column-like neurochemical compartments. They wire down to a small but important part of the brain called the substantia nigra, which houses the vast majority of the brain’s dopamine neurons—a key neurochemical heavily involved, much like the basal ganglia as a whole, in reward, learning, and movement. The pACC region related to mood control targets these striosomes, setting up a communication line from the neocortex to the dopamine neurons.

Graybiel discovered these striosomes early in her career and understood them to have distinct wiring from other compartments in the striatum, but picking out these small, hard-to-find striosomes posed a technological challenge—so it was exciting to have this intriguing link to the pACC and mood disorders.

Working with Ken-ichi Amemori, then a research scientist in her lab, she adapted a common human cost-benefit conflict test for macaque monkeys. The monkeys could elect to receive a food treat, but the treat would always be accompanied by an annoying puff of air to the eyes. Before they decided, a visual cue told them exactly how much treat they could get, and exactly how strong the air puff would be, so they could choose if the treat was worth it.
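A minimal way to formalize this kind of approach-avoidance choice is as a weighted comparison of benefit and cost. The function and weights below are hypothetical, purely to illustrate the decision rule, not the study's actual model:

```python
def choose(reward_size: float, airpuff_strength: float,
           w_reward: float = 1.0, w_cost: float = 1.5) -> str:
    """Toy cost-benefit rule: approach the offer only if the weighted
    benefit outweighs the weighted cost (weights are illustrative)."""
    utility = w_reward * reward_size - w_cost * airpuff_strength
    return "approach" if utility > 0 else "avoid"

# A big treat with a mild puff is worth it; a small treat with a
# strong puff is not.
print(choose(reward_size=0.9, airpuff_strength=0.2))  # approach
print(choose(reward_size=0.2, airpuff_strength=0.8))  # avoid
```

In this toy model, raising `w_cost` shifts choices toward avoidance at every offer, loosely analogous to the pessimistic shift that pACC stimulation produced in the monkeys.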

Normal monkeys varied their choices in a fairly rational manner, rejecting the treat whenever the air puff seemed too strong, or the treat too small to be worth it—and this corresponded with activity in the pACC neurons. Interestingly, they found that some pACC neurons respond more when animals approach the combined offers, while other pACC neurons fire more when the animals avoid the offers. “It is as though there are two opposing armies. And the one that wins controls the state of the animal.” Moreover, when Graybiel’s team electrically stimulated these pACC neurons, the animals began to avoid the offers, even offers that they normally would approach. “It is as though when the stimulation is on, they think the future is worse than it really is,” Graybiel says.

Intriguingly, this effect only worked in situations where the animal had to weigh a cost against a benefit. It had no effect on a decision between two negatives or two positives, like two different sizes of treats. The anxiety drug diazepam also reversed the stimulatory effect, but again, only on cost-benefit choices. “This particular kind of mood-influenced cost-benefit decision-making occurs not only under conflict conditions but in our regular day-to-day lives,” Graybiel says. “For example: I know that if I eat too much chocolate, I might get fat, but I love it, I want it.”

Glass half empty

Over the next few years, Graybiel, with another research scientist in her lab, Alexander Friedman, unraveled the circuit behind the macaques’ choices. They adapted the test for rats and mice,
so that they could more easily combine the cellular and molecular technologies needed to study striosomes, such as optogenetics and mouse engineering.

They found that the cortex (specifically, the pre-limbic region of the prefrontal cortex in rodents) wires onto both striosomes and fast-acting interneurons that also target the striosomes. In a
healthy circuit, these interneurons keep the striosomes in check by firing off fast inhibitory signals, hitting the brakes before the striosome can get started. But if the researchers broke that cortico-striatal connection with optogenetics or chronic stress, the animals became reckless, going for the high-risk, high-reward arm of the maze like a gambler throwing caution to the wind. If they amplified this inhibitory interneuron activity, they saw the opposite effect. With these techniques, they could block the effects of prior chronic stress.

This summer, Graybiel and Amemori published another paper furthering the story and returning to macaques. It was still too difficult to hit striosomes, and the researchers could only stimulate the striatum more generally. However, they replicated the effects seen in past studies.

Many electrodes had no effect; a small number made the monkeys choose the reward more often. Nearly a quarter, though, made the monkeys more avoidant—and this effect correlated with a change in the macaques’ brainwaves in a manner reminiscent of patients with depression.

But the surprise came when the avoidance-producing stimulation was turned off: the effects lasted unexpectedly long, only returning to normal on the third day.

Graybiel was stunned. “This is very important, because changes in the brain can get set off and have a life of their own,” she says. “This is true for some individuals who have had a terrible experience, and then live with the aftermath, even to the point of suffering from post-traumatic stress disorder.”

She suspects that this persistent state may actually be a form of affect, or mood. “When we change this decision boundary, we’re changing the mood, such that the animal overestimates cost, relative to benefit,” she explains. “This might be like a proxy state for pessimistic decision-making experienced during anxiety and depression, but may also occur, in a milder form, in you and me.”

Graybiel theorizes that this may tie back into the dopamine neurons that the striosomes project to: if this avoidance behavior is akin to avoidance observed in rodents, then they are stimulating a circuit that ultimately projects to dopamine neurons of the substantia nigra. There, she believes, they could act to suppress these dopamine neurons, which in turn project to the rest of the brain, creating some sort of long-term change in their neural activity. Or, put more simply, stimulation of these circuits creates a depressive funk.

Bottom up

Three floors below the Graybiel lab, postdoc Will Menegas is in the early stages of his own work untangling the role of dopamine and the striatum in decision-making. He joined Guoping Feng’s lab this summer after exploring the understudied “tail of the striatum” at Harvard University.

While dopamine pathways influence many parts of the brain, examination of connections to the striatum has largely focused on the frontmost part of the striatum, which is associated with valuation.

But as Menegas showed while at Harvard, dopamine neurons that project to the rear of the striatum are different. Those neurons get their input from parts of the brain associated with general arousal and sensation—and instead of responding to rewards, they respond to novelty and intense stimuli, like air puffs and loud noises.

In a new study published in Nature Neuroscience, Menegas used a neurotoxin to disrupt the dopamine projection from the substantia nigra to the posterior striatum to see how this circuit influences behavior. Normal mice approach novel items cautiously and back away after sniffing at them, but the mice in Menegas’ study failed to back away. They stopped avoiding a port that gave an air puff to the face, and they didn’t behave like normal mice when Menegas dropped a strange or new object—say, a Lego—into their cage. Disrupting the nigral-posterior striatal pathway seemed to turn off their avoidance habit.

“These neurons reinforce avoidance the same way that canonical dopamine neurons reinforce approach,” Menegas explains. It’s a new role for dopamine, suggesting that there may be two different and distinct systems of reinforcement, led by the same neuromodulator in different parts of the striatum.

This research, and Graybiel’s discoveries on cost-benefit decision circuits, share clear parallels, though the precise links between the two phenomena are yet to be fully determined. Menegas plans to extend this line of research into social behavior and related disorders like autism in marmoset monkeys.

“Will wants to learn the methods that we use in our lab to work on marmosets,” Graybiel says. “I think that working together, this could become a wonderful story, because it would involve social interactions.”

“This is a very new angle, and it could really change our views of how the reward system works,” Feng says. “And we have very little understanding of social circuits so far, especially in higher organisms, so I think this would be very exciting. Whatever we learn, it’s going to be new.”

Human choices

Based on their preexisting work, Graybiel’s and Menegas’ projects are well-developed—but they are far from the only McGovern-based explorations into ways this brain region taps into our behaviors. Maiya Geddes, a visiting scientist in John Gabrieli’s lab, has recently published a paper exploring the little-known ways that aging affects the dopamine-based nigral-striatum-hippocampus learning and memory systems.

In Rebecca Saxe’s lab, postdoc Livia Tomova just kicked off a new pilot project using brain imaging to uncover dopamine-striatal circuitry behind social craving in humans and the urge to rejoin peers. “Could there be a craving response similar to hunger?” Tomova wonders. “No one has looked yet at the neural mechanisms of this.”

Graybiel also hopes to translate her findings into humans, beginning with collaborations at the Pizzagalli lab at McLean Hospital in Belmont. They are using fMRI to study whether patients
with anxiety and depression show some of the same dysfunctions in the cortico-striatal circuitry that she discovered in her macaques.

If she’s right about tapping into mood states and affect, it would be an expanded role for the striatum—and one with significant potential therapeutic benefits. “Affect state” colors many psychological functions and disorders, from memory and perception, to depression, chronic stress, obsessive-compulsive disorder, and PTSD.

For a region of the brain once dismissed as inconsequential, McGovern researchers have shown the basal ganglia to influence not only our choices but our state of mind—suggesting that this “primitive” brain region may actually be at the heart of the human experience.

 

 

Tracking down changes in ADHD

Attention deficit hyperactivity disorder (ADHD) is marked by difficulty maintaining focus on tasks, and increased activity and impulsivity. These symptoms ultimately interfere with the ability to learn and function in daily tasks, but the source of the problem could lie at different levels of brain function, and it is hard to parse out exactly what is going wrong.

A new study co-authored by McGovern Institute Associate Investigator Michael Halassa has developed tasks that dissociate lower-level from higher-level brain functions, so that disruptions to these processes can be checked more specifically in ADHD. The results of this study, carried out in collaboration with co-corresponding authors Wei Ji Ma, Andra Mihali, and researchers from New York University, illuminate how brain function is disrupted in ADHD and highlight a role for perceptual deficits in this condition.

The underlying deficit in ADHD has largely been attributed to executive function — higher order processing and the ability of the brain to integrate information and focus attention. But there have been some hints, largely through reports from those with ADHD, that the very ability to accurately receive sensory information might be altered. Some people with ADHD, for example, have reported impaired visual function and even changes in color processing. Cleanly separating these perceptual brain functions from the impact of higher order cognitive processes has proven difficult, however. It is not clear whether people with and without ADHD encode visual signals received by the eye in the same way.

“We realized that psychiatric diagnoses in general are based on clinical criteria and patient self-reporting,” says Halassa, who is also a board certified psychiatrist and an assistant professor in MIT’s Department of Brain and Cognitive Sciences. “Psychiatric diagnoses are imprecise, but neurobiology is progressing to the point where we can use well-controlled parameters to standardize criteria, and relate disorders to circuits,” he explains. “If there are problems with attention, is it the spotlight of attention itself that’s affected in ADHD, or the ability of a person to control where this spotlight is focused?”

To test how people with and without ADHD encode visual signals in the brain, Halassa, Ma, Mihali, and collaborators devised a perceptual encoding task in which subjects were asked to provide answers to simple questions about the orientation and color of lines and shapes on a screen. The simplicity of this test aimed to remove high-level cognitive input and provide a measure of accurate perceptual coding.

To measure higher-level executive function, the researchers provided subjects with rules about which features and screen areas were relevant to the task, and they switched relevance throughout the test. They monitored whether subjects cognitively adapted to the switch in rules – an indication of higher-order brain function. The authors also analyzed psychometric curve parameters, common in psychophysics, but not yet applied to ADHD.

“These psychometric parameters give us specific information about the parts of sensory processing that are being affected,” explains Halassa. “So, if you were to put on sunglasses, that would shift threshold, indicating that input is being affected, but this wouldn’t necessarily affect the slope of the psychometric function. If the slope is affected, this starts to reflect difficulty in seeing a line or color. In other words, these tests give us a finer readout of behavior, and how to map this onto particular circuits.”
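The threshold/slope distinction Halassa describes can be made concrete with a standard logistic psychometric function, in which the two are separate parameters. The values below are invented purely for illustration:

```python
import math

def psychometric(stimulus: float, threshold: float, slope: float) -> float:
    """Probability of a correct/'yes' response as a logistic function of
    stimulus strength. `threshold` is the 50% point; `slope` controls how
    sharply performance rises around it."""
    return 1.0 / (1.0 + math.exp(-slope * (stimulus - threshold)))

baseline = psychometric(0.5, threshold=0.4, slope=10)
# "Sunglasses": same slope, shifted threshold -> attenuated input.
shifted = psychometric(0.5, threshold=0.6, slope=10)
# Shallower slope -> noisier perceptual encoding at every stimulus level.
shallow = psychometric(0.5, threshold=0.4, slope=4)

print(round(baseline, 2), round(shifted, 2), round(shallow, 2))
```

Fitting these two parameters separately to each subject's responses is what lets the researchers distinguish "input is attenuated" from "encoding itself is degraded."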

The authors found that changes in visual perception were robustly associated with ADHD, and these changes were also correlated with cognitive function. Individuals with more clinically severe ADHD scored lower on executive function, and basic perception also tracked with these clinical records of disease severity. The authors could even sort ADHD from control subjects, based on their perceptual variability alone. All of this goes to say that changes in perception itself are clearly present in this ADHD cohort, and that they decline alongside changes in executive function.

“This was unexpected,” points out Halassa. “We didn’t expect so much to be explained by lower sensitivity to stimuli, and to see that these tasks become harder as cognitive pressure increases. It wasn’t clear that cognitive circuits might influence processing of stimuli.”

Understanding the true basis of changes in behavior in disorders such as ADHD can be hard to tease apart, but the study gives more insight into changes in the ADHD brain, and supports the idea that quantitative follow up on self-reporting by patients can drive a stronger understanding — and possible targeted treatment — of such disorders. Testing a larger number of ADHD patients and validating these measures on a larger scale is now the next research priority.

Why do I talk with my hands?

This is a very interesting question sent to us by Gabriel Castellanos (thank you!). Many of us gesture with our hands when we speak (and even when we do not) as a form of non-verbal communication. How hand gestures are coordinated with speech remains unclear. In part, this is because it is difficult to monitor natural hand gestures in fMRI-based brain imaging studies, where subjects have to stay still.

“Performing hand movements when stuck in the bore of a scanner is really tough beyond simple signing and keypresses,” explains McGovern Principal Research Scientist Satrajit Ghosh. “Thus ecological experiments of co-speech with motor gestures have not been carried out in the context of a magnetic resonance scanner, and therefore little is known about language and motor integration within this context.”

There have been studies that use proxies such as co-verbal pushing of buttons, and also studies using other imaging techniques, such as electroencephalography (EEG) and magnetoencephalography (MEG), to monitor brain activity during gesturing, but it would be difficult to precisely spatially localize the regions involved in natural co-speech hand gesticulation using such approaches. Another possible avenue for addressing this question would be to look at patients with conditions that might implicate particular brain regions in coordinating hand gestures, but such approaches have not really pinpointed a pathway for coordinating speech and hand movements.

That said, co-speech hand gesturing plays an important role in communication. “More generally, co-speech hand gestures are seen as a mechanism for emphasis and disambiguation of the semantics of a sentence, in addition to prosody and facial cues,” says Ghosh. “In fact, one may consider the act of speaking as one large orchestral score involving vocal tract movement, respiration, voicing, facial expression, hand gestures, and even whole body postures acting as different instruments coordinated dynamically by the brain. Based on our current understanding of language production, co-speech or gestural events would likely be planned at a higher level than articulation and therefore would likely activate inferior frontal gyrus, SMA, and others.”

How this orchestra is coordinated and conducted thus remains to be unraveled, but certainly the question is one that gets to the heart of human social interactions.

Do you have a question for The Brain? Ask it here.

A social side to face recognition by infants

When interacting with an infant, you have likely noticed that the human face holds a special draw from a very young age. But how does this relate to face recognition by adults, which is known to map to specific cortical regions? Rebecca Saxe, Associate Investigator at MIT’s McGovern Institute and John W. Jarve (1978) Professor in Brain and Cognitive Sciences, and her team have now considered two emerging theories regarding early face recognition and come up with a third proposition: that when a baby looks at a face, the response is also social, and that the resulting contingent interactions are key to the subsequent development of organized face recognition areas in the brain.

By a certain age you are highly skilled at recognizing and responding to faces, and this correlates with activation of a number of face-selective regions of the cortex. This ability is incredibly important for reading the identities and intentions of other people, and selective categorical representation of faces in cortical areas is a feature shared by our primate cousins. While brain imaging tells us where face-responsive regions are in the adult cortex, how and when they emerge remains unclear.

In 2017, functional magnetic resonance imaging (fMRI) studies of human and macaque infants provided the first glimpse of how the youngest brains respond to faces. The scans showed that in 4- to 6-month-old human infants and equivalently aged macaques, regions known to be face-responsive in the adult brain are activated when shown movies of faces, but not in a selective fashion. Essentially, the fMRI data show that these specific cortical regions are activated by faces, but a chair will do just as well. With further experience of faces over time, the specific cortical regions in macaques became face-selective, no longer responding to other objects.

There are two prevailing ideas in the field about how face preference, and eventually selectivity, arise through experience. Saxe and her team consider these ideas in turn in an opinion piece in the September issue of Trends in Cognitive Sciences, and then propose a third, new theory. The first idea centers on the way we dote over babies, centering our own faces right in their field of vision. The idea is that such frequent exposure to low-level face features (curvilinear shape, etc.) will eventually lead to co-activation of neurons that are responsive to all of the different aspects of facial features. If these neurons stimulated by different features are co-activated, and there is a brain region where these neurons are found together, this area will be stimulated repeatedly, eventually reinforcing the emergence of a face category-specific area.

A second idea is that babies already have an innate “face template,” just as a duckling or chick already knows to follow its mother after hatching. So far there is little evidence for the second proposition, and the first fails to explain why babies seek out a face, rather than passively look upon and eventually “learn” the overlapping features that represent “face.”

Saxe, along with postdoc Lindsey Powell and graduate student Heather Kosakowski, instead now argue that the role a face plays in positive social interactions comes to drive organization of face-selective cortical regions. Taking the next step, the researchers propose that a prime suspect for linking social interactions to the development of face-selective areas is the medial prefrontal cortex (mPFC), a region linked to social cognition and behavior.

“I was asked to give a talk at a conference, and I wanted to talk about both the development of cortical face areas and the social role of the medial prefrontal cortex in young infants,” says Saxe. “I was puzzling over whether these two ideas were related, when I suddenly saw that they could be very fundamentally related.”

The authors argue that this relationship is supported by existing data showing that babies prefer dynamic faces and are more interested in faces that engage in back-and-forth interaction. Regions of the mPFC are also known to be activated both during social interactions and during infants’ exposure to dynamic faces.

Powell is now using functional near infrared spectroscopy (fNIRS), a brain imaging technique that measures changes in blood flow to the brain, to test this hypothesis in infants. “This will allow us to see whether mPFC responses to social cues are linked to the development of face-responsive areas.”

In Daniel Deronda, the novel by George Eliot, the protagonist says “I think my life began with waking up and loving my mother’s face: it was so near to me, and her arms were round me, and she sang to me.” Perhaps this type of positively valenced social interaction, reinforced by the mPFC, is exactly what leads to the particular importance of faces and their selective categorical representation in the human brain. Further testing of the hypothesis proposed by Powell, Kosakowski, and Saxe will tell.

Neuroscientists get at the roots of pessimism

Many patients with neuropsychiatric disorders such as anxiety or depression experience negative moods that lead them to focus on the possible downside of a given situation more than the potential benefit.

MIT neuroscientists have now pinpointed a brain region that can generate this type of pessimistic mood. In tests in animals, they showed that stimulating this region, known as the caudate nucleus, induced animals to make more negative decisions: They gave far more weight to the anticipated drawback of a situation than its benefit, compared to when the region was not stimulated. This pessimistic decision-making could continue through the day after the original stimulation.

The findings could help scientists better understand how some of the crippling effects of depression and anxiety arise, and guide them in developing new treatments.

“We feel we were seeing a proxy for anxiety, or depression, or some mix of the two,” says Ann Graybiel, an MIT Institute Professor, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study, which appears in the Aug. 9 issue of Neuron. “These psychiatric problems are still so very difficult to treat for many individuals suffering from them.”

The paper’s lead authors are McGovern Institute research affiliates Ken-ichi Amemori and Satoko Amemori, who perfected the tasks and have been studying emotion and how it is controlled by the brain. McGovern Institute researcher Daniel Gibson, an expert in data analysis, is also an author of the paper.

Emotional decisions

Graybiel’s laboratory has previously identified a neural circuit that underlies a specific kind of decision-making known as approach-avoidance conflict. These types of decisions, which require weighing options with both positive and negative elements, tend to provoke a great deal of anxiety. Her lab has also shown that chronic stress dramatically affects this kind of decision-making: More stress usually leads animals to choose high-risk, high-payoff options.

In the new study, the researchers wanted to see if they could reproduce an effect that is often seen in people with depression, anxiety, or obsessive-compulsive disorder. These patients tend to engage in ritualistic behaviors designed to combat negative thoughts, and to place more weight on the potential negative outcome of a given situation. This kind of negative thinking, the researchers suspected, could influence approach-avoidance decision-making.

To test this hypothesis, the researchers stimulated the caudate nucleus, a brain region linked to emotional decision-making, with a small electrical current as animals were offered a reward (juice) paired with an unpleasant stimulus (a puff of air to the face). In each trial, the ratio of reward to aversive stimuli was different, and the animals could choose whether to accept or not.

This kind of decision-making requires cost-benefit analysis. If the reward is high enough to balance out the puff of air, the animals will choose to accept it, but when that ratio is too low, they reject it. When the researchers stimulated the caudate nucleus, the cost-benefit calculation became skewed, and the animals began to avoid combinations that they previously would have accepted. This continued even after the stimulation ended, and could also be seen the following day, after which point it gradually disappeared.

This result suggests that the animals began to devalue the reward that they previously wanted, and focused more on the cost of the aversive stimulus. “This state we’ve mimicked has an overestimation of cost relative to benefit,” Graybiel says.
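
The skewed calculation Graybiel describes can be caricatured as a simple weighted comparison. This is an illustrative sketch, not the study's actual model; the cost_weight parameter is a hypothetical knob standing in for the overestimation of cost that caudate stimulation appears to induce.

```python
# Illustrative sketch (not the study's model): approach-avoidance choice
# as a weighted cost-benefit comparison. cost_weight is a hypothetical
# stand-in for the effect of caudate stimulation.

def accepts_offer(reward, airpuff_cost, cost_weight=1.0):
    """Accept when the reward outweighs the (weighted) aversive cost."""
    return reward - cost_weight * airpuff_cost > 0

# Baseline: a reward of 3 against a cost of 2 is worth accepting.
print(accepts_offer(reward=3, airpuff_cost=2, cost_weight=1.0))  # True

# "Stimulated" state: the same offer is now rejected, because the
# cost carries extra weight in the calculation.
print(accepts_offer(reward=3, airpuff_cost=2, cost_weight=2.0))  # False
```

Under this caricature, stimulation does not change the reward or the cost themselves, only the weight the animal assigns to the cost, which matches the observed shift toward rejecting previously acceptable offers.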

The study provides valuable insight into the role of the basal ganglia (a region that includes the caudate nucleus) in this type of decision-making, says Scott Grafton, a professor of neuroscience at the University of California at Santa Barbara, who was not involved in the research.

“We know that the frontal cortex and the basal ganglia are involved, but the relative contributions of the basal ganglia have not been well understood,” Grafton says. “This is a nice paper because it puts some of the decision-making process in the basal ganglia as well.”

A delicate balance

The researchers also found that brainwave activity in the caudate nucleus was altered when decision-making patterns changed. This change, discovered by Amemori, is in the beta frequency range and might serve as a biomarker to monitor whether animals or patients respond to drug treatment, Graybiel says.

Graybiel is now working with psychiatrists at McLean Hospital to study patients who suffer from depression and anxiety, to see if their brains show abnormal activity in the neocortex and caudate nucleus during approach-avoidance decision-making. Magnetic resonance imaging (MRI) studies have shown abnormal activity in two regions of the medial prefrontal cortex that connect with the caudate nucleus.

The caudate nucleus has within it regions that are connected with the limbic system, which regulates mood, and it sends input to motor areas of the brain as well as dopamine-producing regions. Graybiel and Amemori believe that the abnormal activity seen in the caudate nucleus in this study could be somehow disrupting dopamine activity.

“There must be many circuits involved,” she says. “But apparently we are so delicately balanced that just throwing the system off a little bit can rapidly change behavior.”

The research was funded by the National Institutes of Health, the CHDI Foundation, the U.S. Office of Naval Research, the U.S. Army Research Office, MEXT KAKENHI, the Simons Center for the Social Brain, the Naito Foundation, the Uehara Memorial Foundation, Robert Buxton, Amy Sommer, and Judy Goldberg.

Charting the cerebellum

Small and tucked away under the cerebral hemispheres toward the back of the brain, the human cerebellum is nonetheless immediately obvious thanks to its distinct structure. From Galen’s second-century anatomical description to Cajal’s systematic analysis of its projections, the cerebellum has long drawn the eyes of researchers studying the brain. Two parallel studies from MIT’s McGovern Institute have recently converged to support an unexpectedly complex level of non-motor cerebellar organization that would not have been predicted from its known motor representations.

Historically, the cerebellum was considered primarily a center of motor control and coordination. On this view, the cerebellum is like the chain on a bicycle: it registers what is happening up front in the cortex and relays the information so that the back wheel moves at a coordinated pace. This simple view has been questioned as cerebellar circuits have been traced to the basal ganglia and, via the thalamus, to neocortical regions. The emerging view casts the cerebellum as a hub in a complex network, with potentially higher and non-motor functions including cognition and reward-based learning.

A collaboration between the labs of John Gabrieli, Investigator at the McGovern Institute for Brain Research, and Jeremy Schmahmann, of the Ataxia Unit at Massachusetts General Hospital and Harvard Medical School, has now used functional brain imaging to give new insight into the cerebellar organization of non-motor roles, including working memory, language, and social and emotional processing. In a complementary paper, a collaboration between Sheeba Anteraper of MIT’s Martinos Imaging Center and Gagan Joshi of the Alan and Lorraine Bressler Clinical and Research Program at Massachusetts General Hospital has found changes in cerebellar connectivity in autism spectrum disorder (ASD).

A more complex map of the cerebellum

Published in NeuroImage, and featured on the journal’s cover, the first study was led by Xavier Guell, a postdoc in the Gabrieli and Schmahmann labs. The authors used fMRI data from the Human Connectome Project to examine activity in different regions of the cerebellum during specific tasks and at rest. The tasks extended beyond motor activity to functions recently linked to the cerebellum, including working memory, language, and social and emotional processing. As expected, the authors saw that two regions assigned to motor activity by other methods were clearly modulated during motor tasks.

“Neuroscientists in the 1940s and 1950s described a double representation of motor function in the cerebellum, meaning that two regions in each hemisphere of the cerebellum are engaged in motor control,” explains Guell. “That there are two areas of motor representation in the cerebellum remains one of the most well-established facts of cerebellar macroscale physiology.”

When it came to assigning non-motor tasks, to their surprise, the authors identified three representations that localized to different regions of the cerebellum, pointing to an unexpectedly complex level of organization.

Guell explains the implications further. “Our study supports the intriguing idea that while two parts of the cerebellum are simultaneously engaged in motor tasks, three other parts of the cerebellum are simultaneously engaged in non-motor tasks. Our predecessors coined the term ‘double motor representation,’ and we may now have to add ‘triple non-motor representation’ to the dictionary of cerebellar neuroscience.”

A serendipitous discussion

What happened next illustrates how independent strands of research can meet and reinforce one another to give a fuller scientific picture: a discussion of data between Xavier Guell and Sheeba Arnold Anteraper of the McGovern Institute for Brain Research culminated in a paper led by Anteraper.

The findings by Guell and colleagues made the cover of NeuroImage.

Anteraper and colleagues examined brain images from high-functioning ASD patients and looked for statistically significant patterns, letting the data speak rather than focusing on specific ‘candidate’ regions of the brain. To her surprise, the analysis highlighted networks related to language, as well as the cerebellum, regions that had not been linked to ASD and that seemed at first sight not to be relevant. Scientists interested in language processing immediately pointed her to Guell.

“When I went to meet him,” says Anteraper, “I saw immediately that he had the same research paper that I’d been reading on his desk. As soon as I showed him my results, the data fell into place and made sense.”

After talking with Guell, they realized that the same non-motor cerebellar representations he had seen were being independently highlighted by the ASD study.

“When we study brain function in neurological or psychiatric diseases, we sometimes have a very clear notion of which parts of the brain we should study,” explained Guell. “We instead asked: which parts of the brain have the most abnormal patterns of functional connectivity to other brain areas? This analysis gave us a simple, powerful result. Only the cerebellum survived our strict statistical thresholds.”
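
Guell's description amounts to a data-driven recipe: compute each region's connectivity with the rest of the brain in both groups, and ask which region differs most. The sketch below runs that logic on simulated data; the matrix sizes, the planted effect, and the simple group-difference score are all assumptions for illustration, not the paper's actual pipeline or thresholds.

```python
# Illustrative sketch on simulated data (not the paper's pipeline):
# find the region whose functional connectivity differs most between groups.
import numpy as np

rng = np.random.default_rng(0)
N_REGIONS, N_SUBJECTS = 10, 20

# Simulated per-subject region-by-region connectivity matrices.
controls = rng.normal(0.5, 0.1, (N_SUBJECTS, N_REGIONS, N_REGIONS))
asd = rng.normal(0.5, 0.1, (N_SUBJECTS, N_REGIONS, N_REGIONS))
asd[:, 0, :] -= 0.3  # plant hypoconnectivity in region 0, a stand-in "cerebellum"

# Score each region by the average group difference in its connectivity profile.
group_diff = np.abs(controls.mean(axis=0) - asd.mean(axis=0))
scores = group_diff.mean(axis=1)

print(int(scores.argmax()))  # → 0: only the planted region stands out
```

In the real analysis the difference scores would of course be subjected to formal statistical testing rather than a simple argmax, but the whole-brain, hypothesis-free character of the search is the same.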

The authors found decreased connectivity within the cerebellum in the ASD group, but also decreased strength in connectivity between the cerebellum and the social, emotional and language processing regions in the cerebral cortex.

“Our analysis showed that regions of disrupted functional connectivity mapped to each of the three areas of non-motor representation in the cerebellum. It thus seems that the notion of two motor and three non-motor areas of representation in the cerebellum is not only important for understanding how the cerebellum works, but also important for understanding how the cerebellum becomes dysfunctional in neurology and psychiatry.”

Guell says that many questions remain to be answered. Are these abnormalities in the cerebellum reproducible in other datasets of patients diagnosed with ASD? Why is cerebellar function (and dysfunction) organized in a pattern of multiple representations? What is different between each of these representations, and what is their distinct contribution to diseases such as ASD? Future work is now aimed at unraveling these questions.