Is it worth the risk?

During the Klondike Gold Rush, thousands of prospectors climbed Alaska’s dangerous Chilkoot Pass in search of riches. McGovern researchers are exploring how a once-overlooked part of the brain might be at the root of cost-benefit decisions like these, studying how the brain balances risk and reward when we choose.

Is it worth speeding up on the highway to save a few minutes’ time? How about accepting a job that pays more, but requires longer hours in the office?

Scientists call these types of real-life situations cost-benefit conflicts. Choosing well is an essential survival ability—consider the animal that must decide when to expose itself to predation to gather more food.

Now, McGovern researchers are discovering that this fundamental capacity to make decisions may originate in the basal ganglia—a brain region once considered unimportant to the human
experience—and that circuits associated with this structure may play a critical role in determining our state of mind.

Anatomy of decision-making

A few years back, McGovern investigator Ann Graybiel noticed that in the brain imaging literature, a specific part of the cortex, called the pregenual anterior cingulate cortex or pACC, was implicated in certain psychiatric disorders as well as in tasks involving cost-benefit decisions. Thanks to her now-classic neuroanatomical work defining the complex anatomy and function of the basal ganglia, Graybiel knew that the pACC projected back into the basal ganglia—including its largest cluster of neurons, the striatum.

The striatum sits beneath the cortex, with a mouse-like main body and curving tail. It seems to serve as a critical way-station, communicating with both the brain’s sensory and motor areas above, and the limbic system (linked to emotion and memory) below. Running through the striatum are striosomes, column-like neurochemical compartments. They wire down to a small but important part of the brain called the substantia nigra, which houses the vast majority of the brain’s dopamine neurons. Dopamine is a key neurochemical that, much like the basal ganglia as a whole, is heavily involved in reward, learning, and movement. The pACC region related to mood control targets these striosomes, setting up a communication line from the neocortex to the dopamine neurons.

Graybiel discovered these striosomes early in her career, and understood them to have distinct wiring from other compartments in the striatum, but picking out these small, hard-to-find striosomes posed a technological challenge—so it was exciting to have this intriguing link to the pACC and mood disorders.

Working with Ken-ichi Amemori, then a research scientist in her lab, she adapted a common human cost-benefit conflict test for macaque monkeys. The monkeys could elect to receive a food treat, but the treat would always be accompanied by an annoying puff of air to the eyes. Before they decided, a visual cue told them exactly how much treat they could get, and exactly how strong the air puff would be, so they could choose if the treat was worth it.

Normal monkeys varied their choices in a fairly rational manner, rejecting the treat whenever the air puff seemed too strong, or the treat too small to be worth it—and this corresponded with activity in the pACC neurons. Interestingly, the team found that some pACC neurons respond more when animals approach the combined offers, while other pACC neurons fire more when the animals avoid the offers. “It is as though there are two opposing armies. And the one that wins controls the state of the animal.” Moreover, when Graybiel’s team electrically stimulated these pACC neurons, the animals began to avoid the offers, even offers that they normally would approach. “It is as though when the stimulation is on, they think the future is worse than it really is,” Graybiel says.

Intriguingly, this effect only worked in situations where the animal had to weigh the value of a cost against a benefit. It had no effect on a decision between two negatives or two positives, like two different sizes of treats. The anxiety drug diazepam also reversed the stimulatory effect, but again, only on cost-benefit choices. “This particular kind of mood-influenced cost-benefit decision-making occurs not only under conflict conditions but in our regular day-to-day lives,” says Graybiel. “For example: I know that if I eat too much chocolate, I might get fat, but I love it, I want it.”

Glass half empty

Over the next few years, Graybiel, with another research scientist in her lab, Alexander Friedman, unraveled the circuit behind the macaques’ choices. They adapted the test for rats and mice,
so that they could more easily combine the cellular and molecular technologies needed to study striosomes, such as optogenetics and mouse engineering.

They found that the cortex (specifically, the pre-limbic region of the prefrontal cortex in rodents) wires onto both striosomes and fast-acting interneurons that also target the striosomes. In a healthy circuit, these interneurons keep the striosomes in check by firing off fast inhibitory signals, hitting the brakes before the striosome can get started. But if the researchers broke that cortico-striatal connection with optogenetics or chronic stress, the animals became reckless, going for the high-risk, high-reward arm of the maze like a gambler throwing caution to the wind. If they amplified this inhibitory interneuron activity, they saw the opposite effect. With these techniques, they could block the effects of prior chronic stress.

This summer, Graybiel and Amemori published another paper furthering the story and returning to macaques. It was still too difficult to target the striosomes directly, so the researchers could only stimulate the striatum more generally. Even so, they replicated the effects seen in their past studies.

Many electrodes had no effect, and a small number made the monkeys choose the reward more often. Nearly a quarter, though, made the monkeys more avoidant—and this effect correlated with a change in the macaques’ brainwaves in a manner reminiscent of patients with depression.

But the surprise came when the avoidance-producing stimulation was turned off: the effects lasted unexpectedly long, with behavior only returning to normal on the third day.

Graybiel was stunned. “This is very important, because changes in the brain can get set off and have a life of their own,” she says. “This is true for some individuals who have had a terrible experience, and then live with the aftermath, even to the point of suffering from post-traumatic stress disorder.”

She suspects that this persistent state may actually be a form of affect, or mood. “When we change this decision boundary, we’re changing the mood, such that the animal overestimates cost, relative to benefit,” she explains. “This might be like a proxy state for pessimistic decision-making experienced during anxiety and depression, but may also occur, in a milder form, in you and me.”

Graybiel theorizes that this may tie back into the dopamine neurons that the striosomes project to: if this avoidance behavior is akin to avoidance observed in rodents, then they are stimulating a circuit that ultimately projects to dopamine neurons of the substantia nigra. There, she believes, they could act to suppress these dopamine neurons, which in turn project to the rest of the brain, creating some sort of long-term change in their neural activity. Or, put more simply, stimulation of these circuits creates a depressive funk.

Bottom up

Three floors below the Graybiel lab, postdoc Will Menegas is in the early stages of his own work untangling the role of dopamine and the striatum in decision-making. He joined Guoping Feng’s lab this summer after exploring the understudied “tail of the striatum” at Harvard University.

While dopamine pathways influence many parts of the brain, studies of their connections to the striatum have largely focused on the frontmost part of the striatum, which is associated with valuation.

But as Menegas showed while at Harvard, dopamine neurons that project to the rear of the striatum are different. Those neurons get their input from parts of the brain associated with general arousal and sensation—and instead of responding to rewards, they respond to novelty and intense stimuli, like air puffs and loud noises.

In a new study published in Nature Neuroscience, Menegas used a neurotoxin to disrupt the dopamine projection from the substantia nigra to the posterior striatum to see how this circuit influences behavior. Normal mice approach novel items cautiously and back away after sniffing at them, but the mice in Menegas’ study failed to back away. They stopped avoiding a port that gave an air puff to the face, and they didn’t behave like normal mice when Menegas dropped a strange or new object—say, a Lego brick—into their cage. Disrupting the nigral-posterior striatal circuit seemed to turn off their avoidance habit.

“These neurons reinforce avoidance the same way that canonical dopamine neurons reinforce approach,” Menegas explains. It’s a new role for dopamine, suggesting that there may be two different and distinct systems of reinforcement, led by the same neuromodulator in different parts of the striatum.

This research, and Graybiel’s discoveries on cost-benefit decision circuits, share clear parallels, though the precise links between the two phenomena are yet to be fully determined. Menegas plans to extend this line of research into social behavior and related disorders like autism in marmoset monkeys.

“Will wants to learn the methods that we use in our lab to work on marmosets,” Graybiel says. “I think that working together, this could become a wonderful story, because it would involve social interactions.”

“This is a very new angle, and it could really change our views of how the reward system works,” Feng says. “And we have very little understanding of social circuits so far, especially in higher organisms, so I think this would be very exciting. Whatever we learn, it’s going to be new.”

Human choices

Based on their preexisting work, Graybiel’s and Menegas’ projects are well-developed—but they are far from the only McGovern-based explorations into ways this brain region taps into our behaviors. Maiya Geddes, a visiting scientist in John Gabrieli’s lab, has recently published a paper exploring the little-known ways that aging affects the dopamine-based nigral-striatum-hippocampus learning and memory systems.

In Rebecca Saxe’s lab, postdoc Livia Tomova just kicked off a new pilot project using brain imaging to uncover dopamine-striatal circuitry behind social craving in humans and the urge to rejoin peers. “Could there be a craving response similar to hunger?” Tomova wonders. “No one has looked yet at the neural mechanisms of this.”

Graybiel also hopes to translate her findings into humans, beginning with collaborations at the Pizzagalli lab at McLean Hospital in Belmont. They are using fMRI to study whether patients
with anxiety and depression show some of the same dysfunctions in the cortico-striatal circuitry that she discovered in her macaques.

If she’s right about tapping into mood states and affect, it would be an expanded role for the striatum—and one with significant potential therapeutic benefits. “Affect state” colors many psychological functions and disorders, from memory and perception, to depression, chronic stress, obsessive-compulsive disorder, and PTSD.

For a region of the brain once dismissed as inconsequential, McGovern researchers have shown the basal ganglia to influence not only our choices but our state of mind—suggesting that this “primitive” brain region may actually be at the heart of the human experience.

Can the brain recover after paralysis?

Why is it that motor skills can be gained after paralysis but vision cannot recover in similar ways? – Ajay Puppala

Thank you so much for this very important question, Ajay. To answer, I asked two local experts in the field, Pawan Sinha who runs the vision research lab at MIT, and Xavier Guell, a postdoc in John Gabrieli’s lab at the McGovern Institute who also works in the ataxia unit at Massachusetts General Hospital.

“Simply stated, the prospects of improvement, whether in movement or in vision, depend on the cause of the impairment,” explains Sinha. “Often, the cause of paralysis is stroke, a reduction in blood supply to a localized part of the brain, resulting in tissue damage. Fortunately, the brain has some ability to rewire itself, allowing regions near the damaged one to take on some of the lost functionality. This rewiring manifests itself as improvements in movement abilities after an initial period of paralysis. However, if the paralysis is due to spinal-cord transection (as was the case following Christopher Reeve’s tragic injury in 1995), then prospects for improvement are diminished.”

“Turning to the domain of sight,” continues Sinha, “stroke can indeed cause vision loss. As with movement control, these losses can dissipate over time as the cortex reorganizes via rewiring. However, if the blindness is due to optic nerve transection, then the condition is likely to be permanent. It is also worth noting that many cases of blindness are due to problems in the eye itself. These include corneal opacities, cataracts and retinal damage. Some of these conditions (corneal opacities and cataracts) are eminently treatable while others (typically those associated with the retina and optic nerve) still pose challenges to medical science.”

You might be wondering what makes lesions in the eye and spinal cord hard to overcome. Some systems (the blood, skin, and intestine are good examples) contain a continuously active stem cell population in adults. These cells can divide and replenish lost cells in damaged regions. While “adult-born” neurons can arise, elements of a degenerating or damaged retina, optic nerve, or spinal cord cannot be replaced as easily as lost skin cells can. There is currently a very active effort in the stem cell community to understand how we might be able to replace neurons in all cases of neuronal degeneration and injury using stem cell technologies. To further explore lesions that specifically affect the brain, and how these might lead to a different outcome in the two systems, I turned to Xavier Guell.

“It might be true that visual deficits in the population are less likely to recover when compared to motor deficits in the population. However, the scientific literature seems to indicate that our body has a similar capacity to recover from both motor and visual injuries,” explains Guell. “The reason for this apparent contradiction is that visual lesions are usually not in the cerebral cortex (but instead in other places such as the retina or the lens), while motor lesions in the cerebral cortex are more common. In fact, a large proportion of people who suffer a stroke will have damage in the motor aspects of the cerebral cortex, but no damage in the visual aspects of the cerebral cortex. Crucially, recovery of neurological functions is usually seen when lesions are in the cerebral cortex or in other parts of the cerebrum or cerebellum. In this way, while our body has a similar capacity to recover from both motor and visual injuries, motor injuries are more frequently located in the parts of our body that have a better capacity to regain function (specifically, the cerebral cortex).”

In short, some cells cannot be replaced in either system, but stem cell research provides hope there. That said, there is remarkable plasticity in the brain, so when the lesion is located there, we can see recovery with training.


Charting the cerebellum

Small and tucked away under the cerebral hemispheres toward the back of the brain, the human cerebellum is still immediately obvious due to its distinct structure. From Galen’s second-century anatomical description to Cajal’s systematic analysis of its projections, the cerebellum has long drawn the eyes of researchers studying the brain. Two parallel studies from MIT’s McGovern Institute have recently converged to support an unexpectedly complex level of non-motor cerebellar organization, one that would not have been predicted from the known regions of motor representation.

Historically, the cerebellum has been considered primarily a structure for motor control and coordination. Think of this view as the cerebellum being the chain on a bicycle, registering what is happening up front in the cortex and relaying the information so that the back wheel moves at a coordinated pace. This simple view has been questioned as cerebellar circuits have been traced to the basal ganglia and to neocortical regions via the thalamus. This new view suggests the cerebellum is a hub in a complex network, with potentially higher and non-motor functions including cognition and reward-based learning.

A collaboration between the labs of John Gabrieli, an Investigator at the McGovern Institute for Brain Research, and Jeremy Schmahmann, of the Ataxia Unit at Massachusetts General Hospital and Harvard Medical School, has now used functional brain imaging to give new insight into the cerebellar organization of non-motor roles, including working memory, language, and social and emotional processing. In a complementary paper, a collaboration between Sheeba Anteraper of MIT’s Martinos Imaging Center and Gagan Joshi of the Alan and Lorraine Bressler Clinical and Research Program at Massachusetts General Hospital has found changes in connectivity that occur in the cerebellum in autism spectrum disorder (ASD).

A more complex map of the cerebellum

Published in NeuroImage, and featured on the journal’s cover, the first study was led by Xavier Guell, a postdoc in the Gabrieli and Schmahmann labs. The authors used fMRI data from the Human Connectome Project to examine activity in different regions of the cerebellum during specific tasks and at rest. The tasks extended beyond motor activity to functions recently linked to the cerebellum, including working memory, language, and social and emotional processing. As expected, the authors saw that two regions assigned by other methods to motor activity were clearly modulated during motor tasks.

“Neuroscientists in the 1940s and 1950s described a double representation of motor function in the cerebellum, meaning that two regions in each hemisphere of the cerebellum are engaged in motor control,” explains Guell. “That there are two areas of motor representation in the cerebellum remains one of the most well-established facts of cerebellar macroscale physiology.”

When it came to assigning non-motor tasks, to their surprise, the authors identified three representations that localized to different regions of the cerebellum, pointing to an unexpectedly complex level of organization.

Guell explains the implications further. “Our study supports the intriguing idea that while two parts of the cerebellum are simultaneously engaged in motor tasks, three other parts of the cerebellum are simultaneously engaged in non-motor tasks. Our predecessors coined the term ‘double motor representation,’ and we may now have to add ‘triple non-motor representation’ to the dictionary of cerebellar neuroscience.”

A serendipitous discussion

What happened next illustrates how independent strands of research can meet and reinforce each other to give a fuller scientific picture: a discussion of data between Xavier Guell and Sheeba Arnold Anteraper of the McGovern Institute for Brain Research culminated in a paper led by Anteraper.

The findings by Guell and colleagues made the cover of NeuroImage.

Anteraper and colleagues examined brain images from high-functioning ASD patients and looked for statistically significant patterns, letting the data speak rather than focusing on specific ‘candidate’ regions of the brain. To her surprise, networks related to language were highlighted, as well as the cerebellum, regions that had not been linked to ASD and that seemed at first sight not to be relevant. Scientists interested in language processing immediately pointed her to Guell.

“When I went to meet him,” says Anteraper, “I saw immediately that he had the same research paper that I’d been reading on his desk. As soon as I showed him my results, the data fell into place and made sense.”

After talking with Guell, they realized that the same non-motor cerebellar representations he had seen were independently being highlighted by the ASD study.

“When we study brain function in neurological or psychiatric diseases, we sometimes have a very clear notion of what parts of the brain we should study,” explained Guell. “We instead asked which parts of the brain have the most abnormal patterns of functional connectivity to other brain areas. This analysis gave us a simple, powerful result. Only the cerebellum survived our strict statistical thresholds.”

The authors found decreased connectivity within the cerebellum in the ASD group, but also decreased strength in connectivity between the cerebellum and the social, emotional and language processing regions in the cerebral cortex.

“Our analysis showed that regions of disrupted functional connectivity mapped to each of the three areas of non-motor representation in the cerebellum. It thus seems that the notion of two motor and three non-motor areas of representation in the cerebellum is not only important for understanding how the cerebellum works, but also important for understanding how the cerebellum becomes dysfunctional in neurology and psychiatry.”

Guell says that many questions remain to be answered. Are these abnormalities in the cerebellum reproducible in other datasets of patients diagnosed with ASD? Why is cerebellar function (and dysfunction) organized in a pattern of multiple representations? What is different between each of these representations, and what is their distinct contribution to diseases such as ASD? Future work is now aimed at unraveling these questions.

The Learning Brain

“There’s a slogan in education,” says McGovern Investigator John Gabrieli. “The first three years are learning to read, and after that you read to learn.”

For John Gabrieli, learning to read represents one of the most important milestones in a child’s life. Except, that is, when a child can’t. Children who cannot learn to read adequately by the first grade have a 90 percent chance of still reading poorly in the fourth grade, and 75 percent odds of struggling in high school. For the estimated 10 percent of schoolchildren with a reading disability, that struggle often comes with a host of other social and emotional challenges: anxiety, damaged self-esteem, increased risk for poverty and eventually, encounters with the criminal justice system.

Most reading interventions focus on classical dyslexia, which is essentially a coding problem—trouble moving letters into sound patterns in the brain. But other factors, such as inadequate vocabulary and lack of practice opportunities, hinder reading too. The diagnosis can be subjective, and for those who are diagnosed, the standard treatments help only some students. “Every teacher knows half to two-thirds have a good response, the other third don’t,” Gabrieli says. “It’s a mystery. And amazingly there’s been almost no progress on that.”

For the last two decades, Gabrieli has sought to unravel the neuroscience behind learning and reading disabilities and, ultimately, convert that understanding into new and better education
interventions—a sort of translational medicine for the classroom.

The Home Effect

In 2011, when Julia Leonard was a research assistant in Gabrieli’s lab, she planned to go into pediatrics. But she became drawn to the lab’s education projects and decided to join the lab as a graduate student to learn more. By 2015, she had helped coauthor a landmark study with postdoc Allyson Mackey that sought neural markers for the academic “achievement gap,” which separates higher socioeconomic status (SES) children from their disadvantaged peers. It was the first study to make a connection between SES-linked differences in brain structure and educational markers. Specifically, they found that children from wealthier backgrounds had thicker cortical brain regions, which correlated with better academic achievement.

“Being a doctor is a really awesome and powerful career,” she says. “But I was more curious about the research that could cause bigger changes in children’s lives.”

Leonard collaborated with Rachel Romeo, another graduate student in the Gabrieli lab who wanted to understand the powerful effect of SES on the developing brain. Romeo had a distinctive background in speech pathology and literacy, where she’d observed wealthier students progressing more quickly compared to their disadvantaged peers.

Their research is revealing a fascinating picture. In a 2017 study, Romeo compared how reading-disabled children from low and high SES backgrounds fared after an intensive summer reading intervention. Low SES children in the intervention improved most in their reading, and MRI scans revealed their brains also underwent greater structural changes in response to the intervention. Higher SES children did not appear to change much, either in skill or brain structure.

“In the few studies that have looked at SES effects on treatment outcomes,” Romeo says, “the research suggests that higher SES kids would show the most improvement. We were surprised to find that this wasn’t true.” She suspects that the midsummer timing of the intervention may account for this. Lower SES kids’ performance often suffers most during a “summer slump,” so these children would have the greatest potential to improve from interventions at this time.

However, in another study this year, Leonard uncovered unique brain differences in lower-SES children. Only among lower-SES children was better reasoning ability associated with thicker
cortex in a key part of the brain. Same behavior, different neural signatures.

“So this becomes a really interesting basic science question,” Leonard says. “Does the brain support cognition the same way across everyone, or does it differ based on how you grow up?”

Not a One-Size-Fits-All

Critics of such “educational neuroscience” have highlighted the lack of useful interventions produced by this research. Gabrieli agrees that so far, little has emerged. “The painful thing is the slowness of this work. It’s mind-boggling,” Gabrieli admits. Every intervention requires all the usual human research requirements, plus coordinating with schools, parents, teachers, and so on. “It’s a huge process to do even the smallest intervention,” he explains. Partly because of that, the field is still relatively new.

But he disagrees with the idea that nothing will come from this research. Gabrieli’s lab previously identified neural markers in children who will go on to develop reading disabilities. These markers could even predict who would or would not respond to standard treatments that focus on phonetic letter-sound coding.

Romeo and Leonard’s work suggests that varied etiologies underlie reading disabilities, which may be the key. “For so long people have thought that reading disorders were just a unitary construct: kids are bad at reading, so let’s fix that with a one-size-fits-all treatment,” Romeo says.

Such findings may ultimately help resource-strapped schools target existing phonetic training, rather than enrolling all struggling readers in the same program only to see some still fail.

Think Spaces

At the Oliver Hazard Perry School, a public K-8 school located on the South Boston waterfront, teachers like Colleen Labbe have begun to independently navigate similar problems as they try
to reach their own struggling students.

“A lot of times we look at assessments and put students in intervention groups like phonics,” Labbe says. “But it’s important to also ask what is happening for these students on their way to school and at home.”

For Labbe and Perry Principal Geoffrey Rose, brain science has proven transformative. They’ve embraced literature on neuroplasticity—the idea that brains can change if teachers find the right combination of intervention and circumstances, like the low-SES students who benefited in Romeo and Leonard’s study.

“A big myth is that the brain can’t grow and change, and if you can’t reach that student, you pass them off,” Labbe says.

The science has also been empowering to her students, validating their own powers of self-change. “I tell the kids, we’re going to build the goop!” she says, referring to the brain’s ability to make new connections.

“All kids can learn,” Rose agrees. “But the flip of that is, can all kids do school?” His job, he says, is to make sure they can.

The classrooms at Perry are a mix of students from different cultures and socioeconomic backgrounds, so he and Labbe have focused on helping teachers find ways to connect with these children and help them manage their stresses and thus be ready to learn. Teachers here are armed with “scaffolds”—digestible neuro- and cognitive science aids culled from Rose’s postdoctoral studies at Boston College’s Professional School Administrator Program for school leaders. These encourage teachers to be more aware of cultural differences and tendencies in themselves and their students, to better connect.

There are also “Think Spaces” tucked into classroom corners. “Take a deep breath and be calm,” read posters at these soothing stations, which are equipped with de-stressing tools, like squeezable balls, play-dough, and meditation-inspiring sparkle wands. It sounds trivial, yet studies have shown that poverty-linked stressors like food and home insecurity take a toll on emotion and memory-linked brain areas like the amygdala and hippocampus.

In fact, a new study by Clemens Bauer, a postdoc in Gabrieli’s lab, argues that mindfulness training can help calm amygdala hyperactivity, help lower self-perceived stress, and boost attention. His study was conducted with children enrolled in a Boston charter school.

Taking these combined approaches, Labbe says, she’s seen one of her students rise from struggling at the lowest levels of instruction, to thriving by year end. Labbe’s focus on understanding the girl’s stressors, her family environment, and what social and emotional support she really needed was key. “Now she knows she can do it,” Labbe says.

Rose and Labbe only wish they could better bridge the gap between educators like themselves and brain scientists like Gabrieli. To help forge these connections, Rose recently visited Gabrieli’s lab and looks forward to future collaborations. Brain research will provide critical insights into teaching strategy, he says, but the gap is still wide.

From Lab to Classroom

“I’m hugely impressed by principals and teachers who are passionately interested in understanding the brain,” Gabrieli says. Fortunately, new efforts are bridging educators and scientists.

This March, Gabrieli and the MIT Integrated Learning Initiative—MITili, which he also directs—announced a $30 million grant from the Chan Zuckerberg Initiative for a collaboration between MIT, the Harvard Graduate School of Education, and Florida State University.

The grant aims to translate some of Gabrieli’s work into more classrooms. Specifically, he hopes to produce better diagnostics that can identify children at risk for dyslexia and other learning
disabilities before they even learn to read.

He also hopes to provide rudimentary diagnostics that identify the source of struggle, be it classic dyslexia, lack of home support, stress, or maybe a combination of factors. That, in turn, could guide treatment—standard phonetic care for some children, versus alternatives: social support akin to Labbe’s efforts, reading practice, or maybe just vocabulary-boosting conversation time with adults.

“We want to get every kid to be an adequate reader by the end of the third grade,” Gabrieli says. “That’s the ultimate goal for me: to help all children become learners.”

How music lessons can improve language skills

Many studies have shown that musical training can enhance language skills. However, it was unknown whether music lessons improve general cognitive ability, leading to better language proficiency, or if the effect of music is more specific to language processing.

A new study from MIT has found that piano lessons have a very specific effect on kindergartners’ ability to distinguish different pitches, which translates into an improvement in discriminating between spoken words. However, the piano lessons did not appear to confer any benefit for overall cognitive ability, as measured by IQ, attention span, and working memory.

“The children didn’t differ in the more broad cognitive measures, but they did show some improvements in word discrimination, particularly for consonants. The piano group showed the best improvement there,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research and the senior author of the paper.

The study, performed in Beijing, suggests that musical training is at least as beneficial as extra reading lessons in improving language skills, and possibly more so. The school where the study was performed has continued to offer piano lessons to students, and the researchers hope their findings could encourage other schools to keep or enhance their music offerings.

Yun Nan, an associate professor at Beijing Normal University, is the lead author of the study, which appears in the Proceedings of the National Academy of Sciences the week of June 25.

Other authors include Li Liu, Hua Shu, and Qi Dong, all of Beijing Normal University; Eveline Geiser, a former MIT research scientist; Chen-Chen Gong, an MIT research associate; and John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

Benefits of music

Previous studies have shown that on average, musicians perform better than nonmusicians on tasks such as reading comprehension, distinguishing speech from background noise, and rapid auditory processing. However, most of these studies have been done by asking people about their past musical training. The MIT researchers wanted to perform a more controlled study in which they could randomly assign children to receive music lessons or not, and then measure the effects.

They decided to perform the study at a school in Beijing, along with researchers from the IDG/McGovern Institute at Beijing Normal University, in part because education officials there were interested in studying the value of music education versus additional reading instruction.

“If children who received music training did as well as or better than children who received additional academic instruction, that could be a justification for why schools might want to continue to fund music,” Desimone says.

The 74 children participating in the study were divided into three groups: one that received 45-minute piano lessons three times a week; one that received extra reading instruction for the same period of time; and one that received neither intervention. All children were 4 or 5 years old and spoke Mandarin as their native language.

After six months, the researchers tested the children on their ability to discriminate words based on differences in vowels, consonants, or tone (many Mandarin words differ only in tone). Better word discrimination usually corresponds with better phonological awareness — the awareness of the sound structure of words, which is a key component of learning to read.

Children who had piano lessons showed a significant advantage over children in the extra reading group in discriminating between words that differ by one consonant. Children in both the piano group and extra reading group performed better than children who received neither intervention when it came to discriminating words based on vowel differences.

The researchers also used electroencephalography (EEG) to measure brain activity and found that children in the piano group had stronger responses than the other children when they listened to a series of tones of different pitch. This suggests that a greater sensitivity to pitch differences is what helped the children who took piano lessons to better distinguish different words, Desimone says.

“That’s a big thing for kids in learning language: being able to hear the differences between words,” he says. “They really did benefit from that.”

In tests of IQ, attention, and working memory, the researchers did not find any significant differences among the three groups of children, suggesting that the piano lessons did not confer any improvement on overall cognitive function.

Aniruddh Patel, a professor of psychology at Tufts University, says the findings also address the important question of whether purely instrumental musical training can enhance speech processing.

“This study answers the question in the affirmative, with an elegant design that directly compares the effect of music and language instruction on young children. The work specifically relates behavioral improvements in speech perception to the neural impact of musical training, which has both theoretical and real-world significance,” says Patel, who was not involved in the research.

Educational payoff

Desimone says he hopes the findings will help to convince education officials who are considering abandoning music classes in schools not to do so.

“There are positive benefits to piano education in young kids, and it looks like for recognizing differences between sounds including speech sounds, it’s better than extra reading. That means schools could invest in music and there will be generalization to speech sounds,” Desimone says. “It’s not worse than giving extra reading to the kids, which is probably what many schools are tempted to do — get rid of the arts education and just have more reading.”

Desimone now hopes to delve further into the neurological changes caused by music training. One way to do that is to perform EEG tests before and after a single intense music lesson to see how the brain’s activity has been altered.

The research was funded by the National Natural Science Foundation of China, the Beijing Municipal Science and Technology Commission, the Interdiscipline Research Funds of Beijing Normal University, and the Fundamental Research Funds for the Central Universities.

Yanny or Laurel?

“Yanny” or “Laurel?” Discussion around this auditory version of “The Dress” has divided the internet this week.

In this video, brain and cognitive science PhD students Dana Boebinger and Kevin Sitek, both members of the McGovern Institute, unpack the science — and settle the debate. The upshot? Our brain is faced with a myriad of sensory cues that it must process and make sense of simultaneously. Hearing is no exception, and two brains can sometimes “translate” soundwaves in very different ways.

The quest to understand intelligence

McGovern investigators study intelligence to answer a practical question for both educators and computer scientists. Can intelligence be improved?

A nine-year-old girl, a contestant on a game show, is standing on stage. On a screen in front of her, there appears a twelve-digit number followed by a six-digit number. Her challenge is to divide the two numbers as fast as possible.

The timer begins. She is racing against three other contestants, two from China and one, like her, from Japan. Whoever answers first wins, but only if the answer is correct.

The show, called “The Brain,” is wildly popular in China, and attracts players who display their memory and concentration skills much the way American athletes demonstrate their physical skills in shows like “American Ninja Warrior.” After a few seconds, the girl slams the timer and gives the correct answer, faster than most people could have entered the numbers on a calculator.

The camera pans to a team of expert judges, including McGovern Director Robert Desimone, who had arrived in Nanjing just a few hours earlier. Desimone shakes his head in disbelief. The task appears to make extraordinary demands on working memory and rapid processing, but the girl explains that she solves it by visualizing an abacus in her mind—something she has practiced intensively.

The show raises an age-old question: What is intelligence, exactly?

The study of intelligence has a long and sometimes contentious history, but recently, neuroscientists have begun to dissect intelligence to understand the neural roots of the distinct cognitive skills that contribute to it. One key question is whether these skills can be improved individually with training and, if so, whether those improvements translate into overall intelligence gains. This research has practical implications for multiple domains, from brain science to education to artificial intelligence.

“The problem of intelligence is one of the great problems in science,” says Tomaso Poggio, a McGovern investigator and an expert on machine learning. “If we make progress in understanding intelligence, and if that helps us make progress in making ourselves smarter or in making machines that help us think better, we can solve all other problems more easily.”

Brain training 101

Many studies have reported positive results from brain training, and there is now a thriving industry devoted to selling tools and games such as Lumosity and BrainHQ. Yet the science behind brain training to improve intelligence remains controversial.

A case in point is the “n-back” working memory task, in which subjects are presented with a rapid sequence of letters or visual patterns, and must report whether the current item matches the last, last-but-one, last-but-two, and so on. The field of brain training received a boost in 2008 when a widely discussed study claimed that a few weeks of training on a challenging version of this task could boost fluid intelligence, the ability to solve novel problems. The report generated excitement and optimism when it first appeared, but several subsequent attempts to reproduce the findings have been unsuccessful.
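For readers unfamiliar with the task, the matching rule is simple to state in code. The sketch below is only a minimal illustration of the n-back logic described above, not the software used in the 2008 study or in any replication; the function name and the example letter stream are invented for this example.

```python
# Minimal sketch of the n-back matching rule (illustrative only).
# A trial is a "target" when the current item matches the item
# presented n trials earlier.

def nback_targets(sequence, n):
    """Return one boolean per trial, marking n-back matches."""
    return [i >= n and sequence[i] == sequence[i - n]
            for i in range(len(sequence))]

# Example: a 2-back run over a short letter stream.
stream = list("TKTLKLAA")
print(nback_targets(stream, n=2))
# [False, False, True, False, False, True, False, False]
```

Training versions of the task typically adjust n between blocks, keeping it challenging as performance improves.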

Among those unable to confirm the result was McGovern Investigator John Gabrieli, who recruited 60 young adults and trained them forty minutes a day for four weeks on an n-back task similar to that of the original study.

Six months later, Gabrieli re-evaluated the participants. “They got amazingly better at the difficult task they practiced. We have great imaging data showing changes in brain activation as they performed the task from before to after,” says Gabrieli. “And yet, that didn’t help them do better on any other cognitive abilities we could measure, and we measured a lot of things.”

The results don’t completely rule out the value of n-back training, says Gabrieli. It may be more effective in children, or in populations with a lower average intelligence than the individuals (mostly college students) who were recruited for Gabrieli’s study. The prospect that training might help disadvantaged individuals holds strong appeal. “If you could raise the cognitive abilities of a child with autism, or a child who is struggling in school, the data tells us that their life would be a step better,” says Gabrieli. “It’s something you would wish for people, especially for those where something is holding them back from the expression of their other abilities.”

Music for the brain

The concept of early intervention is now being tested by Desimone, who has teamed with Chinese colleagues at the recently-established IDG/McGovern Institute at Beijing Normal University to explore the effect of music training on the cognitive abilities of young children.

The researchers recruited 100 children at a neighborhood kindergarten in Beijing, and provided them with a semester-long intervention, randomly assigning children either to music training or (as a control) to additional reading instruction. Unlike the so-called “Mozart Effect,” a scientifically unsubstantiated claim that passive listening to music increases intelligence, the new study requires active learning through daily practice. Several smaller studies have reported cognitive benefits from music training, and Desimone finds the idea plausible given that musical cognition involves several mental functions that are also implicated in intelligence. The study is nearly complete, and results are expected to emerge within a few months. “We’re also collecting data on brain activity, so if we see improvements in the kids who had music training, we’ll also be able to ask about its neural basis,” says Desimone. The results may also have immediate practical implications, since the study design reflects decisions that schools must make in determining how children spend their time. “Many schools are deciding to cut their arts and music programs to make room for more instruction in academic core subjects, so our study is relevant to real questions schools are facing.”

Intelligent classrooms

In another school-based study, Gabrieli’s group recently raised questions about the benefits of “teaching to the test.” In this study, postdoc Amy Finn evaluated over 1300 eighth-graders in the Boston public schools, some enrolled at traditional schools and others at charter schools that emphasize standardized test score improvements. The researchers wanted to find out whether raised test scores were accompanied by improvement of cognitive skills that are linked to intelligence. (Charter school students are selected by lottery, meaning that any results are unlikely to reflect preexisting differences between the two groups of students.) As expected, charter school students showed larger improvements in test scores (relative to their scores from 4 years earlier). But when Finn and her colleagues measured key aspects of intelligence, such as working memory, processing speed, and reasoning, they found no difference between the students who enrolled in charter schools and those who did not. “You can look at these skills as the building blocks of cognition. They are useful for reasoning in a novel situation, an ability that is really important for learning,” says Finn. “It’s surprising that school practices that increase achievement don’t also increase these building blocks.”

Gabrieli remains optimistic that it will eventually be possible to design scientifically based interventions that can raise children’s abilities. Allyson Mackey, a postdoc in his lab, is studying the use of games to exercise the cognitive skills in a classroom setting. As a graduate student at University of California, Berkeley, Mackey had studied the effects of games such as “Chocolate Fix,” in which players match shapes and flavors, represented by color, to positions in a grid based on hints, such as, “the upper left position is strawberry.”

These games gave children practice at thinking through and solving novel problems, and at the end of Mackey’s study, the students—from second through fourth grades—showed improved measures of skills associated with intelligence. “Our results suggest that these cognitive skills are specifically malleable, although we don’t yet know what the active ingredients were in this program,” says Mackey, who speaks of the interventions as if they were drugs, with dosages, efficacies and potentially synergistic combinations to be explored. Mackey is now working to identify the most promising interventions—those that boost cognitive abilities, work well in the classroom, and are engaging for kids—to try in Boston charter schools. “It’s just the beginning of a three-year process to methodically test interventions to see if they work,” she says.

Brain training…for machines

While Desimone, Gabrieli and their colleagues look for ways to raise human intelligence, Poggio, who directs the MIT-based Center for Brains, Minds and Machines, is trying to endow computers with more human-like intelligence. Computers can already match human performance on some specific tasks such as chess. Programs such as Apple’s “Siri” can mimic human speech interpretation, not perfectly but well enough to be useful. Computer vision programs are approaching human performance at rapid object recognition, and one such system, developed by one of Poggio’s former postdocs, is now being used to assist car drivers. “The last decade has been pretty magical for intelligent computer systems,” says Poggio.

Like children, these intelligent systems learn from past experience. But compared to humans or other animals, machines tend to be very slow learners. For example, the visual system for automobiles was trained by presenting it with millions of images—traffic light, pedestrian, and so on—that had already been labeled by humans. “You would never present so many examples to a child,” says Poggio. “One of our big challenges is to understand how to make algorithms in computers learn with many fewer examples, to make them learn more like children do.”

To accomplish this and other goals of machine intelligence, Poggio suspects that the work being done by Desimone, Gabrieli and others to understand the neural basis of intelligence will be critical. But he is not expecting any single breakthrough that will make everything fall into place. “A century ago,” he says, “scientists pondered the problem of life, as if ‘life’—what we now call biology—were just one problem. The science of intelligence is like biology. It’s a lot of problems, and a lot of breakthroughs will have to come before a machine appears that is as intelligent as we are.”

Back-and-forth exchanges boost children’s brain response to language

A landmark 1995 study found that children from higher-income families hear about 30 million more words during their first three years of life than children from lower-income families. This “30-million-word gap” correlates with significant differences in tests of vocabulary, language development, and reading comprehension.

MIT cognitive scientists have now found that conversation between an adult and a child appears to change the child’s brain, and that this back-and-forth conversation is actually more critical to language development than the word gap. In a study of children between the ages of 4 and 6, they found that differences in the number of “conversational turns” accounted for a large portion of the differences in brain physiology and language skills that they found among the children. This finding applied to children regardless of parental income or education.

The findings suggest that parents can have considerable influence over their children’s language and brain development by simply engaging them in conversation, the researchers say.

“The important thing is not just to talk to your child, but to talk with your child. It’s not just about dumping language into your child’s brain, but to actually carry on a conversation with them,” says Rachel Romeo, a graduate student at Harvard and MIT and the lead author of the paper, which appears in the Feb. 14 online edition of Psychological Science.

Using functional magnetic resonance imaging (fMRI), the researchers identified differences in the brain’s response to language that correlated with the number of conversational turns. In children who experienced more conversation, Broca’s area, a part of the brain involved in speech production and language processing, was much more active while they listened to stories. This brain activation then predicted children’s scores on language assessments, fully explaining the income-related differences in children’s language skills.

“The really novel thing about our paper is that it provides the first evidence that family conversation at home is associated with brain development in children. It’s almost magical how parental conversation appears to influence the biological growth of the brain,” says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Beyond the word gap

Before this study, little was known about how the “word gap” might translate into differences in the brain. The MIT team set out to find these differences by comparing the brain scans of children from different socioeconomic backgrounds.

As part of the study, the researchers used a system called Language Environment Analysis (LENA) to record every word spoken or heard by each child. Parents who agreed to have their children participate in the study were told to have their children wear the recorder for two days, from the time they woke up until they went to bed.

The recordings were then analyzed by a computer program that yielded three measurements: the number of words spoken by the child, the number of words spoken to the child, and the number of times that the child and an adult took a “conversational turn” — a back-and-forth exchange initiated by either one.
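To make the turn count concrete, here is a minimal sketch of how such alternations could be tallied from a diarized transcript. The utterance format, the function names, and the five-second pause threshold are assumptions made for illustration; this is not the LENA analysis software or the pipeline used in the study.

```python
# Hypothetical sketch: counting adult-child "conversational turns" from a
# diarized transcript. The data format and the 5-second pause cutoff are
# illustrative assumptions, not taken from LENA or from the study.

from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str   # "adult" or "child"
    start: float   # onset, in seconds
    end: float     # offset, in seconds

def count_turns(utterances, max_pause=5.0):
    """Count speaker alternations separated by at most `max_pause` seconds."""
    turns = 0
    for prev, cur in zip(utterances, utterances[1:]):
        if cur.speaker != prev.speaker and cur.start - prev.end <= max_pause:
            turns += 1
    return turns

# Example: two alternations count; a long silence breaks the third.
day = [
    Utterance("adult", 0.0, 2.0),
    Utterance("child", 3.0, 4.0),    # turn 1
    Utterance("adult", 5.0, 6.0),    # turn 2
    Utterance("child", 20.0, 21.0),  # pause too long, not counted
]
print(count_turns(day))  # 2
```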

The researchers found that the number of conversational turns correlated strongly with the children’s scores on standardized tests of language skill, including vocabulary, grammar, and verbal reasoning. The number of conversational turns also correlated with more activity in Broca’s area, when the children listened to stories while inside an fMRI scanner.

These correlations were much stronger than those between the number of words heard and language scores, and between the number of words heard and activity in Broca’s area.

This result aligns with other recent findings, Romeo says, “but there’s still a popular notion that there’s this 30-million-word gap, and we need to dump words into these kids — just talk to them all day long, or maybe sit them in front of a TV that will talk to them. However, the brain data show that it really seems to be this interactive dialogue that is more strongly related to neural processing.”

The researchers believe interactive conversation gives children more of an opportunity to practice their communication skills, including the ability to understand what another person is trying to say and to respond in an appropriate way.

While children from higher-income families were exposed to more language on average, children from lower-income families who experienced a high number of conversational turns had language skills and Broca’s area brain activity similar to those of children who came from higher-income families.

“In our analysis, the conversational turn-taking seems like the thing that makes a difference, regardless of socioeconomic status. Such turn-taking occurs more often in families from a higher socioeconomic status, but children coming from families with lesser income or parental education showed the same benefits from conversational turn-taking,” Gabrieli says.

Taking action

The researchers hope their findings will encourage parents to engage their young children in more conversation. Although this study was done in children age 4 to 6, this type of turn-taking can also be done with much younger children, by making sounds back and forth or making faces, the researchers say.

“One of the things we’re excited about is that it feels like a relatively actionable thing because it’s specific. That doesn’t mean it’s easy for less educated families, under greater economic stress, to have more conversation with their child. But at the same time, it’s a targeted, specific action, and there may be ways to promote or encourage that,” Gabrieli says.

Roberta Golinkoff, a professor of education at the University of Delaware School of Education, says the new study presents an important finding that adds to the evidence that it’s not just the number of words children hear that is significant for their language development.

“You can talk to a child until you’re blue in the face, but if you’re not engaging with the child and having a conversational duet about what the child is interested in, you’re not going to give the child the language processing skills that they need,” says Golinkoff, who was not involved in the study. “If you can get the child to participate, not just listen, that will allow the child to have a better language outcome.”

The MIT researchers now hope to study the effects of possible interventions that incorporate more conversation into young children’s lives. These could include technological assistance, such as computer programs that can converse or electronic reminders to parents to engage their children in conversation.

The research was funded by the Walton Family Foundation, the National Institute of Child Health and Human Development, a Harvard Mind Brain Behavior Grant, and a gift from David Pun Chan.

Socioeconomic background linked to reading improvement

About 20 percent of children in the United States have difficulty learning to read, and educators have devised a variety of interventions to try to help them. Not every program helps every student, however, in part because the origins of their struggles are not identical.

MIT neuroscientist John Gabrieli is trying to identify factors that may help to predict individual children’s responses to different types of reading interventions. As part of that effort, he recently found that children from lower-income families responded much better to a summer reading program than children from higher socioeconomic backgrounds.

Using magnetic resonance imaging (MRI), the research team also found anatomical changes in the brains of children whose reading abilities improved — in particular, a thickening of the cortex in parts of the brain known to be involved in reading.

“If you just left these children [with reading difficulties] alone on the developmental path they’re on, they would have terrible troubles reading in school. We’re taking them on a neuroanatomical detour that seems to go with real gains in reading ability,” says Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Rachel Romeo, a graduate student in the Harvard-MIT Program in Health Sciences and Technology, and Joanna Christodoulou, an assistant professor of communication sciences and disorders at the Massachusetts General Hospital Institute of Health Professions, are the lead authors of the paper, which appears in the June 7 issue of the journal Cerebral Cortex.

Predicting improvement

In hopes of identifying factors that influence children’s responses to reading interventions, the MIT team set up two summer schools based on a program known as Lindamood-Bell. The researchers recruited students from a wide income range, although socioeconomic status was not the original focus of their study.

The Lindamood-Bell program focuses on helping students develop the sensory and cognitive processing necessary for reading, such as thinking about words as units of sound, and translating printed letters into word meanings.

Children participating in the study, who ranged from 6 to 9 years old, spent four hours a day, five days a week in the program, for six weeks. Before and after the program, their brains were scanned with MRI and they were given some commonly used tests of reading proficiency.

In tests taken before the program started, children from higher and lower socioeconomic status (SES) backgrounds fared equally poorly in most areas, with one exception: children from higher SES backgrounds had higher vocabulary scores, a pattern also seen in studies comparing nondyslexic readers from different SES backgrounds.

“There’s a strong trend in these studies that higher SES families tend to talk more with their kids and also use more complex and diverse language. That tends to be where the vocabulary correlation comes from,” Romeo says.

The researchers also found differences in brain anatomy before the reading program started. Children from higher socioeconomic backgrounds had thicker cortex in a part of the brain known as Broca’s area, which is necessary for language production and comprehension. These anatomical differences could account for the difference in vocabulary scores between the two groups.

Based on a limited number of previous studies, the researchers hypothesized that the reading program would have more of an impact on the students from higher socioeconomic backgrounds. But in fact, they found the opposite. About half of the students improved their scores, while the other half worsened or stayed the same. When analyzing the data for possible explanations, family income level was the one factor that proved significant.

“Socioeconomic status just showed up as the piece that was most predictive of treatment response,” Romeo says.
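
One way to picture the analysis described here is as a regression of each child’s score change on several candidate predictors, asking which carries the most weight. The sketch below is a hypothetical illustration with simulated data; the predictors, sample size, and coefficients are assumptions, not the study’s actual variables or results.

```python
# Hypothetical sketch of the kind of analysis described above: regress each
# child's change in reading score on several candidate predictors and ask
# which one carries predictive weight. All data here are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 40
df = pd.DataFrame({
    "income": rng.normal(60, 25, n),            # family income in $1,000s (made up)
    "age": rng.uniform(6, 9, n),                # age in years
    "baseline_reading": rng.normal(85, 10, n),  # pre-intervention reading score
})
# Simulate the reported pattern: lower income -> larger improvement.
df["score_change"] = -0.2 * df["income"] + rng.normal(0, 4, n)

model = smf.ols("score_change ~ income + age + baseline_reading", data=df).fit()
print(model.params)    # estimated coefficients
print(model.pvalues)   # which predictors reach significance
```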

The same children whose reading scores improved also displayed changes in their brain anatomy. Specifically, the researchers found that they had a thickening of the cortex in a part of the brain known as the temporal occipital region, which comprises a large network of structures involved in reading.

“Mix of causes”

The researchers believe their results may have differed from previous studies of reading interventions in low-SES students because their program was run during the summer, rather than during the school year.

“Summer is when socioeconomic status takes its biggest toll. Low SES kids typically have less academic content in their summer activities compared to high SES, and that results in a slump in their skills,” Romeo says. “This may have been particularly beneficial for them because it may have been out of the realm of their typical summer.”

The researchers also hypothesize that reading difficulties may arise in slightly different ways among children of different SES backgrounds.

“There could be a different mix of causes,” Gabrieli says. “Reading is a complicated skill, so there could be a number of different factors that would make you do better or do worse. It could be that those factors are a little bit different in children with more enriched or less enriched environments.”

The researchers are hoping to identify more precisely the factors related to socioeconomic status, other environmental factors, or genetic components that could predict which types of reading interventions will be successful for individual students.

“In medicine, people call it personalized medicine: this idea that some people will really benefit from one intervention and not so much from another,” Gabrieli says. “We’re interested in understanding the match between the student and the kind of educational support that would be helpful for that particular student.”

The research was funded by the Ellison Medical Foundation, the Halis Family Foundation, Lindamood-Bell Learning Processes, and the National Institutes of Health.

Rethinking mental illness treatment

McGovern researchers are finding neural markers that could help improve treatment for psychiatric patients.

Ten years ago, Jim and Pat Poitras committed $20 million to the McGovern Institute to establish the Poitras Center for Affective Disorders Research. The Poitras family had been longtime supporters of MIT, and because they had seen mental illness in their own family, they decided to support an ambitious new program at the McGovern Institute, with the goal of understanding the fundamental biological basis of depression, bipolar disorder, schizophrenia and other major psychiatric disorders.

The gift came at an opportune time, as the field was entering a new phase of discovery, with rapid advances in psychiatric genomics and brain imaging, and with the emergence of new technologies for genome editing and for the development of animal models. Over the past ten years, the Poitras Center has supported work in each of these areas, including Feng Zhang’s work on CRISPR-based genome editing, and Guoping Feng’s work on animal models for autism, schizophrenia and other psychiatric disorders.

This reflects a long-term strategy, says Robert Desimone, director of the McGovern Institute, who oversees the Poitras Center. “But we must not lose sight of the overall goal, which is to benefit human patients. Insights from animal models and genomic medicine have the potential to transform the treatments of the future, but we are also interested in the nearer term, and in what we can do right now.”

One area where technology can have a near-term impact is human brain imaging, and in collaboration with clinical researchers at McLean Hospital, Massachusetts General Hospital and other institutions, the Poitras Center has supported an ambitious program to bring human neuroimaging closer to the clinic.

Discovering psychiatry’s crystal ball

A fundamental problem in psychiatry is that there are no biological markers for diagnosing mental illness or for indicating how best to treat it. Treatment decisions are based entirely on symptoms, and doctors and their patients will typically try one treatment, then if it does not work, try another, and perhaps another. The success rates for the first treatments are often less than 50%, and finding what works for an individual patient often means a long and painful process of trial and error.

McGovern research scientist Susan Whitfield-Gabrieli and her colleagues are hoping to change this picture, with the help of brain imaging. Their findings suggest that brain scans can hold valuable information for psychiatrists and their patients. “We need a paradigm shift in how we use imaging. It can be used for more than research,” says Whitfield-Gabrieli, who is a member of McGovern Investigator John Gabrieli’s lab. “It would be a really big boost to be able to use it to personalize psychiatric medicine.”

One of Whitfield-Gabrieli’s goals is to find markers that can predict which treatments will work for which patients. Another is to find markers that can predict the likely risk of disease in the future, allowing doctors to intervene before symptoms first develop. All of these markers need further validation before they are ready for the clinic, but they have the potential to meet a dire need to improve treatment for psychiatric disease.

A brain at rest

For Whitfield-Gabrieli, who both collaborates with and is married to Gabrieli, that paradigm shift began when she started to study the resting brain using functional magnetic resonance imaging (fMRI). Most brain imaging studies require the subject to perform a mental task in the scanner, but these are time-consuming and often hard to replicate in a clinical setting. In contrast, resting state imaging requires no task. The subject simply lies in the scanner and lets the mind wander. The patterns of activity can reveal functional connections within the brain, and are reliably consistent from study to study.
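
As a rough illustration of what “functional connections” means in resting state fMRI, connectivity is often summarized by correlating the activity time series of different brain regions with one another. The snippet below sketches this with toy data; the region labels and numbers are placeholders, not the lab’s actual regions or processing pipeline.

```python
# Toy illustration of resting-state functional connectivity: correlate each
# region's fMRI time series with every other region's. Labels and data are
# placeholders, not the study's actual regions or preprocessing.
import numpy as np

rng = np.random.default_rng(2)
n_timepoints, n_regions = 300, 4
regions = ["mPFC", "PCC", "dlPFC", "V1"]          # illustrative region labels
timeseries = rng.normal(size=(n_timepoints, n_regions))
# Make two regions covary, as a stand-in for genuine functional coupling.
timeseries[:, 1] += 0.7 * timeseries[:, 0]

connectivity = np.corrcoef(timeseries.T)          # regions x regions correlations
for name, row in zip(regions, connectivity):
    print(name, np.round(row, 2))
```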

Whitfield-Gabrieli thought resting state scanning had the potential to help patients because it is simple and easy to perform.

“Even a 5-minute scan can contain useful information that could help people,” says Satrajit Ghosh, a principal research scientist in the Gabrieli lab who works closely with Whitfield-Gabrieli.

Whitfield-Gabrieli and her clinical collaborator Larry Seidman at Harvard Medical School decided to study resting state activity in patients with schizophrenia. They found a pattern of activity strikingly different from that of typical brains. The patients showed unusually strong activity in a set of interconnected brain regions known as the default mode network, which is typically activated during introspection. It is normally suppressed when a person attends to the outside world, but schizophrenia patients failed to show this suppression.

“The patient isn’t able to toggle between internal processing and external processing the way a typical individual can,” says Whitfield-Gabrieli, whose work is supported by the Poitras Center for Affective Disorders Research.

Since then, the team has observed similar disturbances in the default network in other disorders, including depression, anxiety, bipolar disorder, and ADHD. “We knew we were onto something interesting,” says Whitfield-Gabrieli. “But we kept coming back to the question: how can brain imaging help patients?”

fMRI on patients

Many imaging studies aim to understand the biological basis of disease and ultimately to guide the development of new drugs or other treatments. But this is a long-term goal, and Whitfield-Gabrieli wanted to find ways that brain imaging could have a more immediate impact. So she and Ghosh decided to use fMRI to look at differences among individual patients, and to focus on differences in how they responded to treatment.

“It gave us something objective to measure,” explains Ghosh. “Someone goes through a treatment, and they either get better or they don’t.” The project also had appeal for Ghosh because it was an opportunity for him to use his expertise in machine learning and other computational tools to build systems-level models of the brain.

For the first study, the team decided to focus on social anxiety disorder (SAD), which is typically treated with either prescription drugs or cognitive behavioral therapy (CBT). Both are moderately effective, but many patients do not respond to the first treatment they try.

The team began with a small study to test whether scans performed before the onset of treatment could predict who would respond best to it. Working with Stefan Hofmann, a clinical psychologist at Boston University, they scanned 38 SAD patients before the patients began a 12-week course of CBT. At the end of treatment, the patients were evaluated for clinical improvement, and the researchers examined the scans for patterns of activity that correlated with that improvement. The results were very encouraging: predictions based on scan data were fivefold better than existing methods based on symptom severity at the time of diagnosis.
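
A hedged sketch of this prediction framing: train a model on pre-treatment scan features to predict later clinical improvement, and compare it against a baseline that uses only symptom severity. Everything below is simulated; the feature set, model choice (ridge regression), and cross-validation scheme are illustrative assumptions rather than the study’s actual methods.

```python
# Simulated sketch: cross-validated prediction of clinical improvement from
# pre-treatment scan features, versus a symptom-severity-only baseline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(3)
n_patients, n_features = 38, 20
scan_features = rng.normal(size=(n_patients, n_features))  # e.g. connectivity values (simulated)
severity = rng.normal(size=(n_patients, 1))                # pre-treatment symptom score (simulated)
improvement = scan_features[:, :3].sum(axis=1) + rng.normal(0, 1, n_patients)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
r2_scan = cross_val_score(Ridge(), scan_features, improvement, cv=cv).mean()
r2_severity = cross_val_score(Ridge(), severity, improvement, cv=cv).mean()
print(f"scan-based model, mean R^2:     {r2_scan:.2f}")
print(f"severity-only model, mean R^2:  {r2_severity:.2f}")
```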

The researchers then turned to another condition, ADHD, which presents a similar clinical challenge, in that commonly used drugs—such as Adderall or Ritalin—work well, but not for everyone. So the McGovern team began a collaboration with psychiatrist Joseph Biederman, Chief of Clinical and Research Programs in Pediatric Psychopharmacology and Adult ADHD at Massachusetts General Hospital, on a similar study, looking for markers of treatment response.

The study is still ongoing, and it will be some time before results emerge, but the researchers are optimistic. “If we could predict who would respond to which treatment and avoid months of trial and error, it would be totally transformative for ADHD,” says Biederman.

Another goal is to predict in advance who is likely to develop a given disease in the future. The researchers have scanned children who have close relatives with schizophrenia or depression, and who are therefore at increased risk of developing these disorders themselves. Surprisingly, the children show patterns of resting state connectivity similar to those of patients.

“I was really intrigued by this,” says Whitfield-Gabrieli. “Even though these children are not sick, they have the same profile as adults who are.”

Whitfield-Gabrieli and Seidman are now expanding their study through a collaboration with clinical researchers at the Shanghai Mental Institute in China, who plan to image and then follow 225 people who are showing early risk signs for schizophrenia. They hope to find markers that predict who will develop the disease and who will not.

“While there are no drugs available to prevent schizophrenia, it may be possible to reduce the risk or severity of the disorder through CBT, or through interventions that reduce stress and improve sleep and well-being,” says Whitfield-Gabrieli. “One likely key to success is early identification of those at highest risk. If we could diagnose early, we could do early interventions and potentially prevent disorders.”

From association to prediction

The search for predictive markers represents a departure from traditional psychiatric imaging studies, in which a group of patients is compared with a control group of healthy subjects. Studies of this type can reveal average differences between the groups, which may provide clues to the underlying biology of the disease. But they don’t provide information about individual patients, and so they have not been incorporated into clinical practice.

The difference is critical for clinicians, says Biederman. “I treat individuals, not groups. To bring predictive scans to the clinic, we need to be sure the individual scan is informative for the person you are treating.”

To develop these predictions, Whitfield-Gabrieli and Ghosh must first use sophisticated computational methods such as ‘deep learning’ to identify patterns in their data and to build models that relate the patterns to the clinical outcomes. They must then show that these models can generalize beyond the original study population—for example, that predictions based on patients from Boston can be applied to patients from Shanghai. The eventual goal is a model that can analyze a previously unseen brain scan from any individual, and predict with high confidence whether that person will (for example) develop schizophrenia or respond successfully to a particular therapy.
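
One simple way to picture the generalization test described here: fit a model on data from one site and evaluate it, without retraining, on data from another. The sketch below uses simulated “Boston” and “Shanghai” cohorts with made-up features and labels; it is an illustration of cross-site validation, not the team’s actual model or data.

```python
# Illustrative cross-site generalization check: fit a classifier on one
# site's data and test it, without retraining, on another site's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)

def simulate_site(n_subjects, shift=0.0):
    """Return made-up connectivity features and outcome labels for one site."""
    X = rng.normal(loc=shift, size=(n_subjects, 30))
    y = (X[:, :5].sum(axis=1) + rng.normal(0, 1, n_subjects) > 0).astype(int)
    return X, y

X_boston, y_boston = simulate_site(200)
X_shanghai, y_shanghai = simulate_site(225, shift=0.1)   # mild site difference

clf = LogisticRegression(max_iter=1000).fit(X_boston, y_boston)
auc = roc_auc_score(y_shanghai, clf.predict_proba(X_shanghai)[:, 1])
print(f"AUC on the held-out site: {auc:.2f}")
```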

Achieving this will be challenging, because it will require scanning and following large numbers of subjects from diverse demographic groups—thousands of people, not just tens or hundreds as in most clinical studies. Collaborations with large hospitals, such as the one in Shanghai, can help. Whitfield-Gabrieli has also received funding to collect imaging, clinical, and behavioral data from over 200 adolescents with depression and anxiety, as part of the National Institutes of Health’s Human Connectome effort. These data, collected in collaboration with clinicians at McLean Hospital, MGH and Boston University, will be available not only for the Gabrieli team, but for researchers anywhere to analyze. This is important, because no one team or center can do it alone, says Ghosh. “Data must be collected by many and shared by all.”

The ultimate goal is to study as many patients as possible now so that the tools can help many more later. “Someday, a person will be able to go to a hospital, get a brain scan, charge it to their insurance, and know that it helped the doctor select the best treatment,” says Ghosh. “We’re still far away from that. But that is what we want to work towards.”