Four McGovern Investigators receive NIH BRAIN Initiative grants

In the human brain, 86 billion neurons form more than 100 trillion connections with other neurons at junctions called synapses. Scientists at the McGovern Institute are working with their collaborators to develop technologies to map these connections across the brain, from mice to humans.

Today, the National Institutes of Health (NIH) announced a new program to support research projects that have the potential to reveal an unprecedented and dynamic picture of the connected networks in the brain. Four of these NIH-funded research projects will take place in McGovern labs.

BRAIN Initiative

In 2013, the Obama administration announced the Brain Research Through Advancing Innovative Neurotechnologies® (BRAIN) Initiative, a public-private research effort to support the development and application of new technologies to understand brain function.

Today, the NIH announced the third large-scale project supported by the BRAIN Initiative, called BRAIN Initiative Connectivity Across Scales (BRAIN CONNECTS). The new project complements two previous large-scale projects, which together aim to transform neuroscience research by generating wiring diagrams that can span entire brains across multiple species. These detailed wiring diagrams can help uncover the logic of the brain’s neural code, leading to a better understanding of how this circuitry makes us who we are and how it could be rewired to treat brain diseases.

BRAIN CONNECTS at McGovern

The initial round of BRAIN CONNECTS awards will support researchers at more than 40 universities and research institutions across the globe with 11 grants totaling $150 million over five years. Four of these grants have been awarded to McGovern researchers Guoping Feng, Ila Fiete, Satra Ghosh, and Ian Wickersham, whose projects are outlined below:

BRAIN CONNECTS: Comprehensive regional projection map of marmoset with single axon and cell type resolution
Team: Guoping Feng (McGovern Institute, MIT), Partha Mitra (Cold Spring Harbor Laboratory), Xiao Wang (Broad Institute), Ian Wickersham (McGovern Institute, MIT)

Summary: This project will establish an integrated experimental-computational platform to create the first comprehensive brain-wide mesoscale connectivity map in a non-human primate (NHP), the common marmoset (Callithrix jacchus). It will do so by tracing axonal projections of RNA barcode-identified neurons brain-wide in the marmoset, utilizing a sequencing-based imaging method that also permits simultaneous transcriptomic cell typing of the identified neurons. This work will help bridge the gap between brain-wide mesoscale connectivity data available for the mouse from a decade of mapping efforts using modern techniques and the absence of comparable data in humans and NHPs.

BRAIN CONNECTS: A center for high-throughput integrative mouse connectomics
Team: Jeff Lichtman (Harvard University), Ila Fiete (McGovern Institute, MIT), Sebastian Seung (Princeton University), David Tank (Princeton University), Hongkui Zeng (Allen Institute), Viren Jain (Google), Greg Jeffries (Oxford University)

Summary: This project aims to produce a large-scale synapse-level brain map (connectome) that includes all the main areas of the mouse hippocampus. This region is of clinical interest because it is an essential part of the circuit underlying spatial navigation and memory, and it is where the earliest impairments and degeneration related to Alzheimer’s disease appear.

BRAIN CONNECTS: The center for Large-scale Imaging of Neural Circuits (LINC)
Team: Anastasia Yendiki (MGH), Satra Ghosh (McGovern Institute, MIT), Suzanne Haber (University of Rochester), Elizabeth Hillman (Columbia University)

Summary: This project will generate connectional diagrams of the monkey and human brain at unprecedented resolutions. These diagrams will be linked both to the neuroanatomic literature and to in vivo neuroimaging techniques, bridging between the rigor of the former and the clinical relevance of the latter. The data to be generated by this project will advance our understanding of brain circuits that are implicated in motor and psychiatric disorders, and that are targeted by deep-brain stimulation to treat these disorders.

BRAIN CONNECTS: Mapping brain-wide connectivity of neuronal types using barcoded connectomics
Team: Xiaoyin Chen (Allen Institute), Ian Wickersham (McGovern Institute, MIT), Justus Kebschull (Johns Hopkins University)

Summary: This project aims to optimize and develop barcode sequencing-based neuroanatomical techniques to achieve brain-wide, high-throughput, highly multiplexed mapping of axonal projections and synaptic connectivity of neuronal types at cellular resolution in primate brains. The team will work together to apply these techniques to generate an unprecedented multi-resolution map of brain-wide projections and synaptic inputs of neurons in the macaque visual cortex at cellular resolution.

 

Refining mental health diagnoses

Maedbh King came to MIT to make a difference in mental health. As a postdoctoral fellow in the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center, she is building computer models aimed at helping clinicians improve diagnosis and treatment, especially for young people with neurodevelopmental and psychiatric disorders.

Tapping two large patient-data sources, King is working to analyze critical biological and behavioral information to better categorize patients’ mental health conditions, including autism spectrum disorder, attention-deficit hyperactivity disorder (ADHD), anxiety, and suicidal thoughts — and to provide more predictive approaches to addressing them. Her strategy reflects the center’s commitment to a holistic understanding of human brain function using theoretical and computational neuroscience.

“Today, treatment decisions for psychiatric disorders are derived entirely from symptoms, which leaves clinicians and patients trying one treatment and, if it doesn’t work, trying another,” says King. “I hope to help change that.”

King grew up in Dublin, Ireland, and studied psychology in college; gained neuroimaging and programming skills while earning a master’s degree from Western University in Canada; and received her doctorate from the University of California, Berkeley, where she built maps and models of the human brain. In fall 2022, King joined the lab of Satrajit Ghosh, a McGovern Institute principal research scientist whose team uses neuroimaging, speech communication, and machine learning to improve assessments and treatments for mental health and neurological disorders.

Big-data insights

King is pursuing several projects using the Healthy Brain Network, a landmark mental health study of children and adolescents in New York City. She and lab colleagues are extracting data from cognitive and other assessments — such as language patterns, favorite school subjects, and family mental illness history — from roughly 4,000 participants to provide a more nuanced understanding of their neurodevelopmental disorders, such as autism or ADHD.

With this database, one can develop “very rich clinical profiles of these young people,” including their challenges and adaptive strengths, King explains. “We’re interested in placing these participants within a spectrum of symptoms, rather than just providing a binary label of, ‘has this disorder’ or ‘doesn’t have it.’ It’s an effort to subtype based on these phenotypic assessments.”

In other research, King is developing tools to detect risk factors for suicide among adolescents. Working with psychiatrists at Children’s Hospital of Philadelphia, she is using detailed questionnaires from some 20,000 youths who visited the hospital’s emergency department over several years; about one-tenth had tried to take their own lives. The questionnaires collect information about demographics, lifestyle, relationships, and other aspects of patients’ lives.

“One of the big questions the physicians want to answer is, Are there any risk predictors we can identify that can ultimately prevent, or at least mitigate, future suicide attempts?” King says. “Computational models are powerful. They can identify patterns that can’t be obtained with the human eye through electronic records.”

King is passionate about producing findings to help practitioners, whether they’re clinicians, teachers, parents, or policy makers, and the populations they’re studying. “This applied work,” she says, “should be communicated in a way that can be useful.”

Using machine learning to track the pandemic’s impact on mental health

Dealing with a global pandemic has taken a toll on the mental health of millions of people. A team of MIT and Harvard University researchers has shown that they can measure those effects by analyzing the language that people use to express their anxiety online.

Using machine learning to analyze the text of more than 800,000 Reddit posts, the researchers were able to identify changes in the tone and content of language that people used as the first wave of the Covid-19 pandemic progressed, from January to April of 2020. Their analysis revealed several key changes in conversations about mental health, including an overall increase in discussion about anxiety and suicide.

“We found that there were these natural clusters that emerged related to suicidality and loneliness, and the amount of posts in these clusters more than doubled during the pandemic as compared to the same months of the preceding year, which is a grave concern,” says Daniel Low, a graduate student in the Program in Speech and Hearing Bioscience and Technology at Harvard and MIT and the lead author of the study.

The analysis also revealed varying impacts on people who already suffer from different types of mental illness. The findings could help psychiatrists, or potentially moderators of the Reddit forums that were studied, to better identify and help people whose mental health is suffering, the researchers say.

“When the mental health needs of so many in our society are inadequately met, even at baseline, we wanted to bring attention to the ways that many people are suffering during this time, in order to amplify and inform the allocation of resources to support them,” says Laurie Rumker, a graduate student in the Bioinformatics and Integrative Genomics PhD Program at Harvard and one of the authors of the study.

Satrajit Ghosh, a principal research scientist at MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears in the Journal of Medical Internet Research. Other authors of the paper include Tanya Talkar, a graduate student in the Program in Speech and Hearing Bioscience and Technology at Harvard and MIT; John Torous, director of the digital psychiatry division at Beth Israel Deaconess Medical Center; and Guillermo Cecchi, a principal research staff member at the IBM Thomas J. Watson Research Center.

A wave of anxiety

The new study grew out of the MIT class 6.897/HST.956 (Machine Learning for Healthcare), in MIT’s Department of Electrical Engineering and Computer Science. Low, Rumker, and Talkar, who were all taking the course last spring, had done some previous research on using machine learning to detect mental health disorders based on how people speak and what they say. After the Covid-19 pandemic began, they decided to focus their class project on analyzing Reddit forums devoted to different types of mental illness.

“When Covid hit, we were all curious whether it was affecting certain communities more than others,” Low says. “Reddit gives us the opportunity to look at all these subreddits that are specialized support groups. It’s a really unique opportunity to see how these different communities were affected differently as the wave was happening, in real time.”

The researchers analyzed posts from 15 subreddit groups devoted to a variety of mental illnesses, including schizophrenia, depression, and bipolar disorder. They also included a handful of groups devoted to topics not specifically related to mental health, such as personal finance, fitness, and parenting.

Using several types of natural language processing algorithms, the researchers measured the frequency of words associated with topics such as anxiety, death, isolation, and substance abuse, and grouped posts together based on similarities in the language used. These approaches allowed the researchers to identify similarities between each group’s posts after the onset of the pandemic, as well as distinctive differences between groups.
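As a rough illustration of how such an analysis can work (a minimal Python sketch, not the authors’ actual pipeline), post text can be vectorized and clustered with the scikit-learn library; the example posts, cluster count, and anxiety word list below are invented for the sketch.

    # Sketch: group posts by language similarity and count topic-related words (illustrative only).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    posts = [
        "I can't sleep and I'm terrified of getting sick",   # toy posts, not real data
        "lost my job and rent is due next week",
        "feeling so alone since the lockdown started",
    ]

    # Group posts by similarities in the language they use.
    vectors = TfidfVectorizer(stop_words="english").fit_transform(posts)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

    # Measure word frequency against a small, hypothetical anxiety lexicon.
    anxiety_terms = {"terrified", "worried", "panic", "anxious"}
    anxiety_counts = [sum(w in anxiety_terms for w in p.lower().split()) for p in posts]
    print(labels, anxiety_counts)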

The researchers found that while people in most of the support groups began posting about Covid-19 in March, the group devoted to health anxiety started much earlier, in January. However, as the pandemic progressed, the other mental health groups began to closely resemble the health anxiety group, in terms of the language that was most often used. At the same time, the group devoted to personal finance showed the most negative semantic change from January to April 2020, and significantly increased the use of words related to economic stress and negative sentiment.

They also discovered that the mental health groups most negatively affected early in the pandemic were those related to ADHD and eating disorders. The researchers hypothesize that without their usual social support systems in place, due to lockdowns, people suffering from those disorders found it much more difficult to manage their conditions. In those groups, the researchers found posts about hyperfocusing on the news and relapsing into anorexia-type behaviors since meals were not being monitored by others due to quarantine.

Using another algorithm, the researchers grouped posts into clusters such as loneliness or substance use, and then tracked how those groups changed as the pandemic progressed. Posts related to suicide more than doubled from pre-pandemic levels, and the groups that became significantly associated with the suicidality cluster during the pandemic were the support groups for borderline personality disorder and post-traumatic stress disorder.
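The tracking step can be sketched in a similar spirit: once each post carries a cluster label, counting posts per cluster per month shows how a cluster such as suicidality grows over time. The few rows below are invented, and this is not the study’s code.

    # Sketch: count posts per cluster per month to see how clusters grow or shrink.
    import pandas as pd

    posts = pd.DataFrame({
        "date": pd.to_datetime(["2019-02-03", "2020-02-10", "2020-03-22"]),  # toy rows
        "cluster": ["loneliness", "suicidality", "suicidality"],
    })

    monthly = (posts.groupby([posts["date"].dt.to_period("M"), "cluster"])
                    .size()
                    .unstack(fill_value=0))
    print(monthly)  # compare counts during the pandemic with the same months in prior years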

The researchers also found the introduction of new topics specifically seeking mental health help or social interaction. “The topics within these subreddit support groups were shifting a bit, as people were trying to adapt to a new life and focus on how they can go about getting more help if needed,” Talkar says.

While the authors emphasize that they cannot implicate the pandemic as the sole cause of the observed linguistic changes, they note that there was much more significant change during the period from January to April in 2020 than in the same months in 2019 and 2018, indicating the changes cannot be explained by normal annual trends.

Mental health resources

This type of analysis could help mental health care providers identify segments of the population that are most vulnerable to declines in mental health caused by not only the Covid-19 pandemic but other mental health stressors such as controversial elections or natural disasters, the researchers say.

Additionally, if applied to Reddit or other social media posts in real time, this analysis could be used to offer users additional resources, such as guidance to a different support group, information on how to find mental health treatment, or the number for a suicide hotline.

“Reddit is a very valuable source of support for a lot of people who are suffering from mental health challenges, many of whom may not have formal access to other kinds of mental health support, so there are implications of this work for ways that support within Reddit could be provided,” Rumker says.

The researchers now plan to apply this approach to study whether posts on Reddit and other social media sites can be used to detect mental health disorders. One current project involves screening posts in a social media site for veterans for suicide risk and post-traumatic stress disorder.

The research was funded by the National Institutes of Health and the McGovern Institute.

Signs of Covid-19 may be hidden in speech signals

It’s often easy to tell when colleagues are struggling with a cold — they sound sick. Maybe their voices are lower or have a nasally tone. Infections change the quality of our voices in various ways. But MIT Lincoln Laboratory researchers are detecting these changes in Covid-19 patients even when these changes are too subtle for people to hear or even notice in themselves.

By processing speech recordings of people infected with Covid-19 but not yet showing symptoms, these researchers found evidence of vocal biomarkers, or measurable indicators, of the disease. These biomarkers stem from disruptions the infection causes in the movement of muscles across the respiratory, laryngeal, and articulatory systems. A technology letter describing this research was recently published in IEEE Open Journal of Engineering in Medicine and Biology.

While this research is still in its early stages, the initial findings lay a framework for studying these vocal changes in greater detail. This work may also hold promise for using mobile apps to screen people for the disease, particularly those who are asymptomatic.

Talking heads

“I had this ‘aha’ moment while I was watching the news,” says Thomas Quatieri, a senior staff member in the laboratory’s Human Health and Performance Systems Group. Quatieri has been leading the group’s research in vocal biomarkers for the past decade; their focus has been on discovering vocal biomarkers of neurological disorders such as amyotrophic lateral sclerosis (ALS) and Parkinson’s disease. These diseases, and many others, change the brain’s ability to turn thoughts into words, and those changes can be detected by processing speech signals.

He and his team wondered whether vocal biomarkers might also exist for Covid-19. The symptoms led them to think so. When symptoms manifest, a person typically has difficulty breathing. Inflammation in the respiratory system affects the intensity with which air is exhaled when a person talks. This air interacts with hundreds of other potentially inflamed muscles on its journey to speech production. These interactions impact the loudness, pitch, steadiness, and resonance of the voice — measurable qualities that form the basis of their biomarkers.

While watching the news, Quatieri realized there were speech samples in front of him of people who had tested positive for Covid-19. He and his colleagues combed YouTube for clips of celebrities or TV hosts who had given interviews while they were Covid-19 positive but asymptomatic. They identified five subjects. Then, they downloaded interviews of those people from before they had Covid-19, matching audio conditions as best they could.

They then used algorithms to extract features from the vocal signals in each audio sample. “These vocal features serve as proxies for the underlying movements of the speech production systems,” says Tanya Talkar, a PhD candidate in the Speech and Hearing Bioscience and Technology program at Harvard University.

The signal’s amplitude, or loudness, was extracted as a proxy for movement in the respiratory system. For studying movements in the larynx, they measured pitch and the steadiness of pitch, two indicators of how stable the vocal cords are. As a proxy for articulator movements — like those of the tongue, lips, jaw, and more — they extracted speech formants. Speech formants are frequency measurements that correspond to how the mouth shapes sound waves to create a sequence of phonemes (vowels and consonants) and to contribute to a certain vocal quality (nasally versus warm, for example).
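As a rough sketch of how such proxy features can be pulled from a recording (not the Lincoln Laboratory pipeline), the open-source librosa library can estimate loudness, pitch, pitch steadiness, and approximate formants; the file name, frequency range, and analysis settings below are assumptions made for the example.

    # Sketch: loudness, pitch, pitch steadiness, and rough formant estimates from one clip.
    import numpy as np
    import librosa

    y, sr = librosa.load("interview_clip.wav", sr=16000)  # hypothetical recording

    # Respiratory proxy: frame-level loudness (root-mean-square amplitude).
    rms = librosa.feature.rms(y=y, frame_length=512, hop_length=160)[0]

    # Laryngeal proxies: fundamental frequency (pitch) and its frame-to-frame steadiness.
    f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=400, sr=sr, hop_length=160)
    pitch = f0[voiced]
    pitch_steadiness = np.nanstd(np.diff(pitch))

    # Articulatory proxy: crude formant estimates from linear-prediction (LPC) roots.
    lpc = librosa.lpc(y, order=12)
    roots = [r for r in np.roots(lpc) if np.imag(r) > 0]
    formants_hz = sorted(np.angle(roots) * sr / (2 * np.pi))[:3]

    print(rms.mean(), np.nanmean(pitch), pitch_steadiness, formants_hz)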

They hypothesized that Covid-19 inflammation causes muscles across these systems to become overly coupled, resulting in less complex movement. “Picture these speech subsystems as if they are the wrist and fingers of a skilled pianist; normally, the movements are independent and highly complex,” Quatieri says. Now, picture if the wrist and finger movements were to become stuck together, moving as one. This coupling would force the pianist to play a much simpler tune.

The researchers looked for evidence of coupling in their features, measuring how each feature changed in relation to another in 10 millisecond increments as the subject spoke. These values were then plotted on an eigenspectrum; the shape of this eigenspectrum plot indicates the complexity of the signals. “If the eigenspace of the values forms a sphere, the signals are complex. If there is less complexity, it might look more like a flat oval,” Talkar says.
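In numerical terms, the idea can be sketched as follows, with a simple correlation matrix standing in for the team’s method: stack the features sampled every 10 milliseconds into a matrix, correlate each feature with the others, and inspect the eigenvalues. The data here are placeholders.

    # Sketch: eigenspectrum of the feature correlation matrix (placeholder data).
    import numpy as np

    def eigenspectrum(features):
        """Eigenvalues of the feature correlation matrix, largest first, normalized."""
        corr = np.corrcoef(features, rowvar=False)   # coupling between each pair of features
        eigvals = np.linalg.eigvalsh(corr)[::-1]     # descending eigenvalues
        return eigvals / eigvals.sum()

    # Rows = 10-ms frames, columns = vocal features (e.g., loudness, pitch, formants).
    # An evenly spread spectrum indicates complex, independent signals; one dominant
    # eigenvalue indicates features moving together (reduced complexity).
    features = np.random.randn(500, 4)               # placeholder, not real speech data
    print(eigenspectrum(features))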

In the end, they found a decreased complexity of movement in the Covid-19 interviews as compared to the pre-Covid-19 interviews. “The coupling was less prominent between larynx and articulator motion, but we’re seeing a reduction in complexity between respiratory and larynx motion,” Talkar says.

Early detections

These preliminary results hint that biomarkers derived from vocal system coordination can indicate the presence of Covid-19. However, the researchers note that it is still too early to draw conclusions, and more data are needed to validate their findings. They’re working now with a publicly released dataset from Carnegie Mellon University that contains audio samples from individuals who have tested positive for Covid-19.

Beyond collecting more data to fuel this research, the team is looking at using mobile apps to implement it. A partnership is underway with Satra Ghosh at the MIT McGovern Institute for Brain Research to integrate vocal screening for Covid-19 into its VoiceUp app, which was initially developed to study the link between voice and depression. A follow-on effort could add this vocal screening into the How We Feel app. This app asks users questions about their daily health status and demographics, with the aim to use these data to pinpoint hotspots and predict the percentage of people who have the disease in different regions of the country. Asking users to also submit a daily voice memo to screen for biomarkers of Covid-19 could potentially help scientists catch on to an outbreak.

“A sensing system integrated into a mobile app could pick up on infections early, before people feel sick or, especially, for these subsets of people who don’t ever feel sick or show symptoms,” says Jeffrey Palmer, who leads the research group. “This is also something the U.S. Army is interested in as part of a holistic Covid-19 monitoring system.” Even after a diagnosis, this sensing ability could help doctors remotely monitor their patients’ progress or monitor the effects of a vaccine or drug treatment.

As the team continues their research, they plan to do more to address potential confounders that could cause inaccuracies in their results, such as different recording environments, the emotional status of the subjects, or other illnesses causing vocal changes. They’re also supporting similar research. The Mass General Brigham Center for COVID Innovation has connected them to international scientists who are following the team’s framework to analyze coughs.

“There are a lot of other interesting areas to look at. Here, we looked at the physiological impacts on the vocal tract. We’re also looking to expand our biomarkers to consider neurophysiological impacts linked to Covid-19, like the loss of taste and smell,” Quatieri says. “Those symptoms can affect speaking, too.”

Satrajit Ghosh

Personalized Medicine

A fundamental problem in psychiatry is that there are no biological markers for diagnosing mental illness or for indicating how best to treat it. Treatment decisions are based entirely on symptoms, and doctors and their patients will typically try one treatment, then if it does not work, try another, and perhaps another. Satrajit Ghosh hopes to change this picture, and his research suggests that individual brain scans and speaking patterns can hold valuable information for guiding psychiatrists and patients. His research group develops novel analytic platforms that use such information to create robust, predictive models around human health. Current areas include depression, suicide, anxiety disorders, autism, Parkinson’s disease, and brain tumors.

Why do I talk with my hands?

This is a very interesting question sent to us by Gabriel Castellanos (thank you!). Many of us gesture with our hands when we speak (and even when we do not) as a form of non-verbal communication. How hand gestures are coordinated with speech remains unclear, in part because it is difficult to monitor natural hand gestures in fMRI-based brain imaging studies, where you have to stay still.

“Performing hand movements when stuck in the bore of a scanner is really tough beyond simple signing and keypresses,” explains McGovern Principal Research Scientist Satrajit Ghosh. “Thus ecological experiments of co-speech with motor gestures have not been carried out in the context of a magnetic resonance scanner, and therefore little is known about language and motor integration within this context.”

There have been studies that use proxies such as co-verbal pushing of buttons, and also studies using other imaging techniques, such as electroencephalography (EEG) and magnetoencephalography (MEG), to monitor brain activity during gesturing, but it would be difficult to precisely spatially localize the regions involved in natural co-speech hand gesticulation using such approaches. Another possible avenue for addressing this question would be to look at patients with conditions that might implicate particular brain regions in coordinating hand gestures, but such approaches have not really pinpointed a pathway for coordinating speech and hand movements.

That said, co-speech hand gesturing plays an important role in communication. “More generally co-speech hand gestures are seen as a mechanism for emphasis and disambiguation of the semantics of a sentence, in addition to prosody and facial cues,” says Ghosh. “In fact, one may consider the act of speaking as one large orchestral score involving vocal tract movement, respiration, voicing, facial expression, hand gestures, and even whole body postures acting as different instruments coordinated dynamically by the brain. Based on our current understanding of language production, co-speech or gestural events would likely be planned at a higher level than articulation and therefore would likely activate inferior frontal gyrus, SMA, and others.”

How this orchestra is coordinated and conducted thus remains to be unraveled, but certainly the question is one that gets to the heart of human social interactions.

Rethinking mental illness treatment

McGovern researchers are finding neural markers that could help improve treatment for psychiatric patients.

Ten years ago, Jim and Pat Poitras committed $20 million to the McGovern Institute to establish the Poitras Center for Affective Disorders Research. The Poitras family had been longtime supporters of MIT, and because they had seen mental illness in their own family, they decided to support an ambitious new program at the McGovern Institute, with the goal of understanding the fundamental biological basis of depression, bipolar disorder, schizophrenia and other major psychiatric disorders.

The gift came at an opportune time, as the field was entering a new phase of discovery, with rapid advances in psychiatric genomics and brain imaging, and with the emergence of new technologies for genome editing and for the development of animal models. Over the past ten years, the Poitras Center has supported work in each of these areas, including Feng Zhang’s work on CRISPR-based genome editing, and Guoping Feng’s work on animal models for autism, schizophrenia and other psychiatric disorders.

This reflects a long-term strategy, says Robert Desimone, director of the McGovern Institute who oversees the Poitras Center. “But we must not lose sight of the overall goal, which is to benefit human patients. Insights from animal models and genomic medicine have the potential to transform the treatments of the future, but we are also interested in the nearer term, and in what we can do right now.”

One area where technology can have a near-term impact is human brain imaging, and in collaboration with clinical researchers at McLean Hospital, Massachusetts General Hospital and other institutions, the Poitras Center has supported an ambitious program to bring human neuroimaging closer to the clinic.

Discovering psychiatry’s crystal ball

A fundamental problem in psychiatry is that there are no biological markers for diagnosing mental illness or for indicating how best to treat it. Treatment decisions are based entirely on symptoms, and doctors and their patients will typically try one treatment, then if it does not work, try another, and perhaps another. The success rates for the first treatments are often less than 50%, and finding what works for an individual patient often means a long and painful process of trial and error.

McGovern research scientist Susan Whitfield-Gabrieli and her colleagues are hoping to change this picture, with the help of brain imaging. Their findings suggest that brain scans can hold valuable information for psychiatrists and their patients. “We need a paradigm shift in how we use imaging. It can be used for more than research,” says Whitfield-Gabrieli, who is a member of McGovern Investigator John Gabrieli’s lab. “It would be a really big boost to be able to use it to personalize psychiatric medicine.”

One of Whitfield-Gabrieli’s goals is to find markers that can predict which treatments will work for which patients. Another is to find markers that can predict the likely risk of disease in the future, allowing doctors to intervene before symptoms first develop. All of these markers need further validation before they are ready for the clinic, but they have the potential to meet a dire need to improve treatment for psychiatric disease.

A brain at rest

For Whitfield-Gabrieli, who both collaborates with and is married to Gabrieli, that paradigm shift began when she started to study the resting brain using functional magnetic resonance imaging (fMRI). Most brain imaging studies require the subject to perform a mental task in the scanner, but these are time-consuming and often hard to replicate in a clinical setting. In contrast, resting state imaging requires no task. The subject simply lies in the scanner and lets the mind wander. The patterns of activity can reveal functional connections within the brain, and are reliably consistent from study to study.
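Concretely, a resting-state profile is often summarized as a matrix of correlations between the signals of different brain regions over the course of the scan. The snippet below is a minimal sketch with placeholder numbers, not the lab’s analysis.

    # Sketch: correlation-based functional connectivity from region-averaged fMRI signals.
    import numpy as np

    roi_timeseries = np.random.randn(300, 90)   # placeholder: 300 timepoints x 90 brain regions
    connectivity = np.corrcoef(roi_timeseries, rowvar=False)   # regions x regions correlations

    # Entry [i, j] reflects how strongly regions i and j fluctuate together at rest,
    # e.g., within the default mode network.
    print(connectivity[0, 1])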

Whitfield-Gabrieli thought resting state scanning had the potential to help patients because it is simple and easy to perform.

“Even a 5-minute scan can contain useful information that could help people,” says Satrajit Ghosh, a principal research scientist in the Gabrieli lab who works closely with Whitfield-Gabrieli.

Whitfield-Gabrieli and her clinical collaborator Larry Seidman at Harvard Medical School decided to study resting state activity in patients with schizophrenia. They found a pattern of activity strikingly different from that of typical brains. The patients showed unusually strong activity in a set of interconnected brain regions known as the default mode network, which is typically activated during introspection. It is normally suppressed when a person attends to the outside world, but schizophrenia patients failed to show this suppression.

“The patient isn’t able to toggle between internal processing and external processing the way a typical individual can,” says Whitfield-Gabrieli, whose work is supported by the Poitras Center for Affective Disorders Research.

Since then, the team has observed similar disturbances in the default network in other disorders, including depression, anxiety, bipolar disorder, and ADHD. “We knew we were onto something interesting,” says Whitfield-Gabrieli. “But we kept coming back to the question: how can brain imaging help patients?”

fMRI on patients

Many imaging studies aim to understand the biological basis of disease and ultimately to guide the development of new drugs or other treatments. But this is a long-term goal, and Whitfield-Gabrieli wanted to find ways that brain imaging could have a more immediate impact. So she and Ghosh decided to use fMRI to look at differences among individual patients, and to focus on differences in how they responded to treatment.

“It gave us something objective to measure,” explains Ghosh. “Someone goes through a treatment, and they either get better or they don’t.” The project also had appeal for Ghosh because it was an opportunity for him to use his expertise in machine learning and other computational tools to build systems-level models of the brain.

For the first study, the team decided to focus on social anxiety disorder (SAD), which is typically treated with either prescription drugs or cognitive behavioral therapy (CBT). Both are moderately effective, but many patients do not respond to the first treatment they try.

The team began with a small study to test whether scans performed before the onset of treatment could predict who would respond best to the treatment. Working with Stefan Hofmann, a clinical psychologist at Boston University, they scanned 38 SAD patients before they began a 12-week course of CBT. At the end of their treatment, the patients were evaluated for clinical improvement, and the researchers examined the scans for patterns of activity that correlated with the improvement. The results were very encouraging; it turned out that predictions based on scan data were 5-fold better than the existing methods based on severity of symptoms at the time of diagnosis.

The researchers then turned to another condition, ADHD, which presents a similar clinical challenge, in that commonly used drugs—such as Adderall or Ritalin—work well, but not for everyone. So the McGovern team began a collaboration with psychiatrist Joseph Biederman, Chief of Clinical and Research Programs in Pediatric Psychopharmacology and Adult ADHD at Massachusetts General Hospital, on a similar study, looking for markers of treatment response.

The study is still ongoing, and it will be some time before results emerge, but the researchers are optimistic. “If we could predict who would respond to which treatment and avoid months of trial and error, it would be totally transformative for ADHD,” says Biederman.

Another goal is to predict in advance who is likely to develop a given disease in the future. The researchers have scanned children who have close relatives with schizophrenia or depression, and who are therefore at increased risk of developing these disorders themselves. Surprisingly, the children show patterns of resting state connectivity similar to those of patients.

“I was really intrigued by this,” says Whitfield-Gabrieli. “Even though these children are not sick, they have the same profile as adults who are.”

Whitfield-Gabrieli and Seidman are now expanding their study through a collaboration with clinical researchers at the Shanghai Mental Health Center in China, who plan to image and then follow 225 people who are showing early risk signs for schizophrenia. They hope to find markers that predict who will develop the disease and who will not.

“While there are no drugs available to prevent schizophrenia, it may be possible to reduce the risk or severity of the disorder through CBT, or through interventions that reduce stress and improve sleep and well-being,” says Whitfield-Gabrieli. “One likely key to success is early identification of those at highest risk. If we could diagnose early, we could do early interventions and potentially prevent disorders.”

From association to prediction

The search for predictive markers represents a departure from traditional psychiatric imaging studies, in which a group of patients is compared with a control group of healthy subjects. Studies of this type can reveal average differences between the groups, which may provide clues to the underlying biology of the disease. But they don’t provide information about individual patients, and so they have not been incorporated into clinical practice.

The difference is critical for clinicians, says Biederman. “I treat individuals, not groups. To bring predictive scans to the clinic, we need to be sure the individual scan is informative for the person you are treating.”

To develop these predictions, Whitfield-Gabrieli and Ghosh must first use sophisticated computational methods such as ‘deep learning’ to identify patterns in their data and to build models that relate the patterns to the clinical outcomes. They must then show that these models can generalize beyond the original study population—for example, that predictions based on patients from Boston can be applied to patients from Shanghai. The eventual goal is a model that can analyze a previously unseen brain scan from any individual, and predict with high confidence whether that person will (for example) develop schizophrenia or respond successfully to a particular therapy.
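A simplified sketch of that cross-site test might look like the following, with an off-the-shelf classifier standing in for the lab’s deep-learning models and synthetic arrays standing in for real scan data and site labels.

    # Sketch: train on one site's scans, test on another's (illustrative, synthetic data).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))                   # placeholder connectivity features per patient
    y = rng.integers(0, 2, size=200)                 # placeholder outcomes (e.g., responded to therapy)
    site = np.where(np.arange(200) < 120, "boston", "shanghai")   # placeholder cohort labels

    train, test = site == "boston", site == "shanghai"
    model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    auc = roc_auc_score(y[test], model.predict_proba(X[test])[:, 1])
    print(f"cross-site AUC: {auc:.2f}")   # near 0.5 would mean the model does not generalize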

Achieving this will be challenging, because it will require scanning and following large numbers of subjects from diverse demographic groups—thousands of people, not just tens or hundreds as in most clinical studies. Collaborations with large hospitals, such as the one in Shanghai, can help. Whitfield-Gabrieli has also received funding to collect imaging, clinical, and behavioral data from over 200 adolescents with depression and anxiety, as part of the National Institutes of Health’s Human Connectome effort. These data, collected in collaboration with clinicians at McLean Hospital, MGH and Boston University, will be available not only for the Gabrieli team, but for researchers anywhere to analyze. This is important, because no one team or center can do it alone, says Ghosh. “Data must be collected by many and shared by all.”

The ultimate goal is to study as many patients as possible now so that the tools can help many more later. “Someday, a person will be able to go to a hospital, get a brain scan, charge it to their insurance, and know that it helped the doctor select the best treatment,” says Ghosh. “We’re still far away from that. But that is what we want to work towards.”

Predicting how patients respond to therapy

Social anxiety is usually treated with either cognitive behavioral therapy or medications. However, it is currently impossible to predict which treatment will work best for a particular patient. A team of researchers from MIT, Boston University (BU), and Massachusetts General Hospital (MGH) has found that the effectiveness of therapy could be predicted by measuring patients’ brain activity as they looked at photos of faces, before the therapy sessions began.

The findings, published this week in the Archives of General Psychiatry, may help doctors more accurately choose treatments for social anxiety disorder, which is estimated to affect around 15 million people in the United States.

“Our vision is that some of these measures might direct individuals to treatments that are more likely to work for them,” says John Gabrieli, the Grover M. Hermann Professor of Brain and Cognitive Sciences at MIT, a member of the McGovern Institute for Brain Research and senior author of the paper.

Lead authors of the paper are MIT postdoc Oliver Doehrmann and Satrajit Ghosh, a research scientist in the McGovern Institute.

Choosing treatments

Sufferers of social anxiety disorder experience intense fear in social situations, interfering with their ability to function in daily life. Cognitive behavioral therapy aims to change the thought and behavior patterns that lead to anxiety. For social anxiety disorder patients, that might include learning to reverse the belief that others are watching or judging them.

The new paper is part of a larger study that MGH and BU recently ran on cognitive behavioral therapy for social anxiety, led by Mark Pollack, director of the Center for Anxiety and Traumatic Stress Disorders at MGH, and Stefan Hofmann, director of the Social Anxiety Program at BU.

“This was a chance to ask if these brain measures, taken before treatment, would be informative in ways above and beyond what physicians can measure now, and determine who would be responsive to this treatment,” Gabrieli says.

Currently doctors might choose a treatment based on factors such as ease of taking pills versus going to therapy, the possibility of drug side effects, or what the patient’s insurance will cover. “From a science perspective there’s very little evidence about which treatment is optimal for a person,” Gabrieli says.

The researchers used functional magnetic resonance imaging (fMRI) to image the brains of patients before and after treatment. There have been many imaging studies showing brain differences between healthy people and patients with neuropsychiatric disorders, but so far imaging has not been established as a way to predict patient response to particular treatments.

Measuring brain activity

In the new study, the researchers measured differences in brain activity as patients looked at images of angry or neutral faces. After 12 weeks of cognitive behavioral therapy, patients’ social anxiety levels were tested. The researchers found that patients who had shown a greater difference in activity in high-level visual processing areas during the face-response task showed the most improvement after therapy.

The findings are an important step towards improving doctors’ ability to choose the right treatment for psychiatric disorders, says Greg Siegle, associate professor of psychiatry at the University of Pittsburgh. “It’s really critical that somebody do this work, and they did it very well,” says Siegle, who was not part of the research team. “It moves the field forward, and brings psychology into more of a rigorous science, using neuroscience to distinguish between clinical cases that at first appear homogeneous.”

Gabrieli says it’s unclear why activity in brain regions involved with visual processing would be a good predictor of treatment outcome. One possibility is that patients who benefited more were those whose brains were already adept at segregating different types of experiences, Gabrieli says.

The researchers are now planning a follow-up study to investigate whether brain scans can predict differences in response between cognitive behavioral therapy and drug treatment.

“Right now, all by itself, we’re just giving somebody encouraging or discouraging news about the likely outcome” of therapy, Gabrieli says. “The really valuable thing would be if it turns out to be differentially sensitive to different treatment choices.”

The research was funded by the Poitras Center for Affective Disorders Research and the National Institute of Mental Health.