Model reveals why debunking election misinformation often doesn’t work

When an election result is disputed, people who are skeptical about the outcome may be swayed by figures of authority who come down on one side or the other. Those authorities can be independent monitors, politicians, or news organizations. However, these “debunking” efforts don’t always have the desired effect, and in some cases they can lead people to cling more tightly to their original position.

Neuroscientists and political scientists at MIT and the University of California at Berkeley have now created a computational model that analyzes the factors that help to determine whether debunking efforts will persuade people to change their beliefs about the legitimacy of an election. Their findings suggest that while debunking fails much of the time, it can be successful under the right conditions.

For instance, the model showed that successful debunking is more likely if people are less certain of their original beliefs and if they believe the authority is unbiased or strongly motivated by a desire for accuracy. It also helps when an authority comes out in support of a result that goes against a bias they are perceived to hold: for example, Fox News declaring that Joseph R. Biden had won in Arizona in the 2020 U.S. presidential election.

“When people see an act of debunking, they treat it as a human action and understand it the way they understand human actions — that is, as something somebody did for their own reasons,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study. “We’ve used a very simple, general model of how people understand other people’s actions, and found that that’s all you need to describe this complex phenomenon.”

The findings could have implications as the United States prepares for the presidential election taking place on Nov. 5, as they help to reveal the conditions that would be most likely to result in people accepting the election outcome.

MIT graduate student Setayesh Radkani is the lead author of the paper, which appears today in a special election-themed issue of the journal PNAS Nexus. Marika Landau-Wells PhD ’18, a former MIT postdoc who is now an assistant professor of political science at the University of California at Berkeley, is also an author of the study.

Modeling motivation

In their work on election debunking, the MIT team took a novel approach, building on Saxe’s extensive work studying “theory of mind” — how people think about the thoughts and motivations of other people.

As part of her PhD thesis, Radkani has been developing a computational model of the cognitive processes that occur when people see others being punished by an authority. People interpret punitive actions differently depending on their prior beliefs about the action and the authority: some may see the authority as acting legitimately to punish a wrongful act, while others may see an authority overreaching to issue an unjust punishment.

Last year, after participating in an MIT workshop on the topic of polarization in societies, Saxe and Radkani had the idea to apply the model to how people react to an authority attempting to sway their political beliefs. They enlisted Landau-Wells, who received her PhD in political science before working as a postdoc in Saxe’s lab, to join their effort, and Landau-Wells suggested applying the model to debunking of beliefs about the legitimacy of an election result.

The computational model created by Radkani is based on Bayesian inference, which allows the model to continually update its predictions of people’s beliefs as they receive new information. This approach treats debunking as an action that a person undertakes for his or her own reasons. People who observe the authority’s statement then make their own interpretation of why the person said what they did. Based on that interpretation, people may or may not change their own beliefs about the election result.

Additionally, the model does not assume that any beliefs are necessarily incorrect or that any group of people is acting irrationally.

“The only assumption that we made is that there are two groups in the society that differ in their perspectives about a topic: One of them thinks that the election was stolen and the other group doesn’t,” Radkani says. “Other than that, these groups are similar. They share their beliefs about the authority — what the different motives of the authority are and how motivated the authority is by each of those motives.”

The researchers modeled more than 200 different scenarios in which an authority attempts to debunk a belief held by one group regarding the validity of an election outcome.

Each time they ran the model, the researchers altered the certainty levels of each group’s original beliefs, and they also varied the groups’ perceptions of the motivations of the authority. In some cases, groups believed the authority was motivated by promoting accuracy, and in others they did not. The researchers also altered the groups’ perceptions of whether the authority was biased toward a particular viewpoint, and how strongly the groups believed in those perceptions.
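To make the setup concrete, here is a minimal Python sketch of this kind of Bayesian observer and scenario sweep. It is an illustration of the general approach, not the authors’ released code: the parameter names (accuracy_weight, bias_toward_fair) and all values are assumptions, and the sketch omits the joint inference over the authority’s motives that allows the full model to produce persistent, or even deepening, polarization.

```python
# Hypothetical sketch (not the study's code): a Bayesian observer updates its
# belief that the election was stolen after each "the election was fair"
# statement from an authority. All parameters are illustrative assumptions.

def likelihood_says_fair(stolen, accuracy_weight, bias_toward_fair):
    """Probability the authority declares the election fair: with probability
    accuracy_weight it reports the truth; otherwise it follows its bias."""
    truthful_report = 0.0 if stolen else 1.0
    return accuracy_weight * truthful_report + (1 - accuracy_weight) * bias_toward_fair

def update_belief(p_stolen, accuracy_weight, bias_toward_fair):
    """One Bayes-rule update after a single debunking statement."""
    like_stolen = likelihood_says_fair(True, accuracy_weight, bias_toward_fair)
    like_fair = likelihood_says_fair(False, accuracy_weight, bias_toward_fair)
    numerator = like_stolen * p_stolen
    return numerator / (numerator + like_fair * (1.0 - p_stolen))

# Sweep scenarios: initial certainty x perceived accuracy motive x perceived bias.
for p0 in (0.55, 0.75, 0.95):        # initial belief that the election was stolen
    for acc in (0.9, 0.5, 0.1):      # how accuracy-motivated the authority seems
        for bias in (0.5, 0.9):      # perceived lean toward declaring "fair"
            belief = p0
            for _ in range(5):       # five successive debunking statements
                belief = update_belief(belief, acc, bias)
            print(f"p0={p0:.2f} acc={acc:.1f} bias={bias:.1f} -> belief={belief:.3f}")
```

Even this simplified version reproduces the qualitative pattern described above: an authority perceived as accuracy-motivated and unbiased moves beliefs sharply, while one perceived as purely biased barely moves them at all.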

Building consensus

In each scenario, the researchers used the model to predict how each group would respond to a series of five statements made by an authority trying to convince them that the election had been legitimate. The researchers found that in most of the scenarios they looked at, beliefs remained polarized and in some cases became even further polarized. This polarization could also extend to new topics unrelated to the original context of the election, the researchers found.

However, under some circumstances, the debunking was successful, and beliefs converged on an accepted outcome. This was more likely to happen when people were initially more uncertain about their original beliefs.

“When people are very, very certain, they become hard to move. So, in essence, a lot of this authority debunking doesn’t matter,” Landau-Wells says. “However, there are a lot of people who are in this uncertain band. They have doubts, but they don’t have firm beliefs. One of the lessons from this paper is that we’re in a space where the model says you can affect people’s beliefs and move them towards true things.”

Another factor that can lead to belief convergence is if people believe that the authority is unbiased and highly motivated by accuracy. Even more persuasive is when an authority makes a claim that goes against their perceived bias — for instance, Republican governors stating that elections in their states had been fair even though the Democratic candidate won.

As the 2024 presidential election approaches, grassroots efforts have been made to train nonpartisan election observers who can vouch for whether an election was legitimate. These types of organizations may be well-positioned to help sway people who might have doubts about the election’s legitimacy, the researchers say.

“They’re trying to train people to be independent, unbiased, and committed to the truth of the outcome more than anything else. Those are the types of entities that you want. We want them to succeed in being seen as independent. We want them to succeed in being seen as truthful, because in this space of uncertainty, those are the voices that can move people toward an accurate outcome,” Landau-Wells says.

The research was funded, in part, by the Patrick J. McGovern Foundation and the Guggenheim Foundation.

Polina Anikeeva named 2024 Blavatnik Award Finalist

The Blavatnik Family Foundation and the New York Academy of Sciences have announced the honorees of the 2024 Blavatnik National Awards, and McGovern Investigator Polina Anikeeva is among five finalists in the category of physical sciences and engineering.

Anikeeva, the Matoula S. Salapatas Professor in Materials Science and Engineering at MIT, works at the intersection of materials science, electronics, and neurobiology to improve our understanding of brain-body communication. She is head of MIT’s Materials Science and Engineering Department, and is also a professor of brain and cognitive sciences, director of the K. Lisa Yang Brain-Body Center, and associate director of the Research Laboratory of Electronics. Anikeeva’s lab has developed ultrathin, flexible fibers that probe the flow of information between the brain and peripheral organs in the body. Her ultimate goal is to develop novel technologies to achieve healthy minds in healthy bodies.

The Blavatnik National Awards for Young Scientists is the largest unrestricted scientific prize offered to America’s most promising, faculty-level scientific researchers under 42. The 2024 Blavatnik National Awards received 331 nominations from 172 institutions in 43 US states and selected three women scientists as laureates: Cigall Kadoch (Dana-Farber Cancer Institute), Markita del Carpio Landry (UC Berkeley), and Britney Schmidt (Cornell University). An additional 15 finalists, including two from MIT (Anikeeva and Yogesh Surendranath), will also receive monetary prizes.

“On behalf of the Blavatnik Family Foundation, I congratulate this year’s outstanding laureates and finalists for their exceptional research. They are among the preeminent leaders of the next generation of scientific innovation and discovery,” said Len Blavatnik, founder of Access Industries and the Blavatnik Family Foundation and a member of the President’s Council of The New York Academy of Sciences.

The Blavatnik National Awards for Young Scientists will celebrate the 2024 laureates and finalists in a gala ceremony on October 1, 2024, at the American Museum of Natural History in New York.

Harnessing the power of placebo for pain relief

Placebos are inert treatments, generally not expected to impact biological pathways or improve a person’s physical health. But time and again, some patients report that they feel better after taking a placebo. Increasingly, doctors and scientists are recognizing that rather than dismissing placebos as mere trickery, they may be able to help patients by harnessing their power.

To maximize the impact of the placebo effect and design reliable therapeutic strategies, researchers need a better understanding of how it works. Now, with a new animal model developed by scientists at the McGovern Institute, they will be able to investigate the neural circuits that underlie placebos’ ability to elicit pain relief.

“The brain and body interaction has a lot of potential, in a way that we don’t fully understand,” says McGovern investigator Fan Wang. “I really think there needs to be more of a push to understand placebo effect, in pain and probably in many other conditions. Now we have a strong model to probe the circuit mechanism.”

Context-dependent placebo effect

McGovern Investigator Fan Wang. Photo: Caitlin Cunningham

In the September 5, 2024, issue of the journal Current Biology, Wang and her team report that they have elicited strong placebo pain relief in mice by activating pain-suppressing neurons in the brain while the mice are in a specific environment—thereby teaching the animals that they feel better when they are in that context. Following this training, placing the mice in that environment alone is enough to suppress pain. The team’s experiments, which were funded by the National Institutes of Health, the K. Lisa Yang Brain-Body Center, and the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics within MIT’s Yang Tan Collective, show that this context-dependent placebo effect relieves both acute and chronic pain.

Context is critical for the placebo effect. While a pill can help a patient feel better when they expect it to, even if it is made only of sugar or starch, it seems to be not just the pill that sets up those expectations, but the entire scenario in which the pill is taken. For example, being in a hospital and interacting with doctors can contribute to a patient’s perception of care, and these social and environmental factors can make a placebo effect more probable.

Postdoctoral fellows Bin Chen and Nitsan Goldstein used visual and textural cues to define a specific place. Then they activated pain-suppressing neurons in the brain while the animals were in this “pain-relief box.” Those pain-suppressing neurons, which Wang’s lab discovered a few years ago, are located in an emotion-processing center of the brain called the central amygdala. By expressing light-sensitive channels in these neurons, the researchers were able to suppress pain with light in the pain-relief box and leave the neurons inactive when mice were in a control box.

Animals learned to prefer the pain-relief box to other environments. And when the researchers tested their response to potentially painful stimuli after they had made that association, they found the mice were less sensitive while they were there. “Just by being in the context that they had associated with pain suppression, we saw that reduced pain—even though we weren’t actually activating those [pain-suppressing] neurons,” Goldstein explains.

Acute and chronic pain relief

Some scientists have been able to elicit placebo pain relief in rodents by treating the animals with morphine, linking environmental cues to the pain suppression caused by the drug, similar to the way Wang’s team did by directly activating pain-suppressing neurons. This drug-based approach works best for setting up expectations of relief from acute pain; its placebo effect is short-lived and mostly ineffective against chronic pain. So Wang, Chen, and Goldstein were particularly pleased to find that their engineered placebo effect was effective for relieving both acute and chronic pain.

In their experiments, animals experiencing chemotherapy-induced hypersensitivity to touch showed as strong a preference for the pain-relief box as animals exposed to a chemical that induces acute pain, even days after their initial conditioning. Once there, their chemotherapy-induced pain sensitivity was eliminated: they exhibited no more sensitivity to painful stimuli than they had prior to receiving chemotherapy.

One of the biggest surprises came when the researchers turned their attention back to the pain-suppressing neurons in the central amygdala that they had used to trigger pain relief. They suspected that those neurons might be reactivated when mice returned to the pain-relief box. Instead, they found that after the initial conditioning period, those neurons remained quiet. “These neurons are not reactivated, yet the mice appear to be no longer in pain,” Wang says. “So it suggests this memory of feeling well is transferred somewhere else.”

Goldstein adds that there must be a pain-suppressing neural circuit somewhere that is activated by pain-relief-associated contexts—and the team’s new placebo model sets researchers up to investigate those pathways. A deeper understanding of that circuitry could enable clinicians to deploy the placebo effect—alone or in combination with active treatments—to better manage patients’ pain in the future.

Scientists find neurons that process language on different timescales

Using functional magnetic resonance imaging (fMRI), neuroscientists have identified several regions of the brain that are responsible for processing language. However, discovering the specific functions of neurons in those regions has proven difficult because fMRI, which measures changes in blood flow, doesn’t have high enough resolution to reveal what small populations of neurons are doing.

Now, using a more precise technique that involves recording electrical activity directly from the brain, MIT neuroscientists have identified different clusters of neurons that appear to process different amounts of linguistic context. These “temporal windows” range from just one word up to about six words.

The temporal windows may reflect different functions for each population, the researchers say. Populations with shorter windows may analyze the meanings of individual words, while those with longer windows may interpret more complex meanings created when words are strung together.

“This is the first time we see clear heterogeneity within the language network,” says Evelina Fedorenko, an associate professor of neuroscience at MIT. “Across dozens of fMRI experiments, these brain areas all seem to do the same thing, but it’s a large, distributed network, so there’s got to be some structure there. This is the first clear demonstration that there is structure, but the different neural populations are spatially interleaved so we can’t see these distinctions with fMRI.”

Fedorenko, who is also a member of MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears today in Nature Human Behaviour. MIT postdoc Tamar Regev and Harvard University graduate student Colton Casto are the lead authors of the paper.

Temporal windows

Functional MRI, which has helped scientists learn a great deal about the roles of different parts of the brain, works by measuring changes in blood flow in the brain. These measurements act as a proxy for neural activity during a particular task. However, each “voxel,” or three-dimensional chunk, of an fMRI image represents hundreds of thousands to millions of neurons and sums activity across about two seconds, so it can’t reveal fine-grained detail about what those neurons are doing.

One way to get more detailed information about neural function is to record electrical activity using electrodes implanted in the brain. These data are hard to come by because this procedure is done only in patients who are already undergoing surgery for a neurological condition such as severe epilepsy.

“It can take a few years to get enough data for a task because these patients are relatively rare, and in a given patient electrodes are implanted in idiosyncratic locations based on clinical needs, so it takes a while to assemble a dataset with sufficient coverage of some target part of the cortex. But these data, of course, are the best kind of data we can get from human brains: You know exactly where you are spatially and you have very fine-grained temporal information,” Fedorenko says.

In a 2016 study, Fedorenko reported using this approach to study the language processing regions of six people. Electrical activity was recorded while the participants read four different types of language stimuli: complete sentences, lists of words, lists of non-words, and “jabberwocky” sentences — sentences that have grammatical structure but are made of nonsense words.

Those data showed that in some neural populations in language-processing regions, activity would gradually build up over a period of several words as the participants read sentences. However, this did not happen when they read lists of words, lists of non-words, or jabberwocky sentences.

In the new study, Regev and Casto went back to those data and analyzed the temporal response profiles in greater detail. In their original dataset, they had recordings of electrical activity from 177 language-responsive electrodes across the six patients. Conservative estimates suggest that each electrode represents the averaged activity of about 200,000 neurons. They also obtained new data from a second set of 16 patients, which included recordings from another 362 language-responsive electrodes.

When the researchers analyzed these data, they found that in some of the neural populations, activity would fluctuate up and down with each word. In others, however, activity would build up over multiple words before falling again, and yet others would show a steady buildup of neural activity over longer spans of words.

By comparing their data with predictions made by a computational model that the researchers designed to process stimuli with different temporal windows, the researchers found that neural populations from language processing areas could be divided into three clusters. These clusters represent temporal windows of either one, four, or six words.
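As a rough illustration of that clustering logic, the sketch below (an assumed reconstruction in Python, not the study’s published analysis pipeline) predicts an electrode’s response by smoothing a word-level signal over candidate windows of one, four, or six words, then assigns the electrode to whichever window fits best.

```python
import numpy as np

def window_prediction(word_signal, k):
    """Predicted response of a population integrating the last k words,
    modeled here, for simplicity, as a causal moving average."""
    kernel = np.ones(k) / k
    return np.convolve(word_signal, kernel, mode="full")[: len(word_signal)]

def assign_cluster(response, word_signal, candidate_windows=(1, 4, 6)):
    """Assign an electrode to the window size whose prediction correlates
    best with its measured word-by-word response."""
    scores = {k: np.corrcoef(window_prediction(word_signal, k), response)[0, 1]
              for k in candidate_windows}
    return max(scores, key=scores.get)
```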

“It really looks like these neural populations integrate information across different timescales along the sentence,” Regev says.

Processing words and meaning

These differences in temporal window size would have been impossible to see using fMRI, the researchers say.

“At the resolution of fMRI, we don’t see much heterogeneity within language-responsive regions. If you localize in individual participants the voxels in their brain that are most responsive to language, you find that their responses to sentences, word lists, jabberwocky sentences and non-word lists are highly similar,” Casto says.

The researchers were also able to determine the anatomical locations where these clusters were found. Neural populations with the shortest temporal window were found predominantly in the posterior temporal lobe, though some were also found in the frontal or anterior temporal lobes. Neural populations from the two other clusters, with longer temporal windows, were spread more evenly throughout the temporal and frontal lobes.

Fedorenko’s lab now plans to study whether these timescales correspond to different functions. One possibility is that the shortest timescale populations may be processing the meanings of a single word, while those with longer timescales interpret the meanings represented by multiple words.

“We already know that in the language network, there is sensitivity to how words go together and to the meanings of individual words,” Regev says. “So that could potentially map to what we’re finding, where the longest timescale is sensitive to things like syntax or relationships between words, and maybe the shortest timescale is more sensitive to features of single words or parts of them.”

The research was funded by the Zuckerman-CHE STEM Leadership Program, the Poitras Center for Psychiatric Disorders Research, the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, the U.S. National Institutes of Health, an American Epilepsy Society Research and Training Fellowship, the McDonnell Center for Systems Neuroscience, Fondazione Neurone, the McGovern Institute, MIT’s Department of Brain and Cognitive Sciences, and the Simons Center for the Social Brain.

Researchers uncover new CRISPR-like system in animals that can edit the human genome

A team of researchers led by Feng Zhang at the McGovern Institute and the Broad Institute of MIT and Harvard has uncovered the first programmable RNA-guided system in eukaryotes — organisms that include fungi, plants, and animals.

In a study in Nature, the team describes how the system is based on a protein called Fanzor. They showed that Fanzor proteins use RNA as a guide to target DNA precisely, and that Fanzors can be reprogrammed to edit the genome of human cells. The compact Fanzor systems have the potential to be more easily delivered to cells and tissues as therapeutics than CRISPR/Cas systems, and further refinements to improve their targeting efficiency could make them a valuable new technology for human genome editing.

CRISPR/Cas was first discovered in prokaryotes (bacteria and other single-cell organisms that lack nuclei), and scientists, including those in Zhang’s lab, have long wondered whether similar systems exist in eukaryotes. The new study demonstrates that RNA-guided DNA-cutting mechanisms are present across all kingdoms of life.


“CRISPR-based systems are widely used and powerful because they can be easily reprogrammed to target different sites in the genome,” said Zhang, senior author on the study and a core institute member at the Broad, an investigator at MIT’s McGovern Institute, the James and Patricia Poitras Professor of Neuroscience at MIT, and a Howard Hughes Medical Institute investigator. “This new system is another way to make precise changes in human cells, complementing the genome editing tools we already have.”

Searching the domains of life

A major aim of the Zhang lab is to develop genetic medicines using systems that can modulate human cells by targeting specific genes and processes. “A number of years ago, we started to ask, ‘What is there beyond CRISPR, and are there other RNA-programmable systems out there in nature?’” said Zhang.

McGovern Investigator Feng Zhang in his lab.

Two years ago, Zhang lab members discovered a class of RNA-programmable systems in prokaryotes called OMEGAs, which are often linked with transposable elements, or “jumping genes,” in bacterial genomes and likely gave rise to CRISPR/Cas systems. That work also highlighted similarities between prokaryotic OMEGA systems and Fanzor proteins in eukaryotes, suggesting that the Fanzor enzymes might also use an RNA-guided mechanism to target and cut DNA.

In the new study, the researchers continued their exploration of RNA-guided systems by isolating Fanzors from fungi, algae, and amoeba species, in addition to a clam known as the northern quahog. Co-first author Makoto Saito of the Zhang lab led the biochemical characterization of the Fanzor proteins, showing that they are DNA-cutting endonuclease enzymes that use nearby non-coding RNAs known as ωRNAs to target particular sites in the genome. This is the first time such a mechanism has been found in eukaryotes.

Unlike CRISPR proteins, Fanzor enzymes are encoded in the eukaryotic genome within transposable elements, and the team’s phylogenetic analysis suggests that the Fanzor genes have migrated from bacteria to eukaryotes through so-called horizontal gene transfer.

“These OMEGA systems are more ancestral to CRISPR and they are among the most abundant proteins on the planet, so it makes sense that they have been able to hop back and forth between prokaryotes and eukaryotes,” said Saito.

To explore Fanzor’s potential as a genome editing tool, the researchers demonstrated that it can generate insertions and deletions at targeted genome sites within human cells. The researchers found the Fanzor system to initially be less efficient at snipping DNA than CRISPR/Cas systems, but through systematic engineering they introduced a combination of mutations into the protein that increased its activity 10-fold. Additionally, unlike some CRISPR systems and the OMEGA protein TnpB, the team found that a fungal-derived Fanzor protein did not exhibit “collateral activity,” in which an RNA-guided enzyme cleaves its DNA target and then indiscriminately degrades nearby DNA or RNA. The results suggest that Fanzors could potentially be developed as efficient genome editors.

Co-first author Peiyu Xu led an effort to analyze the molecular structure of the Fanzor/ωRNA complex and illustrate how it latches onto DNA to cut it. Fanzor shares structural similarities with its prokaryotic counterpart CRISPR-Cas12 protein, but the interaction between the ωRNA and the catalytic domains of Fanzor is more extensive, suggesting that the ωRNA might play a role in the catalytic reactions. “We are excited about these structural insights for helping us further engineer and optimize Fanzor for improved efficiency and precision as a genome editor,” said Xu.

Like CRISPR-based systems, the Fanzor system can be easily reprogrammed to target specific genome sites, and Zhang said it could one day be developed into a powerful new genome editing technology for research and therapeutic applications. The abundance of RNA-guided endonucleases like Fanzors further expands the number of OMEGA systems known across kingdoms of life and suggests that there are more yet to be found.

“Nature is amazing. There’s so much diversity,” said Zhang. “There are probably more RNA-programmable systems out there, and we’re continuing to explore and will hopefully discover more.”

The paper’s other authors include Guilhem Faure, Samantha Maguire, Soumya Kannan, Han Altae-Tran, Sam Vo, AnAn Desimone, and Rhiannon Macrae.

Support for this work was provided by the Howard Hughes Medical Institute; Poitras Center for Psychiatric Disorders Research at MIT; K. Lisa Yang and Hock E. Tan Molecular Therapeutics Center at MIT; Broad Institute Programmable Therapeutics Gift Donors; The Pershing Square Foundation, William Ackman, and Neri Oxman; James and Patricia Poitras; BT Charitable Foundation; Asness Family Foundation; Kenneth C. Griffin; the Phillips family; David Cheng; Robert Metcalfe; and Hugo Shong.


Unraveling connections between the brain and gut

The brain and the digestive tract are in constant communication, relaying signals that help to control feeding and other behaviors. This extensive communication network also influences our mental state and has been implicated in many neurological disorders.

MIT engineers have designed a new technology for probing those connections. Using fibers embedded with a variety of sensors, as well as light sources for optogenetic stimulation, the researchers have shown that they can control neural circuits connecting the gut and the brain in mice.

In a new study, the researchers demonstrated that they could induce feelings of fullness or reward-seeking behavior in mice by manipulating cells of the intestine. In future work, they hope to explore some of the correlations that have been observed between digestive health and neurological conditions such as autism and Parkinson’s disease.

“The exciting thing here is that we now have technology that can drive gut function and behaviors such as feeding. More importantly, we have the ability to start accessing the crosstalk between the gut and the brain with the millisecond precision of optogenetics, and we can do it in behaving animals,” says Polina Anikeeva, the Matoula S. Salapatas Professor in Materials Science and Engineering, a professor of brain and cognitive sciences, director of the K. Lisa Yang Brain-Body Center, associate director of MIT’s Research Laboratory of Electronics, and a member of MIT’s McGovern Institute for Brain Research.

McGovern Institute Associate Investigator Polina Anikeeva in her lab. Photo: Steph Stevens

Anikeeva is the senior author of the new study, which appears today in Nature Biotechnology. The paper’s lead authors are MIT graduate student Atharva Sahasrabudhe, Duke University postdoc Laura Rupprecht, MIT postdoc Sirma Orguc, and former MIT postdoc Tural Khudiyev.

The brain-body connection

Last year, the McGovern Institute launched the K. Lisa Yang Brain-Body Center to study the interplay between the brain and other organs of the body. Research at the center focuses on illuminating how these interactions help to shape behavior and overall health, with a goal of developing future therapies for a variety of diseases.

“There’s continuous, bidirectional crosstalk between the body and the brain,” Anikeeva says. “For a long time, we thought the brain is a tyrant that sends output into the organs and controls everything. But now we know there’s a lot of feedback back into the brain, and this feedback potentially controls some of the functions that we have previously attributed exclusively to the central neural control.”

As part of the center’s work, Anikeeva set out to probe the signals that pass between the brain and the nervous system of the gut, also called the enteric nervous system. Sensory cells in the gut influence hunger and satiety via both neuronal communication and hormone release.

Untangling those hormonal and neural effects has been difficult because there hasn’t been a good way to rapidly measure the neuronal signals, which occur within milliseconds.


“To be able to perform gut optogenetics and then measure the effects on brain function and behavior, which requires millisecond precision, we needed a device that didn’t exist. So, we decided to make it,” says Sahasrabudhe, who led the development of the gut and brain probes.

The electronic interface that the researchers designed consists of flexible fibers that can carry out a variety of functions and can be inserted into the organs of interest. To create the fibers, Sahasrabudhe used a technique called thermal drawing, which allowed him to create polymer filaments, about as thin as a human hair, that can be embedded with electrodes and temperature sensors.

The filaments also carry microscale light-emitting devices that can be used to optogenetically stimulate cells, and microfluidic channels that can be used to deliver drugs.

The mechanical properties of the fibers can be tailored for use in different parts of the body. For the brain, the researchers created stiffer fibers that could be threaded deep into the brain. For digestive organs such as the intestine, they designed more delicate rubbery fibers that do not damage the lining of the organs but are still sturdy enough to withstand the harsh environment of the digestive tract.

“To study the interaction between the brain and the body, it is necessary to develop technologies that can interface with organs of interest as well as the brain at the same time, while recording physiological signals with high signal-to-noise ratio,” Sahasrabudhe says. “We also need to be able to selectively stimulate different cell types in both organs in mice so that we can test their behaviors and perform causal analyses of these circuits.”

The fibers are also designed so that they can be controlled wirelessly, using an external control circuit that can be temporarily affixed to the animal during an experiment. This wireless control circuit was developed by Orguc, a Schmidt Science Fellow, and Harrison Allen ’20, MEng ’22, who were co-advised between the Anikeeva lab and the lab of Anantha Chandrakasan, dean of MIT’s School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

Driving behavior

Using this interface, the researchers performed a series of experiments to show that they could influence behavior through manipulation of the gut as well as the brain.

First, they used the fibers to deliver optogenetic stimulation to a part of the brain called the ventral tegmental area (VTA), which releases dopamine. They placed mice in a cage with three chambers, and when the mice entered one particular chamber, the researchers activated the dopamine neurons. The resulting dopamine burst made the mice more likely to return to that chamber in search of the dopamine reward.

Then, the researchers tried to see if they could also induce that reward-seeking behavior by influencing the gut. To do that, they used fibers in the gut to release sucrose, which also activated dopamine release in the brain and prompted the animals to seek out the chamber they were in when sucrose was delivered.

Next, working with colleagues from Duke University, the researchers found they could induce the same reward-seeking behavior by skipping the sucrose and optogenetically stimulating nerve endings in the gut that provide input to the vagus nerve, which controls digestion and other bodily functions.

Duke University postdoc Laura Rupprecht, MIT graduate student Atharva Sahasrabudhe, and MIT postdoc Sirma Orguc holding their engineered flexible fiber in Polina Anikeeva’s lab at MIT. Photo: Courtesy of the researchers

“Again, we got this place preference behavior that people have previously seen with stimulation in the brain, but now we are not touching the brain. We are just stimulating the gut, and we are observing control of central function from the periphery,” Anikeeva says.

Sahasrabudhe worked closely with Rupprecht, a postdoc in Professor Diego Bohorquez’s group at Duke, to test the fibers’ ability to control feeding behaviors. They found that the devices could optogenetically stimulate cells that produce cholecystokinin, a hormone that promotes satiety. When this hormone release was activated, the animals’ appetites were suppressed, even though they had been fasting for several hours. The researchers also demonstrated a similar effect when they stimulated cells that produce a peptide called PYY, which normally curbs appetite after very rich foods are consumed.

The researchers now plan to use this interface to study neurological conditions that are believed to have a gut-brain connection. For instance, studies have shown that autistic children are far more likely than their peers to be diagnosed with GI dysfunction, while anxiety and irritable bowel syndrome share genetic risks.

“We can now begin asking, are those coincidences, or is there a connection between the gut and the brain? And maybe there is an opportunity for us to tap into those gut-brain circuits to begin managing some of those conditions by manipulating the peripheral circuits in a way that does not directly ‘touch’ the brain and is less invasive,” Anikeeva says.

The research was funded, in part, by the Hock E. Tan and K. Lisa Yang Center for Autism Research and the K. Lisa Yang Brain-Body Center, the National Institute of Neurological Disorders and Stroke, the National Science Foundation (NSF) Center for Materials Science and Engineering, the NSF Center for Neurotechnology, the National Center for Complementary and Integrative Health, a National Institutes of Health Director’s Pioneer Award, the National Institute of Mental Health, and the National Institute of Diabetes and Digestive and Kidney Diseases.

Computational model mimics humans’ ability to predict emotions

When interacting with another person, you likely spend part of your time trying to anticipate how they will feel about what you’re saying or doing. This task requires a cognitive skill called theory of mind, which helps us to infer other people’s beliefs, desires, intentions, and emotions.

MIT neuroscientists have now designed a computational model that can predict other people’s emotions — including joy, gratitude, confusion, regret, and embarrassment — approximating human observers’ social intelligence. The model was designed to predict the emotions of people involved in a situation based on the prisoner’s dilemma, a classic game theory scenario in which two people must decide whether to cooperate with their partner or betray them.

To build the model, the researchers incorporated several factors that have been hypothesized to influence people’s emotional reactions, including a person’s desires, their expectations in a particular situation, and whether anyone was watching their actions.

“These are very common, basic intuitions, and what we said is, we can take that very basic grammar and make a model that will learn to predict emotions from those features,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Sean Dae Houlihan PhD ’22, a postdoc at the Neukom Institute for Computational Science at Dartmouth College, is the lead author of the paper, which appears today in Philosophical Transactions A. Other authors include Max Kleiman-Weiner PhD ’18, a postdoc at MIT and Harvard University; Luke Hewitt PhD ’22, a visiting scholar at Stanford University; and Joshua Tenenbaum, a professor of computational cognitive science at MIT and a member of the Center for Brains, Minds, and Machines and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

Predicting emotions

While a great deal of research has gone into training computer models to infer someone’s emotional state based on their facial expression, that is not the most important aspect of human emotional intelligence, Saxe says. Much more important is the ability to predict someone’s emotional response to events before they occur.

“The most important thing about what it is to understand other people’s emotions is to anticipate what other people will feel before the thing has happened,” she says. “If all of our emotional intelligence was reactive, that would be a catastrophe.”

To try to model how human observers make these predictions, the researchers used scenarios taken from a British game show called “Golden Balls.” On the show, contestants are paired up with a pot of $100,000 at stake. After negotiating with their partner, each contestant decides, secretly, whether to split the pot or try to steal it. If both decide to split, they each receive $50,000. If one splits and one steals, the stealer gets the entire pot. If both try to steal, no one gets anything.
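The payoff structure is simple enough to state exactly. This snippet (an illustrative encoding, not code from the study) captures the outcomes just described:

```python
POT = 100_000

def payoff(action_a, action_b):
    """Winnings for players A and B, each choosing 'split' or 'steal'."""
    if action_a == "split" and action_b == "split":
        return POT // 2, POT // 2   # both split: $50,000 each
    if action_a == "steal" and action_b == "split":
        return POT, 0               # the stealer takes the entire pot
    if action_a == "split" and action_b == "steal":
        return 0, POT
    return 0, 0                     # both steal: no one gets anything
```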

Depending on the outcome, contestants may experience a range of emotions — joy and relief if both contestants split, surprise and fury if one’s opponent steals the pot, and perhaps guilt mingled with excitement if one successfully steals.

To create a computational model that can predict these emotions, the researchers designed three separate modules. The first module is trained to infer a person’s preferences and beliefs based on their action, through a process called inverse planning.

“This is an idea that says if you see just a little bit of somebody’s behavior, you can probabilistically infer things about what they wanted and expected in that situation,” Saxe says.

Using this approach, the first module can predict contestants’ motivations based on their actions in the game. For example, if someone decides to split in an attempt to share the pot, it can be inferred that they also expected the other person to split. If someone decides to steal, they may have expected the other person to steal, and didn’t want to be cheated. Or, they may have expected the other person to split and decided to try to take advantage of them.
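A toy version of that inverse-planning step might look like the following sketch. The candidate mental states and their action probabilities are invented for illustration; the actual model infers much richer combinations of desires and expectations.

```python
# Toy inverse planning (an illustrative assumption, not the paper's model):
# each candidate mental state assigns a probability to choosing "split", and
# Bayes' rule turns one observed action into a posterior over those states.

MENTAL_STATES = {
    "cooperative, expects partner to split":  0.9,
    "fears being cheated, expects a steal":   0.2,
    "exploitative, expects partner to split": 0.1,
}

def infer_motivation(action, prior=None):
    """Posterior over candidate mental states given one observed action."""
    prior = prior or {s: 1 / len(MENTAL_STATES) for s in MENTAL_STATES}
    likelihood = {s: (p if action == "split" else 1.0 - p)
                  for s, p in MENTAL_STATES.items()}
    unnormalized = {s: likelihood[s] * prior[s] for s in MENTAL_STATES}
    total = sum(unnormalized.values())
    return {s: v / total for s, v in unnormalized.items()}

# Observing a steal shifts belief toward the fear and exploitation hypotheses.
print(infer_motivation("steal"))
```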

The model can also integrate knowledge about specific players, such as the contestant’s occupation, to help it infer the players’ most likely motivation.

The second module compares the outcome of the game with what each player wanted and expected to happen. Then, a third module predicts what emotions the contestants may be feeling, based on the outcome and what was known about their expectations. This third module was trained to predict emotions based on predictions from human observers about how contestants would feel after a particular outcome. The authors emphasize that this is a model of human social intelligence, designed to mimic how observers causally reason about each other’s emotions, not a model of how people actually feel.

“From the data, the model learns that what it means, for example, to feel a lot of joy in this situation, is to get what you wanted, to do it by being fair, and to do it without taking advantage,” Saxe says.

Core intuitions

Once the three modules were up and running, the researchers used them on a new dataset from the game show to determine how the model’s emotion predictions compared with predictions made by human observers. The model performed much better at that task than any previous model of emotion prediction.

The model’s success stems from its incorporation of key factors that the human brain also uses when predicting how someone else will react to a given situation, Saxe says. Those include computations of how a person will evaluate and emotionally react to a situation, based on their desires and expectations, which relate to not only material gain but also how they are viewed by others.

“Our model has those core intuitions, that the mental states underlying emotion are about what you wanted, what you expected, what happened, and who saw. And what people want is not just stuff. They don’t just want money; they want to be fair, but also not to be the sucker, not to be cheated,” she says.

“The researchers have helped build a deeper understanding of how emotions contribute to determining our actions; and then, by flipping their model around, they explain how we can use people’s actions to infer their underlying emotions. This line of work helps us see emotions not just as ‘feelings’ but as playing a crucial, and subtle, role in human social behavior,” says Nick Chater, a professor of behavioral science at the University of Warwick, who was not involved in the study.

In future work, the researchers hope to adapt the model so that it can perform more general predictions based on situations other than the game-show scenario used in this study. They are also working on creating models that can predict what happened in the game based solely on the expression on the faces of the contestants after the results were announced.

The research was funded by the McGovern Institute; the Paul E. and Lilah Newton Brain Science Award; the Center for Brains, Minds, and Machines; the MIT-IBM Watson AI Lab; and the Multidisciplinary University Research Initiative.

Bionics researchers develop technologies to ease pain and transcend human limitations

This story originally appeared in the Spring 2023 issue of Spectrum.

___

In early December 2022, a middle-aged woman from California arrived at Boston’s Brigham and Women’s Hospital for the amputation of her right leg below the knee following an accident. This was no ordinary procedure. At the end of her remaining leg, surgeons attached a titanium fixture through which they threaded eight thin, electrically conductive wires. These flexible leads, implanted on her leg muscles, would, in the coming months, connect to a robotic, battery-powered prosthetic ankle and foot.

The goal of this unprecedented surgery, driven by researchers from MIT’s K. Lisa Yang Center for Bionics, was the restoration of near-natural function to the patient, enabling her to sense and control the position and motion of her ankle and foot—even with her eyes closed.

In the K. Lisa Yang Center for Bionics, codirector Hugh Herr SM ’93 and graduate student Christopher Shallal are working to return mobility to people disabled by disease or physical trauma. Photo: Tony Luong

“The brain knows exactly how to control the limb, and it doesn’t matter whether it is flesh and bone or made of titanium, silicon, and carbon composite,” says Hugh Herr SM ’93, professor of media arts and sciences, head of the MIT Media Lab’s Biomechatronics Group, codirector of the Yang Center, and an associate member of MIT’s McGovern Institute for Brain Research.

For Herr, in attendance during that long day, the surgery represented a critical milestone in a decades-long mission to develop technologies returning mobility to people disabled by disease or physical trauma. His research combines a dizzying range of disciplines—electrical, mechanical, tissue, and biomedical engineering, as well as neuroscience and robotics—and has yielded pathbreaking results. Herr’s more than 100 patents include a computer-controlled knee and powered ankle-foot prosthesis and have enabled thousands of people around the world to live more on their own terms, including Herr.

Surmounting catastrophe

For much of Herr’s life, “go” meant “up.”

“Starting when I was eight, I developed an extraordinary passion, an absolute obsession, for climbing; it’s all I thought about in life,” says Herr. He aspired “to be the best climber in the world,” a goal he nearly achieved in his teenage years, enthralled by the “purity” of ascending mountains ropeless and solo in record times, by “a vertical dance, a balance between physicality and mind control.”

McGovern Institute Associate Investigator Hugh Herr. Photo: Jimmy Day / MIT Media Lab

At 17, Herr became disoriented while climbing New Hampshire’s Mt. Washington during a blizzard. Days in the cold permanently damaged his legs, which had to be amputated below his knees. His rescue cost another man’s life, and Herr was despondent, disappointed in himself, and fearful for his future.

Then, following months of rehabilitation, he felt compelled to test himself. His first weekend home, when he couldn’t walk without canes and crutches, he headed back to the mountains. “I hobbled to the base of this vertical cliff and started ascending,” he recalls. “It brought me joy to realize that I was still me, the same person.”

But he also recognized that as a person with amputated limbs, he faced severe disadvantages. “Society doesn’t look kindly on people with unusual bodies; we are viewed as crippled and weak, and that did not sit well with me.” Unable to tolerate both the new physical and social constraints on his life, Herr determined to view his disability not as a loss but as an opportunity. “I think the rage was the catapult that led me to do something that was without precedent,” he says.

Lifelike limb

On hand in the surgical theater in December was a member of Herr’s Biomechatronics Group for whom the bionic limb procedure also held special resonance. Christopher Shallal, a second-year graduate student in the Harvard-MIT Health Sciences and Technology program who received bilateral lower limb amputations at birth, worked alongside surgeon Matthew Carty testing the electric leads before implantation in the patient. Shallal found this, his first direct involvement with a reconstruction surgery, deeply fulfilling.

“Ever since I was a kid, I’ve wanted to do medicine plus engineering,” says Shallal. “I’m really excited to work on this bionic limb reconstruction, which will probably be one of the most advanced systems yet in terms of neural interfacing and control, with a far greater range of motion possible.”

Herr and Shallal are working on a next-generation, biomimetic limb with implanted sensors that can relay signals between the external prosthesis and muscles in the remaining limb. Photo: Tony Luong

Like other Herr lab designs, the new prosthesis features onboard, battery-powered propulsion, microprocessors, and tunable actuators. But this next-generation, biomimetic limb represents a major leap forward, replacing electrodes sited on a patient’s skin, subject to sweat and other environmental threats, with implanted sensors that can relay signals between the external prosthesis and muscles in the remaining limb.

This system takes advantage of a breakthrough technique invented several years ago by the Herr lab called CMI (for cutaneous mechanoneural interface), which constructs muscle-skin-nerve bundles at the amputation site. When a person with amputation touches an object with their prosthesis, muscle actuators controlled by computers on board the external prosthesis apply forces to skin cells implanted within the amputated residuum.

With CMI and electric leads connecting the prosthesis to these muscle actuators within the residual limb, the researchers hypothesize that a person with an amputation will be able to “feel” their prosthetic leg step onto the ground. This sensory capability is the holy grail for persons with major limb loss. After recovery from her surgery, the woman from California will be wearing Herr’s latest state-of-the-art prosthetic system in the lab.

‘Tinkering’ with the body

Not all artificial limbs emulate those that humans are born with. “You can make them however you want, swapping them in and out depending on what you want to do, and they can take you anywhere,” Herr says. Committed to extreme climbing even after his accident, Herr came up with special limbs that became a commercial hit early in his career. His designs made it possible for someone with amputated legs to run and dance.

But he also knew the day-to-day discomfort of navigating on flatter earth with most prostheses. He won his first patent during his senior year of college for a fluid-controlled socket attachment designed to reduce the pain of walking. Growing up in a Mennonite family skilled in handcrafting things they needed, and in a larger community that was disdainful of technology, Herr says he had “difficulty trusting machines.” Yet by the time he began his master’s program at MIT, intent on liberating persons with limb amputation to live more fully in the world, he had embraced the tools of science and engineering as the means to this end.


For Shallal, Herr was an early icon, and his inventions and climbing exploits served as inspiration. “I’d known about Hugh since middle school; he was famous among those with amputations,” he says. “As a kid, I liked tinkering with things, and I kind of saw my body as a canvas, a place where I could explore different boundaries and expand possibilities for myself and others with amputations.” In school, Shallal sometimes encountered resistance to his prostheses. “People would say I couldn’t do certain things, like running and playing different sports, and I found these barriers frustrating,” he says. “I did things in my own way and didn’t want people to pity me.”

In fact, Shallal felt he could do some things better than his peers. In high school, he used a 3-D printer to make a mobile phone charger case he could plug into his prosthesis. “As a kid, I would wear long pants to hide my legs, but as the technology got cooler, I started wearing shorts,” he says. “I got comfortable and liked kind of showing off my legs.”

Global impact

December’s surgery was the first phase in the bionic limb project. Shallal will be following up with the patient over many months, ensuring that the connections between her limb and implanted sensors function and provide appropriate sensorimotor data for the built-in processor. Research on this and other patients to determine the impact of these limbs on gait and ease of managing slopes, for instance, will form the basis for Shallal’s dissertation.

“After graduation, I’d be really interested in translating technology out of the lab, maybe doing a startup related to neural interfacing technology,” he says. “I watched Inspector Gadget on television when I was a kid. Making the tool you need at the time you need it to fix problems would be my dream.”

Herr will be overseeing Shallal’s work, as well as a suite of research efforts propelled by other graduate students, postdocs, and research scientists that together promise to strengthen the technology behind this generation of biomimetic prostheses.

One example: devising an innovative method for measuring muscle length and velocity with tiny implanted magnets. In work published in November 2022, researchers including Herr; project lead Cameron Taylor SM ’16, PhD ’20, a research associate in the Biomechatronics Group; and Brown University partners demonstrated that this new tool, magnetomicrometry, yields the kind of high-resolution data necessary for even more precise bionic limb control. The Herr lab awaits FDA approval on human implantation of the magnetic beads.

These intertwined initiatives are central to the ambitious mission of the K. Lisa Yang Center for Bionics, established with a $24 million gift from Yang in 2021 to tackle transformative bionic interventions to address an extensive range of human limitations.

Herr is committed to making the broadest possible impact with his technologies. “Shoes and braces hurt, so my group is developing the science of comfort—designing mechanical parts that attach to the body and transfer loads without causing pain.” These inventions may prove useful not just to people living with amputation but to patients suffering from arthritis or other diseases affecting muscles, joints, and bones, whether in lower limbs or arms and hands.

The Yang Center aims to make prosthetic and orthotic devices more accessible globally, so Herr’s group is ramping up services in Sierra Leone, where civil war left tens of thousands missing limbs after devastating machete attacks. “We’re educating clinicians, helping with supply chain infrastructure, introducing novel assistive technology, and developing mobile delivery platforms,” he says.

In the end, says Herr, “I want to be in the business of designing not more and more powerful tools but designing new bodies.” Herr uses himself as an example: “I walk on two very powerful robots, but they’re not linked to my skeleton, or to my brain, so when I walk it feels like I’m on powerful machines that are not me. What I want is such a marriage between human physiology and electromechanics that a person feels at one with the synthetic, designed content of their body.”

Mehrdad Jazayeri wants to know how our brains model the external world

Much of our daily life requires us to make inferences about the world around us. As you think about which direction your tennis opponent will hit the ball, or try to figure out why your child is crying, your brain is searching for answers about possibilities that are not directly accessible through sensory experiences.

MIT Associate Professor Mehrdad Jazayeri has devoted most of his career to exploring how the brain creates internal representations, or models, of the external world and uses them to make intelligent inferences about its hidden states.

“The one question I am most interested in is how does the brain form internal models of the external world? Studying inference is really a powerful way of gaining insight into these internal models,” says Jazayeri, who recently earned tenure in the Department of Brain and Cognitive Sciences and is also a member of MIT’s McGovern Institute for Brain Research.

Using a variety of approaches, including detailed analysis of behavior, direct recording of activity of neurons in the brain, and mathematical modeling, he has discovered how the brain builds models of statistical regularities in the environment. He has also found circuits and mechanisms that enable the brain to capture the causal relationships between observations and outcomes.

An unusual path

Jazayeri, who has been on the faculty at MIT since 2013, took an unusual path to a career in neuroscience. Growing up in Tehran, Iran, he was an indifferent student until his second year of high school, when he became interested in solving challenging geometry puzzles. He also started programming on the ZX Spectrum, an early 8-bit personal computer his father had given him.

During high school, he was chosen to train for Iran’s first-ever National Physics Olympiad team, but when he failed to make the international team, he became discouraged and temporarily gave up on the idea of going to college. Eventually, he took the University National Entrance Exam and was admitted to the electrical engineering department at Sharif University of Technology.

Jazayeri didn’t enjoy his four years of college education. The experience mostly helped him realize that he was not meant to become an engineer. “I realized that I’m not an inventor. What inspires me is the process of discovery,” he says. “I really like to figure things out, not build things, so those four years were not very inspiring.”

After graduating from college, Jazayeri spent a few years working on a banana farm near the Caspian Sea, along with two friends. He describes those years as among the best and most formative of his life. He would wake by 4 a.m., work on the farm until late afternoon, and spend the rest of the day thinking and reading. One topic he read about with great interest was neuroscience, which led him a few years later to apply to graduate school.

He immigrated to Canada and was admitted to the University of Toronto, where he earned a master’s degree in physiology and neuroscience. While there, he worked on building small circuit models that would mimic the activity of neurons in the hippocampus.

From there, Jazayeri went on to New York University to earn a PhD in neuroscience, where he studied how signals in the visual cortex support perception and decision-making. “I was less interested in how the visual cortex encodes the external world,” he says. “I wanted to understand how the rest of the brain decodes the signals in visual cortex, which is, in effect, an inference problem.”

He continued pursuing his interest in the neurobiology of inference as a postdoc at the University of Washington, where he investigated how the brain uses temporal regularities in the environment to estimate time intervals, and uses knowledge about those intervals to plan for future actions.

Building internal models to make inferences

Inference is the process of drawing conclusions based on information that is not readily available. Making rich inferences from scarce data is one of humans’ core mental capacities, one that is central to what makes us the most intelligent species on Earth. To do so, our nervous system builds internal models of the external world, and those models help us think through possibilities without directly experiencing them.

The problem of inference presents itself in many behavioral settings.

“Our nervous system makes all sorts of internal models for different behavioral goals, some that capture the statistical regularities in the environment, some that link potential causes to effects, some that reflect relationships between entities, and some that enable us to think about others,” Jazayeri says.

Jazayeri’s lab at MIT is made up of a group of cognitive scientists, electrophysiologists, engineers, and physicists with a shared interest in understanding the nature of internal models in the brain and how those models enable us to make inferences in different behavioral tasks.

Early work in the lab focused on a simple timing task to examine the problem of statistical inference, that is, how we use statistical regularities in the environment to make accurate inferences. First, they found that the brain coordinates movements in time using a dynamic process, akin to an analog timer. They also found that the neural representation of time in the frontal cortex is continuously calibrated based on prior experience, allowing us to make more accurate time estimates in the presence of uncertainty.
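
This kind of calibration can be pictured as a Bayesian update, in which a noisy internal timing measurement is pulled toward the statistics of previously experienced intervals. Below is a minimal sketch of that idea; the Gaussian prior, noise level, and interval values are illustrative assumptions, not parameters from the lab’s experiments.

```python
# Minimal sketch of Bayesian interval estimation. The prior statistics and
# noise level below are hypothetical, chosen only to illustrate the idea.

prior_mean, prior_sd = 800.0, 120.0   # ms; assumed distribution of past intervals
noise_sd = 100.0                      # ms; assumed internal measurement noise

def estimate_interval(measured_ms: float) -> float:
    """Conjugate Gaussian update: weight prior and measurement by precision.

    Noisier measurements are pulled more strongly toward the prior mean,
    which is the calibration-by-prior-experience effect described above.
    """
    w_prior = 1.0 / prior_sd ** 2
    w_meas = 1.0 / noise_sd ** 2
    return (w_prior * prior_mean + w_meas * measured_ms) / (w_prior + w_meas)

for m in (650, 800, 950):
    print(f"measured {m} ms -> estimate {estimate_interval(m):.0f} ms")
```

Note how estimates regress toward the mean of the prior: the farther a noisy measurement falls from typical experience, the more it is corrected.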

Later, the lab developed a complex decision-making task to examine the neural basis of causal inference, or the process of deducing a hidden cause based on its effects. In a paper that appeared in 2019, Jazayeri and his colleagues identified a hierarchical and distributed brain circuit in the frontal cortex that helps the brain to determine the most probable cause of failure within a hierarchy of decisions.
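
At its core, picking the most probable cause of a failure is a Bayes-rule computation. A toy version with two candidate causes is sketched below; the priors and likelihoods are invented for illustration and are not taken from the 2019 study.

```python
# Toy causal attribution in the spirit of the task described above: after an
# error, was the fault a wrong high-level choice or poor low-level execution?
# All numbers are made up for illustration.

priors = {"wrong_rule": 0.3, "bad_execution": 0.7}      # P(cause)
likelihood = {"wrong_rule": 0.9, "bad_execution": 0.4}  # P(error | cause)

evidence = sum(priors[c] * likelihood[c] for c in priors)
posterior = {c: priors[c] * likelihood[c] / evidence for c in priors}

print(posterior)  # the larger value marks the most probable cause of failure
```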

More recently, the lab has extended its investigation to other behavioral domains, including relational inference and social inference. Relational inference is about situating an ambiguous observation using relational memory. For example, coming out of a subway in a new neighborhood, we may use our knowledge of the relationship between visible landmarks to infer which way is north. Social inference, which is extremely difficult to study, involves deducing other people’s beliefs and goals based on their actions.

Along with studies in human volunteers and animal models, Jazayeri’s lab develops computational models based on neural networks, which help them test different hypotheses about how the brain performs specific tasks. By comparing the activity of those models with neural activity data from animals, the researchers can gain insight into how the brain actually performs a particular type of inference task.
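
One common way to make such a comparison, sketched here generically rather than as the lab’s actual pipeline, is to fit a linear readout from model units to recorded neurons and score it on held-out trials. All data, shapes, and names below are simulated stand-ins.

```python
import numpy as np

# Generic sketch of a model-vs-brain comparison: fit a linear map from
# model-unit activity to recorded firing rates, then evaluate on held-out
# trials. All data here are simulated; shapes are illustrative.

rng = np.random.default_rng(0)
model_act = rng.standard_normal((200, 50))           # trials x model units
true_map = rng.standard_normal((50, 30))             # hidden linear relation
neural_act = model_act @ true_map + 0.5 * rng.standard_normal((200, 30))

train, test = slice(0, 150), slice(150, 200)
W, *_ = np.linalg.lstsq(model_act[train], neural_act[train], rcond=None)
pred = model_act[test] @ W

# Per-neuron correlation between predicted and held-out responses
r = [np.corrcoef(pred[:, i], neural_act[test, i])[0, 1] for i in range(30)]
print(f"median held-out correlation: {np.median(r):.2f}")
```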

“My main interest is in how the brain makes inferences about the world based on the neural signals,” Jazayeri says. “All of my work is about looking inside the brain, measuring signals, and using mathematical tools to try to understand how those signals are manifestations of an internal model within the brain.”

A hunger for social contact

Since the coronavirus pandemic began in the spring, many people have only seen their close friends and loved ones during video calls, if at all. A new study from MIT finds that the longings we feel during this kind of social isolation share a neural basis with the food cravings we feel when hungry.

The researchers found that after one day of total isolation, the sight of people having fun together activates the same brain region that lights up when someone who hasn’t eaten all day sees a picture of a plate of cheesy pasta.

“Our finding fits the intuitive idea that positive social interactions are a basic human need, and acute loneliness is an aversive state that motivates people to repair what is lacking, similar to hunger,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

The research team collected the data for this study in 2018 and 2019, long before the coronavirus pandemic and resulting lockdowns. Their new findings, described today in Nature Neuroscience, are part of a larger research program focusing on how social stress affects people’s behavior and motivation.

Former MIT postdoc Livia Tomova, who is now a research associate at Cambridge University, is the lead author of the paper. Other authors include Kimberly Wang, a McGovern Institute research associate; Todd Thompson, a McGovern Institute scientist; Atsushi Takahashi, assistant director of the Martinos Imaging Center; Gillian Matthews, a research scientist at the Salk Institute for Biological Studies; and Kay Tye, a professor at the Salk Institute.

Social craving

The new study was partly inspired by a recent paper from Tye, a former member of MIT’s Picower Institute for Learning and Memory. In that 2016 study, she and Matthews, then an MIT postdoc, identified a cluster of neurons in the brains of mice that represent feelings of loneliness and generate a drive for social interaction following isolation. Studies in humans have shown that being deprived of social contact can lead to emotional distress, but the neurological basis of these feelings is not well understood.

“We wanted to see if we could experimentally induce a certain kind of social stress, where we would have control over what the social stress was,” Saxe says. “It’s a stronger intervention of social isolation than anyone had tried before.”

To create that isolation environment, the researchers enlisted healthy volunteers, who were mainly college students, and confined them to a windowless room on MIT’s campus for 10 hours. They were not allowed to use their phones, but the room did have a computer that they could use to contact the researchers if necessary.

“There were a whole bunch of interventions we used to make sure that it would really feel strange and different and isolated,” Saxe says. “They had to let us know when they were going to the bathroom so we could make sure it was empty. We delivered food to the door and then texted them when it was there so they could go get it. They really were not allowed to see people.”

After the 10-hour isolation ended, each participant was scanned in an MRI machine. This posed additional challenges, as the researchers wanted to avoid any social contact during the scanning. Before the isolation period began, each subject was trained on how to get into the machine, so that they could do it by themselves, without any help from the researcher.

“Normally, getting somebody into an MRI machine is actually a really social process. We engage in all kinds of social interactions to make sure people understand what we’re asking them, that they feel safe, that they know we’re there,” Saxe says. “In this case, the subjects had to do it all by themselves, while the researcher, who was gowned and masked, just stood silently by and watched.”

Each of the 40 participants also underwent 10 hours of fasting, on a different day. After the 10-hour period of isolation or fasting, the participants were scanned while looking at images of food, images of people interacting, and neutral images such as flowers. The researchers focused on a part of the brain called the substantia nigra, a tiny structure located in the midbrain, which has previously been linked with hunger cravings and drug cravings. The substantia nigra is also believed to share evolutionary origins with a brain region in mice called the dorsal raphe nucleus, which is the area that Tye’s lab showed was active following social isolation in their 2016 study.

The researchers hypothesized that when socially isolated subjects saw photos of people enjoying social interactions, the “craving signal” in their substantia nigra would be similar to the signal produced when they saw pictures of food after fasting. This was indeed the case. Furthermore, the amount of activation in the substantia nigra was correlated with how strongly the participants rated their feelings of craving either food or social interaction.
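
As a rough illustration of that kind of brain-behavior correlation, one can correlate each participant’s region-of-interest response with their self-reported craving. The sketch below uses simulated numbers, not the study’s data; the variable names and effect size are hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr

# Simulated stand-in for the analysis described above: per-participant
# substantia nigra responses vs. self-reported craving ratings.

rng = np.random.default_rng(1)
craving = rng.uniform(1, 7, size=40)                  # hypothetical ratings
sn_response = 0.3 * craving + rng.normal(0, 0.5, 40)  # hypothetical activation

r, p = pearsonr(sn_response, craving)
print(f"r = {r:.2f}, p = {p:.3g}")
```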

Degrees of loneliness

The researchers also found that people’s responses to isolation varied depending on their normal levels of loneliness. People who reported feeling chronically isolated months before the study was done showed weaker cravings for social interaction after the 10-hour isolation period than people who reported a richer social life.

“For people who reported that their lives were really full of satisfying social interactions, this intervention had a bigger effect on their brains and on their self-reports,” Saxe says.

The researchers also looked at activation patterns in other parts of the brain, including the striatum and the cortex, and found that hunger and isolation each activated distinct areas of those regions. That suggests that those areas are more specialized to respond to different types of longings, while the substantia nigra produces a more general signal representing a variety of cravings.

Now that the researchers have established that they can observe the effects of social isolation on brain activity, Saxe says, they can try to answer many additional questions. Those questions include how social isolation affects people’s behavior, whether virtual social contacts such as video calls help to alleviate cravings for social interaction, and how isolation affects different age groups.

The researchers also hope to study whether the brain responses that they saw in this study could be used to predict how the same participants responded to being isolated during the lockdowns imposed during the early stages of the coronavirus pandemic.

The research was funded by a SFARI Explorer Grant from the Simons Foundation, a MINT grant from the McGovern Institute, the National Institutes of Health, including an NIH Pioneer Award, a Max Kade Foundation Fellowship, and an Erwin Schroedinger Fellowship from the Austrian Science Fund.