New MIT initiative seeks to transform rare brain disorders research

More than 300 million people worldwide are living with rare disorders — many of which have a genetic cause and affect the brain and nervous system — yet the vast majority of these conditions lack an approved therapy. Because each rare disorder affects fewer than 65 out of every 100,000 people, studying these disorders and creating new treatments for them is especially challenging.

Thanks to a generous philanthropic gift from Ana Méndez ’91 and Rajeev Jayavant ’86, EE ’88, SM ’88, MIT is now poised to fill the gaps in this research landscape. By establishing the Rare Brain Disorders Nexus — or RareNet — at MIT’s McGovern Institute, the alumni aim to convene leaders in neuroscience research, clinical medicine, patient advocacy, and industry to streamline the lab-to-clinic pipeline for rare brain disorder treatments.

“Ana and Rajeev’s commitment to MIT will form crucial partnerships to propel the translation of scientific discoveries into promising therapeutics and expand the Institute’s impact on the rare brain disorders community,” says MIT President Sally Kornbluth. “We are deeply grateful for their pivotal role in advancing such critical science and bringing attention to conditions that have long been overlooked.”

Building new coalitions

Several hurdles have slowed the lab-to-clinic pipeline for rare brain disorder research. It is difficult to secure a sufficient number of patients per study, and current research efforts are fragmented since each study typically focuses on a single disorder (there are more than 7,000 known rare disorders, according to the World Health Organization). Pharmaceutical companies are often reluctant to invest in emerging treatments due to a limited market size and the high costs associated with preparing drugs for commercialization.

Méndez and Jayavant envision that RareNet will finally break down these barriers. “Our hope is that RareNet will allow leaders in the field to come together under a shared framework and ignite scientific breakthroughs across multiple conditions. A discovery for one rare brain disorder could unlock new insights that are relevant to another,” says Jayavant. “By congregating the best minds in the field, we are confident that MIT will create the right scientific climate to produce drug candidates that may benefit a spectrum of uncommon conditions.”

Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor in Neuroscience and associate director of the McGovern Institute for Brain Research at MIT, will serve as RareNet’s inaugural faculty director. Feng holds a strong record of advancing studies on therapies for neurodevelopmental disorders, including autism spectrum disorders, Williams syndrome, and uncommon forms of epilepsy. His team’s gene therapy for Phelan-McDermid syndrome, a rare and profound autism spectrum disorder, has been licensed to Jaguar Gene Therapy and is currently undergoing clinical trials. “RareNet pioneers a unique model for biomedical research — one that is reimagining the role academia can play in developing therapeutics,” says Feng.

An early version of a gene therapy for SHANK3 mutations — linked to a rare brain disorder called Phelan-McDermid syndrome — correctly finds its way to neurons. Image: Feng lab

RareNet plans to deploy two major initiatives: a global consortium and a therapeutic pipeline accelerator. The consortium will form an international network of researchers, clinicians, and patient groups from the outset. It seeks to connect siloed research efforts, secure more patient samples, promote data sharing, and drive a strong sense of trust and goal alignment across the RareNet community. Partnerships within the consortium will support the aim of the therapeutic pipeline accelerator: to de-risk early lab discoveries and expedite their translation to the clinic. By fostering more targeted collaborations — especially between academia and industry — the accelerator will prepare potential treatments for clinical use as efficiently as possible.

MIT labs are focusing on four uncommon conditions in the first wave of RareNet projects: Rett syndrome, prion disease, disorders linked to SYNGAP1 mutations, and Sturge-Weber syndrome. The teams are working to develop novel therapies that can slow, halt, or reverse dysfunctions in the brain and nervous system.

These efforts will build new bridges to connect key stakeholders across the rare brain disorders community and disrupt conventional research approaches. “Rajeev and I are motivated to seed powerful collaborations between MIT researchers, clinicians, patients, and industry,” says Méndez. “Guoping Feng clearly understands our goal to create an environment where foundational studies can thrive and seamlessly move toward clinical impact.”

“Patient and caregiver experiences, and our foreseeable impact on their lives, will guide us and remain at the forefront of our work,” Feng adds. “For far too long has the rare brain disorders community been deprived of life-changing treatments — and, importantly, hope. RareNet gives us the opportunity to transform how we study these conditions and to do so at a moment when it’s needed more than ever.”


MIT cognitive scientists reveal why some sentences stand out from others

“You still had to prove yourself.”

“Every cloud has a blue lining!”

Which of those sentences are you most likely to remember a few minutes from now? If you guessed the second, you’re probably correct.

According to a new study from MIT cognitive scientists, sentences that stick in your mind longer are those that have distinctive meanings, making them stand out from sentences you’ve previously seen. They found that meaning, not any other trait, is the most important feature when it comes to memorability.

Greta Tuckute, a former graduate student in the Fedorenko lab. Photo: Caitlin Cunningham

“One might have thought that when you remember sentences, maybe it’s all about the visual features of the sentence, but we found that that was not the case. A big contribution of this paper is pinning down that it is the meaning-related space that makes sentences memorable,” says Greta Tuckute PhD ’25, who is now a research fellow at Harvard University’s Kempner Institute.

The findings support the hypothesis that sentences with distinctive meanings — like “Does olive oil work for tanning?” — are stored in a region of the brain’s memory space that is not cluttered with sentences that mean almost the same thing. Sentences with similar meanings end up densely packed together and are therefore more difficult to recognize confidently later on, the researchers believe.

“When you encode sentences that have a similar meaning, there’s feature overlap in that space. Therefore, a particular sentence you’ve encoded is not linked to a unique set of features, but rather to a whole bunch of features that may overlap with other sentences,” says Evelina Fedorenko, an MIT associate professor of brain and cognitive sciences (BCS), a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Tuckute and Thomas Clark, an MIT graduate student, are the lead authors of the paper, which appears in the Journal of Memory and Language. MIT graduate student Bryan Medina is also an author.

Distinctive sentences

What makes certain things more memorable than others is a longstanding question in cognitive science and neuroscience. In a 2011 study, Aude Oliva, now a senior research scientist at MIT and MIT director of the MIT-IBM Watson AI Lab, showed that not all items are created equal: Some types of images are much easier to remember than others, and people are remarkably consistent in what images they remember best.

In that study, Oliva and her colleagues found that, in general, images with people in them are the most memorable, followed by images of human-scale space and close-ups of objects. Least memorable are natural landscapes.

As a follow-up to that study, Fedorenko and Oliva, along with Ted Gibson, another faculty member in BCS, teamed up to determine if words also vary in their memorability. In a study published earlier this year, co-led by Tuckute and Kyle Mahowald, a former PhD student in BCS, the researchers found that the most memorable words are those that have the most distinctive meanings.

Words are categorized as being more distinctive if they have a single meaning and few or no synonyms — for example, words like “pineapple” or “avalanche,” which were found to be very memorable. On the other hand, words that can have multiple meanings, such as “light,” or words that have many synonyms, like “happy,” were more difficult for people to recognize accurately.

In the new study, the researchers expanded their scope to analyze the memorability of sentences. Just like words, some sentences have very distinctive meanings, while others communicate similar information in slightly different ways.

To do the study, the researchers assembled a collection of 2,500 sentences drawn from publicly available databases that compile text from novels, news articles, movie dialogues, and other sources. Each sentence that they chose contained exactly six words.

The researchers then presented a random selection of about 1,000 of these sentences to each study participant, including repeats of some sentences. Each of the 500 participants in the study was asked to press a button when they saw a sentence that they remembered seeing earlier.

The most memorable sentences — the ones where participants accurately and quickly indicated that they had seen them before — included strings such as “Homer Simpson is hungry, very hungry,” and “These mosquitoes are — well, guinea pigs.”

Those memorable sentences overlapped significantly with the sentences judged to have the most distinctive meanings, as estimated using the high-dimensional vector space of a large language model (LLM) known as Sentence BERT. That model generates a representation of each sentence’s overall meaning, which can be used for tasks like judging how similar two sentences are in meaning. It provided the researchers with a distinctness score for each sentence, based on its semantic similarity to the other sentences in the set.
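The distinctness analysis can be pictured with a short sketch. The snippet below is a minimal illustration rather than the authors’ analysis code: it assumes the sentence-transformers Python package and a generic Sentence-BERT-style checkpoint (the model name is a placeholder), and it defines distinctness simply as one minus a sentence’s average cosine similarity to every other sentence in the pool.

```python
# Toy illustration of a meaning-based "distinctness" score for sentences.
# Assumes the sentence-transformers package; the checkpoint name and the
# exact definition of distinctness are placeholders, not the study's choices.
import numpy as np
from sentence_transformers import SentenceTransformer

sentences = [
    "You still had to prove yourself.",
    "Every cloud has a blue lining!",
    "Does olive oil work for tanning?",
    "Homer Simpson is hungry, very hungry.",
    "These mosquitoes are - well, guinea pigs.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder checkpoint
embeddings = model.encode(sentences, normalize_embeddings=True)

# Cosine similarity between every pair of (unit-norm) sentence vectors.
similarity = embeddings @ embeddings.T

# A sentence is "distinct" if it is, on average, dissimilar to the rest.
n = len(sentences)
avg_similarity = (similarity.sum(axis=1) - 1.0) / (n - 1)  # drop self-similarity
distinctness = 1.0 - avg_similarity

for s, d in sorted(zip(sentences, distinctness), key=lambda x: -x[1]):
    print(f"{d:.3f}  {s}")
```

In the study’s framing, sentences near the top of such a ranking would be the ones recognized most accurately and quickly on a second viewing.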

The researchers also evaluated the sentences using a model that predicts memorability based on the average memorability of the individual words in the sentence. This model performed fairly well at predicting overall sentence memorability, but not as well as Sentence BERT. This suggests that the meaning of a sentence as a whole — above and beyond the contributions from individual words — determines how memorable it will be, the researchers say.

Noisy memories

While cognitive scientists have long hypothesized that the brain’s memory banks have a limited capacity, the findings of the new study support an alternative hypothesis that would help to explain how the brain can continue forming new memories without losing old ones.

This alternative, known as the noisy representation hypothesis, says that when the brain encodes a new memory, be it an image, a word, or a sentence, it is represented in a noisy way — that is, this representation is not identical to the stimulus, and some information is lost. For example, for an image, you may not encode the exact viewing angle at which an object is shown, and for a sentence, you may not remember the exact construction used.

Under this theory, a new sentence would be encoded in a similar part of the memory space as sentences that carry a similar meaning, whether they were encountered recently or sometime across a lifetime of language experience. This jumbling of similar meanings together increases the amount of noise and can make it much harder, later on, to remember the exact sentence you have seen before.

“The representation is gradually going to accumulate some noise. As a result, when you see an image or a sentence for a second time, your accuracy at judging whether you’ve seen it before will be affected, and it’ll be less than 100 percent in most cases,” Clark says.

However, if a sentence has a unique meaning that is encoded in a less densely crowded space, it will be easier to pick out later on.

“Your memory may still be noisy, but your ability to make judgments based on the representations is less affected by that noise because the representation is so distinctive to begin with,” Clark says.
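To make the crowding argument concrete, here is a toy simulation, not a model from the paper: it scatters items in an arbitrary two-dimensional “meaning space,” adds encoding noise, and asks how often a noisy memory trace is still closest to the item that produced it. The layout, dimensionality, and noise level are all invented for illustration.

```python
# Toy illustration of the noisy representation hypothesis: items encoded
# in crowded regions of a meaning space are harder to recognize later.
# The 2-D space, cluster layout, and noise level are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)

# 50 items packed into a tight cluster (similar meanings) and
# 50 items spread widely (distinctive meanings).
crowded = rng.normal(loc=0.0, scale=0.3, size=(50, 2))
distinct = rng.uniform(low=-5.0, high=5.0, size=(50, 2))
items = np.vstack([crowded, distinct])

def recognition_accuracy(items, noise_sd=0.4, trials=200):
    """Fraction of trials in which a noisy memory trace of each item is
    still nearer to that item than to any other stored item."""
    hits = np.zeros(len(items))
    for _ in range(trials):
        traces = items + rng.normal(scale=noise_sd, size=items.shape)
        # Distance from every noisy trace to every stored item.
        d = np.linalg.norm(traces[:, None, :] - items[None, :, :], axis=-1)
        hits += (d.argmin(axis=1) == np.arange(len(items)))
    return hits / trials

acc = recognition_accuracy(items)
print(f"crowded items:     mean accuracy {acc[:50].mean():.2f}")
print(f"distinctive items: mean accuracy {acc[50:].mean():.2f}")
```

In this toy setup, items from the tightly packed cluster are misattributed to their neighbors far more often than the widely spaced ones, which is the intuition behind why distinctive sentences survive noisy encoding.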

The researchers now plan to study whether other features of sentences, such as more vivid and descriptive language, might also contribute to making them more memorable, and how the language system may interact with the hippocampal memory structures during the encoding and retrieval of memories.

The research was funded, in part, by the National Institutes of Health, the McGovern Institute, the Department of Brain and Cognitive Sciences, the Simons Center for the Social Brain, and the MIT Quest Initiative for Intelligence.

International neuroscience collaboration unveils comprehensive cellular-resolution map of brain activity

The first comprehensive map of mouse brain activity has been unveiled by a large international collaboration of neuroscientists. Researchers from the International Brain Laboratory (IBL), including McGovern Investigator Ila Fiete, published their findings today in two papers in Nature, revealing insights into how decision-making unfolds across the entire brain in mice at single-cell resolution. This brain-wide activity map challenges the traditional hierarchical view of information processing in the brain and shows that decision-making is distributed across many regions in a highly coordinated way.

“This is the first time anyone has produced a full, brain-wide map of the activity of single neurons during decision-making,” explains IBL co-founder Alexandre Pouget. “The scale is unprecedented as we recorded from over half a million neurons across mice in 12 labs, covering 279 brain areas, which together represent 95% of the mouse brain volume. The decision-making activity, and particularly reward, lit up the brain like a Christmas tree,” adds Pouget, who is also a group leader at the University of Geneva.

Brain-wide map showing 75,000 analyzed neurons lighting up during different stages of decision-making. At the beginning of the trial, the activity is quiet. Then it builds up in the visual areas at the back of the brain, followed by a rise in activity spreading across the brain as evidence accumulates towards a decision. Next, motor areas light up as there is movement onset and finally there is a spike in activity everywhere in the brain as the animal is rewarded.

Modeling decision-making

The brain map was made possible by a major international collaboration of neuroscientists from multiple universities, including MIT. Researchers across 12 labs used state-of-the-art silicon electrodes, called Neuropixels probes, to make simultaneous neural recordings of brain activity while mice carried out a decision-making task.

McGovern Associate Investigator Ila Fiete. Photo: Caitlin Cunningham

“Participating in the International Brain Laboratory has added new ways for our group to contribute to science,” says Fiete, who is also a professor of brain and cognitive sciences and director of the K. Lisa Yang ICoN Center at MIT. “Our lab has helped standardize methods to analyze and generate robust conclusions from data. As computational neuroscientists interested in building models of how the brain works, access to brainwide recordings is incredible: the traditional approach of recording from one or a few brain areas limited our ability to build and test theories, resulting in fragmented models. Now we have the delightful but formidable task to make sense of how all parts of the brain coordinate to perform a behavior. Surprisingly, having a full view of the brain leads to simplifications in the models of decision making.”

The labs collected data from mice performing a decision-making task with sensory, motor, and cognitive components. In the task, a mouse sits in front of a screen and a light appears on the left or right side. If the mouse then responds by moving a small wheel in the correct direction, it receives a reward.

In some trials, the light is so faint that the animal must guess which way to turn the wheel, for which it can use prior knowledge: the light tends to appear more frequently on one side for a number of trials, before the high-frequency side switches. Well-trained mice learn to use this information to help them make correct guesses. These challenging trials therefore allowed the researchers to study how prior expectations influence perception and decision-making.
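To see why the biased blocks matter, consider a toy ideal-observer calculation; this is purely illustrative and not the IBL analysis. It assumes the stimulus appears on the left with probability 0.8 within a block, treats the animal’s sensory evidence as a noisy number whose sign reflects the true side, and combines the two with Bayes’ rule: with a strong stimulus the evidence dominates, and at zero contrast the decision falls back on the prior.

```python
# Toy Bayesian observer for a biased-block detection task.
# The block bias (0.8), noise level, and contrast values are illustrative,
# not the parameters of the IBL experiment.
import numpy as np

P_LEFT = 0.8       # prior probability that the stimulus is on the left in this block
NOISE_SD = 1.0     # standard deviation of the internal evidence

def posterior_left(evidence, contrast):
    """P(stimulus on left | noisy evidence), where evidence ~ N(+contrast, sd)
    if the stimulus is on the left and N(-contrast, sd) if it is on the right."""
    def likelihood(mean):
        return np.exp(-0.5 * ((evidence - mean) / NOISE_SD) ** 2)
    num = likelihood(+contrast) * P_LEFT
    den = num + likelihood(-contrast) * (1.0 - P_LEFT)
    return num / den

for contrast in (2.0, 0.5, 0.0):
    # Simulate one trial with the stimulus actually on the left.
    rng = np.random.default_rng(1)
    evidence = rng.normal(loc=+contrast, scale=NOISE_SD)
    p = posterior_left(evidence, contrast)
    choice = "left" if p > 0.5 else "right"
    print(f"contrast={contrast:3.1f}  P(left|evidence)={p:.2f}  choose {choice}")
```

At zero contrast the likelihoods for the two sides are identical, so the posterior equals the prior and the simulated observer simply guesses the high-frequency side, much as well-trained mice do on the faintest trials.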

Brain-wide results

The first paper, “A brain-wide map of neural activity during complex behaviour,” showed that decision-making signals are surprisingly distributed across the brain, not localized to specific regions. This adds brain-wide evidence to a growing number of studies that challenge the traditional hierarchical model of brain function, and emphasizes that there is constant communication across brain areas during decision-making, movement onset, and even reward. This means that neuroscientists will need to take a more holistic, brain-wide approach when studying complex behaviors in the future.

Flat maps of the mouse brain showing which areas have significant changes in activity during each of three task intervals. Credit: Michael Schartner & International Brain Laboratory

“The unprecedented breadth of our recordings pulls back the curtain on how the entire brain performs the whole arc of sensory processing, cognitive decision-making, and movement generation,” says Fiete. “Structuring a collaboration that collects a large standardized dataset which single labs could not assemble is a revolutionary new direction for systems neuroscience, initiating the field into the hyper-collaborative mode that has contributed to leaps forward in particle physics and human genetics. Beyond our own conclusions, the dataset and associated technologies, which were released much earlier as part of the IBL mission, have already become a massively used resource for the entire neuroscience community.”

The second paper, “Brain-wide representations of prior information,” showed that prior expectations, our beliefs about what is likely to happen based on our recent experience, are encoded throughout the brain. Surprisingly, these expectations are found not only in cognitive areas, but also in brain areas that process sensory information and control actions. For example, expectations are even encoded in early sensory areas such as the thalamus, the brain’s first relay for visual input from the eye. This supports the view that the brain acts as a prediction machine, with expectations encoded across multiple brain structures playing a central role in guiding behavioral responses. These findings could have implications for understanding conditions such as schizophrenia and autism, which are thought to be caused by differences in the way expectations are updated in the brain.

“Much remains to be unpacked: if it is possible to find a signal in a brain area, does it mean that this area is generating the signal, or simply reflecting a signal generated somewhere else? How strongly is our perception of the world shaped by our expectations? Now we can generate some quantitative answers and begin the next phase of experiments to learn about the origins of the expectation signals by intervening to modulate their activity,” says Fiete.

Looking ahead, the team at IBL plan to expand beyond their initial focus on decision-making to explore a broader range of neuroscience questions. With renewed funding in hand, IBL aims to expand its research scope and continue to support large-scale, standardized experiments.

New model of collaborative neuroscience

Officially launched in 2017, IBL introduced a new model of collaboration in neuroscience that uses a standardized set of tools and data processing pipelines shared across multiple labs, enabling the collection of massive datasets while ensuring data alignment and reproducibility. This approach to democratize and accelerate science draws inspiration from large-scale collaborations in physics and biology, such as CERN and the Human Genome Project.

All data from these studies, along with detailed specifications of the tools and protocols used for data collection, are openly accessible to the global scientific community for further analysis and research. Summaries of these resources can be viewed and downloaded on the IBL website under the sections: Data, Tools, Protocols.

This research was supported by grants from Wellcome (209558 and 216324), the Simons Foundation, the National Institutes of Health (NIH U19NS12371601), the National Science Foundation (NSF 1707398), the Gatsby Charitable Foundation (GAT3708), and by the Max Planck Society and the Humboldt Foundation.


Searching for self

This story also appears in the Fall 2025 issue of BrainScan

___

The question of how we know ourselves might seem the subject of philosophers, but it is just as much a matter of biology. As modern neuroscientists obtain an increasingly sophisticated understanding of how the brain generates emotions, responds to the external world, and learns from experience, some researchers are returning to a central question: How do we know our experiences, emotions, and physical sensations belong to us?

Curiosity about how the brain generates our sense of self has been a driving force for the research of McGovern Investigator Fan Wang. Following that curiosity has drawn Wang into diverse studies, exploring the origins of pain and the mechanisms we use to control our movements.

“We cannot pinpoint a set of active neurons and say that’s the sense of self. That still remains a mystery,” says Wang, who is also a professor of brain and cognitive sciences and co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT. But she and other neuroscientists are drilling down into different functions of the brain that together might generate our awareness of ourselves.

McGovern Investigator Fan Wang (right) with research scientist Vincent Prevosto, who studies brain regions implicated in whisker movement. Photo: Steph Stevens

Wang, who teaches the undergraduate course “Neurobiology of Self,” explains that there are lots of ways to think about our sense of self, which are probably deeply integrated in the brain. Some are mostly about our physical bodies: How do we experience touch? How do we understand where we are in space, or recognize the boundary between ourselves and the rest of the world? Some consider more internal sensations, like how we experience pain or hunger. Emotion is also key to our sense of self: How do we know that anger or joy are our own, and why do these states change the way our bodies feel?

Wang can trace her initial interest in the brain’s sense of self to work she did as a graduate student in Richard Axel’s lab at Columbia University. The lab had identified receptors expressed by sensory neurons in the nose that detect odorous substances. Wang and others discovered the pathways that information about these smells takes to the brain, and how the brain distinguishes one smell from another.

Who is the “knower” of this information? “The answer,” Wang says, “is ‘I’ or ‘me.’ But understanding where I get the sense of self and how that is constructed, is what drives me to do neuroscience.”

Mechanisms of movement

In her lab at the McGovern Institute, Wang is studying how the brain controls the body’s movements, which she sees as closely tied to the awareness of our physical selves. “The reason I think I am in my body is because I can control my movement. I generate the movement. I cannot control your movement,” says Wang. “Volitional movement gives us a sense of agency, and this sense of agency resembles the sense of self.”

For the mice that the group studies, one crucial type of movement comes from the whiskers, which the animals depend on as they explore their environments. Wang’s group has traced the neural circuitry that controls the whiskers’ rhythmic back-and-forth, which is initiated in the brainstem, where many of the body’s most vital functions are controlled. Wang describes the simple circuit as an oscillator, or a self-generated loop.

A maximum projection image showing tracked whiskers on the mouse muzzle. The right (control) side shows the rhythmic back-and-forth sweeping of the whiskers, while on the experimental side, where the whisking oscillator neurons are silenced, the whiskers move very little. Image: Wang Lab

Once it’s started, “the movement can go on unless some other signals stop it,” she says. The movement the circuit generates is simple but voluntary, and it can be fine-tuned based on the sensory feedback the whiskers relay back to the brain.

The team has also been investigating how mice move the larynx to generate the squeaks and calls they use to communicate. These intentional movements must be coordinated with the ongoing cycles of respiration, since we produce normal sounds only during expiration. Wang’s team has found neurons in the brainstem that generate vocalization-specific movements, and also discovered how respiration-controlling neural circuits can override them, ensuring that breathing is prioritized.

Wang says understanding the circuitry that controls these simple movements sets the stage for figuring out how the brain modifies activity in those circuits to create more complex, intentional movements. “That brings me closer to understanding where this volition is generated — and closer to this sense of self,” she says.

Emotional pain

Still, she knows that volitional movements — even those generated in response to perceptions of the environment — do not, on their own, define a sense of self. As a counterexample, she looks to self-driving cars: “There’s sensory information coming into the central computer, which then generates a motor output — where to drive, where to turn, where to stop. But none of us think a Waymo taxi has a sense of self.”

Wang says when she pondered the ways in which AI-powered cars lack a sense of self, she began thinking about emotions and pain. “If the self-driving Waymo crashes, it will not feel pain,” she says. “But if we hurt ourselves, we will feel pain. And we will hate that, and then we’ll learn.” So her lab is also exploring how the nervous system generates pain perception, including the emotional response that it evokes.

Ensembles of neurons in the amygdala activated by general anesthesia. Image: Fan Wang

In both humans and mice, pain causes emotional suffering that can be recognized and measured through changes in body functions like heart rate and blood pressure. With funding from the K. Lisa Yang Brain-Body Center at MIT, Wang’s lab is carefully tracking these involuntary, or autonomic, functions to gain a more complete understanding of pain’s emotional impact. This approach has helped clarify the role of pain-suppressing neurons in the brain’s amygdala — an important emotion-processing center — that Wang’s team discovered in 2020. When researchers selectively activate those cells in mice, the animals’ behavior makes it clear that the neurons are suppressing pain. Now, the group has learned that activating these neurons suppresses the autonomic response to pain.

Wang says there’s hope that modulating pain’s emotional response might be a way to treat chronic pain in patients. She explains that some patients with damage to another one of the brain’s emotional centers, the cingulate cortex, feel painful stimuli, but experience them as merely intense sensations. That suggests that it might be possible to modulate the emotional response to pain to eliminate patients’ suffering, without blocking the protective information that pain can provide.

The team has also been focusing on another set of anesthesia-activated neurons, which they have found suppress anxiety. When anxiety-suppressing neurons are activated in mice, the animals’ heart rates slow and they become more willing to explore bright, open spaces. Another anxiety-associated measure — heart rate variability — increases. Wang explains that this change is particularly significant: “If you have persistent low heart rate variability, especially in veterans, that is a very good predictor for anxiety developing into depression in the future,” she says.

The team’s findings, which suggest that changes in autonomic functions may themselves relieve anxiety, point toward potential new targets for anti-anxiety therapies. And by highlighting the connection between emotion and bodily responses, they offer more clues about our sense of self. “These neurons are now changing some high-level concept about anxiety,” Wang points out.

That link between emotion and body seems to Wang to be key to the sense of self. The big questions remain unanswered, but that simply stokes her curiosity. “I can be aware of my bodily responses: I am aware of ‘I am anxious’ or ‘I am in pain.’ I can see the pathways from which stimuli go into these nervous systems and come back down to the body and control the response. But I still don’t know who is the person — the knower,” she says. “I haven’t found it, so I’m going to keep looking.”

Polina Anikeeva named 2024 Blavatnik Award Finalist

The Blavatnik Family Foundation and the New York Academy of Sciences have announced the honorees of the 2024 Blavatnik National Awards, and McGovern Investigator Polina Anikeeva is among five finalists in the category of physical sciences and engineering.

Anikeeva, the Matoula S. Salapatas Professor in Materials Science and Engineering at MIT, works at the intersection of materials science, electronics, and neurobiology to improve our understanding of brain-body communication. She is head of MIT’s Materials Science and Engineering Department, and is also a professor of brain and cognitive sciences, director of the K. Lisa Yang Brain-Body Center, and associate director of the Research Laboratory of Electronics. Anikeeva’s lab has developed ultrathin, flexible fibers that probe the flow of information between the brain and peripheral organs in the body. Her ultimate goal is to develop novel technologies to achieve healthy minds in healthy bodies.

The Blavatnik National Awards for Young Scientists is the largest unrestricted scientific prize offered to America’s most promising, faculty-level scientific researchers under 42. The 2024 Blavatnik National Awards received 331 nominations from 172 institutions in 43 US states and selected three women scientists as laureates: Cigall Kadoch of the Dana-Farber Cancer Institute, Markita del Carpio Landry of UC Berkeley, and Britney Schmidt of Cornell University. An additional 15 finalists, including two from MIT (Anikeeva and Yogesh Surendranath), will also receive monetary prizes.

“On behalf of the Blavatnik Family Foundation, I congratulate this year’s outstanding laureates and finalists for their exceptional research. They are among the preeminent leaders of the next generation of scientific innovation and discovery,” said Len Blavatnik, founder of Access Industries and the Blavatnik Family Foundation and a member of the President’s Council of The New York Academy of Sciences.

The Blavatnik National Awards for Young Scientists will celebrate the 2024 laureates and finalists in a gala ceremony on October 1, 2024, at the American Museum of Natural History in New York.

Scientists find neurons that process language on different timescales

Using functional magnetic resonance imaging (fMRI), neuroscientists have identified several regions of the brain that are responsible for processing language. However, discovering the specific functions of neurons in those regions has proven difficult because fMRI, which measures changes in blood flow, doesn’t have high enough resolution to reveal what small populations of neurons are doing.

Now, using a more precise technique that involves recording electrical activity directly from the brain, MIT neuroscientists have identified different clusters of neurons that appear to process different amounts of linguistic context. These “temporal windows” range from just one word up to about six words.

The temporal windows may reflect different functions for each population, the researchers say. Populations with shorter windows may analyze the meanings of individual words, while those with longer windows may interpret more complex meanings created when words are strung together.

“This is the first time we see clear heterogeneity within the language network,” says Evelina Fedorenko, an associate professor of neuroscience at MIT. “Across dozens of fMRI experiments, these brain areas all seem to do the same thing, but it’s a large, distributed network, so there’s got to be some structure there. This is the first clear demonstration that there is structure, but the different neural populations are spatially interleaved so we can’t see these distinctions with fMRI.”

Fedorenko, who is also a member of MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears today in Nature Human Behaviour. MIT postdoc Tamar Regev and Harvard University graduate student Colton Casto are the lead authors of the paper.

Temporal windows

Functional MRI, which has helped scientists learn a great deal about the roles of different parts of the brain, works by measuring changes in blood flow in the brain. These measurements act as a proxy of neural activity during a particular task. However, each “voxel,” or three-dimensional chunk, of an fMRI image represents hundreds of thousands to millions of neurons and sums up activity across about two seconds, so it can’t reveal fine-grained detail about what those neurons are doing.

One way to get more detailed information about neural function is to record electrical activity using electrodes implanted in the brain. These data are hard to come by because this procedure is done only in patients who are already undergoing surgery for a neurological condition such as severe epilepsy.

“It can take a few years to get enough data for a task because these patients are relatively rare, and in a given patient electrodes are implanted in idiosyncratic locations based on clinical needs, so it takes a while to assemble a dataset with sufficient coverage of some target part of the cortex. But these data, of course, are the best kind of data we can get from human brains: You know exactly where you are spatially and you have very fine-grained temporal information,” Fedorenko says.

In a 2016 study, Fedorenko reported using this approach to study the language processing regions of six people. Electrical activity was recorded while the participants read four different types of language stimuli: complete sentences, lists of words, lists of non-words, and “jabberwocky” sentences — sentences that have grammatical structure but are made of nonsense words.

Those data showed that in some neural populations in language processing regions, activity would gradually build up over a period of several words when the participants were reading sentences. However, this did not happen when they read lists of words, lists of non-words, or jabberwocky sentences.

In the new study, Regev and Casto went back to those data and analyzed the temporal response profiles in greater detail. In their original dataset, they had recordings of electrical activity from 177 language-responsive electrodes across the six patients. Conservative estimates suggest that each electrode represents the average activity of about 200,000 neurons. They also obtained new data from a second set of 16 patients, which included recordings from another 362 language-responsive electrodes.

When the researchers analyzed these data, they found that in some of the neural populations, activity would fluctuate up and down with each word. In others, however, activity would build up over multiple words before falling again, and yet others would show a steady buildup of neural activity over longer spans of words.

By comparing their data with predictions made by a computational model that the researchers designed to process stimuli with different temporal windows, the researchers found that neural populations from language processing areas could be divided into three clusters. These clusters represent temporal windows of either one, four, or six words.

“It really looks like these neural populations integrate information across different timescales along the sentence,” Regev says.
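As a cartoon of what a temporal window means in such a model, here is a simplified sketch, not the researchers’ actual model: it assumes each population receives one unit of drive per word, sums that drive over its last k words, and so produces the kind of word-by-word buildup profile that recordings can be compared against.

```python
# Simplified sketch of temporal-window response profiles: a population with
# window k integrates one unit of activity per word over its last k words.
# The unit drive per word and the windowing rule are illustrative assumptions.
import numpy as np

def window_profile(num_words, k):
    """Predicted activity after each word for a population with a k-word window."""
    profile = []
    buffer = []
    for _ in range(num_words):
        buffer.append(1.0)           # one unit of drive per incoming word
        buffer = buffer[-k:]         # keep only the last k words
        profile.append(sum(buffer))  # summed activity within the window
    return np.array(profile)

sentence_length = 8
for k in (1, 4, 6):
    print(f"window = {k} words: {window_profile(sentence_length, k)}")
```

With a one-word window the summed drive resets at every word, whereas the four- and six-word windows climb over several words before plateauing, a coarse analog of the buildup patterns described above.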

Processing words and meaning

These differences in temporal window size would have been impossible to see using fMRI, the researchers say.

“At the resolution of fMRI, we don’t see much heterogeneity within language-responsive regions. If you localize in individual participants the voxels in their brain that are most responsive to language, you find that their responses to sentences, word lists, jabberwocky sentences and non-word lists are highly similar,” Casto says.

The researchers were also able to determine the anatomical locations where these clusters were found. Neural populations with the shortest temporal window were found predominantly in the posterior temporal lobe, though some were also found in the frontal or anterior temporal lobes. Neural populations from the two other clusters, with longer temporal windows, were spread more evenly throughout the temporal and frontal lobes.

Fedorenko’s lab now plans to study whether these timescales correspond to different functions. One possibility is that the shortest timescale populations may be processing the meanings of a single word, while those with longer timescales interpret the meanings represented by multiple words.

“We already know that in the language network, there is sensitivity to how words go together and to the meanings of individual words,” Regev says. “So that could potentially map to what we’re finding, where the longest timescale is sensitive to things like syntax or relationships between words, and maybe the shortest timescale is more sensitive to features of single words or parts of them.”

The research was funded by the Zuckerman-CHE STEM Leadership Program, the Poitras Center for Psychiatric Disorders Research, the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, the U.S. National Institutes of Health, an American Epilepsy Society Research and Training Fellowship, the McDonnell Center for Systems Neuroscience, Fondazione Neurone, the McGovern Institute, MIT’s Department of Brain and Cognitive Sciences, and the Simons Center for the Social Brain.

Exposure to different kinds of music influences how the brain interprets rhythm

When listening to music, the human brain appears to be biased toward hearing and producing rhythms composed of simple integer ratios — for example, a series of four beats separated by equal time intervals (forming a 1:1:1 ratio).

However, the favored ratios can vary greatly between different societies, according to a large-scale study led by researchers at MIT and the Max Planck Institute for Empirical Aesthetics and carried out in 15 countries. The study included 39 groups of participants, many of whom came from societies whose traditional music contains distinctive patterns of rhythm not found in Western music.

“Our study provides the clearest evidence yet for some degree of universality in music perception and cognition, in the sense that every single group of participants that was tested exhibits biases for integer ratios. It also provides a glimpse of the variation that can occur across cultures, which can be quite substantial,” says Nori Jacoby, the study’s lead author and a former MIT postdoc, who is now a research group leader at the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany.

The brain’s bias toward simple integer ratios may have evolved as a natural error-correction system that makes it easier to maintain a consistent body of music, which human societies often use to transmit information.

“When people produce music, they often make small mistakes. Our results are consistent with the idea that our mental representation is somewhat robust to those mistakes, but it is robust in a way that pushes us toward our preexisting ideas of the structures that should be found in music,” says Josh McDermott, an associate professor of brain and cognitive sciences at MIT and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines.

McDermott is the senior author of the study, which appears today in Nature Human Behaviour. The research team also included scientists from more than two dozen institutions around the world.

A global approach

The new study grew out of a smaller analysis that Jacoby and McDermott published in 2017. In that paper, the researchers compared rhythm perception in groups of listeners from the United States and the Tsimane’, an Indigenous society located in the Bolivian Amazon rainforest.

Nori Jacoby, a former MIT postdoc now at the Max Planck Institute for Empirical Aesthetics, runs an experiment with a member of the Tsimane’ tribe, who have had little exposure to Western music. Photo: Josh McDermott

To measure how people perceive rhythm, the researchers devised a task in which they play a randomly generated series of four beats and then ask the listener to tap back what they heard. The rhythm produced by the listener is then played back to the listener, and they tap it back again. Over several iterations, the tapped sequences become dominated by the listener’s internal biases, also known as priors.

“The initial stimulus pattern is random, but at each iteration the pattern is pushed by the listener’s biases, such that it tends to converge to a particular point in the space of possible rhythms,” McDermott says. “That can give you a picture of what we call the prior, which is the set of internal implicit expectations for rhythms that people have in their heads.”
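The iterated tapping procedure can be mimicked with a toy simulation; the sketch below is a schematic stand-in, not the study’s method or analysis. It assumes a simulated listener whose reproductions drift toward the nearest of a handful of simple-integer-ratio prototypes, with a little motor noise, so a random seed rhythm gradually converges on one of those prototypes over iterations.

```python
# Toy serial-reproduction loop: a random three-interval rhythm is repeatedly
# "heard" and "tapped back" by a simulated listener whose reproductions are
# pulled toward the nearest simple-integer-ratio prototype.
# The prototype set, pull strength, and noise level are illustrative choices.
import numpy as np

rng = np.random.default_rng(3)

# A few simple-integer-ratio prototypes, expressed as normalized intervals.
PROTOTYPES = np.array([
    [1, 1, 1],
    [1, 1, 2],
    [1, 2, 1],
    [2, 3, 3],
], dtype=float)
PROTOTYPES /= PROTOTYPES.sum(axis=1, keepdims=True)

def reproduce(rhythm, pull=0.4, noise=0.02):
    """One 'listen and tap back' step: drift toward the nearest prototype,
    add motor noise, and renormalize so the intervals sum to one."""
    nearest = PROTOTYPES[np.argmin(np.linalg.norm(PROTOTYPES - rhythm, axis=1))]
    out = rhythm + pull * (nearest - rhythm) + rng.normal(scale=noise, size=3)
    out = np.clip(out, 0.01, None)
    return out / out.sum()

rhythm = rng.dirichlet([1, 1, 1])   # random seed rhythm (normalized intervals)
for i in range(6):
    ratios = rhythm / rhythm.min()
    print(f"iteration {i}: intervals {np.round(rhythm, 2)}  ~ratio {np.round(ratios, 1)}")
    rhythm = reproduce(rhythm)
```

Running many such chains from random seeds piles the end points up near the prototypes; in the actual experiments, the analogous convergence points across many trials are what the researchers read out as the listener’s prior.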

When the researchers first did this experiment, with American college students as the test subjects, they found that people tended to produce time intervals that are related by simple integer ratios. Furthermore, most of the rhythms they produced, such as those with ratios of 1:1:2 and 2:3:3, are commonly found in Western music.

The researchers then went to Bolivia and asked members of the Tsimane’ society to perform the same task. They found that Tsimane’ also produced rhythms with simple integer ratios, but their preferred ratios were different and appeared to be consistent with those that have been documented in the few existing records of Tsimane’ music.

“At that point, it provided some evidence that there might be very widespread tendencies to favor these small integer ratios, and that there might be some degree of cross-cultural variation. But because we had just looked at this one other culture, it really wasn’t clear how this was going to look at a broader scale,” Jacoby says.

To try to get that broader picture, the MIT team began seeking collaborators around the world who could help them gather data on a more diverse set of populations. They ended up studying listeners from 39 groups, representing 15 countries on five continents — North America, South America, Europe, Africa, and Asia.

“This is really the first study of its kind in the sense that we did the same experiment in all these different places, with people who are on the ground in those locations,” McDermott says. “That hasn’t really been done before at anything close to this scale, and it gave us an opportunity to see the degree of variation that might exist around the world.”

Example testing sites. a, Yaranda, Bolivia. b, Montevideo, Uruguay. c, Sagele, Mali. d, Spitzkoppe, Namibia. e, Pleven, Bulgaria. f, Bamako, Mali. g, D’Kar, Botswana. h, Stockholm, Sweden. i, Guizhou, China. j, Mumbai, India. Verbal informed consent was obtained from the individuals in each photo.

Cultural comparisons

Just as they had in their original 2017 study, the researchers found that in every group they tested, people tended to be biased toward simple integer ratios of rhythm. However, not every group showed the same biases. People from North America and Western Europe, who have likely been exposed to the same kinds of music, were more likely to generate rhythms with the same ratios. However, many groups, such as those in Turkey, Mali, Bulgaria, and Botswana, showed a bias for other rhythms.

“There are certain cultures where there are particular rhythms that are prominent in their music, and those end up showing up in the mental representation of rhythm,” Jacoby says.

The researchers believe their findings reveal a mechanism that the brain uses to aid in the perception and production of music.

“When you hear somebody playing something and they have errors in their performance, you’re going to mentally correct for those by mapping them onto where you implicitly think they ought to be,” McDermott says. “If you didn’t have something like this, and you just faithfully represented what you heard, these errors might propagate and make it much harder to maintain a musical system.”

Among the groups that they studied, the researchers took care to include not only college students, who are easy to study in large numbers, but also people living in traditional societies, who are more difficult to reach. Participants from those more traditional groups showed significant differences from college students living in the same countries, and from people who live in those countries but performed the test online.

“What’s very clear from the paper is that if you just look at the results from undergraduate students around the world, you vastly underestimate the diversity that you see otherwise,” Jacoby says. “And the same was true of experiments where we tested groups of people online in Brazil and India, because you’re dealing with people who have internet access and presumably have more exposure to Western music.”

The researchers now hope to run additional studies of different aspects of music perception, taking this global approach.

“If you’re just testing college students around the world or people online, things look a lot more homogenous. I think it’s very important for the field to realize that you actually need to go out into communities and run experiments there, as opposed to taking the low-hanging fruit of running studies with people in a university or on the internet,” McDermott says.

The research was funded by the James S. McDonnell Foundation, the Canadian National Science and Engineering Research Council, the South African National Research Foundation, the United States National Science Foundation, the Chilean National Research and Development Agency, the Austrian Academy of Sciences, the Japan Society for the Promotion of Science, the Keio Global Research Institute, the United Kingdom Arts and Humanities Research Council, the Swedish Research Council, and the John Fell Fund.

Researchers uncover new CRISPR-like system in animals that can edit the human genome

A team of researchers led by Feng Zhang at the McGovern Institute and the Broad Institute of MIT and Harvard has uncovered the first programmable RNA-guided system in eukaryotes — organisms that include fungi, plants, and animals.

In a study in Nature, the team describes how the system is based on a protein called Fanzor. They showed that Fanzor proteins use RNA as a guide to target DNA precisely, and that Fanzors can be reprogrammed to edit the genome of human cells. The compact Fanzor systems have the potential to be more easily delivered to cells and tissues as therapeutics than CRISPR/Cas systems, and further refinements to improve their targeting efficiency could make them a valuable new technology for human genome editing.

CRISPR/Cas was first discovered in prokaryotes (bacteria and other single-cell organisms that lack nuclei) and scientists including Zhang’s lab have long wondered whether similar systems exist in eukaryotes. The new study demonstrates that RNA-guided DNA-cutting mechanisms are present across all kingdoms of life.

“This new system is another way to make precise changes in human cells, complementing the genome editing tools we already have.” — Feng Zhang

“CRISPR-based systems are widely used and powerful because they can be easily reprogrammed to target different sites in the genome,” said Zhang, senior author on the study and a core institute member at the Broad, an investigator at MIT’s McGovern Institute, the James and Patricia Poitras Professor of Neuroscience at MIT, and a Howard Hughes Medical Institute investigator. “This new system is another way to make precise changes in human cells, complementing the genome editing tools we already have.”

Searching the domains of life

A major aim of the Zhang lab is to develop genetic medicines using systems that can modulate human cells by targeting specific genes and processes. “A number of years ago, we started to ask, ‘What is there beyond CRISPR, and are there other RNA-programmable systems out there in nature?’” said Zhang.

McGovern Investigator Feng Zhang in his lab.

Two years ago, Zhang lab members discovered a class of RNA-programmable systems in prokaryotes called OMEGAs, which are often linked with transposable elements, or “jumping genes”, in bacterial genomes and likely gave rise to CRISPR/Cas systems. That work also highlighted similarities between prokaryotic OMEGA systems and Fanzor proteins in eukaryotes, suggesting that the Fanzor enzymes might also use an RNA-guided mechanism to target and cut DNA.

In the new study, the researchers continued their study of RNA-guided systems by isolating Fanzors from fungi, algae, and amoeba species, in addition to a clam known as the Northern Quahog. Co-first author Makoto Saito of the Zhang lab led the biochemical characterization of the Fanzor proteins, showing that they are DNA-cutting endonuclease enzymes that use nearby non-coding RNAs known as ωRNAs to target particular sites in the genome. It is the first time this mechanism has been found in eukaryotes, such as animals.

Unlike CRISPR proteins, Fanzor enzymes are encoded in the eukaryotic genome within transposable elements and the team’s phylogenetic analysis suggests that the Fanzor genes have migrated from bacteria to eukaryotes through so-called horizontal gene transfer.

“These OMEGA systems are more ancestral to CRISPR and they are among the most abundant proteins on the planet, so it makes sense that they have been able to hop back and forth between prokaryotes and eukaryotes,” said Saito.

To explore Fanzor’s potential as a genome editing tool, the researchers demonstrated that it can generate insertions and deletions at targeted genome sites within human cells. The researchers found the Fanzor system to initially be less efficient at snipping DNA than CRISPR/Cas systems, but by systematic engineering, they introduced a combination of mutations into the protein that increased its activity 10-fold. Additionally, unlike some CRISPR systems and the OMEGA protein TnpB, the team found that a fungal-derived Fanzor protein did not exhibit “collateral activity,” where an RNA-guided enzyme cleaves its DNA target as well as degrading nearby DNA or RNA. The results suggest that Fanzors could potentially be developed as efficient genome editors.

Co-first author Peiyu Xu led an effort to analyze the molecular structure of the Fanzor/ωRNA complex and illustrate how it latches onto DNA to cut it. Fanzor shares structural similarities with its prokaryotic counterpart CRISPR-Cas12 protein, but the interaction between the ωRNA and the catalytic domains of Fanzor is more extensive, suggesting that the ωRNA might play a role in the catalytic reactions. “We are excited about these structural insights for helping us further engineer and optimize Fanzor for improved efficiency and precision as a genome editor,” said Xu.

Like CRISPR-based systems, the Fanzor system can be easily reprogrammed to target specific genome sites, and Zhang said it could one day be developed into a powerful new genome editing technology for research and therapeutic applications. The abundance of RNA-guided endonucleases like Fanzors further expands the number of OMEGA systems known across kingdoms of life and suggests that there are more yet to be found.

“Nature is amazing. There’s so much diversity,” said Zhang. “There are probably more RNA-programmable systems out there, and we’re continuing to explore and will hopefully discover more.”

The paper’s other authors include Guilhem Faure, Samantha Maguire, Soumya Kannan, Han Altae-Tran, Sam Vo, AnAn Desimone, and Rhiannon Macrae.

Support for this work was provided by the Howard Hughes Medical Institute; Poitras Center for Psychiatric Disorders Research at MIT; K. Lisa Yang and Hock E. Tan Molecular Therapeutics Center at MIT; Broad Institute Programmable Therapeutics Gift Donors; The Pershing Square Foundation, William Ackman, and Neri Oxman; James and Patricia Poitras; BT Charitable Foundation; Asness Family Foundation; Kenneth C. Griffin; the Phillips family; David Cheng; Robert Metcalfe; and Hugo Shong.


Unraveling connections between the brain and gut

The brain and the digestive tract are in constant communication, relaying signals that help to control feeding and other behaviors. This extensive communication network also influences our mental state and has been implicated in many neurological disorders.

MIT engineers have designed a new technology for probing those connections. Using fibers embedded with a variety of sensors, as well as light sources for optogenetic stimulation, the researchers have shown that they can control neural circuits connecting the gut and the brain in mice.

In a new study, the researchers demonstrated that they could induce feelings of fullness or reward-seeking behavior in mice by manipulating cells of the intestine. In future work, they hope to explore some of the correlations that have been observed between digestive health and neurological conditions such as autism and Parkinson’s disease.

“The exciting thing here is that we now have technology that can drive gut function and behaviors such as feeding. More importantly, we have the ability to start accessing the crosstalk between the gut and the brain with the millisecond precision of optogenetics, and we can do it in behaving animals,” says Polina Anikeeva, the Matoula S. Salapatas Professor in Materials Science and Engineering, a professor of brain and cognitive sciences, director of the K. Lisa Yang Brain-Body Center, associate director of MIT’s Research Laboratory of Electronics, and a member of MIT’s McGovern Institute for Brain Research.

McGovern Institute Associate Investigator Polina Anikeeva in her lab. Photo: Steph Stevens

Anikeeva is the senior author of the new study, which appears today in Nature Biotechnology. The paper’s lead authors are MIT graduate student Atharva Sahasrabudhe, Duke University postdoc Laura Rupprecht, MIT postdoc Sirma Orguc, and former MIT postdoc Tural Khudiyev.

The brain-body connection

Last year, the McGovern Institute launched the K. Lisa Yang Brain-Body Center to study the interplay between the brain and other organs of the body. Research at the center focuses on illuminating how these interactions help to shape behavior and overall health, with a goal of developing future therapies for a variety of diseases.

“There’s continuous, bidirectional crosstalk between the body and the brain,” Anikeeva says. “For a long time, we thought the brain is a tyrant that sends output into the organs and controls everything. But now we know there’s a lot of feedback back into the brain, and this feedback potentially controls some of the functions that we have previously attributed exclusively to the central neural control.”

As part of the center’s work, Anikeeva set out to probe the signals that pass between the brain and the nervous system of the gut, also called the enteric nervous system. Sensory cells in the gut influence hunger and satiety via both neuronal communication and hormone release.

Untangling those hormonal and neural effects has been difficult because there hasn’t been a good way to rapidly measure the neuronal signals, which occur within milliseconds.

“To be able to perform gut optogenetics and then measure the effects on brain function and behavior, which requires millisecond precision, we needed a device that didn’t exist. So, we decided to make it,” says Sahasrabudhe, who led the development of the gut and brain probes.

The electronic interface that the researchers designed consists of flexible fibers that can carry out a variety of functions and can be inserted into the organs of interest. To create the fibers, Sahasrabudhe used a technique called thermal drawing, which allowed him to create polymer filaments, about as thin as a human hair, that can be embedded with electrodes and temperature sensors.

The filaments also carry microscale light-emitting devices that can be used to optogenetically stimulate cells, and microfluidic channels that can be used to deliver drugs.

The mechanical properties of the fibers can be tailored for use in different parts of the body. For the brain, the researchers created stiffer fibers that could be threaded deep into the brain. For digestive organs such as the intestine, they designed more delicate rubbery fibers that do not damage the lining of the organs but are still sturdy enough to withstand the harsh environment of the digestive tract.

“To study the interaction between the brain and the body, it is necessary to develop technologies that can interface with organs of interest as well as the brain at the same time, while recording physiological signals with high signal-to-noise ratio,” Sahasrabudhe says. “We also need to be able to selectively stimulate different cell types in both organs in mice so that we can test their behaviors and perform causal analyses of these circuits.”

The fibers are also designed so that they can be controlled wirelessly, using an external control circuit that can be temporarily affixed to the animal during an experiment. This wireless control circuit was developed by Orguc, a Schmidt Science Fellow, and Harrison Allen ’20, MEng ’22, who were co-advised by the Anikeeva lab and the lab of Anantha Chandrakasan, dean of MIT’s School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

Driving behavior

Using this interface, the researchers performed a series of experiments to show that they could influence behavior through manipulation of the gut as well as the brain.

First, they used the fibers to deliver optogenetic stimulation to a part of the brain called the ventral tegmental area (VTA), which releases dopamine. They placed mice in a cage with three chambers, and when the mice entered one particular chamber, the researchers activated the dopamine neurons. The resulting dopamine burst made the mice more likely to return to that chamber in search of the dopamine reward.

Then, the researchers tried to see if they could also induce that reward-seeking behavior by influencing the gut. To do that, they used fibers in the gut to release sucrose, which also activated dopamine release in the brain and prompted the animals to seek out the chamber they were in when sucrose was delivered.

Next, working with colleagues from Duke University, the researchers found they could induce the same reward-seeking behavior by skipping the sucrose and optogenetically stimulating nerve endings in the gut that provide input to the vagus nerve, which controls digestion and other bodily functions.
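
The behavioral readout in each of these experiments is a conditioned place preference: the animal spends a larger share of its time in the chamber that was paired with stimulation or sucrose delivery. The article does not describe the analysis itself, so the following is only a minimal sketch of how such occupancy data might be scored, with hypothetical chamber labels and illustrative times:

```python
# Hypothetical place-preference scoring, not the authors' analysis code;
# it only illustrates the behavioral metric described above.
from collections import defaultdict

def preference_score(visits, paired_chamber):
    """visits: list of (chamber_label, seconds_spent) tuples from one session.
    Returns the fraction of total time spent in the stimulation-paired chamber."""
    time_in = defaultdict(float)
    for chamber, seconds in visits:
        time_in[chamber] += seconds
    total = sum(time_in.values())
    return time_in[paired_chamber] / total if total > 0 else 0.0

# Example: a mouse in a three-chamber cage, with chamber "B" paired with
# VTA stimulation (all numbers are illustrative, in seconds).
session = [("A", 110.0), ("B", 260.0), ("C", 95.0), ("B", 140.0), ("A", 60.0)]
print(f"Time share in paired chamber: {preference_score(session, 'B'):.2f}")
```

A score well above one-third (chance level for a three-chamber cage) would indicate a preference for the stimulation-paired chamber.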

Three scientists holding a fiber in a lab.
Duke University postdoc Laura Rupprecht, MIT graduate student Atharva Sahasrabudhe, and MIT postdoc Sirma Orguc holding their engineered flexible fiber in Polina Anikeeva’s lab at MIT. Photo: Courtesy of the researchers

“Again, we got this place preference behavior that people have previously seen with stimulation in the brain, but now we are not touching the brain. We are just stimulating the gut, and we are observing control of central function from the periphery,” Anikeeva says.

Sahasrabudhe worked closely with Rupprecht, a postdoc in Professor Diego Bohorquez’s group at Duke, to test the fibers’ ability to control feeding behaviors. They found that the devices could optogenetically stimulate cells that produce cholecystokinin, a hormone that promotes satiety. When this hormone release was activated, the animals’ appetites were suppressed, even though they had been fasting for several hours. The researchers also demonstrated a similar effect when they stimulated cells that produce a peptide called PYY, which normally curbs appetite after very rich foods are consumed.

The researchers now plan to use this interface to study neurological conditions that are believed to have a gut-brain connection. For instance, studies have shown that autistic children are far more likely than their peers to be diagnosed with GI dysfunction, while anxiety and irritable bowel syndrome share genetic risks.

“We can now begin asking, are those coincidences, or is there a connection between the gut and the brain? And maybe there is an opportunity for us to tap into those gut-brain circuits to begin managing some of those conditions by manipulating the peripheral circuits in a way that does not directly ‘touch’ the brain and is less invasive,” Anikeeva says.

The research was funded, in part, by the Hock E. Tan and K. Lisa Yang Center for Autism Research and the K. Lisa Yang Brain-Body Center, the National Institute of Neurological Disorders and Stroke, the National Science Foundation (NSF) Center for Materials Science and Engineering, the NSF Center for Neurotechnology, the National Center for Complementary and Integrative Health, a National Institutes of Health Director’s Pioneer Award, the National Institute of Mental Health, and the National Institute of Diabetes and Digestive and Kidney Diseases.

Computational model mimics humans’ ability to predict emotions

When interacting with another person, you likely spend part of your time trying to anticipate how they will feel about what you’re saying or doing. This task requires a cognitive skill called theory of mind, which helps us to infer other people’s beliefs, desires, intentions, and emotions.

MIT neuroscientists have now designed a computational model that can predict other people’s emotions — including joy, gratitude, confusion, regret, and embarrassment — approximating human observers’ social intelligence. The model was designed to predict the emotions of people involved in a situation based on the prisoner’s dilemma, a classic game theory scenario in which two people must decide whether to cooperate with their partner or betray them.

To build the model, the researchers incorporated several factors that have been hypothesized to influence people’s emotional reactions, including that person’s desires, their expectations in a particular situation, and whether anyone was watching their actions.

“These are very common, basic intuitions, and what we said is, we can take that very basic grammar and make a model that will learn to predict emotions from those features,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Sean Dae Houlihan PhD ’22, a postdoc at the Neukom Institute for Computational Science at Dartmouth College, is the lead author of the paper, which appears today in Philosophical Transactions A. Other authors include Max Kleiman-Weiner PhD ’18, a postdoc at MIT and Harvard University; Luke Hewitt PhD ’22, a visiting scholar at Stanford University; and Joshua Tenenbaum, a professor of computational cognitive science at MIT and a member of the Center for Brains, Minds, and Machines and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

Predicting emotions

While a great deal of research has gone into training computer models to infer someone’s emotional state based on their facial expression, that is not the most important aspect of human emotional intelligence, Saxe says. Much more important is the ability to predict someone’s emotional response to events before they occur.

“The most important thing about what it is to understand other people’s emotions is to anticipate what other people will feel before the thing has happened,” she says. “If all of our emotional intelligence was reactive, that would be a catastrophe.”

To try to model how human observers make these predictions, the researchers used scenarios taken from a British game show called “Golden Balls.” On the show, contestants are paired up, with a pot of $100,000 at stake. After negotiating with their partner, each contestant decides, secretly, whether to split the pot or try to steal it. If both decide to split, they each receive $50,000. If one splits and one steals, the stealer gets the entire pot. If both try to steal, no one gets anything.
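
The payoff structure of the game is simple enough to write down explicitly. As a toy illustration (not part of the study’s code), it can be encoded as a small lookup function; the dollar amounts follow the description above:

```python
# Toy encoding of the split/steal payoffs described above; illustrative only.
POT = 100_000

def payoffs(action_1, action_2):
    """Return (player 1 payout, player 2 payout); actions are 'split' or 'steal'."""
    if action_1 == "split" and action_2 == "split":
        return POT / 2, POT / 2   # both split: share the pot
    if action_1 == "steal" and action_2 == "split":
        return POT, 0             # player 1 steals the whole pot
    if action_1 == "split" and action_2 == "steal":
        return 0, POT             # player 2 steals the whole pot
    return 0, 0                   # both steal: no one gets anything

print(payoffs("split", "split"))   # (50000.0, 50000.0)
print(payoffs("steal", "split"))   # (100000, 0)
```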

Depending on the outcome, contestants may experience a range of emotions — joy and relief if both contestants split, surprise and fury if one’s opponent steals the pot, and perhaps guilt mingled with excitement if one successfully steals.

To create a computational model that can predict these emotions, the researchers designed three separate modules. The first module is trained to infer a person’s preferences and beliefs based on their action, through a process called inverse planning.

“This is an idea that says if you see just a little bit of somebody’s behavior, you can probabilistically infer things about what they wanted and expected in that situation,” Saxe says.

Using this approach, the first module can predict contestants’ motivations based on their actions in the game. For example, if someone decides to split in an attempt to share the pot, it can be inferred that they also expected the other person to split. If someone decides to steal, they may have expected the other person to steal, and didn’t want to be cheated. Or, they may have expected the other person to split and decided to try to take advantage of them.

The model can also integrate knowledge about specific players, such as the contestant’s occupation, to help it infer the players’ most likely motivation.

The second module compares the outcome of the game with what each player wanted and expected to happen. Then, a third module predicts what emotions the contestants may be feeling, based on the outcome and what was known about their expectations. This third module was trained to predict emotions based on predictions from human observers about how contestants would feel after a particular outcome. The authors emphasize that this is a model of human social intelligence, designed to mimic how observers causally reason about each other’s emotions, not a model of how people actually feel.
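
The article describes the model only at this level of detail, but the overall flow (inferring a player’s preferences and expectations from their action, comparing those against the actual outcome, and mapping that appraisal to emotion predictions) can be sketched schematically. Everything in the sketch below, including the candidate player types, the appraisal features, and the stub emotion mapping, is invented scaffolding for illustration, not the published model:

```python
# Schematic three-module pipeline loosely following the description above.
# Candidate types, weights, appraisal features, and the emotion mapping are
# all invented placeholders, not the published model.

CANDIDATE_TYPES = [
    {"expects_partner": "split", "values_fairness": True},
    {"expects_partner": "split", "values_fairness": False},
    {"expects_partner": "steal", "values_fairness": False},
]

def infer_type(action):
    """Module 1 (toy inverse planning): which beliefs and preferences make
    the observed action most plausible?"""
    weights = []
    for t in CANDIDATE_TYPES:
        if action == "split":
            # Splitting fits best if the player expected a split and values fairness.
            w = 0.8 if t["expects_partner"] == "split" and t["values_fairness"] else 0.1
        else:
            # Stealing fits either a defensive player (expects a steal) or an exploitative one.
            w = 0.6 if t["expects_partner"] == "steal" or not t["values_fairness"] else 0.1
        weights.append(w)
    total = sum(weights)
    return [w / total for w in weights]  # posterior over candidate types

def appraise(action, partner_action, player_type):
    """Module 2: compare the outcome with what the player wanted and expected."""
    return {
        "got_money": partner_action == "split",
        "expectation_met": partner_action == player_type["expects_partner"],
        "was_fair": action == "split",
    }

def predict_emotions(appraisal):
    """Module 3 (stub): map appraisal features to emotion intensities.
    In the study, this mapping was learned from human observers' judgments."""
    return {
        "joy": 1.0 if appraisal["got_money"] and appraisal["was_fair"] else 0.2,
        "regret": 0.8 if appraisal["was_fair"] and not appraisal["got_money"] else 0.1,
        "surprise": 0.9 if not appraisal["expectation_met"] else 0.1,
    }

# Example: a contestant splits and their partner steals.
posterior = infer_type("split")
likely_type = CANDIDATE_TYPES[posterior.index(max(posterior))]
print(predict_emotions(appraise("split", "steal", likely_type)))
```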

“From the data, the model learns that what it means, for example, to feel a lot of joy in this situation, is to get what you wanted, to do it by being fair, and to do it without taking advantage,” Saxe says.

Core intuitions

Once the three modules were up and running, the researchers used them on a new dataset from the game show to determine how the model’s emotion predictions compared with the predictions made by human observers. The model performed much better at that task than any previous model of emotion prediction.

The model’s success stems from its incorporation of key factors that the human brain also uses when predicting how someone else will react to a given situation, Saxe says. Those include computations of how a person will evaluate and emotionally react to a situation, based on their desires and expectations, which relate to not only material gain but also how they are viewed by others.

“Our model has those core intuitions, that the mental states underlying emotion are about what you wanted, what you expected, what happened, and who saw. And what people want is not just stuff. They don’t just want money; they want to be fair, but also not to be the sucker, not to be cheated,” she says.

“The researchers have helped build a deeper understanding of how emotions contribute to determining our actions; and then, by flipping their model around, they explain how we can use people’s actions to infer their underlying emotions. This line of work helps us see emotions not just as ‘feelings’ but as playing a crucial, and subtle, role in human social behavior,” says Nick Chater, a professor of behavioral science at the University of Warwick, who was not involved in the study.

In future work, the researchers hope to adapt the model so that it can perform more general predictions based on situations other than the game-show scenario used in this study. They are also working on creating models that can predict what happened in the game based solely on the expression on the faces of the contestants after the results were announced.

The research was funded by the McGovern Institute; the Paul E. and Lilah Newton Brain Science Award; the Center for Brains, Minds, and Machines; the MIT-IBM Watson AI Lab; and the Multidisciplinary University Research Initiative.