3 Questions: Claire Wang on training the brain for memory sports

On Nov. 10, some of the country’s top memorizers converged on MIT’s Kresge Auditorium to compete in a “Tournament of Memory Champions” in front of a live audience.

The competition was split into four events: long-term memory, words-to-remember, auditory memory, and double-deck of cards, in which competitors must memorize the exact order of two decks of cards. In between the events, MIT faculty who are experts in the science of memory gave short talks and demos about memory and how to improve it. Among the competitors was MIT’s own Claire Wang, a sophomore majoring in electrical engineering and computer science. Wang has competed in memory sports for years, a hobby that has taken her around the world to learn from some of the best mnemonists on the planet. At the tournament, she tied for first place in the words-to-remember competition.

The event commemorated the 25th anniversary of the USA Memory Championship Organization (USAMC). USAMC sponsored the event in partnership with MIT’s McGovern Institute for Brain Research, the Department of Brain and Cognitive Sciences, the MIT Quest for Intelligence, and the company Lumosity.

MIT News sat down with Wang to learn more about her experience with memory competitions — and see if she had any advice for those of us with less-than-amazing memory skills.

Q: How did you come to get involved in memory competitions?

A: When I was in middle school, I read the book “Moonwalking with Einstein,” which follows a journalist’s journey from having an average memory to winning the USA Memory Championship in 2006. My parents were also obsessed with a TV show where people memorized decks of cards and performed other feats of memory. I already knew about the concept of “memory palaces,” so I was inspired to explore memory sports. Somehow, I convinced my parents to let me take a gap year after seventh grade, and I traveled the world going to competitions and learning from memory grandmasters. I got to know the community in that time, and I got to build my memory system, which was really fun. I competed much less after that year, aside from some later USA memory competitions, but it’s still fun to have this ability.

Q: What was the Tournament of Memory Champions like?

A: USAMC invited a lot of winners from previous years to compete, which was really cool. It was nice seeing a lot of people I hadn’t seen in years. I didn’t compete in every event because I was too busy to do the long-term memory event, which requires two weeks of memorization work. But it was a really cool experience. I helped a bit with the brainstorming beforehand because I know one of the professors running it. We thought about how to give the talks and structure the event.

Then I competed in the words event, in which you’re given 300 words over 15 minutes, and the competitors have to recall each one in order in a round-robin competition. You get two strikes. A lot of other competitions just make you write the words down; the round-robin format makes it more fun for people to watch. I tied with someone else — I made a dumb mistake — so I was kind of sad in hindsight, but being tied for first is still great.

Since I hadn’t done this in a while (and I was coming back from a trip where I didn’t get much sleep), I was a bit nervous that my brain wouldn’t be able to remember anything, and I was pleasantly surprised I didn’t just blank on stage. Also, since I hadn’t done this in a while, a lot of my loci and memory palaces were forgotten, so I had to speed-review them before the competition. The words event doesn’t get easier over time — it’s just 300 random words (which could range from “disappointment” to “chair”) and you just have to remember the order.

Q: What is your approach to improving memory?

A: The whole idea is that we memorize images, feelings, and emotions much better than numbers or random words. The way it works in practice is we make an ordered set of locations in a “memory palace.” The palace could be anything. It could be a campus or a classroom or a part of a room, but you imagine yourself walking through this space, so there’s a specific order to it, and in every location I place certain information. This is information related to what I’m trying to remember. I have pictures I associate with words and I have specific images I correlate with numbers. Once you have a correlated image system, all you need to remember is a story, and then when you recall, you translate that back to the original information.
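The loci technique Wang describes can be pictured as a simple data structure. In this toy Python sketch (the palace locations and word list are invented for illustration, not Wang’s actual system), an ordered walk through a familiar space carries the list to be remembered:

```python
# Toy sketch of the "memory palace" (method of loci) technique.
# The palace locations and words below are invented examples.
palace = ["front door", "hallway mirror", "staircase", "kitchen table", "window"]

def encode(words, palace):
    """Place one item at each location, in walking order."""
    if len(words) > len(palace):
        raise ValueError("need a bigger palace (more loci) for this list")
    return dict(zip(palace, words))

def recall(memory, palace):
    """Walk the palace in the same order to retrieve the list in order."""
    return [memory[locus] for locus in palace[:len(memory)]]

words = ["disappointment", "chair", "lantern"]
memory = encode(words, palace)
assert recall(memory, palace) == words  # the walk preserves the order
```

The point of the structure is exactly what Wang notes: the order comes for free from the walk, so the memorizer only has to make each location-image pairing vivid enough to stick.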

Doing memory sports really helps you with visualization, and being able to visualize things faster and better helps you remember things better. You start remembering with spaced repetition that you can talk yourself through. Allowing things to have an emotional connection is also important, because you remember emotions better. Doing memory competitions made me want to study neuroscience and computer science at MIT.

The specific memory sports techniques are not as useful in everyday life as you’d think, because a lot of the information we learn is more operative and requires intuitive understanding, but I do think they help in some ways. First, sometimes you have to initially remember things before you can develop a strong intuition later. Also, since I have to get really good at telling a lot of stories over time, I have gotten great at visualization and manipulating objects in my mind, which helps a lot.

Four from MIT named 2025 Rhodes Scholars

Yiming Chen ’24, Wilhem Hector, Anushka Nair, and David Oluigbo have been selected as 2025 Rhodes Scholars and will begin fully funded postgraduate studies at Oxford University in the U.K. next fall. In addition to MIT’s two U.S. Rhodes winners, Oluigbo and Nair, two affiliates were awarded international Rhodes Scholarships: Chen for Rhodes’ China constituency and Hector for the Global Rhodes Scholarship. Hector is the first Haitian citizen to be named a Rhodes Scholar.

The scholars were supported by Associate Dean Kim Benard and the Distinguished Fellowships team in Career Advising and Professional Development. They received additional mentorship and guidance from the Presidential Committee on Distinguished Fellowships.

“It is profoundly inspiring to work with our amazing students, who have accomplished so much at MIT and, at the same time, thought deeply about how they can have an impact in solving the world’s major challenges,” says Professor Nancy Kanwisher, who co-chairs the committee along with Professor Tom Levenson. “These students have worked hard to develop and articulate their vision and to learn to communicate it to others with passion, clarity, and confidence. We are thrilled but not surprised to see so many of them recognized this year as finalists and as winners.”

Yiming Chen ’24

Yiming Chen, from Beijing, China, and the Washington area, was named one of four Rhodes China Scholars on Sept. 28. At Oxford, she will pursue graduate studies in engineering science, working toward her ongoing goal of advancing AI safety and reliability in clinical workflows.

Chen graduated from MIT in 2024 with a BS in mathematics and computer science and an MEng in computer science. She worked on several projects involving machine learning for health care, and focused her master’s research on medical imaging in the Medical Vision Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Collaborating with IBM Research, Chen developed a neural framework for clinical-grade lumen segmentation in intravascular ultrasound and presented her findings at the MICCAI Machine Learning in Medical Imaging conference. Additionally, she worked at Cleanlab, an MIT-founded startup, creating an open-source library to ensure the integrity of image datasets used in vision tasks.

Chen was a teaching assistant in the MIT math and electrical engineering and computer science departments, and received a teaching excellence award. She taught high school students at the Hampshire College Summer Studies in Math and was selected to participate in MISTI Global Teaching Labs in Italy.

Having studied the guzheng, a traditional Chinese instrument, since age 4, Chen served as president of the MIT Chinese Music Ensemble, explored Eastern and Western music synergies with the MIT Chamber Music Society, and performed at the United Nations. On campus, she was also active with the Asymptones a cappella group, the MIT Ring Committee, Ribotones, the Figure Skating Club, and the Undergraduate Association Innovation Committee.

Wilhem Hector

Wilhem Hector, a senior from Port-au-Prince, Haiti, majoring in mechanical engineering, was awarded a Global Rhodes Scholarship on Nov. 1. The first Haitian national to be named a Rhodes Scholar, Hector will pursue a master’s in energy systems at Oxford, followed by a master’s in education, focusing on digital and social change. His long-term goals are twofold: pioneering Haiti’s renewable energy infrastructure and expanding hands-on opportunities in the country’s national curriculum.

Hector developed his passion for energy through his research in the MIT Howland Lab, where he investigated the uncertainty of wind power production during active yaw control. He also helped launch the MIT Renewable Energy Clinic through his work on the sources of opposition to energy projects in the U.S. Beyond his research, Hector had notable contributions as an intern at Radia Inc. and DTU Wind Energy Systems, where he helped develop computational wind farm modeling and simulation techniques.

Outside of MIT, he leads the Hector Foundation, a nonprofit providing educational opportunities to young people in Haiti. He has raised over $80,000 in the past five years to finance its initiatives, including the construction of Project Manus, Haiti’s first open-use engineering makerspace. Hector’s service endeavors have been supported by the MIT PKG Center, which awarded him the Davis Peace Prize, the PKG Fellowship for Social Impact, and the PKG Award for Public Service.

Hector co-chairs both the Student Events Board and the Class of 2025 Senior Ball Committee and has served as the social chair for Chocolate City and the African Students Association.

Anushka Nair

Anushka Nair, from Portland, Oregon, will graduate next spring with BS and MEng degrees in computer science and engineering with concentrations in economics and AI. She plans to pursue a DPhil in social data science at the Oxford Internet Institute. Nair aims to develop ethical AI technologies that address pressing societal challenges, beginning with combating misinformation.

For her master’s thesis under Professor David Rand, Nair is developing LLM-powered fact-checking tools to detect nuanced misinformation beyond human or automated capabilities. She also researches human-AI co-reasoning at the MIT Center for Collective Intelligence with Professor Thomas Malone. Previously, she conducted research on autonomous vehicle navigation at Stanford’s AI and Robotics Lab, energy microgrid load balancing at MIT’s Institute for Data, Systems, and Society, and worked with Professor Esther Duflo in economics.

Nair interned in the Executive Office of the Secretary General at the United Nations, where she integrated technology solutions and assisted with launching the High-Level Advisory Body on AI. She also interned in Tesla’s energy sector, contributing to Autobidder, an energy trading tool, and led the launch of a platform for monitoring distributed energy resources and renewable power plants. Her work has earned her recognition as a Social and Ethical Responsibilities of Computing Scholar and a U.S. Presidential Scholar.

Nair has served as President of the MIT Society of Women Engineers and MIT and Harvard Women in AI, spearheading outreach programs to mentor young women in STEM fields. She also served as president of MIT Honors Societies Eta Kappa Nu and Tau Beta Pi.

David Oluigbo

David Oluigbo, from Washington, is a senior majoring in artificial intelligence and decision making and minoring in brain and cognitive sciences. At Oxford, he will undertake an MSc in applied digital health followed by an MSc in modeling for global health. Afterward, Oluigbo plans to attend medical school with the goal of becoming a physician-scientist who researches and applies AI to address medical challenges in low-income countries.

Since his first year at MIT, Oluigbo has conducted neuroscience research with Ev Fedorenko at the McGovern Institute for Brain Research and with Susanna Mierau’s Synapse and Network Development Group at Brigham and Women’s Hospital. His work with Mierau led to several publications and a poster presentation at the Federation of European Societies annual meeting.

In a summer internship at the National Institutes of Health Clinical Center, Oluigbo designed and trained machine-learning models on CT scans for automatic detection of neuroendocrine tumors, leading to first authorship on an International Society for Optics and Photonics conference proceeding paper, which he presented at the 2024 annual meeting. Oluigbo also did a summer internship with the Anyscale Learning for All Laboratory at the MIT Computer Science and Artificial Intelligence Laboratory.

Oluigbo is an EMT and systems administrator officer with MIT-EMS. He is a consultant for Code for Good, a representative on the MIT Schwarzman College of Computing Undergraduate Advisory Group, and holds executive roles with the Undergraduate Association, the MIT Brain and Cognitive Society, and the MIT Running Club.

Illuminating the architecture of the mind

This story also appears in the Winter 2025 issue of BrainScan

___

McGovern investigator Nancy Kanwisher and her team have big questions about the nature of the human mind. Energized by Kanwisher’s enthusiasm for finding out how and why the brain works as it does, her team collaborates broadly and embraces various tools of neuroscience. But their core discoveries tend to emerge from pictures of the brain in action. For Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT, “there’s nothing like looking inside.”

Kanwisher and her colleagues have scanned the brains of hundreds of volunteers using functional magnetic resonance imaging (fMRI). With each scan, they collect a piece of insight into how the brain is organized.

Nancy Kanwisher (right), whose unfaltering support for students and trainees has earned her awards for outstanding teaching and mentorship, is now working with research scientist RT Pramod to find the brain’s “physics network.” Photo: Steph Stevens

Recognizing faces

By visualizing the parts of the brain that get involved in various mental activities — and, importantly, which do not — they’ve discovered that certain parts of the brain specialize in surprisingly specific tasks. Earlier this year Kanwisher was awarded the prestigious Kavli Prize in Neuroscience for the discovery of one of these hyper-specific regions: a small spot within the brain’s neocortex that recognizes faces.

Kanwisher found that this region, which she named the fusiform face area (FFA), is highly sensitive to images of faces and appears to be largely uninterested in other objects. Without the FFA, the brain struggles with facial recognition — an impairment seen in patients who have experienced damage to this part of the brain.

Beyond the FFA

Not everything in the brain is so specialized. Many areas participate in a range of cognitive processes, and even the most specialized modules, like the FFA, must work with other brain regions to process and use information. Plus, Kanwisher and her team have tracked brain activity during many functions without finding regions devoted exclusively to those tasks. (There doesn’t appear to be a part of the brain dedicated to recognizing snakes, for example.)

Still, work in the Kanwisher lab demonstrates that as a specialized functional module within the brain, the FFA is not unique. In collaboration with McGovern colleagues Josh McDermott and Evelina Fedorenko, the group has found areas devoted to perceiving music and using language. There’s even a region dedicated to thinking about other people’s thoughts, identified by Rebecca Saxe in work she started as a graduate student in Kanwisher’s lab.

Kanwisher’s team has found several hyperspecific regions of the brain, including those dedicated to using language (red-orange), perceiving music (yellow), thinking about other people’s thoughts (blue), recognizing bodies (green), and our intuitive sense of physics (teal). (This is an artistic adaptation of Kanwisher’s data.)

Having established these regions’ roles, Kanwisher and her collaborators are now looking at how and why they become so specialized. Meanwhile, the group has also turned its attention to a more complex function that seems to largely take place within a defined network: our intuitive sense of physics.

The brain’s game engine

Early in life, we begin to understand the nature of objects and materials, such as the fact that objects can support but not move through each other. Later, we intuitively understand how it feels to move on a slippery floor, what happens when moving objects collide, and where a tossed ball will fall. “You can’t do anything at all in the world without some understanding of the physics of the world you’re acting on,” Kanwisher says.

Kanwisher says MIT colleague Josh Tenenbaum first sparked her interest in intuitive physical reasoning. Tenenbaum and his students had been arguing that humans understand the physical world using a simulation system, much like the physics engines that video games use to generate realistic movement and interactions within virtual environments. Kanwisher decided to team up with Tenenbaum to test whether there really is a game engine in the head, and if so, what it computes and represents.

By asking subjects in an MRI scanner to predict which way this block tower might fall, Kanwisher’s team is zeroing in on the location of the brain’s “physics network.” Image: RT Pramod, Nancy Kanwisher

To find out, Kanwisher and her team have asked volunteers to evaluate various scenarios while in an MRI scanner — some that require physical reasoning and some that do not. They found sizable parts of the brain that participate in physical reasoning tasks but stay quiet during other kinds of thinking.

Research scientist RT Pramod says he was initially skeptical the brain would dedicate special circuitry to the diverse tasks involved in our intuitive sense of physics — but he’s been convinced by the data he’s found. “I see consistent evidence that if you’re reasoning, if you’re thinking, or even if you’re looking at anything sort of ‘physics-y’ about the world, you will see activations in these regions and only in these regions — not anywhere else,” he says.

Pramod’s experiments also show that these regions are called on to make predictions about the physical world. When volunteers watch videos of objects whose trajectories portend a crash — but do not actually depict that crash — it is the physics network that signals what is about to happen. “Only these regions have this information, suggesting that maybe there is some truth to the physics engine hypothesis,” Pramod says.

Kanwisher says she doesn’t expect physical reasoning, which her group has tied to sizable swaths of the brain’s frontal and parietal cortex, to be executed by a module as distinct as the FFA. “It’s not going to be like one hyper-specific region and that’s all that happens there,” she says. “I think ultimately it’s much more interesting than that.”

To figure out what these regions can and cannot do, Kanwisher’s team has broadened the ways in which they ask volunteers to think about physics inside the MRI scanner. So far, Kanwisher says, the group’s tests have focused on rigid objects. But what about soft, squishy ones, or liquids?

Kanwisher’s team is exploring whether non-rigid materials, like the liquid in this image, engage the brain’s “physics network” in the same way as rigid objects. Image: Vivian Paulun

Vivian Paulun, a postdoc working jointly with Kanwisher and Tenenbaum, is investigating whether our innate expectations about these kinds of materials arise within the network that they have linked to physical reasoning about rigid objects. Another set of experiments will explore whether we use sounds, like that of a bouncing ball or a screeching car, to predict physical events with the same network that interprets visual cues.

Meanwhile, she is also excited about an opportunity to find out what happens when the brain’s physics network is damaged. With collaborators in England, the group plans to find out whether patients in whom a stroke has affected this part of the brain have specific deficits in physical reasoning.

Probing these questions could reveal fundamental truths about the human mind and intelligence. Pramod points out that it could also help advance artificial intelligence, which so far has been unable to match humans when it comes to physical reasoning. “Inferences that are sort of easy for us are still really difficult for even state-of-the-art computer vision,” he says. “If we want to get to a stage where we have really good machine learning algorithms that can interact with the world the way we do, I think we should first understand how the brain does it.”

Model reveals why debunking election misinformation often doesn’t work

When an election result is disputed, people who are skeptical about the outcome may be swayed by figures of authority who come down on one side or the other. Those figures can be independent monitors, political figures, or news organizations. However, these “debunking” efforts don’t always have the desired effect, and in some cases, they can lead people to cling more tightly to their original position.

Neuroscientists and political scientists at MIT and the University of California at Berkeley have now created a computational model that analyzes the factors that help to determine whether debunking efforts will persuade people to change their beliefs about the legitimacy of an election. Their findings suggest that while debunking fails much of the time, it can be successful under the right conditions.

For instance, the model showed that successful debunking is more likely if people are less certain of their original beliefs and if they believe the authority is unbiased or strongly motivated by a desire for accuracy. It also helps when an authority comes out in support of a result that goes against a bias they are perceived to hold: for example, Fox News declaring that Joseph R. Biden had won in Arizona in the 2020 U.S. presidential election.

“When people see an act of debunking, they treat it as a human action and understand it the way they understand human actions — that is, as something somebody did for their own reasons,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study. “We’ve used a very simple, general model of how people understand other people’s actions, and found that that’s all you need to describe this complex phenomenon.”

The findings could have implications as the United States prepares for the presidential election taking place on Nov. 5, as they help to reveal the conditions that would be most likely to result in people accepting the election outcome.

MIT graduate student Setayesh Radkani is the lead author of the paper, which appears today in a special election-themed issue of the journal PNAS Nexus. Marika Landau-Wells PhD ’18, a former MIT postdoc who is now an assistant professor of political science at the University of California at Berkeley, is also an author of the study.

Modeling motivation

In their work on election debunking, the MIT team took a novel approach, building on Saxe’s extensive work studying “theory of mind” — how people think about the thoughts and motivations of other people.

As part of her PhD thesis, Radkani has been developing a computational model of the cognitive processes that occur when people see others being punished by an authority. Not everyone interprets punitive actions the same way, depending on their previous beliefs about the action and the authority. Some may see the authority as acting legitimately to punish an act that was wrong, while others may see an authority overreaching to issue an unjust punishment.

Last year, after participating in an MIT workshop on the topic of polarization in societies, Saxe and Radkani had the idea to apply the model to how people react to an authority attempting to sway their political beliefs. They enlisted Landau-Wells, who received her PhD in political science before working as a postdoc in Saxe’s lab, to join their effort, and Landau-Wells suggested applying the model to debunking of beliefs regarding the legitimacy of an election result.

The computational model created by Radkani is based on Bayesian inference, which allows the model to continually update its predictions of people’s beliefs as they receive new information. This approach treats debunking as an action that a person undertakes for his or her own reasons. People who observe the authority’s statement then make their own interpretation of why the person said what they did. Based on that interpretation, people may or may not change their own beliefs about the election result.
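The paper’s full model is richer than this, but the core inference step can be sketched in a few lines of Python: an observer updates their belief about the election after an authority declares it legitimate, weighting the statement by how accuracy-motivated they believe the authority to be. The likelihood numbers below are illustrative assumptions, not values from the study.

```python
def update_belief(p_stolen, p_accuracy_motive):
    """One Bayesian update after an authority declares the election legitimate.

    p_stolen: the observer's prior probability that the election was stolen.
    p_accuracy_motive: the observer's belief that the authority is motivated
        by accuracy rather than by bias (an illustrative parameterization).
    """
    # If the authority cares about accuracy, its statement tracks the truth;
    # if it is seen as biased, it says "legitimate" regardless of the truth.
    p_say_legit_if_legit = p_accuracy_motive * 0.95 + (1 - p_accuracy_motive) * 0.9
    p_say_legit_if_stolen = p_accuracy_motive * 0.05 + (1 - p_accuracy_motive) * 0.9
    # Bayes' rule: P(stolen | authority said "legitimate")
    num = p_say_legit_if_stolen * p_stolen
    den = num + p_say_legit_if_legit * (1 - p_stolen)
    return num / den

# A doubtful but uncertain observer who trusts the authority's motives moves a lot:
# the belief in "stolen" drops from 0.60 to about 0.18.
trusting = update_belief(0.6, 0.9)
# An observer who sees the authority as biased barely moves (stays near 0.57).
distrusting = update_belief(0.6, 0.1)
```

This captures the qualitative pattern the researchers report: the same statement from the same authority is informative to one group and nearly meaningless to another, purely because of how each group interprets the speaker’s motives.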

Additionally, the model does not assume that any beliefs are necessarily incorrect or that any group of people is acting irrationally.

“The only assumption that we made is that there are two groups in the society that differ in their perspectives about a topic: One of them thinks that the election was stolen and the other group doesn’t,” Radkani says. “Other than that, these groups are similar. They share their beliefs about the authority — what the different motives of the authority are and how motivated the authority is by each of those motives.”

The researchers modeled more than 200 different scenarios in which an authority attempts to debunk a belief held by one group regarding the validity of an election outcome.

Each time they ran the model, the researchers altered the certainty levels of each group’s original beliefs, and they also varied the groups’ perceptions of the motivations of the authority. In some cases, groups believed the authority was motivated by promoting accuracy, and in others they did not. The researchers also altered the groups’ perceptions of whether the authority was biased toward a particular viewpoint, and how strongly the groups believed in those perceptions.

Building consensus

In each scenario, the researchers used the model to predict how each group would respond to a series of five statements made by an authority trying to convince them that the election had been legitimate. The researchers found that in most of the scenarios they looked at, beliefs remained polarized and in some cases became even further polarized. This polarization could also extend to new topics unrelated to the original context of the election, the researchers found.

However, under some circumstances, the debunking was successful, and beliefs converged on an accepted outcome. This was more likely to happen when people were initially more uncertain about their original beliefs.

“When people are very, very certain, they become hard to move. So, in essence, a lot of this authority debunking doesn’t matter,” Landau-Wells says. “However, there are a lot of people who are in this uncertain band. They have doubts, but they don’t have firm beliefs. One of the lessons from this paper is that we’re in a space where the model says you can affect people’s beliefs and move them towards true things.”

Another factor that can lead to belief convergence is if people believe that the authority is unbiased and highly motivated by accuracy. Even more persuasive is when an authority makes a claim that goes against their perceived bias — for instance, Republican governors stating that elections in their states had been fair even though the Democratic candidate won.

As the 2024 presidential election approaches, grassroots efforts have been made to train nonpartisan election observers who can vouch for whether an election was legitimate. These types of organizations may be well-positioned to help sway people who might have doubts about the election’s legitimacy, the researchers say.

“They’re trying to train people to be independent, unbiased, and committed to the truth of the outcome more than anything else. Those are the types of entities that you want. We want them to succeed in being seen as independent. We want them to succeed as being seen as truthful, because in this space of uncertainty, those are the voices that can move people toward an accurate outcome,” Landau-Wells says.

The research was funded, in part, by the Patrick J. McGovern Foundation and the Guggenheim Foundation.

Scientists find neurons that process language on different timescales

Using functional magnetic resonance imaging (fMRI), neuroscientists have identified several regions of the brain that are responsible for processing language. However, discovering the specific functions of neurons in those regions has proven difficult because fMRI, which measures changes in blood flow, doesn’t have high enough resolution to reveal what small populations of neurons are doing.

Now, using a more precise technique that involves recording electrical activity directly from the brain, MIT neuroscientists have identified different clusters of neurons that appear to process different amounts of linguistic context. These “temporal windows” range from just one word up to about six words.

The temporal windows may reflect different functions for each population, the researchers say. Populations with shorter windows may analyze the meanings of individual words, while those with longer windows may interpret more complex meanings created when words are strung together.

“This is the first time we see clear heterogeneity within the language network,” says Evelina Fedorenko, an associate professor of neuroscience at MIT. “Across dozens of fMRI experiments, these brain areas all seem to do the same thing, but it’s a large, distributed network, so there’s got to be some structure there. This is the first clear demonstration that there is structure, but the different neural populations are spatially interleaved so we can’t see these distinctions with fMRI.”

Fedorenko, who is also a member of MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears today in Nature Human Behaviour. MIT postdoc Tamar Regev and Harvard University graduate student Colton Casto are the lead authors of the paper.

Temporal windows

Functional MRI, which has helped scientists learn a great deal about the roles of different parts of the brain, works by measuring changes in blood flow in the brain. These measurements act as a proxy for neural activity during a particular task. However, each “voxel,” or three-dimensional chunk, of an fMRI image represents hundreds of thousands to millions of neurons and sums up activity across about two seconds, so it can’t reveal fine-grained detail about what those neurons are doing.

One way to get more detailed information about neural function is to record electrical activity using electrodes implanted in the brain. These data are hard to come by because this procedure is done only in patients who are already undergoing surgery for a neurological condition such as severe epilepsy.

“It can take a few years to get enough data for a task because these patients are relatively rare, and in a given patient electrodes are implanted in idiosyncratic locations based on clinical needs, so it takes a while to assemble a dataset with sufficient coverage of some target part of the cortex. But these data, of course, are the best kind of data we can get from human brains: You know exactly where you are spatially and you have very fine-grained temporal information,” Fedorenko says.

In a 2016 study, Fedorenko reported using this approach to study the language processing regions of six people. Electrical activity was recorded while the participants read four different types of language stimuli: complete sentences, lists of words, lists of non-words, and “jabberwocky” sentences — sentences that have grammatical structure but are made of nonsense words.

Those data showed that in some neural populations in language processing regions, activity gradually built up over a period of several words while the participants were reading sentences. However, this did not happen when they read lists of words, lists of nonwords, or jabberwocky sentences.

In the new study, Regev and Casto went back to those data and analyzed the temporal response profiles in greater detail. Their original dataset contained recordings of electrical activity from 177 language-responsive electrodes across the six patients. Conservative estimates suggest that each electrode captures the averaged activity of about 200,000 neurons. They also obtained new data from a second set of 16 patients, which included recordings from another 362 language-responsive electrodes.

When the researchers analyzed these data, they found that in some of the neural populations, activity would fluctuate up and down with each word. In others, however, activity would build up over multiple words before falling again, and yet others would show a steady buildup of neural activity over longer spans of words.

By comparing their data with the predictions of a computational model designed to process stimuli with different temporal windows, the researchers found that neural populations from language processing areas could be divided into three clusters. These clusters represent temporal windows of either one, four, or six words.
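The clustering logic described above can be sketched in a toy simulation. This is purely an illustrative sketch, not the authors’ actual model: hypothetical “electrode” signals are generated by averaging a word-by-word drive over windows of one, four, or six words, and each signal is then assigned to whichever candidate window’s model prediction correlates best with it.

```python
import random

random.seed(0)

def window_prediction(word_drive, k):
    """Model response of a population that averages activity over the last k words."""
    out = []
    for t in range(len(word_drive)):
        lo = max(0, t - k + 1)
        out.append(sum(word_drive[lo : t + 1]) / k)  # causal boxcar over up to k words
    return out

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

n_words = 40
word_drive = [random.random() for _ in range(n_words)]  # per-word input signal

# Hypothetical "electrodes", each truly integrating over 1, 4, or 6 words
true_windows = [1, 4, 6]
electrodes = [window_prediction(word_drive, k) for k in true_windows]

# Assign each electrode to the candidate window whose model prediction fits best
candidate_windows = [1, 4, 6]
assigned = []
for sig in electrodes:
    fits = [pearson(sig, window_prediction(word_drive, k)) for k in candidate_windows]
    assigned.append(candidate_windows[fits.index(max(fits))])

print(assigned)  # [1, 4, 6]
```

Here the simulated signals are noise-free, so each electrode is recovered with its true window; real recordings are noisy, and the study’s actual model comparison is considerably more sophisticated.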

“It really looks like these neural populations integrate information across different timescales along the sentence,” Regev says.

Processing words and meaning

These differences in temporal window size would have been impossible to see using fMRI, the researchers say.

“At the resolution of fMRI, we don’t see much heterogeneity within language-responsive regions. If you localize in individual participants the voxels in their brain that are most responsive to language, you find that their responses to sentences, word lists, jabberwocky sentences and non-word lists are highly similar,” Casto says.

The researchers were also able to determine the anatomical locations where these clusters were found. Neural populations with the shortest temporal window were found predominantly in the posterior temporal lobe, though some were also found in the frontal or anterior temporal lobes. Neural populations from the two other clusters, with longer temporal windows, were spread more evenly throughout the temporal and frontal lobes.

Fedorenko’s lab now plans to study whether these timescales correspond to different functions. One possibility is that the shortest timescale populations may be processing the meanings of a single word, while those with longer timescales interpret the meanings represented by multiple words.

“We already know that in the language network, there is sensitivity to how words go together and to the meanings of individual words,” Regev says. “So that could potentially map to what we’re finding, where the longest timescale is sensitive to things like syntax or relationships between words, and maybe the shortest timescale is more sensitive to features of single words or parts of them.”

The research was funded by the Zuckerman-CHE STEM Leadership Program, the Poitras Center for Psychiatric Disorders Research, the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, the U.S. National Institutes of Health, an American Epilepsy Society Research and Training Fellowship, the McDonnell Center for Systems Neuroscience, Fondazione Neurone, the McGovern Institute, MIT’s Department of Brain and Cognitive Sciences, and the Simons Center for the Social Brain.

A new strategy to cope with emotional stress

Some people, especially those in public service, perform admirable feats—healthcare workers fighting to keep patients alive, or first responders arriving at the scene of a car crash. But the emotional weight can become a mental burden. Research has shown that emergency personnel are at elevated risk for mental health challenges like post-traumatic stress disorder. How can people undergo such stressful experiences and also maintain their well-being?

A new study from the McGovern Institute reveals that a cognitive strategy focused on social good may be effective in helping people cope with distressing events. The research team found that the approach was comparable to another well-established emotion regulation strategy, unlocking a new tool for dealing with highly adverse situations.

“How you think can improve how you feel.”
– John Gabrieli

“This research suggests that the social good approach might be particularly useful in improving well-being for those constantly exposed to emotionally taxing events,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT, who is a senior author of the paper.

The study, published today in PLOS ONE, is the first to examine the efficacy of this cognitive strategy. Nancy Tsai, a postdoctoral research scientist in Gabrieli’s lab at the McGovern Institute, is the lead author of the paper.

Emotion regulation tools

Emotion regulation is the ability to mentally reframe how we experience emotions—a skill critical to maintaining good mental health. Doing so can make one feel better when dealing with adverse events, and emotion regulation has been shown to boost emotional, social, cognitive, and physiological outcomes across the lifespan.

MIT postdoctoral researcher Nancy Tsai. Photo: Steph Stevens

One emotion regulation strategy is “distancing,” where a person copes with a negative event by imagining it as happening far away, a long time ago, or from a third-person perspective. Distancing has been well-documented as a useful cognitive tool, but it may be less effective in certain situations, especially ones that are socially charged—like a firefighter rescuing a family from a burning home. Rather than distancing themselves, a person may instead be forced to engage directly with the situation.

“In these cases, the ‘social good’ approach may be a powerful alternative,” says Tsai. “When a person uses the social good method, they view a negative situation as an opportunity to help others or prevent further harm.” For example, a firefighter experiencing emotional distress might focus on the fact that their work enables them to save lives. The idea had yet to be backed by scientific investigation, so Tsai and her team, alongside Gabrieli, saw an opportunity to rigorously probe this strategy.

A novel study

The MIT researchers recruited a cohort of adults and had them complete a questionnaire to gather information including demographics, personality traits, and current well-being, as well as how they regulated their emotions and dealt with stress. The cohort was randomly split into two groups: a distancing group and a social good group. In the online study, each group was shown a series of images that were either neutral (such as fruit) or contained highly aversive content (such as bodily injury). Participants were fully informed of the types of images they might see and could opt out of the study at any time.

Each group was asked to use their assigned cognitive strategy to respond to half of the negative images. For example, while looking at a distressing image, a person in the distancing group could have imagined that it was a screenshot from a movie. Conversely, a subject in the social good group might have responded to the image by envisioning that they were a first responder saving people from harm. For the other half of the negative images, participants were asked to only look at them and pay close attention to their emotions. The researchers asked the participants how they felt after each image was shown.

Social good as a potent strategy

The MIT team found that both the distancing and social good approaches helped diminish negative emotions. Participants reported feeling better when they used these strategies after viewing adverse content compared to when they did not, and stated that both strategies were easy to implement.

The results also revealed that, overall, distancing yielded a stronger effect. Importantly, however, Tsai and Gabrieli believe that this study offers compelling evidence for social good as a powerful method better suited to situations when people cannot distance themselves, like rescuing someone from a car crash, “which is more probable for people in the real world,” notes Tsai. Moreover, the team discovered that people who most successfully used the social good approach were more likely to view stress as enhancing rather than debilitating. Tsai says this link may point to psychological mechanisms that underlie both emotion regulation and how people respond to stress.

“The social good approach may be a potent strategy to combat the immense emotional demands of certain professions.”
– John Gabrieli

Additionally, the results showed that older adults used the cognitive strategies more effectively than younger adults. The team suspects this is because, as prior research has shown, older adults are more adept at regulating their emotions, likely owing to greater life experience. The authors note that successful emotion regulation also requires cognitive flexibility, or having a malleable mindset to adapt well to different situations.

“This is not to say that people, such as physicians, should reframe their emotions to the point where they fully detach themselves from negative situations,” says Gabrieli. “But our study shows that the social good approach may be a potent strategy to combat the immense emotional demands of certain professions.”

The MIT team says that future studies are needed to further validate this work. Such research is promising, they add, because it can uncover new cognitive tools that equip individuals to take care of themselves as they bravely assume the challenge of taking care of others.

What is language for?

Language is a defining feature of humanity, and for centuries, philosophers and scientists have contemplated its true purpose. We use language to share information and exchange ideas—but is it more than that? Do we use language not just to communicate, but to think?

In the June 19, 2024, issue of the journal Nature, McGovern Institute neuroscientist Evelina Fedorenko and colleagues argue that we do not. Language, they say, is primarily a tool for communication.

Fedorenko acknowledges that there is an intuitive link between language and thought. Many people experience an inner voice that seems to narrate their own thoughts. And it’s not unreasonable to expect that well-spoken, articulate individuals are also clear thinkers. But as compelling as these associations can be, they are not evidence that we actually use language to think.

“I think there are a few strands of intuition and confusions that have led people to believe very strongly that language is the medium of thought,” she says. “But when they are pulled apart thread by thread, they don’t really hold up to empirical scrutiny.”

Separating language and thought

For centuries, language’s potential role in facilitating thinking was nearly impossible to evaluate scientifically.

McGovern Investigator Ev Fedorenko in the Martinos Imaging Center at MIT. Photo: Caitlin Cunningham

But neuroscientists and cognitive scientists now have tools that enable a more rigorous consideration of the idea. Evidence from both fields, which Fedorenko, MIT cognitive scientist and linguist Edward Gibson, and University of California Berkeley cognitive scientist Steven Piantadosi review in their Nature Perspective, supports the idea that language is a tool for communication, not for thought.

“What we’ve learned by using methods that actually tell us about the engagement of the linguistic processing mechanisms is that those mechanisms are not really engaged when we think,” Fedorenko says. Also, she adds, “you can take those mechanisms away, and it seems that thinking can go on just fine.”

Over the past 20 years, Fedorenko and other neuroscientists have advanced our understanding of what happens in the brain as it generates and understands language. Now, using functional MRI to find parts of the brain that are specifically engaged when someone reads or listens to sentences or passages, they can reliably identify an individual’s language-processing network. Then they can monitor those brain regions while the person performs other tasks, from solving a sudoku puzzle to reasoning about other people’s beliefs.

“Your language system is basically silent when you do all sorts of thinking.” – Ev Fedorenko

“Pretty much everything we’ve tested so far, we don’t see any evidence of the engagement of the language mechanisms,” Fedorenko says. “Your language system is basically silent when you do all sorts of thinking.”

That’s consistent with observations from people who have lost the ability to process language due to an injury or stroke. Severely affected patients can be completely unable to process words, yet this does not interfere with their ability to solve math problems, play chess, or plan for future events. “They can do all the things that they could do before their injury. They just can’t take those mental representations and convert them into a format which would allow them to talk about them with others,” Fedorenko says. “If language gives us the core representations that we use for reasoning, then…destroying the language system should lead to problems in thinking as well, and it really doesn’t.”

Conversely, intellectual impairments do not always associate with language impairment; people with intellectual disability disorders or neuropsychiatric disorders that limit their ability to think and reason do not necessarily have problems with basic linguistic functions. Just as language does not appear to be necessary for thought, Fedorenko and colleagues conclude that it is also not sufficient to produce clear thinking.

Language optimization

In addition to arguing that language is unlikely to be used for thinking, the scientists considered its suitability as a communication tool, drawing on findings from linguistic analyses. Analyses across dozens of diverse languages, both spoken and signed, have found recurring features that make them easy to produce and understand. “It turns out that pretty much any property you look at, you can find evidence that languages are optimized in a way that makes information transfer as efficient as possible,” Fedorenko says.

That’s not a new idea, but it has held up as linguists analyze larger corpora across more diverse sets of languages, which has become possible in recent years as the field has assembled corpora that are annotated for various linguistic features. Such studies find that across languages, sounds and words tend to be pieced together in ways that minimize effort for the language producer without muddling the message. For example, commonly used words tend to be short, while words whose meanings depend on one another tend to cluster close together in sentences. Likewise, linguists have noted features that help languages convey meaning despite potential “signal distortions,” whether due to attention lapses or ambient noise.

“All of these features seem to suggest that the forms of languages are optimized to make communication easier,” Fedorenko says, pointing out that such features would be irrelevant if language were primarily a tool for internal thought.

“Given that languages have all these properties, it’s likely that we use language for communication,” she says. She and her coauthors conclude that as a powerful tool for transmitting knowledge, language reflects the sophistication of human cognition—but does not give rise to it.

Nancy Kanwisher Shares 2024 Kavli Prize in Neuroscience

The Norwegian Academy of Science and Letters today announced the 2024 Kavli Prize Laureates in the fields of astrophysics, nanoscience, and neuroscience. The 2024 Kavli Prize in Neuroscience honors Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT and an investigator at the McGovern Institute, along with UC Berkeley neurobiologist Doris Tsao, and Rockefeller University neuroscientist Winrich Freiwald for their discovery of a highly localized and specialized system for representation of faces in human and non-human primate neocortex. The neuroscience laureates will share $1 million USD.

“Kanwisher, Freiwald, and Tsao together discovered a localized and specialized neocortical system for face recognition,” says Kristine Walhovd, Chair of the Kavli Neuroscience Committee. “Their outstanding research will ultimately further our understanding of recognition not only of faces, but objects and scenes.”

Overcoming failure

As a graduate student at MIT in the early days of functional brain imaging, Kanwisher was fascinated by the potential of the emerging technology to answer a suite of questions about the human mind. But a lack of brain imaging resources and a series of failed experiments led Kanwisher to consider leaving the field for good. She credits her advisor, MIT Professor of Psychology Molly Potter, for supporting her through this challenging time and for teaching her how to make powerful inferences about the inner workings of the mind from behavioral data alone.

After receiving her PhD from MIT, Kanwisher spent a year studying nuclear strategy with a MacArthur Foundation Fellowship in Peace and International Security, but eventually returned to science by accepting a faculty position at Harvard University where she could use the latest brain imaging technology to pursue the scientific questions that had always fascinated her.

Zeroing in on faces

Recognizing faces is important for social interaction in many animals. Previous work in human psychology and animal research had suggested the existence of a functionally specialized system for face recognition, but this system had not clearly been identified with brain imaging technology. It is here that Kanwisher saw her opportunity.

Using a new method at the time, called functional magnetic resonance imaging or fMRI, Kanwisher’s team scanned people while they looked at faces and while they looked at objects, and searched for brain regions that responded more to one than the other. They found a small patch of neocortex, now called the fusiform face area (FFA), that is dedicated specifically to the task of face recognition. She found individual differences in the location of this area and devised an analysis technique to effectively localize specialized functional regions in the brain. This technique is now widely used and applied to domains beyond the face recognition system. Notably, Kanwisher’s first FFA paper was co-authored with Josh McDermott, who was an undergrad at Harvard University at the time, and is now an associate investigator at the McGovern Institute and holds a faculty position alongside Kanwisher in MIT’s Department of Brain and Cognitive Sciences.

The Kanwisher lab at Harvard University circa 1996. From left to right: Nancy Kanwisher, Josh McDermott (then an undergrad), Marvin Chun (postdoc), Ewa Wojciulik (postdoc), and Jody Culham (grad student). Photo: Nancy Kanwisher

From humans to monkeys

Inspired by Kanwisher’s findings, Winrich Freiwald and Doris Tsao together used fMRI to localize similar face patches in macaque monkeys. They mapped out six distinct brain regions, known as the face patch system, including these regions’ functional specialization and how they are connected. By recording the activity of individual brain cells, they revealed how cells in some face patches specialize in faces with particular views.

Tsao proceeded to identify how the face patches work together to identify a face, through a specific code that enables single cells to identify faces by assembling information of facial features. For example, some cells respond to the presence of hair, others to the distance between the eyes. Freiwald uncovered that a separate brain region, called the temporal pole, accelerates our recognition of familiar faces, and that some cells are selectively responsive to familiar faces.

“It was a special thrill for me when Doris and Winrich found face patches in monkeys using fMRI,” says Kanwisher, whose lab at MIT’s McGovern Institute has gone on to uncover many other regions of the human brain that engage in specific aspects of perception and cognition. “They are scientific heroes to me, and it is a thrill to receive the Kavli Prize in neuroscience jointly with them.”

“Nancy and her students have identified neocortical subregions that differentially engage in the perception of faces, places, music and even what others think,” says McGovern Institute Director Robert Desimone. “We are delighted that her groundbreaking work into the functional organization of the human brain is being honored this year with the Kavli Prize.”

Together, the laureates, with their work on neocortical specialization for face recognition, have provided basic principles of neural organization which will further our understanding of how we perceive the world around us.

About the Kavli Prize

The Kavli Prize is a partnership among The Norwegian Academy of Science and Letters, The Norwegian Ministry of Education and Research, and The Kavli Foundation (USA). The Kavli Prize honors scientists for breakthroughs in astrophysics, nanoscience and neuroscience that transform our understanding of the big, the small and the complex. Three one-million-dollar prizes are awarded every other year in each of the three fields. The Norwegian Academy of Science and Letters selects the laureates based on recommendations from three independent prize committees whose members are nominated by The Chinese Academy of Sciences, The French Academy of Sciences, The Max Planck Society of Germany, The U.S. National Academy of Sciences, and The Royal Society, UK.

What is consciousness?

In the hit TV show “Westworld,” Dolores Abernathy, a golden-tressed belle, lives in the days when Manifest Destiny still echoed in America. She begins to notice unusual stirrings shaking up her quaint western town—and soon discovers that her skin is synthetic, and her mind, metal. She’s a cyborg meant to entertain humans. The key to her autonomy lies in reaching consciousness.

Shows like “Westworld” and other media probe the idea of consciousness, attempting to nail down a definition of the concept. However, though humans have ruminated on consciousness for centuries, we still don’t have a solid definition (even the Merriam-Webster dictionary lists five). One framework suggests that consciousness is any experience, from eating a candy bar to heartbreak. Another argues that it is how certain stimuli influence one’s behavior.

MIT graduate student Adam Eisen.

While some search for a philosophical explanation, MIT graduate student Adam Eisen seeks a scientific one.

Eisen studies consciousness in the labs of Ila Fiete, an associate investigator at the McGovern Institute, and Earl Miller, an investigator at the Picower Institute for Learning and Memory. His work melds seemingly opposite fields, using mathematical models to quantitatively explain, and thereby ground, the loftiness of consciousness.

In the Fiete lab, Eisen leverages computational methods to compare the brain’s electrical signals in an awake, conscious state to those under anesthesia, which dampens communication between neurons so that people feel no pain or lose consciousness.

“What’s nice about anesthesia is that we have a reliable way of turning off consciousness,” says Eisen. “So we’re now able to ask: What’s the fluctuation of electrical activity in a conscious versus unconscious brain? By characterizing how these states vary—with the precision enabled by computational models—we can start to build a better intuition for what underlies consciousness.”

Theories of consciousness

How are scientists thinking about consciousness? Eisen says that there are four major theories circulating in the neuroscience sphere. These theories are outlined below.

Global workspace theory

Consider the placement of your tongue in your mouth. This sensory information is always there, but you only notice the sensation when you make the effort to think about it. How does this happen?

“Global workspace theory seeks to explain how information becomes available to our consciousness,” he says. “This is called access consciousness—the kind that stores information in your mind and makes it available for verbal report. In this view, sensory information is broadcasted to higher-level regions of the brain by a process called ignition.” The theory proposes that widespread jolts of neuronal activity, or “spiking,” are essential for ignition, much as a few claps can set off applause across an entire audience. It’s through ignition that we reach consciousness.

Eisen’s research in anesthesia suggests, though, that not just any spiking will do. There needs to be a balance: enough activity to spark ignition, but also enough stability such that the brain doesn’t lose its ability to respond to inputs and produce reliable computations to reach consciousness.

Higher order theories

Let’s say you’re listening to “Here Comes The Sun” by The Beatles. Your brain processes the medley of auditory stimuli; you hear the bouncy guitar, upbeat drums, and George Harrison’s perky vocals. You’re having a musical experience—what it’s like to listen to music. According to higher-order theories, such an experience unlocks consciousness.

“Higher-order theories posit that a conscious mental state involves having higher-order mental representations of stimuli—usually in the higher levels of the brain responsible for cognition—to experience the world,” Eisen says.

Integrated information theory

“Imagine jumping into a lake on a warm summer day. All components of that experience—the feeling of the sun on your skin and the coolness of the water as you submerge—come together to form your ‘phenomenal consciousness,’” Eisen says. If the day was slightly less sunny or the water a fraction warmer, he explains, the experience would be different.

“Integrated information theory suggests that phenomenal consciousness involves an experience that is irreducible, meaning that none of the components of that experience can be separated or altered without changing the experience itself,” he says.

Attention schema theory

Attention schema theory, Eisen explains, says ‘attention’ is the information that we are focused on in the world, while ‘awareness’ is the model we have of our attention. He cites an interesting psychology study to disentangle attention and awareness.

In the study, the researchers showed human subjects a mixed sequence of two numbers and six letters on a computer. The participants were asked to report back what the numbers were. While they were doing this task, faintly detectable dots moved across the screen in the background. The interesting part, Eisen notes, is that people weren’t aware of the dots—that is, they didn’t report that they saw them. But despite saying they didn’t see the dots, people performed worse on the task when the dots were present.

“This suggests that some of the subjects’ attention was allocated towards the dots, limiting their available attention for the actual task,” he says. “In this case, people’s awareness didn’t track their attention. The subjects were not aware of the dots, even though the study shows that the dots did indeed affect their attention.”

The science behind consciousness

Eisen notes that a solid understanding of the neural basis of consciousness has yet to be cemented. However, he and his research team are advancing in this quest. “In our work, we found that brain activity is more ‘unstable’ under anesthesia, meaning that it lacks the ability to recover from disturbances—like distractions or random fluctuations in activity—and regain a normal state,” he says.

He and his fellow researchers believe this is because the unconscious brain can’t reliably engage in computations like the conscious brain does, and sensory information gets lost in the noise. This crucial finding points to how the brain’s stability may be a cornerstone of consciousness.

There’s still more work to do, Eisen says. But eventually, he hopes that this research can help crack the enduring mystery of how consciousness shapes human existence. “There is so much complexity and depth to human experience, emotion, and thought. Through rigorous research, we may one day reveal the machinery that gives us our common humanity.”

For people who speak many languages, there’s something special about their native tongue

A new study of people who speak many languages has found that there is something special about how the brain processes their native language.

In the brains of these polyglots — people who speak five or more languages — the same language regions light up when they listen to any of the languages that they speak. In general, this network responds more strongly to languages in which the speaker is more proficient, with one notable exception: the speaker’s native language. When listening to one’s native language, language network activity drops off significantly.

The findings suggest there is something unique about the first language one acquires, which allows the brain to process it with minimal effort, the researchers say.

“Something makes it a little bit easier to process — maybe it’s that you’ve spent more time using that language — and you get a dip in activity for the native language compared to other languages that you speak proficiently,” says Evelina Fedorenko, an associate professor of neuroscience at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Saima Malik-Moraleda, a graduate student in the Speech and Hearing Bioscience and Technology Program at Harvard University, and Olessia Jouravlev, a former MIT postdoc who is now an associate professor at Carleton University, are the lead authors of the paper, which appears today in the journal Cerebral Cortex.

Many languages, one network

McGovern Investigator Ev Fedorenko in the Martinos Imaging Center at MIT. Photo: Caitlin Cunningham

The brain’s language processing network, located primarily in the left hemisphere, includes regions in the frontal and temporal lobes. In a 2021 study, Fedorenko’s lab found that the language network in the brains of polyglots was less active when they listened to their native language than the language networks of monolingual speakers were when they listened to theirs.

In the new study, the researchers wanted to expand on that finding and explore what happens in the brains of polyglots as they listen to languages in which they have varying levels of proficiency. Studying polyglots can help researchers learn more about the functions of the language network, and how languages learned later in life might be represented differently than a native language or languages.

“With polyglots, you can do all of the comparisons within one person. You have languages that vary along a continuum, and you can try to see how the brain modulates responses as a function of proficiency,” Fedorenko says.

For the study, the researchers recruited 34 polyglots, each of whom had at least some degree of proficiency in five or more languages but were not bilingual or multilingual from infancy. Sixteen of the participants spoke 10 or more languages, including one who spoke 54 languages with at least some proficiency.

Each participant was scanned with functional magnetic resonance imaging (fMRI) as they listened to passages read in eight different languages. These included their native language, a language they were highly proficient in, a language they were moderately proficient in, and a language in which they described themselves as having low proficiency.

They were also scanned while listening to four languages they didn’t speak at all. Two of these were languages from the same family (such as Romance languages) as a language they could speak, and two were languages completely unrelated to any languages they spoke.

The passages used for the study came from two different sources, which the researchers had previously developed for other language studies. One was a set of Bible stories recorded in many different languages, and the other consisted of passages from “Alice in Wonderland” translated into many languages.

Brain scans revealed that the language network lit up the most when participants listened to languages in which they were the most proficient. However, that did not hold true for the participants’ native languages, which activated the language network much less than non-native languages in which they had similar proficiency. This suggests that people are so proficient in their native language that the language network doesn’t need to work very hard to interpret it.

“As you increase proficiency, you can engage linguistic computations to a greater extent, so you get these progressively stronger responses. But then if you compare a really high-proficiency language and a native language, it may be that the native language is just a little bit easier, possibly because you’ve had more experience with it,” Fedorenko says.

Brain engagement

The researchers saw a similar phenomenon when polyglots listened to languages that they don’t speak: Their language network was more engaged when listening to languages related to a language they could understand than when listening to completely unfamiliar languages.

“Here we’re getting a hint that the response in the language network scales up with how much you understand from the input,” Malik-Moraleda says. “We didn’t quantify the level of understanding here, but in the future we’re planning to evaluate how much people are truly understanding the passages that they’re listening to, and then see how that relates to the activation.”

The researchers also found that the multiple demand network, a brain network that turns on whenever the brain performs a cognitively demanding task, becomes active when listening to languages other than one’s native language.

“What we’re seeing here is that the language regions are engaged when we process all these languages, and then there’s this other network that comes in for non-native languages to help you out because it’s a harder task,” Malik-Moraleda says.

In this study, most of the polyglots began studying their non-native languages as teenagers or adults, but in future work, the researchers hope to study people who learned multiple languages from a very young age. They also plan to study people who learned one language from infancy but moved to the United States at a very young age and began speaking English as their dominant language, while becoming less proficient in their native language, to help disentangle the effects of proficiency versus age of acquisition on brain responses.

The research was funded by the McGovern Institute for Brain Research, MIT’s Department of Brain and Cognitive Sciences, and the Simons Center for the Social Brain.