Language processing beyond the neocortex

The cerebellum, highlighted in red. Image: Anatomography maintained by Life Science Databases (LSDB).

The ability to use language to communicate is one of the things that makes us human. At MIT’s McGovern Institute, scientists led by Evelina Fedorenko have defined an entire network of areas within the brain dedicated to this ability, which work together when we speak, listen, read, write, or sign.

Much of the language network lies within the brain’s neocortex, where many of our most sophisticated cognitive functions are carried out. Now, Fedorenko’s lab, which is part of MIT’s Department of Brain and Cognitive Sciences, has identified language-processing regions within the cerebellum, extending the language network to a part of the brain better known for helping to coordinate the body’s movements. Their findings are reported January 21, 2026, in the journal Neuron.

“It’s like there’s this region in the cerebellum that we’ve been forgetting about for a long time,” says Colton Casto, a graduate student at Harvard and MIT who works in Fedorenko’s lab. “If you’re a language researcher, you should be paying attention to the cerebellum.”

Imaging the language network

There have been hints that the cerebellum makes important contributions to language. Some functional imaging studies detected activity in this area during language use, and people who suffer damage to the cerebellum sometimes experience language impairments. But no one had been able to pin down exactly which parts of the cerebellum were involved or tease out their roles in language processing.

To get some answers, Fedorenko’s lab took a systematic approach, applying the same methods they have used to map the language network in the neocortex. For 15 years, the lab has captured functional brain imaging data as volunteers carried out various tasks inside an MRI scanner. By monitoring brain activity as people engaged in different kinds of language tasks, like reading sentences or listening to spoken words, as well as non-linguistic tasks, like listening to noise or memorizing spatial patterns, the team has been able to identify parts of the brain that are exclusively dedicated to language processing.

Their work shows that everyone’s language network uses the same neocortical regions. The precise anatomical location of these regions varies, however, so to study the language network in any individual, Fedorenko and her team must map that person’s network inside an MRI scanner using their language-localizer tasks.

Satellite language network

While the Fedorenko lab has largely focused on how the neocortex contributes to language processing, their brain scans also capture activity in the cerebellum. So Casto revisited those scans, analyzing cerebellar activity from more than 800 people to look for regions involved in language processing. Fedorenko points out that teasing out the individual anatomy of the language network turned out to be particularly vital in the cerebellum, where neurons are densely packed and areas with different functional specializations sit very close to one another. Ultimately, Casto identified four cerebellar areas that were consistently engaged during language use.
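
To make that mapping logic concrete, here is a minimal sketch of how language-responsive locations can be picked out within a single person: compare responses to a language condition against a control condition, voxel by voxel. The simulated data, threshold, and variable names below are illustrative assumptions, not the lab's actual pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_runs, n_voxels = 10, 5000                          # hypothetical single-subject data
lang = rng.normal(0.5, 1.0, (n_runs, n_voxels))      # per-run responses to sentences
ctrl = rng.normal(0.0, 1.0, (n_runs, n_voxels))      # per-run responses to a control task

# Paired t-test of the sentences > control contrast, run separately at each voxel.
t, p = stats.ttest_rel(lang, ctrl, axis=0)
language_voxels = (t > 0) & (p < 0.001)              # treated here as "language-responsive"

print(f"{language_voxels.sum()} of {n_voxels} voxels pass the contrast")
```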

Three of these regions were clearly involved in language use, but also reliably became engaged during certain kinds of non-linguistic tasks. Casto says this was a surprise, because all the core language areas in the neocortex are dedicated exclusively to language processing. The researchers speculate that the cerebellum may be integrating information from different parts of the cortex—a function that could be important for many cognitive tasks.

“We’ve found that language is distinct from many, many other things—but at some point, complex cognition requires everything to work together,” Fedorenko says. “How do these different kinds of information get connected? Maybe parts of the cerebellum serve that function.”

The researchers also found a spot in the right posterior cerebellum with activity patterns that more closely echoed those of the language network in the neocortex. This region stayed silent during non-linguistic tasks, but became active during language use. For all of the linguistic activities that Casto analyzed, this region exhibited patterns of activity that were very similar to what the lab has seen in neocortical components of the language network. “Its contribution to language seems pretty similar,” Casto says. The team describes this area as a “cerebellar satellite” of the language network.

Still, the researchers think it’s unlikely that neurons in the cerebellum, which are organized very differently than those in the neocortex, replicate the precise function of other parts of the language network. Fedorenko’s team plans to explore the function of this satellite region more deeply, investigating whether it may participate in different kinds of tasks.

The researchers are also exploring the possibility that the cerebellum is particularly important for language learning—playing an outsized role during development or when people learn languages later in life.

Fedorenko says the discovery may also have implications for treating language impairments caused when an injury or disease damages the brain’s neocortical language network. “This area may provide a very interesting potential target to help recovery from aphasia,” Fedorenko says. Currently, researchers are exploring the possibility that non-invasively stimulating language-associated parts of the brain might promote language recovery. “This right cerebellar region may be just the right thing to potentially stimulate to up-regulate some of that function that’s lost,” Fedorenko says.

Unpacking social intelligence

Experience is a powerful teacher—and not every experience has to be our own to help us understand the world. What happens to others is instructive, too. That’s true for humans as well as for other social animals. New research from scientists at the McGovern Institute shows what happens in the brains of monkeys as they integrate their observations of others with knowledge gleaned from their own experience.

“The study shows how you use observation to update your assumptions about the world,” explains McGovern Institute Investigator Mehrdad Jazayeri, who led the research. His team’s findings, published in the January 7 issue of the journal Nature, also help explain why we tend to weigh information gleaned from observation and direct experience differently when we make decisions. Jazayeri is also a professor of brain and cognitive sciences at MIT and an investigator at the Howard Hughes Medical Institute.

“As humans, we do a large part of our learning through observing other people’s experiences and what they go through and what decisions they make,” says Setayesh Radkani, a graduate student in Jazayeri’s lab. For example, she says, if you get sick after eating out, you might wonder if the food at the restaurant was to blame. As you consider whether it’s safe to return, you’ll likely take into account whether the friends you’d dined with got sick too. Your experiences as well as those of your friends will inform your understanding of what happened.

The research team wanted to know how this works: When we make decisions that draw on both direct experience and observation, how does the brain combine the two kinds of evidence? Are the two kinds of information handled differently?

Social experiment

It is hard to tease out the factors that influence social learning. “When you’re trying to compare experiential learning versus observational learning, there are a ton of things that can be different,” Radkani says. For example, people may draw different conclusions about someone else’s experiences than their own, because they know less about that person’s motivations and beliefs. Factors like social status, individual differences, and emotional states can further complicate these situations and be hard to control for, even in a lab.

To create a carefully controlled scenario in which they could focus on how observation changes our understanding of the world, Radkani and postdoctoral fellow Michael Yoo devised a computer game that would allow two players to learn from one another through their experiences. They taught this game to both humans and monkeys.

Their approach, Jazayeri says, goes far beyond the kinds of tasks that are typically studied in a neuroscience lab. “I think it might be one of the most sophisticated tasks monkeys have been trained to perform in a lab,” he says.

Both monkeys and humans played the game in pairs. The object was to collect enough tokens to earn a reward. Players could choose to enter either of two virtual arenas to play—but in one of the two arenas, tokens had no value. In that arena, no matter how many tokens a player collected, they could not win. Players were not told which arena was which, and the winnable and unwinnable arenas sometimes swapped without warning.

Only one individual played at a time, but regardless of who was playing, both individuals watched all of the games. So as either player collected tokens and either did or did not receive a reward, both the player and the observer got the same information. They could use that information to decide which arena to choose in their next round.

Experience outweighs observation

Humans and monkeys have sophisticated social intelligence and both clearly took their partners’ experiences into account as they played the game. But the researchers found that the outcomes of a player’s own games had a stronger influence on each individual’s choice of arena than the outcomes of their partner’s games. “They seem to learn less efficiently from observation, suggesting they tend to devalue the observational evidence,” Radkani says. That distinction was reflected in the patterns of neural activity that the team detected in the brains of the monkeys.

Postdoctoral fellow Ruidong Chen and research assistant Neelima Valluru recorded signals from a part of the brain’s frontal lobe called the anterior cingulate cortex (ACC) as the monkeys played the game. The ACC is known to be involved in social processing. It also integrates information gained through multiple experiences, and seems to use this to update an animal’s beliefs about the world. Prior to the Jazayeri lab’s experiments, this integrative function had only been linked to animals’ direct experiences—not their observations of others.

Consistent with earlier studies, neurons in the ACC changed their activity patterns both when the monkeys played the game and when they watched their partner take a turn. But these signals were complex and variable, making it hard to discern the underlying logic. To tackle this challenge, Chen recorded neural activity from large groups of neurons in both animals across dozens of experiments. “We also had to devise new analysis methods to crack the code and tease out the logic of the computation,” Chen says.

One of the researchers’ central questions was how information about self and other makes its way to the ACC. The team reasoned that there were two possibilities: either the ACC receives a single input on each trial specifying who is acting, or it receives separate input streams for self and other. To test these alternatives, they built artificial neural network models organized both ways and analyzed how well each model matched their neural data. The results suggested that the ACC receives two distinct inputs, one reflecting evidence acquired through direct experience and one reflecting evidence acquired through observation.
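
For a concrete picture of the two candidate organizations, the toy sketch below contrasts them in miniature: one network receives a single evidence stream plus a flag indicating who is acting, while the other receives separate self and other streams. The recurrent layer, sizes, and variable names are assumptions made for illustration; they are not the models fit in the paper.

```python
import torch
import torch.nn as nn

class SingleStreamNet(nn.Module):
    """One shared evidence channel plus a flag saying who is acting."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 1)   # e.g., belief that the current arena pays out

    def forward(self, evidence, agent_flag):
        x = torch.stack([evidence, agent_flag], dim=-1)       # (batch, trials, 2)
        h, _ = self.rnn(x)
        return self.readout(h)

class TwoStreamNet(nn.Module):
    """Separate channels for evidence gained directly vs. observed in the partner."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.readout = nn.Linear(hidden, 1)

    def forward(self, self_evidence, other_evidence):
        x = torch.stack([self_evidence, other_evidence], dim=-1)  # (batch, trials, 2)
        h, _ = self.rnn(x)
        return self.readout(h)

# Example: three trials in which the partner acts twice, then the subject acts once.
evidence   = torch.tensor([[1.0, -1.0, 1.0]])   # reward outcome on each trial
agent_flag = torch.tensor([[1.0,  1.0, 0.0]])   # 1 = partner acting, 0 = self
belief_a = SingleStreamNet()(evidence, agent_flag)

self_ev  = evidence * (1 - agent_flag)          # the same outcomes, routed into
other_ev = evidence * agent_flag                # separate self / other channels
belief_b = TwoStreamNet()(self_ev, other_ev)
```

In the study itself, the analogous comparison was which organization better reproduced the recorded ACC population activity, and the separate-streams arrangement provided the better match.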

The team also found a tantalizing clue about why the brain tends to trust firsthand experiences more than observations. Their analysis showed that the integration process in the ACC was biased toward direct experience. As a result, both humans and monkeys cared more about their own experiences than the experiences of their partner.

Jazayeri says the study paves the way to deeper investigations of how the brain drives social behavior. Now that his team has examined one of the most fundamental features of social learning, they plan to add additional nuance to their studies, potentially exploring how different abilities or the social relationships between animals influence learning.

“Under the broad umbrella of social cognition, this is like step zero,” he says. “But it’s a really important step, because it begins to provide a basis for understanding how the brain represents and uses social information in shaping the mind.”

This research was supported in part by the Yang Tan Collective at MIT.

When it comes to language, context matters

In everyday conversation, it’s critical to understand not just the words that are spoken, but the context in which they are said. If it’s pouring rain and someone remarks on the “lovely weather,” you won’t understand their meaning unless you realize that they’re being sarcastic.

Making inferences about what someone really means when it doesn’t match the literal meaning of their words is a skill known as pragmatic language ability. This includes not only interpreting sarcasm but also understanding metaphors and white lies, among many other conversational subtleties.

McGovern Investigator Evelina Fedorenko. Photo: Alexandra Sokhina

“Pragmatics is trying to reason about why somebody might say something, and what is the message they’re trying to convey given that they put it in this particular way,” says Evelina Fedorenko, an MIT associate professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

New research from Fedorenko and her colleagues has revealed that these abilities can be grouped together based on what types of inferences they require. In a study of 800 people, the researchers identified three clusters of pragmatic skills that are based on the same kinds of inferences and may have similar underlying neural processes.

One of these clusters includes inferences that are based on our knowledge of social conventions and rules. Another depends on knowledge of how the physical world works, while the last requires the ability to interpret differences in tone, which can indicate emphasis or emotion.

Fedorenko and Edward Gibson, an MIT professor of brain and cognitive sciences, are the senior authors of the study, which appears today in the Proceedings of the National Academy of Sciences. The paper’s lead authors are Sammy Floyd, a former MIT postdoc who is now an assistant professor of psychology at Sarah Lawrence College, and Olessia Jouravlev, a former MIT postdoc who is now an associate professor of cognitive science at Carleton University.

The importance of context

Much past research on how people understand language has focused on processing the literal meanings of words and how they fit together. To really understand what someone is saying, however, we need to interpret those meanings based on context.

“Language is about getting meanings across, and that often requires taking into account many different kinds of information — such as the social context, the visual context, or the present topic of the conversation,” Fedorenko says.

As one example, the phrase “people are leaving” can mean different things depending on the context, Gibson points out. If it’s late at night and someone asks you how a party is going, you may say “people are leaving,” to convey that the party is ending and everyone’s going home.

“However, if it’s early, and I say ‘people are leaving,’ then the implication is that the party isn’t very good,” Gibson says. “When you say a sentence, there’s a literal meaning to it, but how you interpret that literal meaning depends on the context.”

About 10 years ago, with support from the Simons Center for the Social Brain at MIT, Fedorenko and Gibson decided to explore whether it might be possible to precisely distinguish the types of processing that go into pragmatic language skills.

One way that neuroscientists can approach a question like this is to use functional magnetic resonance imaging (fMRI) to scan the brains of participants as they perform different tasks. This allows them to link brain activity in different locations to different functions. However, the tasks that the researchers designed for this study didn’t easily lend themselves to being performed in a scanner, so they took an alternative approach.

This approach, known as “individual differences,” involves studying a large number of people as they perform a variety of tasks. This technique allows researchers to determine whether the same underlying brain processes may be responsible for performance on different tasks.

To do this, the researchers evaluate whether each participant tends to perform similarly on certain groups of tasks. For example, some people might perform well on tasks that require an understanding of social conventions, such as interpreting indirect requests and irony. The same people might do only so-so on tasks that require understanding how the physical world works, and poorly on tasks that require distinguishing meanings based on changes in intonation — the melody of speech. This would suggest that separate brain processes are being recruited for each set of tasks.
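
In toy form, that logic looks something like the sketch below: correlate people's scores across tasks and group together the tasks whose scores rise and fall together. The score matrix here is random placeholder data and the request for three clusters is hard-coded; it illustrates the approach rather than reproducing the study's analysis.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(1)
n_participants, n_tasks = 400, 20
scores = rng.normal(size=(n_participants, n_tasks))   # placeholder: rows = people, columns = tasks

# Tasks that draw on the same underlying ability should correlate across people.
corr = np.corrcoef(scores, rowvar=False)

# Hierarchically cluster tasks using correlation distance (1 - r).
dist = 1.0 - corr
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
clusters = fcluster(Z, t=3, criterion="maxclust")      # ask for three groups of tasks
print(clusters)                                        # cluster label for each of the 20 tasks
```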

The first phase of the study was led by Jouravlev, who assembled existing tasks that require pragmatic skills and created many more, for a total of 20. These included tasks that require people to understand humor and sarcasm, as well as tasks where changes in intonation can affect the meaning of a sentence. For example, someone who says “I wanted blue and black socks,” with emphasis on the word “black,” is implying that the black socks were forgotten.

“People really find ways to communicate creatively and indirectly and non-literally, and this battery of tasks captures that,” Floyd says.

Components of pragmatic ability

The researchers recruited study participants from an online crowdsourcing platform to perform the tasks, which took about eight hours to complete. From this first set of 400 participants, the researchers found that the tasks formed three clusters, related to social context, general knowledge of the world, and intonation. To test the robustness of the findings, the researchers continued the study with another set of 400 participants, with this second half run by Floyd after Jouravlev had left MIT.

With the second set of participants, the researchers found that tasks clustered into the same three groups. They also confirmed that differences in general intelligence, or in auditory processing ability (which is important for the processing of intonation), did not affect the outcomes that they observed.

In future work, the researchers hope to use brain imaging to explore whether the pragmatic components they identified are correlated with activity in different brain regions. Previous work has found that brain imaging often mirrors the distinctions identified in individual difference studies, but can also help link the relevant abilities to specific neural systems, such as the core language system or the theory of mind system.

This set of tests could also be used to study people with autism, who sometimes have difficulty understanding certain social cues. Such studies could determine more precisely the nature and extent of these difficulties. Another possibility could be studying people who were raised in different cultures, which may have different norms around speaking directly or indirectly.

“In Russian, which happens to be my native language, people are more direct. So perhaps there might be some differences in how native speakers of Russian process indirect requests compared to speakers of English,” Jouravlev says.

The research was funded by the Simons Center for the Social Brain at MIT, the National Institutes of Health, and the National Science Foundation.

Identifying kids who need help learning to read isn’t as easy as A, B, C

In most states, schools are required to screen students as they enter kindergarten — a process that is meant to identify students who may need extra help learning to read. However, a new study by MIT researchers suggests that these screenings may not be working as intended in all schools.

The researchers’ survey of about 250 teachers found that many felt they did not receive adequate training to perform the tests, and about half reported that they were not confident that children who need extra instruction in reading end up receiving it.

When performed successfully, these screens can be essential tools to make sure children get the extra help they need to learn to read. However, the new findings suggest that many school districts may need to tweak how they implement the screenings and analyze the results, the researchers say.

“This result demonstrates the need to have a systematic approach for how the basic science on how children learn to read is translated into educational opportunity,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

Gabrieli is the senior author of the new open-access study, which appears today in Annals of Dyslexia. Ola Ozernov-Palchik, an MIT research scientist who is also a research assistant professor at Boston University Wheelock College of Education and Human Development, is the lead author of the study.

Boosting literacy

Over the past 20 years, national reading proficiency scores in the United States have trended up, but only slightly. In 2022, 33 percent of fourth-graders achieved reading proficiency, compared to 29 percent in 1992, according to the National Assessment of Educational Progress reading report card. (The highest level achieved in the past 20 years was 37 percent, in 2017.)

In hopes of boosting those rates, most states have passed laws requiring students to be screened for potential reading struggles early in elementary school. In most cases, the screenings are required two or three times per year, in kindergarten, first grade, and second grade.

These tests are designed to identify students who have difficulty with skills such as identifying letters and the sounds they make, blending sounds to make words, and recognizing words that rhyme. Students with low scores in these measures can then be offered extra interventions designed to help them catch up.

“The indicators of future reading disability or dyslexia are present as early as within the first few months of kindergarten,” Ozernov-Palchik says. “And there’s also an overwhelming body of evidence showing that interventions are most effective in the earliest grades.”

In the new study, the researchers wanted to evaluate how effectively these screenings are being implemented in schools. With help from the National Center for Improving Literacy, they posted on social media sites seeking classroom teachers and reading specialists who are responsible for administering literacy screening tests.

The survey respondents came from 39 states and represented public and private schools, located in urban, suburban, and rural areas. The researchers asked those teachers dozens of questions about their experience with the literacy screenings, including questions about their training, the testing process itself, and the results of the screenings.

One of the significant challenges reported by the respondents was a lack of training. About 75 percent reported that they received fewer than three hours of training on how to perform the screens, and 44 percent received no training at all or less than an hour of training.

“Under ideal conditions, there is an expert who trains the educators, they provide practice opportunities, they provide feedback, and they observe the educators administer the assessment,” Ozernov-Palchik says. “None of this was done in many of the cases.”

Instead, many educators reported that they spent their own time figuring out how to give the evaluations, sometimes working with colleagues. And, new hires who arrived at a school after the initial training was given were often left on their own to figure it out.

Another major challenge was suboptimal conditions for administering the tests. About 80 percent of teachers reported interruptions during the screenings, and 40 percent had to do the screens in noisy locations such as a school hallway. More than half of the teachers also reported technical difficulties in administering the tests, and that rate was higher among teachers who worked at schools with a higher percentage of students from low socioeconomic status (SES) backgrounds.

Teachers also reported difficulties when it came to evaluating students categorized as English language learners (ELL). Many teachers relayed that they hadn’t been trained on how to distinguish students who were having trouble reading from those who struggled on the tests because they didn’t speak English well.

“The study reveals that there’s a lot of difficulty understanding how to handle English language learners in the context of screening,” Ozernov-Palchik says. “Overall, those kids tend to be either over-identified or under-identified as needing help, but they’re not getting the support that they need.”

Unrealized potential

Most concerning, the researchers say, is that in many schools, the results of the screening tests are not being used to get students the extra help that they need. Only 44 percent of the teachers surveyed said that their schools had a formal process for creating intervention plans for students after the screening was performed.

“Even though most educators said they believe that screening is important to do, they’re not feeling that it has the potential to drive change the way that it’s currently implemented,” Ozernov-Palchik says.

In the study, the researchers recommended several steps that state legislatures or individual school districts can take to make the screening process run more smoothly and successfully.

“Implementation is the key here,” Ozernov-Palchik says. “Teachers need more support and professional development. There needs to be systematic support as they administer the screening. They need to have designated spaces for screening, and explicit instruction in how to handle children who are English language learners.”

The researchers also recommend that school districts train an individual to take charge of interpreting the screening results and analyzing the data, to make sure that the screenings are leading to improved success in reading.

In addition to advocating for those changes, the researchers are also working on a technology platform that uses artificial intelligence to provide more individualized instruction in reading, which could help students receive help in the areas where they struggle the most.

The research was funded by Schmidt Futures, the Chan Zuckerberg Initiative for the Reach Every Reader project, and the Halis Family Foundation.

MIT cognitive scientists reveal why some sentences stand out from others


“You still had to prove yourself.”

“Every cloud has a blue lining!”

Which of those sentences are you most likely to remember a few minutes from now? If you guessed the second, you’re probably correct.

According to a new study from MIT cognitive scientists, sentences that stick in your mind longer are those that have distinctive meanings, making them stand out from sentences you’ve previously seen. They found that meaning, not any other trait, is the most important feature when it comes to memorability.

Greta Tuckute, a former graduate student in the Fedorenko lab. Photo: Caitlin Cunningham

“One might have thought that when you remember sentences, maybe it’s all about the visual features of the sentence, but we found that that was not the case. A big contribution of this paper is pinning down that it is the meaning-related space that makes sentences memorable,” says Greta Tuckute PhD ’25, who is now a research fellow at Harvard University’s Kempner Institute.

The findings support the hypothesis that sentences with distinctive meanings — like “Does olive oil work for tanning?” — are stored in brain space that is not cluttered with sentences that mean almost the same thing. Sentences with similar meanings end up densely packed together and are therefore more difficult to recognize confidently later on, the researchers believe.

“When you encode sentences that have a similar meaning, there’s feature overlap in that space. Therefore, a particular sentence you’ve encoded is not linked to a unique set of features, but rather to a whole bunch of features that may overlap with other sentences,” says Evelina Fedorenko, an MIT associate professor of brain and cognitive sciences (BCS), a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Tuckute and Thomas Clark, an MIT graduate student, are the lead authors of the paper, which appears in the Journal of Memory and Language. MIT graduate student Bryan Medina is also an author.

Distinctive sentences

What makes certain things more memorable than others is a longstanding question in cognitive science and neuroscience. In a 2011 study, Aude Oliva, now a senior research scientist at MIT and MIT director of the MIT-IBM Watson AI Lab, showed that not all items are created equal: Some types of images are much easier to remember than others, and people are remarkably consistent in what images they remember best.

In that study, Oliva and her colleagues found that, in general, images with people in them are the most memorable, followed by images of human-scale space and close-ups of objects. Least memorable are natural landscapes.

As a follow-up to that study, Fedorenko and Oliva, along with Ted Gibson, another faculty member in BCS, teamed up to determine if words also vary in their memorability. In a study published earlier this year, co-led by Tuckute and Kyle Mahowald, a former PhD student in BCS, the researchers found that the most memorable words are those that have the most distinctive meanings.

Words are categorized as being more distinctive if they have a single meaning and few or no synonyms — for example, words like “pineapple” or “avalanche,” which were found to be very memorable. On the other hand, words that can have multiple meanings, such as “light,” or words that have many synonyms, like “happy,” were more difficult for people to recognize accurately.

In the new study, the researchers expanded their scope to analyze the memorability of sentences. Just like words, some sentences have very distinctive meanings, while others communicate similar information in slightly different ways.

To do the study, the researchers assembled a collection of 2,500 sentences drawn from publicly available databases that compile text from novels, news articles, movie dialogues, and other sources. Each sentence that they chose contained exactly six words.

The researchers then presented a random selection of about 1,000 of these sentences to each study participant, including repeats of some sentences. Each of the 500 participants in the study was asked to press a button when they saw a sentence that they remembered seeing earlier.

The most memorable sentences — the ones where participants accurately and quickly indicated that they had seen them before — included strings such as “Homer Simpson is hungry, very hungry,” and “These mosquitoes are — well, guinea pigs.”

Those memorable sentences overlapped significantly with sentences that were determined to have distinctive meanings, as estimated through the high-dimensional vector space of a large language model (LLM) known as Sentence BERT. That model generates vector representations of whole sentences, which can be used for tasks like judging the similarity in meaning between sentences. It provided the researchers with a distinctness score for each sentence, based on its semantic similarity to the other sentences in the set.

The researchers also evaluated the sentences using a model that predicts memorability based on the average memorability of the individual words in the sentence. This model performed fairly well at predicting overall sentence memorability, but not as well as Sentence BERT. This suggests that the meaning of a sentence as a whole — above and beyond the contributions from individual words — determines how memorable it will be, the researchers say.
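
A rough sketch of that kind of analysis appears below. The specific Sentence-BERT checkpoint, the nearest-neighbor definition of distinctness, and the memorability numbers are all illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer

# Four of the study's six-word sentences, with made-up memorability scores
# (e.g., hit rates) standing in for the real behavioral data.
sentences = [
    "Homer Simpson is hungry, very hungry",
    "These mosquitoes are — well, guinea pigs",
    "Does olive oil work for tanning?",
    "You still had to prove yourself",
]
memorability = np.array([0.92, 0.88, 0.85, 0.41])   # hypothetical values

model = SentenceTransformer("all-mpnet-base-v2")    # one Sentence-BERT variant
emb = model.encode(sentences, normalize_embeddings=True)

# Distinctness: how far each sentence sits from its closest semantic neighbor.
sims = emb @ emb.T
np.fill_diagonal(sims, -np.inf)
distinctness = 1.0 - sims.max(axis=1)

# The paper's claim corresponds to a positive rank correlation here.
rho, p = spearmanr(distinctness, memorability)
print(f"Spearman rho = {rho:.2f}, p = {p:.2f}")
```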

Noisy memories

While cognitive scientists have long hypothesized that the brain’s memory banks have a limited capacity, the findings of the new study support an alternative hypothesis that would help to explain how the brain can continue forming new memories without losing old ones.

This alternative, known as the noisy representation hypothesis, says that when the brain encodes a new memory, be it an image, a word, or a sentence, it is represented in a noisy way — that is, this representation is not identical to the stimulus, and some information is lost. For example, for an image, you may not encode the exact viewing angle at which an object is shown, and for a sentence, you may not remember the exact construction used.

Under this theory, a new sentence would be encoded in a similar part of the memory space as sentences that carry a similar meaning, whether they were encountered recently or sometime across a lifetime of language experience. This jumbling of similar meanings together increases the amount of noise and can make it much harder, later on, to remember the exact sentence you have seen before.

“The representation is gradually going to accumulate some noise. As a result, when you see an image or a sentence for a second time, your accuracy at judging whether you’ve seen it before will be affected, and it’ll be less than 100 percent in most cases,” Clark says.

However, if a sentence has a unique meaning that is encoded in a less densely crowded space, it will be easier to pick out later on.

“Your memory may still be noisy, but your ability to make judgments based on the representations is less affected by that noise because the representation is so distinctive to begin with,” Clark says.
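
The toy simulation below captures the gist of that account: sentences are stored as noisy points in a meaning space, and old/new judgments based on distance to the nearest stored trace break down where that space is crowded. All of the numbers are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
noise_sd, criterion = 0.2, 0.5

# Meaning space (2-D for simplicity): a dense cluster of near-synonymous
# sentences plus a handful of distinctive ones far from everything else.
crowded_old  = rng.normal(0.0, 0.3, (50, 2))
distinct_old = rng.uniform(-5.0, 5.0, (5, 2))
studied = np.vstack([crowded_old, distinct_old])
memory = studied + rng.normal(0.0, noise_sd, studied.shape)   # noisy encoding

def says_old(probe):
    """Judge a probe 'old' if it lands near any stored, noisy trace."""
    return np.linalg.norm(memory - probe, axis=1).min() < criterion

def old_rate(items):
    return np.mean([says_old(x) for x in items])

# New probes drawn from the same two regions of the meaning space.
crowded_new  = rng.normal(0.0, 0.3, (50, 2))
distinct_new = rng.uniform(-5.0, 5.0, (5, 2))

# Old/new discrimination (hit rate minus false-alarm rate) suffers where
# meanings are densely packed, as the noisy-representation account predicts.
print("crowded region: ", old_rate(crowded_old) - old_rate(crowded_new))
print("distinct region:", old_rate(distinct_old) - old_rate(distinct_new))
```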

The researchers now plan to study whether other features of sentences, such as more vivid and descriptive language, might also contribute to making them more memorable, and how the language system may interact with the hippocampal memory structures during the encoding and retrieval of memories.

The research was funded, in part, by the National Institutes of Health, the McGovern Institute, the Department of Brain and Cognitive Sciences, the Simons Center for the Social Brain, and the MIT Quest Initiative for Intelligence.

Musicians’ enhanced attention

In a world full of competing sounds, we often have to filter out a lot of noise to hear what’s most important. This critical skill may come more easily for people with musical training, according to scientists at MIT’s McGovern Institute who used brain imaging to follow what happens when people try to focus their attention on certain sounds.

When Cassia Low Manting, a postdoctoral researcher working in the labs of McGovern Institute Investigators John Gabrieli and Dimitrios Pantazis, asked people to focus on a particular melody while another melody played at the same time, individuals with musical backgrounds were, unsurprisingly, better able to follow the target tune. An analysis of study participants’ brain activity suggests this advantage arises because musical training sharpens neural mechanisms that amplify the sounds they want to listen to while turning down distractions. “This points to the idea that we can train this selective attention ability,” Manting says.

The research team, including senior author Daniel Lundqvist at the Karolinska Institute in Sweden, reported their findings September 17, 2025, in the journal Science Advances. Manting, who is now at the Karolinska Institute, notes that the research is part of an ongoing collaboration between the two institutions.

Overcoming challenges

Participants in the study had vastly different backgrounds when it came to music. Some were professional musicians with deep training and experience, while others struggled to differentiate between the two tunes they were played, despite each one’s distinct pitch. This disparity allowed the researchers to explore how the brain’s capacity for attention might change with experience. “Musicians are very fun to study because their brains have been morphed in ways based on their training,” Manting says. “It’s a nice model to study these training effects.”

Still, the researchers had significant challenges to overcome. It has been hard to study how the brain manages auditory attention, because when researchers use neuroimaging to monitor brain activity, they see the brain’s response to all sounds: those that the listener cares most about, as well as those the listener is trying to ignore. It is usually difficult to figure out which brain signals were triggered by which sounds.

Manting and her colleagues overcame this challenge with a method called frequency tagging. Rather than playing the melodies in their experiments at a constant volume, the volume of each melody oscillated, rising and falling with a particular frequency. Each melody had its own frequency, creating detectable patterns in the brain signals that responded to it. “When you play these two sounds simultaneously to the subject and you record the brain signal, you can say, this 39-Hertz activity corresponds to the lower pitch sound and the 43-Hertz activity corresponds specifically to the higher pitch sound,” Manting explains. “It is very clean and very clear.”
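
In simplified form, the readout works roughly like the sketch below, which simulates a recording in which two melody envelopes are tagged at 39 and 43 Hz and then recovers the power at each tag frequency. The simulated signal and the attention gains are illustrative assumptions, not the study's data.

```python
import numpy as np

fs, dur = 1000, 10.0                                  # sampling rate (Hz) and duration (s)
t = np.arange(0, dur, 1 / fs)

low_env  = 0.5 * (1 + np.sin(2 * np.pi * 39 * t))     # envelope of the lower-pitch melody
high_env = 0.5 * (1 + np.sin(2 * np.pi * 43 * t))     # envelope of the higher-pitch melody

# Pretend the listener attends the higher melody: its tagged response is amplified.
gain_low, gain_high = 0.6, 1.4
meg = (gain_low * low_env + gain_high * high_env
       + np.random.default_rng(3).normal(0, 1.0, t.size))

# Power spectrum of the recording; read out power at the two tag frequencies.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(meg)) ** 2
p39 = power[np.argmin(np.abs(freqs - 39))]
p43 = power[np.argmin(np.abs(freqs - 43))]
print(f"39 Hz (ignored melody): {p39:.1f}   43 Hz (attended melody): {p43:.1f}")
```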

When they paired frequency tagging with magnetoencephalography, a noninvasive method of monitoring brain activity, the team was able to track how their study participants’ brains responded to each of two melodies during their experiments. While the two tunes played, subjects were instructed to follow either the higher pitched or the lower pitched melody. When the music stopped, they were asked about the final notes of the target tune: did they rise or did they fall? The researchers could make this task harder by making the two tunes closer together in pitch, as well as by altering the timing of the notes.

Manting used a survey that asked about musical experience to score each participant’s musicality, and this measure had an obvious effect on task performance: The more musical a person was, the more successful they were at following the tune they had been asked to track.

To look for differences in brain activity that might explain this, the research team developed a new machine-learning approach to analyze their data. They used it to tease apart what was happening in the brain as participants focused on the target tune—even, in some cases, when the notes of the distracting tune played at the exact same time.

Top-down vs bottom-up attention

What they found was a clear separation of brain activity associated with two kinds of attention, known as top-down and bottom-up attention. Manting explains that top-down attention is goal-oriented, involving a conscious focus—the kind of attention listeners called on as they followed the target tune. Bottom-up attention, on the other hand, is triggered by the nature of the sound itself. A fire alarm would be expected to trigger this kind of attention, both with its volume and its suddenness. The distracting tune in the team’s experiments triggered activity associated with bottom-up attention—but more so in some people than in others.

“The more musical someone is, the better they are at focusing their top-down selective attention, and the less the effect of bottom-up attention is,” Manting explains.

Manting expects that musicians use their heightened capacity for top-down attention in other situations, as well. For example, they might be better than others at following a conversation in a room filled with background chatter. “I would put my bet on it that there is a high chance that they will be great at zooming into sounds,” she says.

She wonders, however, if one kind of distraction might actually be harder for a musician to filter out: the sound of their own instrument. Manting herself plays both the piano and the Chinese harp, and she says hearing those instruments is “like someone calling my name.” It’s one of many questions about how musical training affects cognition that she plans to explore in her future work.

New gift expands mental illness studies at Poitras Center for Psychiatric Disorders Research

One in every eight people—970 million globally—live with mental illness, according to the World Health Organization, with depression and anxiety being the most common mental health conditions worldwide. Existing therapies for complex psychiatric disorders like depression, anxiety, and schizophrenia have limitations, and federal funding to address these shortcomings is growing increasingly uncertain.

James and Patricia Poitras at an event co-hosted by the McGovern Institute and Autism Speaks. Photo: Justin Knight

Patricia and James Poitras ’63 have committed $8 million to the Poitras Center for Psychiatric Disorders Research to launch pioneering research initiatives aimed at uncovering the brain basis of major mental illness and accelerating the development of novel treatments.

“Federal funding rarely supports the kind of bold, early-stage research that has the potential to transform our understanding of psychiatric illness. Pat and I want to help fill that gap—giving researchers the freedom to follow their most promising leads, even when the path forward isn’t guaranteed,” says James Poitras, who is chair of the McGovern Institute Board.

Their latest gift builds upon their legacy of philanthropic support for psychiatric disorders research at MIT, which now exceeds $46 million.

“With deep gratitude for Jim and Pat’s visionary support, we are eager to launch a bold set of studies aimed at unraveling the neural and cognitive underpinnings of major mental illnesses,” says Robert Desimone, director of the McGovern Institute, home to the Poitras Center. “Together, these projects represent a powerful step toward transforming how we understand and treat mental illness.”

A legacy of support

Soon after joining the McGovern Institute Leadership Board in 2006, the Poitrases made a $20 million commitment to establish the Poitras Center for Psychiatric Disorders Research at MIT. The center’s goal, to improve human health by addressing the root causes of complex psychiatric disorders, is deeply personal to them both.

“We had decided many years ago that our philanthropic efforts would be directed towards psychiatric research. We could not have imagined then that this perfect synergy between research at MIT’s McGovern Institute and our own philanthropic goals would develop,” recalls Patricia.

The center supports research at the McGovern Institute and collaborative projects with institutions such as the Broad Institute, McLean Hospital, Mass General Brigham and other clinical research centers. Since its establishment in 2007, the center has enabled advances in psychiatric research including the development of a machine learning “risk calculator” for bipolar disorder, the use of brain imaging to predict treatment outcomes for anxiety, and studies demonstrating that mindfulness can improve mental health in adolescents.

Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT, delivers a lecture at the Poitras Center’s 10th anniversary celebration in 2017. Photo: Justin Knight

For the past decade, the Poitrases have also fueled breakthroughs in McGovern Investigator Feng Zhang’s lab, backing the invention of powerful CRISPR systems and other molecular tools that are transforming biology and medicine. Their support has enabled the Zhang team to engineer new delivery vehicles for gene therapy, including vehicles capable of carrying genetic payloads that were once out of reach. The lab has also advanced innovative RNA-guided gene engineering tools such as NovaIscB, published in Nature Biotechnology in May 2025. These revolutionary genome editing and delivery technologies hold promise for the next generation of therapies needed for serious psychiatric illness.

In addition to fueling research in the center, the Poitras family has gifted two endowed professorships—the James and Patricia Poitras Professor of Neuroscience at MIT, currently held by Feng Zhang, and the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences at MIT, held by Guoping Feng—and an annual postdoctoral fellowship at the McGovern Institute.

New initiatives at the Poitras Center

The Poitras family’s latest commitment to the Poitras Center will launch an ambitious set of new projects that bring together neuroscientists, clinicians, and computational experts to probe underpinnings of complex psychiatric disorders including schizophrenia, anxiety, and depression. These efforts reflect the center’s core mission: to speed scientific discovery and therapeutic innovation in the field of psychiatric brain disorders research.

McGovern cognitive neuroscientists Evelina Fedorenko PhD ‘07 and Nancy Kanwisher ’80, PhD ’86, the Walter A. Rosenblith Professor of Cognitive Neuroscience—in collaboration with psychiatrist Ann Shinn of McLean Hospital—will explore how altered inner speech and reasoning contribute to the symptoms of schizophrenia. They will collect functional MRI data from individuals diagnosed with schizophrenia and matched controls as they perform reasoning tasks. The goal is to identify the brain activity patterns that underlie impaired reasoning in schizophrenia, a core cognitive disruption in the disorder.

Patricia Poitras (center) with McGovern Investigators Nancy Kanwisher ’80, PhD ’86 (left) and Martha Constantine-Paton (right) at the Poitras Center’s 10th anniversary celebration in 2017. Photo: Justin Knight

A complementary line of investigation will focus on the role of inner speech—the “voice in our head” that shapes thought and self-awareness. The team will conduct a large-scale online behavioral study of neurotypical individuals to analyze how inner speech characteristics correlate with schizophrenia-spectrum traits. This will be followed by neuroimaging work comparing brain architecture among individuals with strong or weak inner voices and people with schizophrenia, with the aim of discovering neural markers linked to self-talk and disrupted cognition.

A different project led by McGovern neuroscientist Mark Harnett and 2024–2026 Poitras Center Postdoctoral Fellow Cynthia Rais focuses on how ketamine—an increasingly used antidepressant—alters brain circuits to produce rapid and sustained improvements in mood. Despite its clinical success, ketamine’s mechanisms of action remain poorly understood. The Harnett lab is using sophisticated tools to track how ketamine affects synaptic communication and large-scale brain network dynamics, particularly in models of treatment-resistant depression. By mapping these changes at both the cellular and systems levels, the team hopes to reveal how ketamine lifts mood so quickly—and inform the development of safer, longer-lasting antidepressants.

Guoping Feng is leveraging a new animal model of depression to uncover the brain circuits that drive major depressive disorder. The new animal model provides a powerful system for studying the intricacies of mood regulation. Feng’s team is using state-of-the-art molecular tools to identify the specific genes and cell types involved in this circuit, with the goal of developing targeted treatments that can fine-tune these emotional pathways.

“This is one of the most promising models we have for understanding depression at a mechanistic level,” says Feng, who is also associate director of the McGovern Institute. “It gives us a clear target for future therapies.”

Another novel approach to treating mood disorders comes from the lab of James DiCarlo, the Peter de Florez Professor of Neuroscience at MIT, who is exploring the brain’s visual-emotional interface as a therapeutic tool for anxiety. The amygdala, a key emotional center in the brain, is heavily influenced by visual input. DiCarlo’s lab is using advanced computational models to design visual scenes that may subtly shift emotional processing in the brain—essentially using sight to regulate mood. Unlike traditional therapies, this strategy could offer a noninvasive, drug-free option for individuals suffering from anxiety.

Together, these projects exemplify the kind of interdisciplinary, high-impact research that the Poitras Center was established to support.

“Mental illness affects not just individuals, but entire families who often struggle in silence and uncertainty,” adds Patricia. “Our hope is that Poitras Center scientists will continue to make important advancements and spark novel treatments for complex mental health disorders and most of all, give families living with these conditions a renewed sense of hope for the future.”

Learning from punishment

From toddlers’ timeouts to criminals’ prison sentences, punishment reinforces social norms, making it known that an offender has done something unacceptable. At least, that is usually the intent—but the strategy can backfire. When a punishment is perceived as too harsh, observers can be left with the impression that an authority figure is motivated by something other than justice.

It can be hard to predict what people will take away from a particular punishment, because everyone makes their own inferences not just about the acceptability of the act that led to the punishment, but also the legitimacy of the authority who imposed it. A new computational model developed by scientists at MIT’s McGovern Institute makes sense of these complicated cognitive processes, recreating the ways people learn from punishment and revealing how their reasoning is shaped by their prior beliefs.

Their work, reported August 4 in the journal PNAS, explains how a single punishment can send different messages to different people and even strengthen the opposing viewpoints of groups who hold different opinions about authorities or social norms.

Modeling punishment

“The key intuition in this model is the fact that you have to be evaluating simultaneously both the norm to be learned and the authority who’s punishing,” says McGovern Investigator and John W. Jarve Professor of Brain and Cognitive Sciences Rebecca Saxe, who led the research. “One really important consequence of that is even where nobody disagrees about the facts—everybody knows what action happened, who punished it, and what they did to punish it—different observers of the same situation could come to different conclusions.”

For example, she says, a child who is sent to timeout after biting a sibling might interpret the event differently than the parent. One might see the punishment as proportional and important, teaching the child not to bite. But if the biting, to the toddler, seemed a reasonable tactic in the midst of a squabble, the punishment might be seen as unfair, and the lesson will be lost.

People draw on their own knowledge and opinions when they evaluate these situations—but to study how the brain interprets punishment, Saxe and graduate student Setayesh Radkani wanted to take those personal ideas out of the equation. They needed a clear understanding of the beliefs that people held when they observed a punishment, so they could learn how different kinds of information altered their perceptions. So Radkani set up scenarios in imaginary villages where authorities punished individuals for actions that had no obvious analog in the real world.

Graduate student Setayesh Radkani uses tools from psychology, cognitive neuroscience and machine learning to understand the social and moral mind. Photo: Caitlin Cunningham

Participants observed these scenarios in a series of experiments, with different information offered in each one. In some cases, for example, participants were told that the person being punished was either an ally or competitor of the authority, whereas in other cases, the authority’s possible bias was left ambiguous.

“That gives us a really controlled setup to vary prior beliefs,” Radkani explains. “We could ask what people learn from observing punitive decisions with different severities, in response to acts that vary in their level of wrongness, by authorities that vary in their level of different motives.”

For each scenario, participants were asked to evaluate four factors: how much the authority figure cared about justice; the selfishness of the authority; the authority’s bias for or against the individual being punished; and the wrongness of the punished act. The research team asked these questions when participants were first introduced to the hypothetical society, then tracked how their responses changed after they observed the punishment. Across the scenarios, participants’ initial beliefs about the authority and the wrongness of the act shaped the extent to which those beliefs shifted after they observed the punishment.

Radkani was able to replicate these nuanced interpretations using a cognitive model framed around an idea that Saxe’s team has long used to think about how people interpret the actions of others. That is, to make inferences about others’ intentions and beliefs, we assume that people choose actions that they expect will help them achieve their goals.

To apply that concept to the punishment scenarios, Radkani developed a model that evaluates the meaning of a punishment (an action aimed at achieving a goal of the authority) by considering the harm associated with that punishment; its costs or benefits to the authority; and its proportionality to the violation. By assessing these factors, along with prior beliefs about the authority and the punished act, the model was able to predict people’s responses to the hypothetical punishment scenarios, supporting the idea that people use a similar mental model. “You need to have them consider those things, or you can’t make sense of how people understand punishment when they observe it,” Saxe says.
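
The sketch below shows a stripped-down version of that kind of joint inference, with just two unknowns, how wrong the act was and how much the authority cares about justice, updated from a single observed punishment. The grids, priors, and likelihood are illustrative assumptions and omit factors the full model considers, such as the authority's selfishness, bias toward the punished individual, and the cost of punishing.

```python
import numpy as np

wrongness = np.linspace(0, 1, 51)        # how wrong the punished act was
justice   = np.linspace(0, 1, 51)        # how much the authority cares about justice
W, J = np.meshgrid(wrongness, justice, indexing="ij")

# Prior: an observer who mildly trusts the authority and is unsure about the act.
prior = np.outer(np.ones_like(wrongness), 0.5 + 0.5 * justice)
prior /= prior.sum()

def likelihood(observed_severity, sigma=0.15):
    """P(punishment | wrongness, justice motive): a just authority punishes in
    proportion to wrongness; an unjust one punishes at an arbitrary (flat) level."""
    proportional = np.exp(-((observed_severity - W) ** 2) / (2 * sigma ** 2))
    arbitrary = np.ones_like(W)
    return J * proportional + (1 - J) * arbitrary

posterior = prior * likelihood(observed_severity=0.9)   # observing a harsh punishment
posterior /= posterior.sum()

# Marginal beliefs after observing the punishment.
print("E[wrongness]:", (posterior.sum(axis=1) * wrongness).sum())
print("E[justice]  :", (posterior.sum(axis=0) * justice).sum())
```

Running the same update with different priors, for example an observer who starts out distrusting the authority, yields diverging conclusions from the identical observation, which is the polarization dynamic described below.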

Even though the team designed their experiments to preclude preconceived ideas about the people and actions in their imaginary villages, not everyone drew the same conclusions from the punishments they observed. Saxe’s group found that participants’ general attitudes toward authority influenced their interpretation of events. Those with more authoritarian attitudes—assessed through a standard survey—tended to judge punished acts as more wrong and authorities as more motivated by justice than other observers.

“If we differ from other people, there’s a knee-jerk tendency to say, ‘either they have different evidence from us, or they’re crazy,’” Saxe says. Instead, she says, “It’s part of the way humans think about each other’s actions.”

“When a group of people who start out with different prior beliefs get shared evidence, they will not end up necessarily with shared beliefs. That’s true even if everybody is behaving rationally,” says Saxe.

This way of thinking also means that the same action can simultaneously strengthen opposing viewpoints. The Saxe lab’s modeling and experiments showed that when those viewpoints shape individuals’ interpretations of future punishments, the groups’ opinions will continue to diverge. For instance, a punishment that seems too harsh to a group who suspects an authority is biased can make that group even more skeptical of the authority’s future actions. Meanwhile, people who see the same punishment as fair and the authority as just will be more likely to conclude that the authority figure’s future actions are also just. “You will get a vicious cycle of polarization, staying and actually spreading to new things,” says Radkani.

The researchers say their findings point toward strategies for communicating social norms through punishment. “It is exactly sensible in our model to do everything you can to make your action look like it’s coming out of a place of care for the long-term outcome of this individual, and that it’s proportional to the norm violation they did,” Saxe says. “That is your best shot at getting a punishment interpreted pedagogically, rather than as evidence that you’re a bully.”

Nevertheless, she says that won’t always be enough. “If the beliefs are strong the other way, it’s very hard to punish and still sustain a belief that you were motivated by justice.”

This study was funded, in part, by the Patrick J. McGovern Foundation.

How the brain distinguishes oozing fluids from solid objects

Imagine a ball bouncing down a flight of stairs. Now think about a cascade of water flowing down those same stairs. The ball and the water behave very differently, and it turns out that your brain has different regions for processing visual information about each type of physical matter.

In a new study, MIT neuroscientists have identified parts of the brain’s visual cortex that respond preferentially when you look at “things” — that is, rigid or deformable objects like a bouncing ball. Other brain regions are more activated when looking at “stuff” — liquids or granular substances such as sand.

This distinction, which has never been seen in the brain before, may help the brain plan how to interact with different kinds of physical materials, the researchers say.

“When you’re looking at some fluid or gooey stuff, you engage with it in a different way than you do with a rigid object. With a rigid object, you might pick it up or grasp it, whereas with fluid or gooey stuff, you probably are going to have to use a tool to deal with it,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience; a member of the McGovern Institute for Brain Research and MIT’s Center for Brains, Minds, and Machines; and the senior author of the study.

MIT postdoc Vivian Paulun, who is joining the faculty of the University of Wisconsin at Madison this fall, is the lead author of the paper, which appears today in the journal Current Biology. RT Pramod, an MIT postdoc, and Josh Tenenbaum, an MIT professor of brain and cognitive sciences, are also authors of the study.

Stuff vs. things

Decades of brain imaging studies, including early work by Kanwisher, have revealed regions in the brain’s ventral visual pathway that are involved in recognizing the shapes of 3D objects, including an area called the lateral occipital complex (LOC). A region in the brain’s dorsal visual pathway, known as the frontoparietal physics network (FPN), analyzes the physical properties of materials, such as mass or stability.

Although scientists have learned a great deal about how these pathways respond to different features of objects, the vast majority of these studies have been done with solid objects, or “things.”

“Nobody has asked how we perceive what we call ‘stuff’ — that is, liquids or sand, honey, water, all sorts of gooey things. And so we decided to study that,” Paulun says.

These gooey materials behave very differently from solids. They flow rather than bounce, and interacting with them usually requires containers and tools such as spoons. The researchers wondered if these physical features might require the brain to devote specialized regions to interpreting them.

To explore how the brain processes these materials, Paulun used a software program designed for visual effects artists to create more than 100 video clips showing different types of things or stuff interacting with the physical environment. In these videos, the materials could be seen sloshing or tumbling inside a transparent box, being dropped onto another object, or bouncing or flowing down a set of stairs.

The researchers used functional magnetic resonance imaging (fMRI) to scan the visual cortex of people as they watched the videos. They found that both the LOC and the FPN respond to “things” and “stuff,” but that each pathway has distinctive subregions that respond more strongly to one or the other.

“Both the ventral and the dorsal visual pathway seem to have this subdivision, with one part responding more strongly to ‘things,’ and the other responding more strongly to ‘stuff,’” Paulun says. “We haven’t seen this before because nobody has asked that before.”

Roland Fleming, a professor of experimental psychology at Justus Liebig University Giessen, described the findings as a “major breakthrough in the scientific understanding of how our brains represent the physical properties of our surrounding world.”

“We’ve known the distinction exists for a long time psychologically, but this is the first time that it’s been really mapped onto separate cortical structures in the brain. Now we can investigate the different computations that the distinct brain regions use to process and represent objects and materials,” says Fleming, who was not involved in the study.

Physical interactions

The findings suggest that the brain may have different ways of representing these two categories of material, similar to the artificial physics engines that are used to create video game graphics. These engines usually represent a 3D object as a mesh, while fluids are represented as sets of particles that can be rearranged.

“The interesting hypothesis that we can draw from this is that maybe the brain, similar to artificial game engines, has separate computations for representing and simulating ‘stuff’ and ‘things.’ And that would be something to test in the future,” Paulun says.
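
As a rough sketch of what those two representations look like as data structures, the example below contrasts a mesh-based rigid body with a particle-based fluid. The class and field names are hypothetical and are not drawn from any particular physics engine or from the study.

```python
# A schematic, hypothetical sketch of the two representations mentioned above,
# loosely in the style of a game physics engine. The class and field names are
# invented for illustration.

from dataclasses import dataclass, field

Vec3 = tuple[float, float, float]

@dataclass
class RigidBodyMesh:
    """A 'thing': one persistent shape, tracked as a whole."""
    vertices: list[Vec3]                 # fixed geometry
    faces: list[tuple[int, int, int]]    # triangles over vertex indices
    position: Vec3 = (0.0, 0.0, 0.0)
    velocity: Vec3 = (0.0, 0.0, 0.0)
    # Simulation updates position and velocity; the mesh itself keeps its shape.

@dataclass
class ParticleFluid:
    """'Stuff': no persistent shape, just many small interacting elements."""
    positions: list[Vec3] = field(default_factory=list)
    velocities: list[Vec3] = field(default_factory=list)
    viscosity: float = 0.01
    # Simulation moves every particle; the overall shape is whatever the
    # particles happen to form (a puddle, a splash, a cascade down stairs).

ball = RigidBodyMesh(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)],
                     faces=[(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)])
water = ParticleFluid(positions=[(0.1 * i, 0.0, 0.0) for i in range(1000)],
                      velocities=[(0.0, 0.0, 0.0)] * 1000)
```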

McGovern Institute postdoc Vivian Paulun, who is joining the faculty of the University of Wisconsin at Madison in the fall of 2025, is the lead author of the “things vs. stuff” paper, which appears today in the journal Current Biology. Photo: Steph Stevens

The researchers also hypothesize that these regions may have developed to help the brain understand important distinctions that allow it to plan how to interact with the physical world. To further explore this possibility, the researchers plan to study whether the areas involved in processing rigid objects are also active when a brain circuit involved in planning to grasp objects is active.

They also hope to look at whether any of the areas within the FPN correlate with the processing of more specific features of materials, such as the viscosity of liquids or the bounciness of objects. And in the LOC, they plan to study how the brain represents changes in the shape of fluids and deformable substances.

The research was funded by the German Research Foundation, the U.S. National Institutes of Health, and a U.S. National Science Foundation grant to the Center for Brains, Minds, and Machines.


Adolescents’ willingness to explore is shaped by socioeconomic status

Exploration is essential to learning—and a new study from scientists at MIT’s McGovern Institute suggests that students may be less willing to explore if they come from a low socioeconomic environment. The study, which focused on adolescents and was published July 9, 2025, in the journal Nature Communications, shows how differences in learning strategies might contribute to socioeconomic-related disparities in academic achievement.

Students with low socioeconomic status (SES)—a measure that takes into account parents’ income levels and educational attainment—tend to lag behind their higher-SES peers academically. Limited resources at home can restrict access to educational tools and experiences, likely contributing to these disparities. But the new study, led by McGovern Institute Investigator John Gabrieli, shows that students from low-SES backgrounds may approach learning differently, too.

“We often think about external factors when we think about socioeconomic differences in learning, but kids’ mindsets and internal factors can also play a role,” says Alexandra Decker, a postdoctoral fellow in Gabrieli’s lab who ran the study. Understanding such differences can help educators develop strategies to reduce disparities and help all students succeed.

The value of exploration

Exploration is a vital part of development, particularly during adolescence. By trying new things and testing limits, children begin to find their way in the world, discovering the subjects and experiences that motivate them. That’s important for obtaining new knowledge, both in and out of school. “There’s a lot of research suggesting that exploration is a really important mechanism that children use for learning,” Decker says. “Exploring their environment really broadly and making mistakes helps them get the feedback that they need for learning,” she says.

Because the outcomes of exploration are unknown, this way of interacting with the world involves risk. “If you try something new, the outcome is uncertain, and it could lead to a bad outcome before things get better. You might lose out, at least in the short term,” Decker says.

At school, students can explore in a variety of ways, such as by asking questions in class or taking courses in unfamiliar subjects. Both are opportunities to learn something new, though they may seem less safe than sitting quietly and sticking to more comfortable coursework. Decker points out that this kind of exploration might feel particularly risky when students feel they lack the resources to compensate if things don’t go well.

“If you’re in an environment that’s really enriching, you have resources to compensate for challenges that might be accrued through exploring. If you take a new course and you struggle, you can use your resources to get a tutor and overcome these challenges. Your environment can support exploration and its costs,” she says. “But if you’re in an environment where you don’t have resources to compensate for bad outcomes, you might not take that course that could lead to unknown outcomes.”

Risk-benefit analysis

To investigate the relationship between SES and exploration, Gabrieli’s team had students play a computer game in which they earned points for pumping up balloons as much as possible without popping them. The most successful strategy was to explore the limits early on by pumping the first balloons until they popped, thereby learning when to stop with future balloons. A less exploratory approach could keep all the balloons intact, but earn fewer points over the course of the game.
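
A rough simulation makes that logic concrete. The pop threshold, point values, and the two strategies below are invented for illustration; they are not the parameters of the task the students actually played.

```python
# A rough, hypothetical simulation of a balloon game like the one described
# above. All values are assumptions made for illustration.

import random

random.seed(0)
POP_AROUND = 20        # assumed: balloons pop at roughly 20 pumps
N_BALLOONS = 30

def play(choose_pumps):
    """Play every balloon; choose_pumps decides how far to pump given what has
    been learned so far (the smallest pump count that ever caused a pop)."""
    total, lowest_pop = 0, None
    for _ in range(N_BALLOONS):
        pop_point = random.randint(POP_AROUND - 3, POP_AROUND + 3)   # trial-to-trial noise
        target = choose_pumps(lowest_pop)
        if target >= pop_point:                                      # popped: no points this balloon
            lowest_pop = pop_point if lowest_pop is None else min(lowest_pop, pop_point)
        else:
            total += target                                          # banked one point per pump
    return total

cautious = lambda learned: 8                                          # never risks a pop
exploratory = lambda learned: 30 if learned is None else learned - 2  # probe once, then stay just under

print("cautious strategy:   ", play(cautious))
print("exploratory strategy:", play(exploratory))
# Popping the first balloon or two costs a few points but reveals the limit,
# so the exploratory player banks far more over the whole game.
```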

The students who participated in the study were between the ages of 12 and 14 and came from families with a wide range of SES. Those from lower-SES backgrounds were less likely to explore in the balloon-pumping task, and as a result they earned fewer points in the game. What’s more, the researchers found a relationship between students’ exploration in the game and their real-world academic performance. Those who explored the least in the game had lower grades than students who explored more. For students at lower SES levels, reduced exploration also correlated with lower scores on standardized tests of academic skills.

The researchers took a closer look at the data to investigate why some students explored more than others in the game. Their analysis indicated that students who were reluctant to explore were more strongly motivated by avoiding losses than were the students who pushed the limits as they pumped their balloons.

The finding suggests that potential losses might be particularly distressing to lower-SES students, says Gabrieli, who is also the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT. Decker adds that students from less affluent backgrounds may experience losses as more consequential than students with more family resources do, so it makes sense that they would take greater pains to avoid them.
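
One simple way to see how loss aversion could suppress exploration is to weight potential losses more heavily than equivalent gains, as in the illustrative sketch below; the numbers are assumptions, not values estimated in the study.

```python
# A hedged sketch of how loss aversion could curtail exploration in a task like
# this one: weight potential losses more heavily than equivalent gains (a
# standard loss-aversion idea; the weight below is a made-up number, not a
# parameter estimated in the study).

def subjective_value(expected_gain, possible_loss, loss_weight):
    """Gains count at face value; losses are multiplied by loss_weight > 1."""
    return expected_gain - loss_weight * possible_loss

# Probing a balloon hard might reveal the limit and be worth ~12 extra points
# later, but it risks forfeiting the ~8 points a cautious play would have banked.
for loss_weight in (1.0, 2.5):        # 1.0 = loss-neutral, 2.5 = loss-averse
    v = subjective_value(expected_gain=12, possible_loss=8, loss_weight=loss_weight)
    decision = "probe the limit" if v > 0 else "play it safe"
    print(f"loss weight {loss_weight}: subjective value {v:+.1f} -> {decision}")
```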

This is not the first time Gabrieli’s group has found evidence of differences in the way students from different socioeconomic backgrounds make decisions. In a brain imaging study published last year, they found that the brains of adolescents from low-SES backgrounds respond less to rewards than the brains of their higher-SES peers. “How you think about the world—in terms of what’s rewarding, risks worth taking or not taking—seems strongly influenced by the environment that you’re growing up in,” he says.

Decker notes that regardless of SES, students in the study were generally more willing to explore when they had experienced more recent successes in the task. This finding, along with what the team learned about how loss aversion curtails exploration, suggests strategies that educators might use to encourage more exploration in the classroom. “Low-stakes opportunities for kids to engage in exploratory risk-taking with positive feedback could go a long way to helping kids feel more comfortable exploring,” Decker says.