MIT cognitive scientists reveal why some sentences stand out from others

“You still had to prove yourself.”

“Every cloud has a blue lining!”

Which of those sentences are you most likely to remember a few minutes from now? If you guessed the second, you’re probably correct.

According to a new study from MIT cognitive scientists, sentences that stick in your mind longer are those with distinctive meanings, which make them stand out from sentences you’ve previously seen. The researchers found that meaning, more than any other trait, determines how memorable a sentence is.

Greta Tuckute, a former graduate student in the Fedorenko lab. Photo: Caitlin Cunningham

“One might have thought that when you remember sentences, maybe it’s all about the visual features of the sentence, but we found that that was not the case. A big contribution of this paper is pinning down that it is the meaning-related space that makes sentences memorable,” says Greta Tuckute PhD ’25, who is now a research fellow at Harvard University’s Kempner Institute.

The findings support the hypothesis that sentences with distinctive meanings — like “Does olive oil work for tanning?” — are stored in brain space that is not cluttered with sentences that mean almost the same thing. Sentences with similar meanings end up densely packed together and are therefore more difficult to recognize confidently later on, the researchers believe.

“When you encode sentences that have a similar meaning, there’s feature overlap in that space. Therefore, a particular sentence you’ve encoded is not linked to a unique set of features, but rather to a whole bunch of features that may overlap with other sentences,” says Evelina Fedorenko, an MIT associate professor of brain and cognitive sciences (BCS), a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Tuckute and Thomas Clark, an MIT graduate student, are the lead authors of the paper, which appears in the Journal of Memory and Language. MIT graduate student Bryan Medina is also an author.

Distinctive sentences

What makes certain things more memorable than others is a longstanding question in cognitive science and neuroscience. In a 2011 study, Aude Oliva, now a senior research scientist at MIT and MIT director of the MIT-IBM Watson AI Lab, showed that not all items are created equal: Some types of images are much easier to remember than others, and people are remarkably consistent in what images they remember best.

In that study, Oliva and her colleagues found that, in general, images with people in them are the most memorable, followed by images of human-scale space and close-ups of objects. Least memorable are natural landscapes.

As a follow-up to that study, Fedorenko and Oliva, along with Ted Gibson, another faculty member in BCS, teamed up to determine if words also vary in their memorability. In a study published earlier this year, co-led by Tuckute and Kyle Mahowald, a former PhD student in BCS, the researchers found that the most memorable words are those that have the most distinctive meanings.

Words are categorized as more distinctive if they have a single meaning and few or no synonyms — for example, words like “pineapple” or “avalanche,” which were found to be very memorable. On the other hand, words that can have multiple meanings, such as “light,” or words that have many synonyms, like “happy,” were more difficult for people to recognize accurately.

In the new study, the researchers expanded their scope to analyze the memorability of sentences. Just like words, some sentences have very distinctive meanings, while others communicate similar information in slightly different ways.

To do the study, the researchers assembled a collection of 2,500 sentences drawn from publicly available databases that compile text from novels, news articles, movie dialogues, and other sources. Each sentence that they chose contained exactly six words.

The researchers then presented a random selection of about 1,000 of these sentences to each study participant, including repeats of some sentences. Each of the 500 participants in the study was asked to press a button when they saw a sentence that they remembered seeing earlier.

The most memorable sentences — the ones where participants accurately and quickly indicated that they had seen them before — included sentences such as “Homer Simpson is hungry, very hungry,” and “These mosquitoes are — well, guinea pigs.”

Those memorable sentences overlapped significantly with the sentences judged to have the most distinctive meanings, as estimated using the high-dimensional vector space of a large language model (LLM) known as Sentence BERT. That model generates a vector representation of an entire sentence, which can be used for tasks like judging how similar two sentences are in meaning. It provided the researchers with a distinctness score for each sentence, based on its semantic similarity to the other sentences in the set.

The researchers also evaluated the sentences using a model that predicts memorability based on the average memorability of the individual words in the sentence. This model performed fairly well at predicting overall sentence memorability, but not as well as Sentence BERT. This suggests that the meaning of a sentence as a whole — above and beyond the contributions from individual words — determines how memorable it will be, the researchers say.
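To make the distinctness idea concrete, here is a minimal sketch of how such a score can be computed with a sentence-embedding model. The model name, the toy sentences, and the definition of distinctness as one minus the average cosine similarity to the other sentences are illustrative assumptions, not the paper’s exact pipeline.

```python
# Sketch: scoring sentence distinctness with a sentence-embedding model.
# Assumes the sentence-transformers package; "all-MiniLM-L6-v2" is a
# stand-in for Sentence BERT, and the distinctness definition below is
# illustrative rather than the paper's actual method.
import numpy as np
from sentence_transformers import SentenceTransformer

sentences = [
    "Homer Simpson is hungry, very hungry.",
    "Does olive oil work for tanning?",
    "You still had to prove yourself.",
    "You still needed to prove yourself.",  # near-duplicate meaning
]

model = SentenceTransformer("all-MiniLM-L6-v2")
emb = model.encode(sentences, normalize_embeddings=True)

sim = emb @ emb.T              # cosine similarities (embeddings are unit norm)
np.fill_diagonal(sim, np.nan)  # ignore each sentence's similarity to itself

# A sentence is distinctive if it sits far from its neighbors in meaning space.
distinctness = 1.0 - np.nanmean(sim, axis=1)
for score, sentence in sorted(zip(distinctness, sentences), reverse=True):
    print(f"{score:.3f}  {sentence}")
```

On this toy set, the two near-paraphrases receive the lowest distinctness scores, mirroring the finding that sentences with many close semantic neighbors are harder to remember.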

Noisy memories

While cognitive scientists have long hypothesized that the brain’s memory banks have a limited capacity, the findings of the new study support an alternative hypothesis that would help to explain how the brain can continue forming new memories without losing old ones.

This alternative, known as the noisy representation hypothesis, says that when the brain encodes a new memory, be it an image, a word, or a sentence, it is represented in a noisy way — that is, this representation is not identical to the stimulus, and some information is lost. For example, for an image, you may not encode the exact viewing angle at which an object is shown, and for a sentence, you may not remember the exact construction used.

Under this theory, a new sentence would be encoded in a similar part of the memory space as sentences that carry a similar meaning, whether they were encountered recently or sometime across a lifetime of language experience. This jumbling of similar meanings together increases the amount of noise and can make it much harder, later on, to remember the exact sentence you have seen before.

“The representation is gradually going to accumulate some noise. As a result, when you see an image or a sentence for a second time, your accuracy at judging whether you’ve seen it before will be affected, and it’ll be less than 100 percent in most cases,” Clark says.

However, if a sentence has a unique meaning that is encoded in a less densely crowded space, it will be easier to pick out later on.

“Your memory may still be noisy, but your ability to make judgments based on the representations is less affected by that noise because the representation is so distinctive to begin with,” Clark says.
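A toy simulation can illustrate the claim (this is a simplified illustration, not the researchers’ model): when noisy copies of stored items are matched back to memory, an item stored in a crowded region of the space is recognized less reliably than a distinctive item stored far from its neighbors.

```python
# Toy illustration of the noisy representation hypothesis: recognition of a
# noisy memory trace depends on how crowded its neighborhood is. All numbers
# are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def recognition_accuracy(stored, probe_idx, noise=0.5, trials=2000):
    """Recognize a probe by nearest neighbor after adding encoding noise."""
    hits = 0
    for _ in range(trials):
        noisy = stored[probe_idx] + rng.normal(0, noise, stored.shape[1])
        hits += np.argmin(np.linalg.norm(stored - noisy, axis=1)) == probe_idx
    return hits / trials

# Fifty items packed into one tight semantic cluster, plus one distinctive item.
crowded = rng.normal(0, 0.3, size=(50, 16))
distinctive = rng.normal(5, 0.3, size=(1, 16))
memory = np.vstack([crowded, distinctive])

print("crowded item:    ", recognition_accuracy(memory, 0))
print("distinctive item:", recognition_accuracy(memory, 50))
```

The crowded item is frequently confused with its neighbors, while the distinctive item survives the same amount of noise almost unscathed.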

The researchers now plan to study whether other features of sentences, such as more vivid and descriptive language, might also contribute to making them more memorable, and how the language system may interact with the hippocampal memory structures during the encoding and retrieval of memories.

The research was funded, in part, by the National Institutes of Health, the McGovern Institute, the Department of Brain and Cognitive Sciences, the Simons Center for the Social Brain, and the MIT Quest for Intelligence.

Musicians’ enhanced attention

In a world full of competing sounds, we often have to filter out a lot of noise to hear what’s most important. This critical skill may come more easily for people with musical training, according to scientists at MIT’s McGovern Institute who used brain imaging to follow what happens when people try to focus their attention on certain sounds.

When Cassia Low Manting, a postdoctoral researcher working in the labs of McGovern Institute Investigators John Gabrieli and Dimitrios Pantazis, asked people to focus on a particular melody while another melody played at the same time, individuals with musical backgrounds were, unsurprisingly, better able to follow the target tune. An analysis of study participants’ brain activity suggests this advantage arises because musical training sharpens neural mechanisms that amplify the sounds they want to listen to while turning down distractions. “This points to the idea that we can train this selective attention ability,” Manting says.

The research team, including senior author Daniel Lundqvist at the Karolinska Institute in Sweden, reported their findings September 17, 2025, in the journal Science Advances. Manting, who is now at the Karolinska Institute, notes that the research is part of an ongoing collaboration between the two institutions.

Overcoming challenges

Participants in the study had vastly different backgrounds when it came to music. Some were professional musicians with deep training and experience, while others struggled to differentiate between the two tunes they were played, despite each one’s distinct pitch. This disparity allowed the researchers to explore how the brain’s capacity for attention might change with experience. “Musicians are very fun to study because their brains have been morphed in ways based on their training,” Manting says. “It’s a nice model to study these training effects.”

Still, the researchers had significant challenges to overcome. It has been hard to study how the brain manages auditory attention, because when researchers use neuroimaging to monitor brain activity, they see the brain’s response to all sounds: those that the listener cares most about, as well as those the listener is trying to ignore. It is usually difficult to figure out which brain signals were triggered by which sounds.

Manting and her colleagues overcame this challenge with a method called frequency tagging. Rather than playing the melodies in their experiments at a constant volume, the volume of each melody oscillated, rising and falling with a particular frequency. Each melody had its own frequency, creating detectable patterns in the brain signals that responded to it. “When you play these two sounds simultaneously to the subject and you record the brain signal, you can say, this 39-Hertz activity corresponds to the lower pitch sound and the 43-Hertz activity corresponds specifically to the higher pitch sound,” Manting explains. “It is very clean and very clear.”
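The logic of frequency tagging can be sketched in a few lines of code. The carrier pitches, modulation rates, and rectified-envelope analysis below are illustrative assumptions, not the study’s actual stimuli or MEG pipeline; the point is that each melody’s volume is modulated at its own rate, so the combined signal carries a separable spectral tag for each stream.

```python
# Sketch of frequency tagging: two tones, each amplitude-modulated at its own
# rate (39 Hz and 43 Hz), produce separate peaks in the envelope spectrum.
import numpy as np

fs = 1000                        # sample rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)     # 10 seconds of signal

low = np.sin(2 * np.pi * 220 * t) * (1 + np.sin(2 * np.pi * 39 * t)) / 2
high = np.sin(2 * np.pi * 440 * t) * (1 + np.sin(2 * np.pi * 43 * t)) / 2
signal = low + high              # what the listener hears

# Rectify to recover the volume envelope, then inspect its spectrum.
envelope = np.abs(signal)
spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(len(envelope), 1 / fs)

for f_tag in (39, 43):
    idx = np.argmin(np.abs(freqs - f_tag))
    print(f"envelope power near {f_tag} Hz: {spectrum[idx]:.1f}")
```

In the experiment, the same separation shows up in the recorded brain signal, so the response driven by each melody can be read out independently.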

When they paired frequency tagging with magnetoencephalography, a noninvasive method of monitoring brain activity, the team was able to track how their study participants’ brains responded to each of two melodies during their experiments. While the two tunes played, subjects were instructed to follow either the higher pitched or the lower pitched melody. When the music stopped, they were asked about the final notes of the target tune: did they rise or did they fall? The researchers could make this task harder by making the two tunes closer together in pitch, as well as by altering the timing of the notes.

Manting used a survey that asked about musical experience to score each participant’s musicality, and this measure had an obvious effect on task performance: The more musical a person was, the more successful they were at following the tune they had been asked to track.

To look for differences in brain activity that might explain this, the research team developed a new machine-learning approach to analyze their data. They used it to tease apart what was happening in the brain as participants focused on the target tune—even, in some cases, when the notes of the distracting tune played at the exact same time.

Top-down vs bottom-up attention

What they found was a clear separation of brain activity associated with two kinds of attention, known as top-down and bottom-up attention. Manting explains that top-down attention is goal-oriented, involving a conscious focus—the kind of attention listeners called on as they followed the target tune. Bottom-up attention, on the other hand, is triggered by the nature of the sound itself. A fire alarm would be expected to trigger this kind of attention, both with its volume and its suddenness. The distracting tune in the team’s experiments triggered activity associated with bottom-up attention—but more so in some people than in others.

“The more musical someone is, the better they are at focusing their top-down selective attention, and the less the effect of bottom-up attention is,” Manting explains.

Manting expects that musicians use their heightened capacity for top-down attention in other situations, as well. For example, they might be better than others at following a conversation in a room filled with background chatter. “I would put my bet on it that there is a high chance that they will be great at zooming into sounds,” she says.

She wonders, however, if one kind of distraction might actually be harder for a musician to filter out: the sound of their own instrument. Manting herself plays both the piano and the Chinese harp, and she says hearing those instruments is “like someone calling my name.” It’s one of many questions about how musical training affects cognition that she plans to explore in her future work.

International neuroscience collaboration unveils comprehensive cellular-resolution map of brain activity

The first comprehensive map of mouse brain activity has been unveiled by a large international collaboration of neuroscientists. Researchers from the International Brain Laboratory (IBL), including McGovern Investigator Ila Fiete, published their findings today in two papers in Nature, revealing insights into how decision-making unfolds across the entire brain in mice at single-cell resolution. This brain-wide activity map challenges the traditional hierarchical view of information processing in the brain and shows that decision-making is distributed across many regions in a highly coordinated way.

“This is the first time anyone has produced a full, brain-wide map of the activity of single neurons during decision-making,” explains Co-Founder of IBL Alexandre Pouget. “The scale is unprecedented as we recorded from over half a million neurons across mice in 12 labs, covering 279 brain areas, which together represent 95% of the mouse brain volume. The decision-making activity, and particularly reward, lit up the brain like a Christmas tree,” adds Pouget, who is also a Group Leader at the University of Geneva.

Brain-wide map showing 75,000 analyzed neurons lighting up during different stages of decision-making. At the beginning of the trial, the activity is quiet. Then it builds up in the visual areas at the back of the brain, followed by a rise in activity spreading across the brain as evidence accumulates towards a decision. Next, motor areas light up as there is movement onset and finally there is a spike in activity everywhere in the brain as the animal is rewarded.

Modeling decision-making

The brain map was made possible by a major international collaboration of neuroscientists from multiple universities, including MIT. Researchers across 12 labs used state-of-the-art silicon electrodes, called Neuropixels probes, to make simultaneous neural recordings of brain activity while mice carried out a decision-making task.

McGovern Associate Investigator Ila Fiete. Photo: Caitlin Cunningham

“Participating in the International Brain Laboratory has added new ways for our group to contribute to science,” says Fiete, who is also a professor of brain and cognitive sciences and director of the K. Lisa Yang ICoN Center at MIT. “Our lab has helped standardize methods to analyze and generate robust conclusions from data. As computational neuroscientists interested in building models of how the brain works, access to brainwide recordings is incredible: the traditional approach of recording from one or a few brain areas limited our ability to build and test theories, resulting in fragmented models. Now we have the delightful but formidable task to make sense of how all parts of the brain coordinate to perform a behavior. Surprisingly, having a full view of the brain leads to simplifications in the models of decision making.”

The labs collected data from mice performing a decision-making task with sensory, motor, and cognitive components. In the task, a mouse sits in front of a screen and a light appears on the left or right side. If the mouse then responds by moving a small wheel in the correct direction, it receives a reward.

In some trials, the light is so faint that the animal must guess which way to turn the wheel, for which it can use prior knowledge: the light tends to appear more frequently on one side for a number of trials, before the high-frequency side switches. Well-trained mice learn to use this information to help them make correct guesses. These challenging trials therefore allowed the researchers to study how prior expectations influence perception and decision-making.
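A short simulation makes the block structure concrete. The block length, bias probability, and contrast levels below are illustrative assumptions rather than the exact IBL task parameters.

```python
# Sketch of the task's block structure: the stimulus favors one side for a
# stretch of trials before the favored side switches, so on zero-contrast
# trials a well-trained mouse can beat chance by guessing the favored side.
import random

random.seed(1)

def generate_trials(n_trials=400, block_len=50, p_favored=0.8):
    trials = []
    favored = random.choice(["left", "right"])
    for i in range(n_trials):
        if i > 0 and i % block_len == 0:
            favored = "right" if favored == "left" else "left"  # block switch
        other = "right" if favored == "left" else "left"
        side = favored if random.random() < p_favored else other
        contrast = random.choice([0.0, 0.06, 0.12, 0.25, 1.0])  # 0 = invisible
        trials.append((side, contrast, favored))
    return trials

trials = generate_trials()
blank = [side == favored for side, contrast, favored in trials if contrast == 0.0]
print(f"accuracy of guessing the favored side on blank trials: {sum(blank)/len(blank):.2f}")
```

Guessing the block’s favored side is correct about 80 percent of the time on those otherwise unsolvable trials, which is exactly the prior knowledge the second paper traces through the brain.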

Brain-wide results

The first paper, “A brain-wide map of neural activity during complex behaviour,” showed that decision-making signals are surprisingly distributed across the brain, not localized to specific regions. This adds brain-wide evidence to a growing number of studies challenging the traditional hierarchical model of brain function, and it emphasizes that there is constant communication across brain areas during decision-making, movement onset, and even reward. This means that neuroscientists will need to take a more holistic, brain-wide approach when studying complex behaviors in the future.

Flat maps of the mouse brain showing which areas have significant changes in activity during each of three task intervals. Credit: Michael Schartner & International Brain Laboratory

“The unprecedented breadth of our recordings pulls back the curtain on how the entire brain performs the whole arc of sensory processing, cognitive decision-making, and movement generation,” says Fiete. “Structuring a collaboration that collects a large standardized dataset which single labs could not assemble is a revolutionary new direction for systems neuroscience, initiating the field into the hyper-collaborative mode that has contributed to leaps forward in particle physics and human genetics. Beyond our own conclusions, the dataset and associated technologies, which were released much earlier as part of the IBL mission, have already become a massively used resource for the entire neuroscience community.”

The second paper, “Brain-wide representations of prior information,” showed that prior expectations, our beliefs about what is likely to happen based on our recent experience, are encoded throughout the brain. Surprisingly, these expectations are found not only in cognitive areas, but also in brain areas that process sensory information and control actions. For example, expectations are even encoded in early sensory structures such as the thalamus, the brain’s first relay for visual input from the eye. This supports the view that the brain acts as a prediction machine, with expectations encoded across multiple brain structures playing a central role in guiding behavioral responses. These findings could have implications for understanding conditions such as schizophrenia and autism, which are thought to involve differences in the way expectations are updated in the brain.

“Much remains to be unpacked: if it is possible to find a signal in a brain area, does it mean that this area is generating the signal, or simply reflecting a signal generated somewhere else? How strongly is our perception of the world shaped by our expectations? Now we can generate some quantitative answers and begin the next phase of experiments, to learn about the origins of the expectation signals by intervening to modulate their activity,” says Fiete.

Looking ahead, the IBL team plans to expand beyond its initial focus on decision-making to explore a broader range of neuroscience questions. With renewed funding in hand, IBL aims to broaden its research scope and continue to support large-scale, standardized experiments.

New model of collaborative neuroscience

Officially launched in 2017, IBL introduced a new model of collaboration in neuroscience that uses a standardized set of tools and data processing pipelines shared across multiple labs, enabling the collection of massive datasets while ensuring data alignment and reproducibility. This approach to democratize and accelerate science draws inspiration from large-scale collaborations in physics and biology, such as CERN and the Human Genome Project.

All data from these studies, along with detailed specifications of the tools and protocols used for data collection, are openly accessible to the global scientific community for further analysis and research. Summaries of these resources can be viewed and downloaded on the IBL website under the sections: Data, Tools, Protocols.

This research was supported by grants from Wellcome (209558 and 216324), the Simons Foundation, the National Institutes of Health (NIH U19NS12371601), the National Science Foundation (NSF 1707398), the Gatsby Charitable Foundation (GAT3708), and by the Max Planck Society and the Humboldt Foundation.


Learning from punishment

From toddlers’ timeouts to criminals’ prison sentences, punishment reinforces social norms, making it known that an offender has done something unacceptable. At least, that is usually the intent—but the strategy can backfire. When a punishment is perceived as too harsh, observers can be left with the impression that an authority figure is motivated by something other than justice.

It can be hard to predict what people will take away from a particular punishment, because everyone makes their own inferences not just about the acceptability of the act that led to the punishment, but also the legitimacy of the authority who imposed it. A new computational model developed by scientists at MIT’s McGovern Institute makes sense of these complicated cognitive processes, recreating the ways people learn from punishment and revealing how their reasoning is shaped by their prior beliefs.

Their work, reported August 4 in the journal PNAS, explains how a single punishment can send different messages to different people and even strengthen the opposing viewpoints of groups who hold different opinions about authorities or social norms.

Modeling punishment

“The key intuition in this model is the fact that you have to be evaluating simultaneously both the norm to be learned and the authority who’s punishing,” says McGovern Investigator and John W. Jarve Professor of Brain and Cognitive Sciences Rebecca Saxe, who led the research. “One really important consequence of that is even where nobody disagrees about the facts—everybody knows what action happened, who punished it, and what they did to punish it—different observers of the same situation could come to different conclusions.”

For example, she says, a child who is sent to timeout after biting a sibling might interpret the event differently than the parent. One might see the punishment as proportional and important, teaching the child not to bite. But if the biting, to the toddler, seemed a reasonable tactic in the midst of a squabble, the punishment might be seen as unfair, and the lesson will be lost.

People draw on their own knowledge and opinions when they evaluate these situations—but to study how the brain interprets punishment, Saxe and graduate student Setayesh Radkani wanted to take those personal ideas out of the equation. They needed a clear understanding of the beliefs that people held when they observed a punishment, so they could learn how different kinds of information altered their perceptions. So Radkani set up scenarios in imaginary villages where authorities punished individuals for actions that had no obvious analog in the real world.

Graduate student Setayesh Radkani uses tools from psychology, cognitive neuroscience and machine learning to understand the social and moral mind. Photo: Caitlin Cunningham

Participants observed these scenarios in a series of experiments, with different information offered in each one. In some cases, for example, participants were told that the person being punished was either an ally or competitor of the authority, whereas in other cases, the authority’s possible bias was left ambiguous.

“That gives us a really controlled setup to vary prior beliefs,” Radkani explains. “We could ask what people learn from observing punitive decisions with different severities, in response to acts that vary in their level of wrongness, by authorities that vary in their level of different motives.”

For each scenario, participants were asked to evaluate four factors: how much the authority figure cared about justice; the selfishness of the authority; the authority’s bias for or against the individual being punished; and the wrongness of the punished act. The research team asked these questions when participants were first introduced to the hypothetical society, then tracked how their responses changed after they observed the punishment. Across the scenarios, participants’ initial beliefs about the authority and the wrongness of the act shaped the extent to which those beliefs shifted after they observed the punishment.

Radkani was able to replicate these nuanced interpretations using a cognitive model framed around an idea that Saxe’s team has long used to think about how people interpret the actions of others. That is, to make inferences about others’ intentions and beliefs, we assume that people choose actions that they expect will help them achieve their goals.

To apply that concept to the punishment scenarios, Radkani developed a model that evaluates the meaning of a punishment (an action aimed at achieving a goal of the authority) by considering the harm associated with that punishment; its costs or benefits to the authority; and its proportionality to the violation. By assessing these factors, along with prior beliefs about the authority and the punished act, the model was able to predict people’s responses to the hypothetical punishment scenarios, supporting the idea that people use a similar mental model. “You need to have them consider those things, or you can’t make sense of how people understand punishment when they observe it,” Saxe says.
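The flavor of such a model can be conveyed with a toy Bayesian calculation in which an observer jointly updates beliefs about the act’s wrongness and the authority’s justice motive after seeing a punishment of a given severity. The grid, the likelihood, and all numbers below are invented for illustration; this is not the paper’s actual model.

```python
# Toy sketch: a justice-motivated authority punishes in proportion to the
# act's wrongness, while a non-justice-motivated one punishes harshly
# regardless. Observing a severity then updates beliefs about both factors.
import numpy as np

wrongness = np.linspace(0, 1, 11)   # how bad the act was
justice = np.linspace(0, 1, 11)     # how much the authority cares about justice
W, J = np.meshgrid(wrongness, justice, indexing="ij")

def update(prior, severity, noise=0.15):
    expected = J * W + (1 - J) * 0.9      # severity each (W, J) pair predicts
    likelihood = np.exp(-((severity - expected) ** 2) / (2 * noise ** 2))
    post = prior * likelihood
    return post / post.sum()

uniform = np.ones_like(W) / W.size
post = update(uniform, severity=0.9)      # observe a harsh punishment

print("E[wrongness]:     ", round(float((post * W).sum()), 2))
print("E[justice motive]:", round(float((post * J).sum()), 2))
```

Because a harsh punishment is equally consistent with a very wrong act and with a biased, unjust authority, the posterior splits between the two explanations, and which one an observer favors depends on their prior beliefs, just as in the experiments.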

Even though the team designed their experiments to preclude preconceived ideas about the people and actions in their imaginary villages, not everyone drew the same conclusions from the punishments they observed. Saxe’s group found that participants’ general attitudes toward authority influenced their interpretation of events. Those with more authoritarian attitudes—assessed through a standard survey—tended to judge punished acts as more wrong and authorities as more motivated by justice than other observers.

“If we differ from other people, there’s a knee-jerk tendency to say, ‘either they have different evidence from us, or they’re crazy,’” Saxe says. Instead, she says, “It’s part of the way humans think about each other’s actions.”

“When a group of people who start out with different prior beliefs get shared evidence, they will not end up necessarily with shared beliefs. That’s true even if everybody is behaving rationally,” says Saxe.

This way of thinking also means that the same action can simultaneously strengthen opposing viewpoints. The Saxe lab’s modeling and experiments showed that when those viewpoints shape individuals’ interpretations of future punishments, the groups’ opinions will continue to diverge. For instance, a punishment that seems too harsh to a group who suspects an authority is biased can make that group even more skeptical of the authority’s future actions. Meanwhile, people who see the same punishment as fair and the authority as just will be more likely to conclude that the authority figure’s future actions are also just. “You will get a vicious cycle of polarization, staying and actually spreading to new things,” says Radkani.

The researchers say their findings point toward strategies for communicating social norms through punishment. “It is exactly sensible in our model to do everything you can to make your action look like it’s coming out of a place of care for the long-term outcome of this individual, and that it’s proportional to the norm violation they did,” Saxe says. “That is your best shot at getting a punishment interpreted pedagogically, rather than as evidence that you’re a bully.”

Nevertheless, she says that won’t always be enough. “If the beliefs are strong the other way, it’s very hard to punish and still sustain a belief that you were motivated by justice.”

This study was funded, in part, by the Patrick J. McGovern Foundation.

MIT’s McGovern Institute and Department of Brain and Cognitive Sciences welcome new faculty member Sven Dorkenwald

The McGovern Institute and the Department of Brain and Cognitive Sciences are pleased to announce the appointment of Sven Dorkenwald as an assistant professor starting in January 2026. A trailblazer in the field of computational neuroscience, Dorkenwald is recognized for his leadership in connectomics—an emerging discipline focused on reconstructing and analyzing neural circuitry at unprecedented scale and detail. 

“We are thrilled to welcome Sven to MIT,” says McGovern Institute Director Robert Desimone. “He brings visionary science and a collaborative spirit to a rapidly advancing area of brain and cognitive sciences, and his appointment strengthens MIT’s position at the forefront of brain research.”

Dorkenwald’s research is driven by a bold vision: to develop and apply cutting-edge computational methods that reveal how brain circuits are organized and how they give rise to complex computations. His innovative work has led to transformative advances in the reconstruction of connectomes (detailed neural maps) from nanometer-scale electron microscopy images. He has championed open team science and data sharing and played a central role in producing the first connectome of an entire fruit fly brain—a groundbreaking achievement that is reshaping our understanding of sensory processing and brain circuit function. 

“Sven is a rising leader in computational neuroscience who has already made significant contributions toward advancing our understanding of the brain,” says Michale Fee, the Glen V. and Phyllis F. Dorflinger Professor of Neuroscience and head of the Department of Brain and Cognitive Sciences. “He brings a combination of technical expertise, a collaborative mindset, and a strong commitment to open science that will be invaluable to our department. I’m pleased to welcome him to our community and look forward to the impact he will have.”

Dorkenwald earned his BS in physics in 2014 and MS in computer engineering in 2017 from the University of Heidelberg, Germany. He began his research in connectomics as an undergraduate in the group of Winfried Denk at the Max Planck Institute for Medical Research and the Max Planck Institute of Neurobiology. Dorkenwald went on to complete his PhD at Princeton University in 2023, where he studied both computer science and neuroscience under the mentorship of Sebastian Seung and Mala Murthy.

All 139,255 neurons in the brain of an adult fruit fly reconstructed by the FlyWire Consortium, with each neuron uniquely color-coded. Render by Tyler Sloan. Image: Sven Dorkenwald

As a PhD student at Princeton, Dorkenwald spearheaded the FlyWire Consortium, a group of more than 200 scientists, gamers, and proofreaders who combined their skills to create the fruit fly connectome. More than 20 million scientific images of the adult fruit fly brain were fed into an AI model that traced each neuron and synapse in exquisite detail. Members of the consortium then checked the results produced by the AI model and pieced them together into a complete, three-dimensional map. With over 140,000 neurons, it is the most complex brain completely mapped to date. The findings were published in a special issue of Nature in 2024.

Dorkenwald’s work also played a key role in the MICrONS consortium’s effort to reconstruct a cubic-millimeter connectome of the mouse visual cortex. Within the MICrONS effort, he co-led the development of CAVE, the software infrastructure that enables scientists to collaboratively edit and analyze large connectomics datasets, including FlyWire’s. The findings of the MICrONS consortium were published in a special issue of Nature in 2025.

Dorkenwald is currently a Shanahan Fellow at the Allen Institute and the University of Washington. He also serves as a visiting faculty researcher at Google Research, where he has been developing machine learning approaches for the annotation of cell reconstructions as part of the Neuromancer team led by Viren Jain.  

As an investigator at the McGovern Institute and an assistant professor in the Department of Brain and Cognitive Sciences at MIT, Dorkenwald plans to develop computational approaches to overcome challenges in scaling connectomics to whole mammalian brains, with the goal of advancing our mechanistic understanding of neuronal circuits and analyzing how they compare across regions and species.


Researchers present bold ideas for AI at MIT Generative AI Impact Consortium kickoff event

Launched in February of this year, the MIT Generative AI Impact Consortium (MGAIC), a presidential initiative led by MIT’s Office of Innovation and Strategy and administered by the MIT Stephen A. Schwarzman College of Computing, issued a call for proposals, inviting researchers from across MIT to submit ideas for innovative projects studying high-impact uses of generative AI models.

The call received 180 submissions from nearly 250 faculty members, spanning all of MIT’s five schools and the college. The overwhelming response across the Institute exemplifies the growing interest in AI and follows in the wake of MIT’s Generative AI Week and call for impact papers. Fifty-five proposals were selected for MGAIC’s inaugural seed grants, with several more selected to be funded by the consortium’s founding company members.

Over 30 funding recipients presented their proposals to the greater MIT community at a kickoff event on May 13. Anantha P. Chandrakasan, MIT’s chief innovation and strategy officer, dean of the School of Engineering, and head of the consortium, welcomed the attendees and thanked the consortium’s founding industry members.

“The amazing response to our call for proposals is an incredible testament to the energy and creativity that MGAIC has sparked at MIT. We are especially grateful to our founding members, whose support and vision helped bring this endeavor to life,” adds Chandrakasan. “One of the things that has been most remarkable about MGAIC is that this is a truly cross-Institute initiative. Deans from all five schools and the college collaborated in shaping and implementing it.”

Vivek F. Farias, the Patrick J. McGovern (1959) Professor at the MIT Sloan School of Management and co-faculty director of the consortium with Tim Kraska, associate professor of electrical engineering and computer science in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), emceed the afternoon of five-minute lightning presentations.

Presentation highlights include:

“AI-Driven Tutors and Open Datasets for Early Literacy Education,” presented by Ola Ozernov-Palchik, a research scientist at the McGovern Institute for Brain Research, proposed a refinement of AI tutors for pK-7 students that could help decrease literacy disparities.

“Developing jam_bots: Real-Time Collaborative Agents for Live Human-AI Musical Improvisation,” presented by Anna Huang, assistant professor of music and assistant professor of electrical engineering and computer science, and Joe Paradiso, the Alexander W. Dreyfoos (1954) Professor in Media Arts and Sciences at the MIT Media Lab, aims to enhance human-AI musical collaboration in real-time for live concert improvisation.

“GENIUS: GENerative Intelligence for Urban Sustainability,” presented by Norhan Bayomi, a postdoc at the MIT Environmental Solutions Initiative and a research assistant in the Urban Metabolism Group, aims to address the lack of a standardized approach for evaluating and benchmarking cities’ climate policies.

Georgia Perakis, the John C Head III Dean (Interim) of the MIT Sloan School of Management and professor of operations management, operations research, and statistics, who serves as co-chair of the GenAI Dean’s oversight group with Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, ended the event with closing remarks that emphasized “the readiness and eagerness of our community to lead in this space.”

“This is only the beginning,” she continued. “We are at the front edge of a historic moment — one where MIT has the opportunity, and the responsibility, to shape the future of generative AI with purpose, with excellence, and with care.”

How the brain solves complicated problems

The human brain is very good at solving complicated problems. One reason for that is that humans can break problems apart into manageable subtasks that are easy to solve one at a time.

This allows us to complete a daily task like going out for coffee by breaking it into steps: getting out of our office building, navigating to the coffee shop, and once there, obtaining the coffee. This strategy helps us to handle obstacles easily. For example, if the elevator is broken, we can revise how we get out of the building without changing the other steps.

While there is a great deal of behavioral evidence demonstrating humans’ skill at these complicated tasks, it has been difficult to devise experimental scenarios that allow precise characterization of the computational strategies we use to solve problems.

In a new study, MIT researchers have successfully modeled how people deploy different decision-making strategies to solve a complicated task — in this case, predicting how a ball will travel through a maze when the ball is hidden from view. The human brain cannot perform this task perfectly because it is impossible to track all of the possible trajectories in parallel, but the researchers found that people can perform reasonably well by flexibly adopting two strategies known as hierarchical reasoning and counterfactual reasoning.

The researchers were also able to determine the circumstances under which people choose each of those strategies.

“What humans are capable of doing is to break down the maze into subsections, and then solve each step using relatively simple algorithms. Effectively, when we don’t have the means to solve a complex problem, we manage by using simpler heuristics that get the job done,” says Mehrdad Jazayeri, a professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, an investigator at the Howard Hughes Medical Institute, and the senior author of the study.

Mahdi Ramadan PhD ’24 and graduate student Cheng Tang are the lead authors of the paper, which appears today in Nature Human Behaviour. Nicholas Watters PhD ’25 is also a co-author.

Rational strategies

When humans perform simple tasks that have a clear correct answer, such as categorizing objects, they perform extremely well. When tasks become more complex, such as planning a trip to your favorite cafe, there may no longer be one clearly superior answer. And, at each step, there are many things that could go wrong. In these cases, humans are very good at working out a solution that will get the task done, even though it may not be the optimal solution.

Those solutions often involve problem-solving shortcuts, or heuristics. Two prominent heuristics humans commonly rely on are hierarchical and counterfactual reasoning. Hierarchical reasoning is the process of breaking down a problem into layers, starting from the general and proceeding toward specifics. Counterfactual reasoning involves imagining what would have happened if you had made a different choice. While these strategies are well-known, scientists don’t know much about how the brain decides which one to use in a given situation.

“This is really a big question in cognitive science: How do we problem-solve in a suboptimal way, by coming up with clever heuristics that we chain together in a way that ends up getting us closer and closer until we solve the problem?” Jazayeri says.

To get at this question, Jazayeri and his colleagues devised a task that is just complex enough to require these strategies, yet simple enough that the outcomes and the calculations that go into them can be measured.

The task requires participants to predict the path of a ball as it travels through a maze along one of four possible trajectories. Once the ball enters the maze, people cannot see which path it takes. At two junctions in the maze, they hear an auditory cue when the ball reaches that point. Predicting the ball’s path is a task that is impossible for humans to solve with perfect accuracy.

“It requires four parallel simulations in your mind, and no human can do that. It’s analogous to having four conversations at a time,” Jazayeri says. “The task allows us to tap into this set of algorithms that the humans use, because you just can’t solve it optimally.”

The researchers recruited about 150 human volunteers to participate in the study. Before each subject began the ball-tracking task, the researchers evaluated how accurately they could estimate timespans of several hundred milliseconds, about the length of time it takes the ball to travel along one arm of the maze.

For each participant, the researchers created computational models that could predict the patterns of errors that would be seen for that participant (based on their timing skill) if they were running parallel simulations, using hierarchical reasoning alone, counterfactual reasoning alone, or combinations of the two reasoning strategies.

The researchers compared the subjects’ performance with the models’ predictions and found that for every subject, their performance was most closely associated with a model that used hierarchical reasoning but sometimes switched to counterfactual reasoning.

That suggests that instead of tracking all the possible paths that the ball could take, people broke up the task. First, they picked the direction (left or right) in which they thought the ball turned at the first junction, and continued to track the ball as it headed for the next turn. If the timing of the next sound they heard wasn’t compatible with the path they had chosen, they would go back and revise their first prediction — but only some of the time.

Switching back to the other side, which represents a shift to counterfactual reasoning, requires people to review their memory of the tones that they heard. However, it turns out that these memories are not always reliable, and the researchers found that people decided whether to go back or not based on how good they believed their memory to be.

“People rely on counterfactuals to the degree that it’s helpful,” Jazayeri says. “People who take a big performance loss when they do counterfactuals avoid doing them. But if you are someone who’s really good at retrieving information from the recent past, you may go back to the other side.”
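A toy simulation can capture the spirit of this strategy. The maze timings, thresholds, and noise levels below are invented for illustration and are not taken from the study: an agent commits to one branch at the first junction (hierarchical reasoning), and when a later tone fits that commitment poorly, it consults a noisy memory of the earlier tone to decide whether to revise (counterfactual reasoning).

```python
# Toy maze: two binary junctions; each branch adds a different travel time,
# so tone timings carry (noisy) evidence about the path.
import random

random.seed(0)

def infer_branch2(heard2, guess1):
    """Pick the second branch whose predicted tone time best fits what was heard."""
    preds = {b: 2.0 + 0.5 * guess1 + 0.5 * b for b in (0, 1)}
    best = min(preds, key=lambda b: abs(heard2 - preds[b]))
    return best, abs(heard2 - preds[best])

def run_trial(timing_noise=0.12, memory_noise=0.15, use_counterfactuals=True):
    path = (random.choice([0, 1]), random.choice([0, 1]))
    heard1 = 1.0 + 0.5 * path[0] + random.gauss(0, timing_noise)
    heard2 = 2.0 + 0.5 * path[0] + 0.5 * path[1] + random.gauss(0, timing_noise)

    guess1 = int(heard1 > 1.25)              # hierarchical: commit at junction 1
    guess2, err = infer_branch2(heard2, guess1)

    if err > 0.2 and use_counterfactuals:
        # Counterfactual: re-examine a noisy memory of tone 1 and keep
        # whichever first branch explains both tones better.
        remembered1 = heard1 + random.gauss(0, memory_noise)
        alt1 = 1 - guess1
        alt2, alt_err = infer_branch2(heard2, alt1)
        cur_fit = err + abs(remembered1 - (1.0 + 0.5 * guess1))
        alt_fit = alt_err + abs(remembered1 - (1.0 + 0.5 * alt1))
        if alt_fit < cur_fit:
            guess1, guess2 = alt1, alt2
    return (guess1, guess2) == path

for flag in (True, False):
    acc = sum(run_trial(use_counterfactuals=flag) for _ in range(20000)) / 20000
    print(f"counterfactuals={flag}: accuracy {acc:.2f}")
```

Raising `memory_noise` makes the counterfactual step unreliable, at which point the agent does better by skipping it, which is the trade-off the participants appeared to be making.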

Human limitations

To further validate their results, the researchers created a machine-learning neural network and trained it to complete the task. A machine-learning model trained on this task will track the ball’s path accurately and make the correct prediction every time, unless the researchers impose limitations on its performance.

When the researchers added cognitive limitations similar to those faced by humans, they found that the model altered its strategies. When they eliminated the model’s ability to follow all possible trajectories, it began to employ hierarchical and counterfactual strategies like humans do. When they reduced the model’s memory recall ability, it resorted to counterfactual reasoning only if it judged that its recall would be good enough to get the right answer — just as humans do.

“What we found is that networks mimic human behavior when we impose on them those computational constraints that we found in human behavior,” Jazayeri says. “This is really saying that humans are acting rationally under the constraints that they have to function under.”

By slightly varying the amount of memory impairment programmed into the models, the researchers also saw hints that the switching of strategies appears to happen gradually, rather than at a distinct cut-off point. They are now performing further studies to try to determine what is happening in the brain as these shifts in strategy occur.

The research was funded by a Lisa K. Yang ICoN Fellowship, a Friends of the McGovern Institute Student Fellowship, a National Science Foundation Graduate Research Fellowship, the Simons Foundation, the Howard Hughes Medical Institute, and the McGovern Institute.

A visual pathway in the brain may do more than recognize objects

When visual information enters the brain, it travels through two pathways that process different aspects of the input. For decades, scientists have hypothesized that one of these pathways, the ventral visual stream, is responsible for recognizing objects, and that it might have been optimized by evolution to do just that.

Consistent with this, in the past decade, MIT scientists have found that when computational models of the anatomy of the ventral stream are optimized to solve the task of object recognition, they are remarkably good predictors of the neural activities in the ventral stream.

However, in a new study, MIT researchers have shown that when they train these types of models on spatial tasks instead, the resulting models are also quite good predictors of the ventral stream’s neural activities. This suggests that the ventral stream may not be exclusively optimized for object recognition.

“This leaves wide open the question about what the ventral stream is being optimized for. I think the dominant perspective a lot of people in our field believe is that the ventral stream is optimized for object recognition, but this study provides a new perspective that the ventral stream could be optimized for spatial tasks as well,” says MIT graduate student Yudi Xie.

Xie is the lead author of the study, which will be presented at the International Conference on Learning Representations. Other authors of the paper include Weichen Huang, a visiting student through MIT’s Research Science Institute program; Esther Alter, a software engineer at the MIT Quest for Intelligence; Jeremy Schwartz, a sponsored research technical staff member; Joshua Tenenbaum, a professor of brain and cognitive sciences; and James DiCarlo, the Peter de Florez Professor of Brain and Cognitive Sciences, director of the Quest for Intelligence, and a member of the McGovern Institute for Brain Research at MIT.

Beyond object recognition

When we look at an object, our visual system can not only identify the object, but also determine other features such as its location, its distance from us, and its orientation in space. Since the early 1980s, neuroscientists have hypothesized that the primate visual system is divided into two pathways: the ventral stream, which performs object-recognition tasks, and the dorsal stream, which processes features related to spatial location.

Over the past decade, researchers have worked to model the ventral stream using a type of deep-learning model known as a convolutional neural network (CNN). Researchers can train these models to perform object-recognition tasks by feeding them datasets containing thousands of images along with category labels describing the images.

The state-of-the-art versions of these CNNs have high success rates at categorizing images. Additionally, researchers have found that the internal activations of the models are very similar to the activities of neurons that process visual information in the ventral stream. Furthermore, the more similar these models are to the ventral stream, the better they perform at object-recognition tasks. This has led many researchers to hypothesize that the dominant function of the ventral stream is recognizing objects.

However, experimental studies, especially a study from the DiCarlo lab in 2016, have found that the ventral stream appears to encode spatial features as well. These features include the object’s size, its orientation (how much it is rotated), and its location within the field of view. Based on these studies, the MIT team aimed to investigate whether the ventral stream might serve additional functions beyond object recognition.

“Our central question in this project was, is it possible that we can think about the ventral stream as being optimized for doing these spatial tasks instead of just categorization tasks?” Xie says.

To test this hypothesis, the researchers set out to train a CNN to identify one or more spatial features of an object, including rotation, location, and distance. To train the models, they created a new dataset of synthetic images. These images show objects such as tea kettles or calculators superimposed on different backgrounds, in locations and orientations that are labeled to help the model learn them.
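The training setup can be sketched schematically. The architecture and label format below are assumptions for illustration, not the models used in the study; the point is simply that the network regresses spatial properties such as rotation, location, and distance instead of predicting a category label.

```python
# Minimal sketch (PyTorch) of training a CNN on spatial tasks rather than
# categorization: the output head regresses rotation, x/y location, and
# distance. The architecture and data here are illustrative stand-ins.
import torch
import torch.nn as nn

class SpatialCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 4)   # rotation, x, y, distance

    def forward(self, x):
        return self.head(self.features(x))

model = SpatialCNN()
images = torch.randn(8, 3, 64, 64)   # stand-in for the synthetic image dataset
targets = torch.randn(8, 4)          # labeled spatial features per image
loss = nn.functional.mse_loss(model(images), targets)
loss.backward()                      # one illustrative training step
print(f"loss: {loss.item():.3f}")
```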

The researchers found that CNNs that were trained on just one of these spatial tasks showed a high level of “neuro-alignment” with the ventral stream — very similar to the levels seen in CNN models trained on object recognition.

The researchers measured neuro-alignment using a technique that DiCarlo’s lab has developed, which involves asking the models, once trained, to predict the neural activity that a particular image would generate in the brain. The researchers found that the better the models performed on the spatial task they had been trained on, the more neuro-alignment they showed.
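One common version of this kind of analysis, sketched below on synthetic data, fits a linear map from a model layer’s activations to recorded neural responses and scores the predictions on held-out images. The details are generic assumptions about the approach, not the lab’s exact metric.

```python
# Sketch of a neural-predictivity analysis: ridge regression from model
# activations to (synthetic) neural responses, scored by held-out correlation.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
acts = rng.normal(size=(500, 256))   # model activations, one row per image
# Synthetic "neurons" that depend linearly on part of the activations, plus noise.
neurons = acts[:, :50] @ rng.normal(size=(50, 40)) + rng.normal(0, 1, (500, 40))

A_tr, A_te, N_tr, N_te = train_test_split(acts, neurons, random_state=0)
pred = Ridge(alpha=1.0).fit(A_tr, N_tr).predict(A_te)

# Alignment score: mean correlation between predicted and actual responses.
scores = [np.corrcoef(pred[:, i], N_te[:, i])[0, 1] for i in range(pred.shape[1])]
print(f"mean neural predictivity: {np.mean(scores):.2f}")
```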

“I think we cannot assume that the ventral stream is just doing object categorization, because many of these other functions, such as spatial tasks, also can lead to this strong correlation between models’ neuro-alignment and their performance,” Xie says. “Our conclusion is that you can optimize either through categorization or doing these spatial tasks, and they both give you a ventral-stream-like model, based on our current metrics to evaluate neuro-alignment.”

Comparing models

The researchers then investigated why these two approaches — training for object recognition and training for spatial features — led to similar degrees of neuro-alignment. To do that, they performed an analysis known as centered kernel alignment (CKA), which allows them to measure the degree of similarity between representations in different CNNs. This analysis showed that in the early to middle layers of the models, the representations that the models learn are nearly indistinguishable.

“In these early layers, essentially you cannot tell these models apart by just looking at their representations,” Xie says. “It seems like they learn some very similar or unified representation in the early to middle layers, and in the later stages they diverge to support different tasks.”
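Linear CKA itself is compact enough to write out. The implementation below uses the standard formula; the activation matrices are random stand-ins for two models’ responses to the same set of images.

```python
# Linear centered kernel alignment (CKA) between two activation matrices,
# each with one row per image. Scores near 1 mean very similar representations.
import numpy as np

def linear_cka(X, Y):
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(X.T @ Y, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
layer_a = rng.normal(size=(200, 512))                    # model 1, one layer
layer_b = layer_a + rng.normal(0, 0.3, size=(200, 512))  # model 2, similar layer
layer_c = rng.normal(size=(200, 512))                    # unrelated model

print(f"CKA(similar):   {linear_cka(layer_a, layer_b):.2f}")
print(f"CKA(unrelated): {linear_cka(layer_a, layer_c):.2f}")
```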

The researchers hypothesize that even when models are trained to analyze just one feature, they also take into account “non-target” features — those that they are not trained on. When objects have greater variability in non-target features, the models tend to learn representations more similar to those learned by models trained on other tasks. This suggests that the models are using all of the information available to them, which may result in different models coming up with similar representations, the researchers say.

“More non-target variability actually helps the model learn a better representation, instead of learning a representation that’s ignorant of them,” Xie says. “It’s possible that the models, although they’re trained on one target, are simultaneously learning other things due to the variability of these non-target features.”

In future work, the researchers hope to develop new ways to compare different models, in hopes of learning more about how each one develops internal representations of objects based on differences in training tasks and training data.

“There could be still slight differences between these models, even though our current way of measuring how similar these models are to the brain tells us they’re on a very similar level. That suggests maybe there’s still some work to be done to improve upon how we can compare the model to the brain, so that we can better understand what exactly the ventral stream is optimized for,” Xie says.

The research was funded by the Semiconductor Research Corporation and the U.S. Defense Advanced Research Projects Agency.

Looking under the hood at the brain’s language system

As a young girl growing up in the former Soviet Union, Evelina Fedorenko PhD ’07 studied several languages, including English, as her mother hoped that it would give her the chance to eventually move abroad for better opportunities.

Her language studies not only helped her establish a new life in the United States as an adult, but also led to a lifelong interest in linguistics and how the brain processes language. Now an associate professor of brain and cognitive sciences at MIT, Fedorenko studies the brain’s language-processing regions: how they arise, whether they are shared with other mental functions, and how each region contributes to language comprehension and production.

Fedorenko’s early work helped to identify the precise locations of the brain’s language-processing regions, and she has been building on that work to generate insight into how different neuronal populations in those regions implement linguistic computations.

“It took a while to develop the approach and figure out how to quickly and reliably find these regions in individual brains, given this standard problem of the brain being a little different across people,” she says. “Then we just kept going, asking questions like: Does language overlap with other functions that are similar to it? How is the system organized internally? Do different parts of this network do different things? There are dozens and dozens of questions you can ask, and many directions that we have pushed on.”

Among some of the more recent directions, she is exploring how the brain’s language-processing regions develop early in life, through studies of very young children, people with unusual brain architecture, and computational models known as large language models.

From Russia to MIT

Fedorenko grew up in the Russian city of Volgograd, which was then part of the Soviet Union. When the Soviet Union broke up in 1991, her mother, a mechanical engineer, lost her job, and the family struggled to make ends meet.

“It was a really intense and painful time,” Fedorenko recalls. “But one thing that was always very stable for me is that I always had a lot of love, from my parents, my grandparents, and my aunt and uncle. That was really important and gave me the confidence that if I worked hard and had a goal, that I could achieve whatever I dreamed about.”

Fedorenko did work hard in school, studying English, French, German, Polish, and Spanish, and she also participated in math competitions. As a 15-year-old, she spent a year attending high school in Alabama, as part of a program that placed students from the former Soviet Union with American families. She had been thinking about applying to universities in Europe but changed her plans when she realized the American higher education system offered more academic flexibility.

After being admitted to Harvard University with a full scholarship, she returned to the United States in 1998 and earned her bachelor’s degree in psychology and linguistics, while also working multiple jobs to send money home to help her family.

While at Harvard, she also took classes at MIT and ended up deciding to apply to the Institute for graduate school. For her PhD research at MIT, she worked with Ted Gibson, a professor of brain and cognitive sciences, and later, Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience. She began by using functional magnetic resonance imaging (fMRI) to study brain regions that appeared to respond preferentially to music, but she soon switched to studying brain responses to language.

Working with Kanwisher, who studies the functional organization of the human brain but hadn’t worked much on language before, helped Fedorenko build a research program free of potential biases baked into some of the early work on language processing in the brain.

“We really kind of started from scratch,” Fedorenko says, “combining the knowledge of language processing I have gained by working with Gibson and the rigorous neuroscience approaches that Kanwisher had developed when studying the visual system.”

After finishing her PhD in 2007, Fedorenko stayed at MIT for a few years as a postdoc funded by the National Institutes of Health, continuing her research with Kanwisher. During that time, she and Kanwisher developed techniques to identify language-processing regions in different people, and discovered new evidence that certain parts of the brain respond selectively to language. Fedorenko then spent five years as a research faculty member at Massachusetts General Hospital, before receiving an offer to join the faculty at MIT in 2019.

How the brain processes language

Since starting her lab at MIT’s McGovern Institute for Brain Research, Fedorenko and her trainees have made several discoveries that have helped to refine neuroscientists’ understanding of the brain’s language-processing regions, which are spread across the left frontal and temporal lobes of the brain.

In a series of studies, her lab showed that these regions are highly selective for language and are not engaged by activities such as listening to music, reading computer code, or interpreting facial expressions, all of which have been argued to share similarities with language processing.

“We’ve separated the language-processing machinery from various other systems, including the system for general fluid thinking, and the systems for social perception and reasoning, which support the processing of communicative signals, like facial expressions and gestures, and reasoning about others’ beliefs and desires,” Fedorenko says. “So that was a significant finding, that this system really is its own thing.”

More recently, Fedorenko has turned her attention to figuring out, in more detail, the functions of different parts of the language-processing network. In one recent study, she identified distinct neuronal populations within these regions that appear to have different temporal windows for processing linguistic content, ranging from just one word up to six words.

She is also studying how language-processing circuits arise in the brain, with ongoing studies in which she and a postdoc in her lab are using fMRI to scan the brains of young children, observing how their language regions behave even before the children have fully learned to speak and understand language.

Large language models (similar to ChatGPT) can help with these types of developmental questions, as the researchers can better control the language inputs to the model and have continuous access to its abilities and representations at different stages of learning.

“You can train models in different ways, on different kinds of language, in different kinds of regimens. For example, training on simpler language first and then more complex language, or on language combined with some visual inputs. Then you can look at the performance of these language models on different tasks, and also examine changes in their internal representations across the training trajectory, to test which model best captures the trajectory of human language learning,” Fedorenko says.
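
In code, the kind of experiment Fedorenko describes might look like the following minimal sketch, which trains a tiny language model on simpler sentences first and then on more complex ones, checkpointing the model after each stage so its internal representations can be probed across the training trajectory. The toy corpora, model architecture, and probe below are illustrative assumptions, not the lab’s actual setup.

# Minimal sketch (not the lab's actual pipeline): train a tiny language model
# under a curriculum -- simpler language first, then more complex language --
# and checkpoint it at each stage so internal representations can be probed.
# The corpora, model size, and probe are illustrative assumptions.
import torch
import torch.nn as nn

# Toy corpora standing in for "simpler" and "more complex" language input.
simple_corpus = ["the cat sat", "the dog ran", "a cat ran"]
complex_corpus = ["the cat that the dog chased sat", "a dog the cat saw ran"]

vocab = sorted({w for s in simple_corpus + complex_corpus for w in s.split()})
stoi = {w: i for i, w in enumerate(vocab)}

def encode(sentences):
    return [torch.tensor([stoi[w] for w in s.split()]) for s in sentences]

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, ids):                    # ids: (batch, seq)
        hidden, _ = self.rnn(self.embed(ids))  # (batch, seq, dim)
        return self.head(hidden), hidden       # next-word logits + representations

def train_stage(model, opt, sentences, epochs=50):
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for ids in encode(sentences):
            ids = ids.unsqueeze(0)
            logits, _ = model(ids[:, :-1])     # predict each next word
            loss = loss_fn(logits.reshape(-1, len(vocab)), ids[:, 1:].reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Curriculum: simple language first, then complex; save a checkpoint after
# each stage and probe the model's representation of a fixed test sentence.
for stage, corpus in [("simple", simple_corpus), ("complex", complex_corpus)]:
    train_stage(model, opt, corpus)
    torch.save(model.state_dict(), f"checkpoint_{stage}.pt")
    with torch.no_grad():
        _, reps = model(encode(["the cat sat"])[0].unsqueeze(0))
    print(stage, "probe representation norm:", reps.norm().item())

Comparing which training regimen yields representations and task performance that best track children’s developing language abilities is the kind of test this setup enables.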

To gain another window into how the brain develops language ability, Fedorenko launched the Interesting Brains Project several years ago. Through this project, she is studying people who experienced some type of brain damage early in life, such as a prenatal stroke, or brain deformation as a result of a congenital cyst. In some of these individuals, the damage destroyed or significantly deformed the brain’s typical language-processing areas, yet all of them are cognitively indistinguishable from people with typical brains: They still learned to speak and understand language normally, and in some cases they didn’t even realize their brains were in some way atypical until they were adults.

“That study is all about plasticity and redundancy in the brain, trying to figure out what brains can cope with, and how,” Fedorenko says. “Are there many solutions to build a human mind, even when the neural infrastructure is so different-looking?”

To the brain, Esperanto and Klingon appear the same as English or Mandarin

Within the human brain, a network of regions has evolved to process language. These regions are consistently activated whenever people listen to their native language or any language in which they are proficient.

A new study by MIT researchers finds that this network also responds to languages that are completely invented, such as Esperanto, which was created in the late 1800s as a way to promote international communication, and even to languages made up for television shows such as “Star Trek” and “Game of Thrones.”

To study how the brain responds to these artificial languages, MIT neuroscientists convened nearly 50 speakers of these languages over a single weekend. Using functional magnetic resonance imaging (fMRI), the researchers found that when participants listened to a constructed language in which they were proficient, the same brain regions lit up as those activated when they processed their native language.

“We find that constructed languages very much recruit the same system as natural languages, which suggests that the key feature that is necessary to engage the system may have to do with the kinds of meanings that both kinds of languages can express,” says Evelina Fedorenko, an associate professor of neuroscience at MIT, a member of MIT’s McGovern Institute for Brain Research and the senior author of the study.

The findings help to define some of the key properties of language, the researchers say, and suggest that, to engage the brain’s language network, a language need not have naturally evolved over a long period of time or have a large number of speakers.

“It helps us narrow down this question of what a language is, and do it empirically, by testing how our brain responds to stimuli that might or might not be language-like,” says Saima Malik-Moraleda, an MIT postdoc and the lead author of the paper, which appears this week in the Proceedings of the National Academy of Sciences.

Convening the conlang community

Unlike natural languages, which evolve within communities and are shaped over time, constructed languages, or “conlangs,” are typically created by one person who decides what sounds will be used, how to label different concepts, and what the grammatical rules are.

Esperanto, the most widely spoken conlang, was created in 1887 by L.L. Zamenhof, who intended it to be used as a universal language for international communication. Currently, it is estimated that around 60,000 people worldwide are proficient in Esperanto.

In previous work, Fedorenko and her students have found that computer programming languages, such as Python — another type of invented language — do not activate the brain network that is used to process natural language. Instead, people who read computer code rely on the so-called multiple demand network, a brain system that is often recruited for difficult cognitive tasks.

Fedorenko and others have also investigated how the brain responds to other stimuli that share features with language, including music and nonverbal communication such as gestures and facial expressions.

“We spent a lot of time looking at all these various kinds of stimuli, finding again and again that none of them engage the language-processing mechanisms,” Fedorenko says. “So then the question becomes, what is it that natural languages have that none of those other systems do?”

That led the researchers to wonder if artificial languages like Esperanto would be processed more like programming languages or more like natural languages. Similar to programming languages, constructed languages are created by an individual for a specific purpose, without natural evolution within a community. However, unlike programming languages, both conlangs and natural languages can be used to convey meanings about the state of the external world or the speaker’s internal state.

To explore how the brain processes conlangs, the researchers invited speakers of Esperanto and several other constructed languages to MIT for a weekend conference in November 2022. The other languages included Klingon (from “Star Trek”), Na’vi (from “Avatar”), and two languages from “Game of Thrones” (High Valyrian and Dothraki). For all of these languages, there are texts available for people who want to learn the language, and for Esperanto, Klingon, and High Valyrian, there is even a Duolingo app available.

“It was a really fun event where all the communities came to participate, and over a weekend, we collected all the data,” says Malik-Moraleda, who co-led the data collection effort with former MIT postbac Maya Taliaferro, now a PhD student at New York University.

During that event, which also featured talks from several of the conlang creators, the researchers used fMRI to scan 44 conlang speakers as they listened to sentences from the constructed language in which they were proficient. The creators of these languages — who are co-authors on the paper — helped construct the sentences that were presented to the participants.

While in the scanner, the participants also either listened to or read sentences in their native language, and performed some nonlinguistic tasks for comparison. The researchers found that when people listened to a conlang, the same language regions in the brain were activated as when they listened to their native language.
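
For a sense of what such a comparison looks like computationally, here is a minimal sketch of a first-level fMRI contrast analysis using the open-source nilearn library. The file name, event timings, and condition labels are hypothetical placeholders, and nothing here should be read as the study’s actual analysis code.

# Minimal sketch (assumed, not the study's code): fit a first-level GLM to
# one participant's fMRI run and contrast listening conditions, e.g. a
# conlang vs. a nonlinguistic baseline task. File name, timing, and
# condition labels are hypothetical placeholders.
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Hypothetical event timing: onsets and durations in seconds, one row per block.
events = pd.DataFrame({
    "onset":      [0, 24, 48, 72],
    "duration":   [18, 18, 18, 18],
    "trial_type": ["conlang", "native", "nonlinguistic", "conlang"],
})

# Fit a standard first-level model to a (hypothetical) preprocessed run.
model = FirstLevelModel(t_r=2.0, smoothing_fwhm=4.0)
model = model.fit("sub-01_task-listening_bold.nii.gz", events=events)

# Voxelwise contrasts: if conlangs engage the language network, the regions
# that respond more to the conlang than to the nonlinguistic baseline should
# overlap with those from the analogous native-language contrast.
z_map_conlang = model.compute_contrast("conlang - nonlinguistic")
z_map_native = model.compute_contrast("native - nonlinguistic")

In practice, Fedorenko’s lab first localizes the language regions individually in each participant, using the techniques described earlier in this article, and then compares condition responses within those regions rather than across the whole brain.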

Common features

The findings help to identify some of the key features that are necessary to recruit the brain’s language-processing areas, the researchers say. One of the main characteristics driving language responses seems to be the ability to convey meanings about the interior and exterior world — a trait that is shared by natural and constructed languages, but not programming languages.

“All of the languages, both natural and constructed, express meanings related to inner and outer worlds. They refer to objects in the world, to properties of objects, to events,” Fedorenko says. “Whereas programming languages are much more similar to math. A programming language is a symbolic generative system that allows you to express complex meanings, but it’s a self-contained system: The meanings are highly abstract and mostly relational, and not connected to the real world that we experience.”

Some other characteristics of natural languages, which are not shared by constructed languages, don’t seem to be necessary to generate a response in the language network.

“It doesn’t matter whether the language is created and shaped over time by a community of speakers, because these constructed languages are not,” Malik-Moraleda says. “It doesn’t matter how old they are, because conlangs that are just a decade old engage the same brain regions as natural languages that have been around for many hundreds of years.”

To further refine the features of language that activate the brain’s language network, Fedorenko’s lab is now planning to study how the brain responds to a conlang called Lojban, which was created by the Logical Language Group in the 1990s and was designed to prevent ambiguity in meaning and promote more efficient communication.

The research was funded by MIT’s McGovern Institute for Brain Research, Brain and Cognitive Sciences Department, the Simons Center for the Social Brain, the Frederick A. and Carole J. Middleton Career Development Professorship, and the U.S. National Institutes of Health.