All the connections

Neuroscientists today have the most spectacular views of brains that the field has ever seen. Modern microscopes can reveal extraordinary levels of detail, offering scientists another piece of the vast and intricate puzzle of how neurons interconnect.

A comprehensive wiring diagram of the brain — its connectome — is an atlas for neuroscientists, guiding investigations into how neural circuitry works. Microscope images are the raw data for generating that atlas, but it takes powerful computers and shrewd scientists, like the McGovern Institute’s newest investigator, Sven Dorkenwald, to make sense of it all.

All 139,255 neurons in the brain of an adult fruit fly reconstructed by the FlyWire Consortium, with each neuron uniquely color-coded. Render by Tyler Sloan. Image: Sven Dorkenwald

A monumental task

Many disorders of the human brain are related to breakdowns that affect the connections of neurons with one another. An atlas will help researchers identify and study the function of those connections — down to the level of synapses — and explore what happens when things go wrong. When researchers understand which brain cells interact with one another, they can ask more sophisticated questions about how those cells work together to process information, store memories, or modulate our emotions.

Until recently, generating a complete connectome for any animal was nearly impossible. Electron microscopes capture fine details of cellular structures, down to the slender branches and tiny protrusions that neurons use to reach out and communicate with one another. But to see those features clearly, microscopes have to zoom way in, focusing solely on a thin slice of one small part of the brain at a time.

Isolated images like these don’t reveal much on their own. They are a jumble of bits and pieces of cells — a cross-section removed from the context of its surroundings. Neurons’ paths must be traced through millions of images to reconstruct the brain’s three-dimensional networks and ultimately, reveal how its individual cells connect with one another. This is a monumental task, because even the poppy seed-sized brain of a fruit fly contains more than 50 million synapses.

The fly connectome
The 50 largest neurons in the adult fruit fly reconstructed by the FlyWire Consortium, spearheaded by Dorkenwald. Image: Sven Dorkenwald, Tyler Sloan

Remarkably, all of those connections in the fruit fly’s tiny brain are now mapped, thanks in large part to Dorkenwald’s efforts as a PhD student at Princeton University. Together with professors Sebastian Seung and Mala Murthy, Dorkenwald spearheaded FlyWire, a consortium of hundreds of scientists who charted the circuitry, following the fly’s neurons through 21 million microscope images. Neuroscientists around the world now use that connectome, which was completed in 2024, to understand how information flows through the fruit fly brain and shed light on parallel processes in our own brains.

AI tools and teamwork

McGovern Investigator Sven Dorkenwald. Photo: Steph Stevens

Getting from millions of microscope images to a complete wiring diagram of the fly brain required the development of innovative new tools and an extraordinary level of teamwork. Dorkenwald, who was recently named one of STAT’s 2025 Wunderkinds, an award that celebrates outstanding early-career scientists, was instrumental in both.

Dorkenwald’s first experience mapping neural circuits was as a physics undergraduate at Heidelberg University, tracing neurons in a targeted area of a zebra finch brain. The lab wanted a map to help them understand how birds learn and repeat their courtship songs. Tracing neurons was, at the time, painstaking work. Dorkenwald and his fellow students would manually follow the path of a single cell as it passed across adjacent microscope images, noting each branch point to return to for further mapping.

Today, the process has accelerated greatly, with artificial intelligence (AI) tools taking over most of the work. But those tools make mistakes, and it’s up to humans to find and correct them.

Dorkenwald encountered this obstacle as a graduate student in Seung’s lab at Princeton, where he studied computer science and neuroscience. Before FlyWire, the lab was part of a collaborative effort called the MICrONS consortium, which included teams at the Allen Institute and Baylor College of Medicine and aimed to map all the connections within a cubic millimeter of the mouse visual cortex. Size alone made this a daunting task: a cubic millimeter of a mouse brain is ten times the size of a fly brain. Dorkenwald and colleagues developed the infrastructure the team needed to proofread and analyze this shared dataset.

Their system, which they call CAVE (Connectome Annotation Versioning Engine), allowed the team to expand its proofreading community far beyond the three labs that drove the project, involving many neuroscientists who were interested in different parts of the circuitry. “We basically opened up this dataset to anybody who wanted to join,” Dorkenwald says. When they later deployed CAVE to enable community-wide proofreading for the fly connectome, citizen scientists got involved, and paid proofreaders joined the mix to fill in gaps in the map. It has since become an essential tool in the connectomics field.

The MICrONS consortium ultimately reconstructed more than a half billion synapses in that cubic millimeter of mouse tissue. What’s more, researchers added another level of information to the map, incorporating data on neuronal activity recorded from the very mouse whose brain had been imaged for the project, enabling new studies that relate a circuit’s structure to its function. These results, published earlier this year, represent another milestone for the field.

A single neuron reconstructed from thousands of serial section electron microscope images of the mouse visual cortex for the MICrONS consortium. Image: Sven Dorkenwald

Dorkenwald says this newly mapped piece of the mouse connectome is large enough that scientists can begin to see and analyze neural circuits. Still, zeroing in on a cubic millimeter within the mouse’s pea-sized brain means most of what’s visible is parts of cells, which can leave scientists struggling to identify exactly what they’re looking at. Dorkenwald says bits of cells can reveal their identities with their particular shapes and ultrastructural contents, such as vesicles and mitochondria. However, humans can’t necessarily make sense of these subtle features on their own. An AI tool that he developed called SegCLR (segmentation-guided contrastive learning of representations) decodes these clues.
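
The idea behind contrastive representation learning, the technique SegCLR’s name refers to, can be illustrated with a generic sketch. The code below is not SegCLR itself; it is a toy InfoNCE-style objective on made-up embedding vectors, showing how the embedding of a cell fragment is pulled toward a fragment known to come from the same cell and pushed away from fragments of other cells.

```python
# Toy InfoNCE-style contrastive objective (illustrative only; not SegCLR).
# anchor/positive are embeddings of two fragments of the same cell;
# negatives are embeddings of fragments from other cells.
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([cosine(anchor, positive)] +
                      [cosine(anchor, neg) for neg in negatives]) / temperature
    # Cross-entropy with the positive pair in slot 0: -log softmax(logits)[0]
    return -logits[0] + np.log(np.exp(logits).sum())

rng = np.random.default_rng(0)
dim = 64
loss = info_nce(rng.normal(size=dim), rng.normal(size=dim), rng.normal(size=(16, dim)))
print(round(loss, 3))  # lower loss means same-cell fragments embed closer together
```

Minimizing an objective like this over many fragment pairs yields embeddings in which pieces of the same cell, or of the same cell type, land near one another, which is roughly the kind of representation such a tool needs in order to decode subtle shape and ultrastructure cues.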

SegCLR is one way Dorkenwald is applying his computational expertise to make sense of connectomes and integrate new kinds of information into the maps — work that he continued as a fellow at the Allen Institute after earning his PhD at Princeton.

“A connectome alone is not enough,” he says. “If you would just look at a connectome of a brain, it would look like white noise at first. You have to put order into the system to understand its parts.”

Searching for meaning

In January 2026, Dorkenwald will join MIT as an assistant professor of brain and cognitive sciences and an investigator at the McGovern Institute. He will be digging into the connectomes he has helped produce, developing new computational approaches to look for organizational principles within the circuitry. “We will be asking hard questions about the circuits we reconstruct,” he says. “The connections that we are seeing contribute to interesting and important computations. What are the circuit motifs that allow them to do that? What’s the architecture of the circuit within layers, across layers, and ultimately, across regions? That is what I want to get at.”

An infographic comparing the fruit fly brain to the mouse brain.

While there’s plenty of data to work with, he’s also eager to continue scaling up connectomics. He thinks a complete connectome of the mouse brain is achievable within 10 to 15 years — but it’s going to require a lot of collaboration. “The area we’re working in is still very new,” he says. “There’s a lot of room to approach things in new ways and solve problems that are very large, in ways that move an entire field forward.”

As the technology advances, Dorkenwald plans to compare connectomes across individuals to better understand variations in circuitry, including the changes that occur in individuals with neurological or psychiatric disorders.

To help make that possible, he plans to design new AI approaches to automate proofreading, which remains a bottleneck for connectomics: even a community-wide effort would be too slow to manually proofread a map of the entire mouse brain. For this, Dorkenwald will turn to data from past proofreaders, who have already made millions of manual edits to connectomes, and train AI tools to mimic their work.
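
One plausible way to use such an edit history, sketched below purely as an illustration and not as a description of Dorkenwald’s actual method, is to treat each past proofreader decision as a labeled example and fit a classifier that predicts whether a candidate edit, say a proposed merge of two segments, should be accepted.

```python
# Hypothetical sketch: learn from past proofreader decisions.
# Features and labels are simulated placeholders, not real connectome data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_edits = 5000
# e.g., local geometry/image features describing each candidate merge or split
features = rng.normal(size=(n_edits, 12))
# whether a human proofreader accepted the edit (placeholder rule for the demo)
accepted = (features[:, 0] + 0.5 * features[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(features, accepted, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 3))
```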

Dorkenwald says his career in connectomics began with a sense of wonder, back when he was tracing neurons through images of the zebra finch brain. “Every time you asked about what is in there, and nobody knew, there was so much that felt undiscovered,” he remembers. Now, he’s making all the information hidden within those images more accessible: “If we can just extract it, I think we can make sense of it.”

Celebrating worm science

For decades, scientists with big questions about biology have found answers in a tiny worm. That worm–a millimeter-long creature called Caenorhabditis elegans–has helped researchers uncover fundamental features of how cells and organisms work. The impact of that work is enormous: Discoveries made using C. elegans have been recognized with four Nobel prizes and have led to the development of new treatments for human disease.

McGovern Investigator Robert Horvitz shared the 2002 Nobel Prize in Medicine with colleagues Sydney Brenner and John Sulston for discoveries that helped explain how genes regulate programmed cell death and organ development. Photo: AP Images/Aynsley Floyd

In a perspective piece published in the November 2025 issue of the journal PNAS, eleven biologists including Robert Horvitz, the David H. Koch (1962) Professor of Biology at MIT, celebrate Nobel Prize-winning advances made through research in C. elegans. The authors discuss how that work has led to advances for human health and highlight how a uniquely collaborative community among worm researchers has fueled the field.

MIT scientists are well represented in that community: The prominent worm biologists who coauthored the PNAS paper include former MIT graduate students Andy Fire and Paul Sternberg, now at Stanford University and the California Institute of Technology, and two past postdoctoral researchers in Horvitz’s lab, University of Massachusetts Medical School professor Victor Ambros and Massachusetts General Hospital investigator Gary Ruvkun. Ann Rougvie at the University of Minnesota is the paper’s corresponding author.

Early worm discoveries

“This tiny worm is beautiful—elegant both in its appearance and in its many contributions to our understanding of the biological universe in which we live,” says Horvitz, who in 2002 was awarded the Nobel Prize in Medicine along with colleagues Sydney Brenner and John Sulston for discoveries that helped explain how genes regulate programmed cell death and organ development. Horvitz is also a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research as well as an investigator at the Howard Hughes Medical Institute.

Those discoveries were among the early successes in C. elegans research, made by pioneering scientists who recognized the power of the microscopic roundworm. C. elegans offers many advantages for researchers: The worms are easy to grow and maintain in labs; their transparent bodies make cells and internal processes readily visible under a microscope; they are cellularly very simple (e.g., they have only 302 nerve cells, compared with about 100 billion in a human); and their genomes can be readily manipulated to study gene function.

Caenorhabditis elegans, a transparent roundworm only 1mm in length, has provided answers to many fundamental questions in biology. Image: Robert Horvitz

Most importantly, many of the molecules and processes that operate in C. elegans have been retained throughout evolution, meaning discoveries made using the worm can have direct relevance to other organisms, including humans. “Many aspects of biology are ancient and evolutionarily conserved,” Horvitz explains. “Such shared mechanisms can be most readily revealed by analyzing organisms that are highly tractable in the laboratory.”

In the 1960s, Brenner, a molecular biologist who was curious about how animals’ nervous systems develop and function, recognized that C. elegans offered unique opportunities to study these processes. Once he began developing the worm into a model for laboratory studies, it did not take long for other biologists to join him to take advantage of the new system.

In the 1970s, the unique features of the worm allowed Sulston to track the transformation of a fertilized egg into an adult animal, tracing the origins of each of the adult worm’s 959 cells. His studies revealed that in every developing worm, cells divide and mature in predictable ways. He also learned that some of the cells created during development do not survive into adulthood and are instead eliminated by a process termed programmed cell death.

“This tiny worm is beautiful—elegant both in its appearance and in its many contributions to our understanding of the biological universe in which we live,” says Horvitz.

By seeking mutations that perturbed the process of programmed cell death, Horvitz and his colleagues identified key regulators of that process, which is sometimes referred to as apoptosis. These regulators, which both promote and oppose apoptosis, turned out to be vital for programmed cell death across the animal kingdom.

In humans, apoptosis shapes developing organs, refines brain circuits, and optimizes other tissue structures. It also modulates our immune systems and eliminates cells that are in danger of becoming cancerous. The human version of CED-9, the anti-apoptotic regulator that Horvitz’s team discovered in worms, is BCL-2. Researchers have shown that activating apoptotic cell death by blocking BCL-2 is an effective treatment for certain blood cancers. Today, researchers are also exploring new ways of treating immune disorders and neurodegenerative disease by manipulating apoptosis pathways.

Collaborative worm community

Horvitz and his colleagues’ discoveries about apoptosis helped demonstrate that understanding C. elegans biology has direct relevance to human biology and disease. Since then, a vibrant and closely connected community of worm biologists—including many who trained in Horvitz’s lab—has continued to carry out impactful work. In their PNAS article, Horvitz and his coauthors highlight that early work, as well as the Nobel Prize-winning work of:

  • Andrew Fire and Craig Mello, whose discovery of an RNA-based system of gene silencing led to powerful new tools to manipulate gene activity. The innate process they discovered in worms, known as RNA interference, is now used as the basis of six FDA-approved therapeutics for genetic disorders, silencing faulty genes to stop their harmful effects.
  • Martin Chalfie, who used a fluorescent protein made by jellyfish to visualize and track specific cells in C. elegans, helping launch the development of a set of tools that transformed biologists’ ability to observe molecules and processes that are important for both health and disease.
  • Victor Ambros and Gary Ruvkun, who discovered a class of molecules called microRNAs that regulate gene activity not just in worms, but in all multicellular organisms. This prize-winning work was started when Ambros and Ruvkun were postdoctoral researchers in Horvitz’s lab. Humans rely on more than 1,000 microRNAs to ensure our genes are used at the right times and places. Disruptions to microRNAs have been linked to neurological disorders, cancer, cardiovascular disease, and autoimmune disease, and researchers are now exploring how these small molecules might be used for diagnosis or treatment.

Horvitz and his coauthors stress that while the worm itself made these discoveries possible, so too did a host of resources that facilitate collaboration within the worm community and enable its scientists to build upon the work of others. Scientists who study C. elegans have embraced this open, collaborative spirit since the field’s earliest days, Horvitz says, citing the Worm Breeder’s Gazette, an early newsletter where scientists shared their observations, methods, and ideas.

Today, scientists who study C. elegans—whether the organism is the centerpiece of their lab or they are looking to supplement studies of other systems—contribute to and rely on online resources like WormAtlas and WormBase, as well as the Caenorhabditis Genetics Center, to share data and genetic tools. Horvitz says these resources have been crucial to his own lab’s work; his team uses them every day.

WormAtlas provides users with numerous anatomical resources including tools to view electron microscopy slices of the same cell. Image: WormAtlas.org

Just as molecules and processes discovered in C. elegans have pointed researchers toward important pathways in human cells, the worm has also been a vital proving ground for developing methods and approaches later deployed to study more complex organisms. For example, C. elegans, with its 302 neurons, was the first animal for which neuroscientists successfully mapped all of the connections of the nervous system. The resulting wiring diagram, or connectome, has guided countless experiments exploring how neurons work together to process information and control behavior. Informed by both the power and limitations of the C. elegans connectome, scientists are now mapping more complex circuitry, such as the 139,000-neuron brain of the fruit fly, whose connectome was completed in 2024.

C. elegans remains a mainstay of biological research, including in neuroscience. Scientists worldwide are using the worm to explore new questions about neural circuits, neurodegeneration, development, and disease. Horvitz’s lab continues to turn to C. elegans to investigate the genes that control animal development and behavior. His team is now using the worm to explore how animals develop a sense of time and transmit that information to their offspring.

Also at MIT, Steven Flavell’s team in the Department of Brain and Cognitive Sciences and the Picower Institute for Learning and Memory is using the worm to investigate how neural connectivity, activity, and modulation integrate internal states, such as hunger, with sensory information, such as the smell of food, to produce sometimes long-lasting behaviors. Flavell is Horvitz’s academic grandson, as Flavell trained with one of Horvitz’s postdoctoral trainees. As new technologies accelerate the pace of scientific discovery, Horvitz and his colleagues are confident that the humble worm will bring more unexpected insights.


Who discovered neurons?

A self-portrait of Santiago Ramón y Cajal looking through a microscope. Image: CC 2.0

On this day, December 10th, nearly 120 years ago, Santiago Ramón y Cajal received a Nobel Prize for capturing and interpreting the very first images of the brain’s most essential components — neurons.

“Many scientists consider Cajal the progenitor of neuroscience because he was the first to really see the brain for what it was: a computational engine made up of individual units,” says Mark Harnett, an investigator at the McGovern Institute and an associate professor in the Department of Brain and Cognitive Sciences. His lab explores how the biophysical features of neurons enable them to perform complex computations that drive thought and behavior.

For Harnett, Cajal is one of the greatest scientific minds to have helped us understand ourselves and our place in the world. Cajal was the first to uncover what neurons look like and propose how they function — equipping the field to solve a slew of the mind’s mysteries. Scientists built on this framework to learn how these remarkable cells relay information — by zapping electrical signals to each other — so we can think, feel, move, communicate, and create.

From art to science and back again

Cajal was born on May 1, 1852, in a small village nestled in the Spanish countryside. It was there Cajal fell deeply and madly in love with … art. But his father was a physician, and urged him to trade his sketches for a scalpel. Begrudgingly, Cajal eventually did. After graduating from medical school in 1873, he worked as an army doctor, but around 1880, he turned his attention to studying the nervous system.

A Purkinje neuron from the human cerebellum. Image: Cajal Institute (CSIC), Madrid

Nineteenth-century scientists didn’t think of the brain as a network of cells but more as plumbing, like the blood vessels in the circulatory system — a series of hollow tubes through which information somehow flowed. Cajal and others were skeptical of this perspective, yet had no way of visualizing the brain at a detailed, cellular level to confirm their suspicions. Scientists at the time stained thin slices of tissue to make cells visible under a microscope, but even the most sophisticated methods stained all cells at once, leaving an indecipherable mass under the microscope’s lens.

This changed in 1887 when Cajal encountered a technique devised by Camillo Golgi that stained only some cells. “Rather than seeing all the cells simultaneously, you saw one at a time,” Harnett explains, making it easier to view a cell’s precise form (Golgi shared the 1906 Nobel Prize with Cajal for this method). If he could refine Golgi’s approach and apply it to neural tissue, Cajal thought, he might finally determine the brain’s architecture.

When he did, a remarkable landscape appeared — black bulbs with sprawling branches, each casting a stringy silhouette. The scene awakened a prior passion. While viewing brain slices under a microscope, Cajal drew what he saw, with surgical precision and an artist’s eye. He had captured — for the first time — the mind’s timberland of cells.

A new theory of the mind

Cajal’s illustrations revealed that brain cells did not form a singular plumbing network, but were distinctly separate, with small gaps between them. “This completely upended what people at the time thought about the brain,” Harnett explains. “It wasn’t made up of connected tubes, but individual cells,” which a few years later in 1891 would be called neurons. Over nearly five decades Cajal created around 2,900 drawings — a collage of neurons from humans and a menagerie of fauna: mice, pigeons, lizards, newts, and fish — spanning a host of cell types, from Purkinje cells to basket and chandelier interneurons.

“Part of Cajal’s genius was that he proposed what the incredible anatomical diversity among neurons meant. He reasoned that maybe one part of the cell could work like an antenna to take in signals, and another might be a cable to send signals out. Cajal was already thinking about input and output at neurons, and synapses as points of contact between them,” Harnett notes. “Each neuron becomes a very complex engine for computation, as opposed to tube-based things that can’t really compute.”

Cajal’s notion that the brain was a network of individual cells would come to be known as the neuron doctrine, a bedrock principle that underlies all of neuroscience today. In his autobiography, Cajal describes neurons as “the mysterious butterflies of the soul, the beating of whose wings may someday – who knows? – clarify the secret of mental life.” And in many ways, they have.

One of thousands of neuron illustrations created by Santiago Ramón y Cajal. Image: CC 2.0

One scientist’s enduring influence

Much of scientists’ current approach to studying the brain is guided by Cajal’s blueprint. This is certainly true for the Harnett lab. “As many in the field do, we share Cajal’s aspiration to apply cutting-edge imaging to reveal hidden aspects of the brain and hypothesize about their function,” Harnett says. “Thankfully, unlike Cajal, we now have the advantage of functional tests to try to validate our hypotheses.”

An ultra high resolution image of a neuron taken by the Harnett lab. Image: Mark Harnett

In a study published in 2022, the Harnett lab used a super-resolution imaging tool to find that filopodia — tiny structures that protrude from dendrites (the signal-receiving “antennas” of neurons) — were far more abundant in the brain than previously thought. Through a battery of tests, they found that these “silent synapses” can become active to facilitate new neural connections. Such pliable sites were believed to only be present very early in life, but the researchers observed filopodia in adult mice, suggesting that they support continuous learning and computational flexibility over the lifespan.

Harnett explains that Cajal’s impact extends beyond neuroscience. “Where does the power of artificial intelligence (AI) come from? It comes, originally, from Cajal.” It’s no wonder, he says, that AI uses neural networks — a mimicry of one of nature’s most powerful designs, first described by Cajal. “The idea that neurons are computational units is really critical to the power and complexity you can achieve within a network. Cajal even hypothesized that changing the strength of signaling between neurons was how learning worked, an idea that was later validated and became one of the critical insights for revolutionizing deep learning in AI.”

By unveiling what’s really happening beneath our skulls, Cajal’s work would both motivate and guide studies of the brain for over a hundred years to come. “Many of his early hypotheses have proven to be true decades and decades later,” Harnett says. “He has inspired, and continues to inspire, generations of neuroscientists.”


When it comes to language, context matters

In everyday conversation, it’s critical to understand not just the words that are spoken, but the context in which they are said. If it’s pouring rain and someone remarks on the “lovely weather,” you won’t understand their meaning unless you realize that they’re being sarcastic.

Making inferences about what someone really means when it doesn’t match the literal meaning of their words is a skill known as pragmatic language ability. This includes not only interpreting sarcasm but also understanding metaphors and white lies, among many other conversational subtleties.

McGovern Investigator Evelina Fedorenko. Photo: Alexandra Sokhina

“Pragmatics is trying to reason about why somebody might say something, and what is the message they’re trying to convey given that they put it in this particular way,” says Evelina Fedorenko, an MIT associate professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

New research from Fedorenko and her colleagues has revealed that these abilities can be grouped together based on what types of inferences they require. In a study of 800 people, the researchers identified three clusters of pragmatic skills that are based on the same kinds of inferences and may have similar underlying neural processes.

One of these clusters includes inferences that are based on our knowledge of social conventions and rules. Another depends on knowledge of how the physical world works, while the last requires the ability to interpret differences in tone, which can indicate emphasis or emotion.

Fedorenko and Edward Gibson, an MIT professor of brain and cognitive sciences, are the senior authors of the study, which appears today in the Proceedings of the National Academy of Sciences. The paper’s lead authors are Sammy Floyd, a former MIT postdoc who is now an assistant professor of psychology at Sarah Lawrence College, and Olessia Jouravlev, a former MIT postdoc who is now an associate professor of cognitive science at Carleton University.

The importance of context

Much past research on how people understand language has focused on processing the literal meanings of words and how they fit together. To really understand what someone is saying, however, we need to interpret those meanings based on context.

“Language is about getting meanings across, and that often requires taking into account many different kinds of information — such as the social context, the visual context, or the present topic of the conversation,” Fedorenko says.

As one example, the phrase “people are leaving” can mean different things depending on the context, Gibson points out. If it’s late at night and someone asks you how a party is going, you may say “people are leaving,” to convey that the party is ending and everyone’s going home.

“However, if it’s early, and I say ‘people are leaving,’ then the implication is that the party isn’t very good,” Gibson says. “When you say a sentence, there’s a literal meaning to it, but how you interpret that literal meaning depends on the context.”

About 10 years ago, with support from the Simons Center for the Social Brain at MIT, Fedorenko and Gibson decided to explore whether it might be possible to precisely distinguish the types of processing that go into pragmatic language skills.

One way that neuroscientists can approach a question like this is to use functional magnetic resonance imaging (fMRI) to scan the brains of participants as they perform different tasks. This allows them to link brain activity in different locations to different functions. However, the tasks that the researchers designed for this study didn’t easily lend themselves to being performed in a scanner, so they took an alternative approach.

This approach, known as “individual differences,” involves studying a large number of people as they perform a variety of tasks. This technique allows researchers to determine whether the same underlying brain processes may be responsible for performance on different tasks.

To do this, the researchers evaluate whether each participant tends to perform similarly on certain groups of tasks. For example, some people might perform well on tasks that require an understanding of social conventions, such as interpreting indirect requests and irony. The same people might do only so-so on tasks that require understanding how the physical world works, and poorly on tasks that require distinguishing meanings based on changes in intonation — the melody of speech. This would suggest that separate brain processes are being recruited for each set of tasks.
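
In computational terms, the individual-differences logic amounts to correlating performance across tasks and grouping tasks that vary together across people. The sketch below uses a random placeholder score matrix and an arbitrary choice of three clusters; it illustrates the approach, not the study’s actual pipeline.

```python
# Illustrative sketch of the individual-differences approach.
# scores: (participants x tasks) matrix of task performance (placeholder data).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
scores = rng.normal(size=(400, 20))               # 400 people, 20 pragmatic tasks

task_corr = np.corrcoef(scores, rowvar=False)     # how tasks co-vary across people
distance = 1.0 - task_corr                        # similar tasks -> small distance
np.fill_diagonal(distance, 0.0)

tree = linkage(squareform(distance, checks=False), method="average")
clusters = fcluster(tree, t=3, criterion="maxclust")
print(clusters)  # tasks sharing a label would reflect a shared underlying process
```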

The first phase of the study was led by Jouravlev, who assembled existing tasks that require pragmatic skills and created many more, for a total of 20. These included tasks that require people to understand humor and sarcasm, as well as tasks where changes in intonation can affect the meaning of a sentence. For example, someone who says “I wanted blue and black socks,” with emphasis on the word “black,” is implying that the black socks were forgotten.

“People really find ways to communicate creatively and indirectly and non-literally, and this battery of tasks captures that,” Floyd says.

Components of pragmatic ability

The researchers recruited study participants from an online crowdsourcing platform to perform the tasks, which took about eight hours to complete. From this first set of 400 participants, the researchers found that the tasks formed three clusters, related to social context, general knowledge of the world, and intonation. To test the robustness of the findings, the researchers continued the study with another set of 400 participants, with this second half run by Floyd after Jouravlev had left MIT.

With the second set of participants, the researchers found that tasks clustered into the same three groups. They also confirmed that differences in general intelligence, or in auditory processing ability (which is important for the processing of intonation), did not affect the outcomes that they observed.

In future work, the researchers hope to use brain imaging to explore whether the pragmatic components they identified are correlated with activity in different brain regions. Previous work has found that brain imaging often mirrors the distinctions identified in individual difference studies, but can also help link the relevant abilities to specific neural systems, such as the core language system or the theory of mind system.

This set of tests could also be used to study people with autism, who sometimes have difficulty understanding certain social cues. Such studies could determine more precisely the nature and extent of these difficulties. Another possibility could be studying people who were raised in different cultures, which may have different norms around speaking directly or indirectly.

“In Russian, which happens to be my native language, people are more direct. So perhaps there might be some differences in how native speakers of Russian process indirect requests compared to speakers of English,” Jouravlev says.

The research was funded by the Simons Center for the Social Brain at MIT, the National Institutes of Health, and the National Science Foundation.

Astrocyte diversity across space and time

McGovern Investigator Guoping Feng. Photo: Justin Knight

When it comes to brain function, neurons get a lot of the glory. But healthy brains depend on the cooperation of many kinds of cells. The most abundant of the brain’s non-neuronal cells are astrocytes, star-shaped cells with a lot of responsibilities. Astrocytes help shape neural circuits, participate in information processing, and provide nutrient and metabolic support to neurons. Individual cells can take on new roles throughout their lifetimes, and at any given time, the astrocytes in one part of the brain will look and behave differently than the astrocytes somewhere else.

After an extensive analysis by scientists at MIT’s McGovern Institute, neuroscientists now have an atlas detailing astrocytes’ dynamic diversity. Its maps depict the regional specialization of astrocytes across the brains of both mice and marmosets—two powerful models for neuroscience research—and show how their populations shift as brains develop, mature, and age. The study, reported in the November 20 issue of the journal Neuron, was led by Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences at MIT. This work was supported by the Hock E. Tan and K. Lisa Yang Center for Autism Research, part of the Yang Tan Collective at MIT, and the National Institutes of Health’s BRAIN Initiative.

Probing the unknown

“It’s really important for us to pay attention to non-neuronal cells’ role in health and disease,” says Feng, who is also the associate director of the McGovern Institute, the director of the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, and a member of the Broad Institute of MIT and Harvard. And indeed, these cells—once seen as merely supporting players—have gained more of the spotlight in recent years. Astrocytes are known to play vital roles in the brain’s development and function, and their dysfunction seems to contribute to many psychiatric disorders and neurodegenerative diseases. “But compared to neurons, we know a lot less—especially during development,” Feng adds.

Feng and Margaret Schroeder, a former graduate student in his lab, thought it was important to understand astrocyte diversity across three axes: space, time, and species. They knew from earlier work in the lab, done in collaboration with Steve McCarroll’s lab at Harvard and led by Fenna Krienen, then a member of McCarroll’s group, that in adult animals, different parts of the brain have distinctive sets of astrocytes.

“The natural question was, how early in development do we think this regional patterning of astrocytes starts?” Schroeder says.

To find out, she and her colleagues collected brain cells from mice and marmosets at six stages of life, spanning embryonic development to old age. For each animal, they sampled cells from four different brain regions: the prefrontal cortex, the motor cortex, the striatum, and the thalamus.

Then, working with Krienen, who is now an assistant professor at Princeton University, they analyzed the molecular contents of those cells, creating a profile of genetic activity for each one. That profile was based on the mRNA copies of genes found inside the cell, which are known collectively as the cell’s transcriptome. Determining which genes a cell is using and how active those genes are gives researchers insight into a cell’s function and is one way of defining its identity.
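
To give a rough sense of what grouping cells by their transcriptomes involves computationally, the sketch below reduces a hypothetical cells-by-genes count matrix and clusters the cells. It is illustrative only; real single-cell pipelines add quality control, normalization, and batch correction, and this is not the team’s analysis code.

```python
# Illustrative single-cell-style clustering on a placeholder count matrix.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
counts = rng.poisson(1.0, size=(1000, 2000)).astype(float)  # cells x genes
log_counts = np.log1p(counts)                                # tame the count skew

embedding = PCA(n_components=20).fit_transform(log_counts)   # compress to 20 dims
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(embedding)
print(np.bincount(labels))  # number of cells assigned to each putative population
```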

Dynamic diversity

After assessing the transcriptomes of about 1.4 million brain cells, the group focused in on the astrocytes, analyzing and comparing their patterns of gene expression. At every life stage, from before birth to old age, the team found regional specialization: Astrocytes from different brain regions had similar patterns of gene expression, which were distinct from those of astrocytes in other brain regions.

This regional specialization was also apparent in the distinct shapes of astrocytes in different parts of the brain, which the team was able to see with expansion microscopy, a high-resolution imaging method developed by McGovern colleague Edward Boyden that reveals fine cellular features.

Notably, the astrocytes in each region changed as animals matured. “When we looked at our late embryonic time point, the astrocytes were already regionally patterned. But when we compare that to the adult profiles, they had completely shifted again,” Schroeder says. “So there’s something happening over postnatal development.” The most dramatic changes the team detected occurred between birth and early adolescence, a period during which brains rapidly rewire as animals begin to interact with the world and learn from their experiences.

Maps generated by Feng’s team depict the regional specialization of astrocytes across the brains of both mice and marmosets—two powerful models for neuroscience research—and show how their populations shift as brains develop, mature, and age.

Feng and Schroeder suspect that the changes they observed may be driven by the neural circuits that are sculpted and refined as the brain matures. “What we think they’re doing is kind of adapting to their local neuronal niche,” Schroeder says. “The types of genes that they are upregulating and changing during development points to their interaction with neurons.” Feng adds that astrocytes may change their genetic programs in response to nearby neurons, or alternatively, they might help direct the development or function of local circuits as they adopt identities best suited to support particular neurons.

Both mouse and marmoset brains exhibited regional specialization of astrocytes and changes in those populations over time. But when the researchers looked at the specific genes whose activity defined various astrocyte populations, the data from the two species diverged. Schroeder calls this a note of caution for scientists who study astrocytes in animal models, and adds that the new atlas will help researchers assess the potential relevance of findings across species.

Beyond astrocytes

With a new understanding of astrocyte diversity, Feng says his team will pay close attention to how these cells are impacted by the disease-related genes they study and how those effects change during development. He also notes that the gene expression data in the atlas can be used to predict interactions between astrocytes and neurons. “This will really guide future experiments: how these cells’ interactions can shift with changes in the neurons or changes in the astrocytes,” he says.

The Feng lab is eager for other researchers to take advantage of the massive amounts of data they generated as they produced their atlas. Schroeder points out that the team analyzed the transcriptomes of all kinds of cells in the brain regions they studied, not just astrocytes. They are sharing their findings so researchers can use them to understand when and where specific genes are used in the brain, or dig in more deeply to further explore the brain’s cellular diversity.


The cost of thinking

Large language models (LLMs) like ChatGPT can write an essay or plan a menu almost instantly. But until recently, it was also easy to stump them. The models, which rely on language patterns to respond to users’ queries, often failed at math problems and were not good at complex reasoning. Suddenly, however, they’ve gotten a lot better at these things.

A new generation of LLMs known as reasoning models are being trained to solve complex problems. Like humans, they need some time to think through problems like these—and remarkably, scientists at MIT’s McGovern Institute have found that the kinds of problems that require the most processing from reasoning models are the very same problems that people need to take their time with. In other words, they report in the November 18 issue of the journal PNAS, the “cost of thinking” for a reasoning model is similar to the cost of thinking for a human.

The researchers, who were led by McGovern Institute Investigator Evelina Fedorenko, conclude that in at least one important way, reasoning models have a human-like approach to thinking. That, they note, is not by design. “People who build these models don’t care if they do it like humans. They just want a system that will robustly perform under all sorts of conditions and produce correct responses,” Fedorenko says.

“The fact that there’s some convergence is really quite striking.” — Evelina Fedorenko

Reasoning models

Like many forms of artificial intelligence, the new reasoning models are artificial neural networks: computational tools that learn how to process information when they are given data and a problem to solve. Artificial neural networks have been very successful at many of the tasks that the brain’s own neural networks do well—and in some cases, neuroscientists have discovered that those that perform best do share certain aspects of information processing in the brain. Still, some scientists argued that artificial intelligence was not ready to take on more sophisticated aspects of human intelligence.

“Up until recently, I was among the people saying, ‘these models are really good at things like perception and language, but it’s still going to be a long ways off until we have neural network models that can do reasoning,’” says Fedorenko, who is also an associate professor of brain and cognitive sciences at MIT. “Then these large reasoning models emerged and they seem to do much better at a lot of these thinking tasks, like solving math problems and writing pieces of computer code.”

Computational neuroscientist Andrea Gregor de Varda is a K. Lisa Yang ICoN Center Fellow and a postdoctoral researcher in Evelina Fedorenko’s lab. Photo: Steph Stevens

Andrea Gregor de Varda, a K. Lisa Yang ICoN Center Fellow and a postdoctoral researcher in Fedorenko’s lab, explains that reasoning models work out problems step by step. “At some point, people realized that models needed to have more space to perform the actual computations that are needed to solve complex problems,” he says. “The performance started becoming way, way stronger if you let the models break down the problems into parts.”

To encourage models to work through complex problems in steps that lead to correct solutions, engineers can use reinforcement learning. During their training, the models are rewarded for correct answers and penalized for wrong ones. “The models explore the problem space themselves,” de Varda says. “The actions that lead to positive rewards are reinforced, so that they produce correct solutions more often.”

Models trained in this way are much more likely than their predecessors to arrive at the same answers a human would when they are given a reasoning task. Their stepwise problem solving does mean reasoning models can take a bit longer to find an answer than the LLMs that came before—but since they’re getting right answers where the previous models would have failed, their responses are worth the wait.

The models’ need to take some time to work through complex problems already hints at a parallel to human thinking: if you demand that a person solve a hard problem instantaneously, they’d probably fail too. De Varda wanted to examine this relationship more systematically. So he gave reasoning models and human volunteers the same set of problems, and tracked not just whether they got the answers right, but also how much time or effort it took them to get there.

Time vs. tokens

This meant measuring how long it took people to respond to each question, down to the millisecond. For the models, de Varda used a different metric. It didn’t make sense to measure processing time, since this is more dependent on computer hardware than the effort the model puts into solving a problem. So instead, he tracked tokens, which are part of a model’s internal chain of thought. “They produce tokens that are not meant for the user to see and work on, but just to have some track of the internal computation that they’re doing,” de Varda explains.

“It’s as if they were talking to themselves.” — Andrea Gregor de Varda

Both humans and reasoning models were asked to solve seven different types of problems, like numeric arithmetic and intuitive reasoning. For each problem class, they were given many problems. The harder a given problem was, the longer it took people to solve it—and the longer it took people to solve a problem, the more tokens a reasoning model generated as it came to its own solution.

Likewise, the classes of problems that humans took longest to solve were the same classes of problems that required the most tokens for the models: arithmetic problems were the least demanding, whereas problems from the “ARC challenge,” in which pairs of colored grids represent a transformation that must be inferred and then applied to a new object, were the most costly for both people and models.
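
The core comparison boils down to a correlation between two per-problem measures: how long people took and how many tokens the model generated. The numbers in the sketch below are invented placeholders, one per problem type, not the study’s data.

```python
# Illustrative correlation of human "cost of thinking" with model token counts.
import numpy as np
from scipy.stats import pearsonr

human_rt_ms = np.array([950.0, 1200.0, 1850.0, 2700.0, 3100.0, 4300.0, 5200.0])
model_tokens = np.array([120, 180, 260, 510, 640, 900, 1400])

r, p = pearsonr(human_rt_ms, model_tokens)
print(f"r = {r:.2f}, p = {p:.4f}")  # a high r means the two costs rise together
```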

De Varda and Fedorenko say the striking match in the costs of thinking demonstrates one way in which reasoning models are thinking like humans. That doesn’t mean the models are recreating human intelligence, though. The researchers still want to know whether the models use similar representations of information to the human brain, and how those representations are transformed into solutions to problems. They’re also curious whether the models will be able to handle problems that require world knowledge that is not spelled out in the texts that are used for model training.

The researchers point out that even though reasoning models generate internal monologues as they solve problems, they are not necessarily using language to think. “If you look at the output that these models produce while reasoning, it often contains errors or some nonsensical bits, even if the model ultimately arrives at a correct answer. So the actual internal computations likely take place in an abstract, non-linguistic representation space, similar to how humans don’t use language to think,” he says.

Different bodies, similar strategies to maintain balance

Nidhi Seethapathi is an associate investigator at the McGovern Institute as well as the Frederick A. (1971) and Carole J. Middleton Career Development Assistant Professor in Brain and Cognitive Sciences and Electrical Engineering and Computer Science at MIT.

With every step we take, our brains are already thinking about the next one. If a bump in the terrain or a minor misstep has thrown us off balance, our stride may need to be altered to prevent a fall. Our two-legged posture makes maintaining stability particularly complex, which our brains solve in part by continually monitoring our bodies and adjusting where we place our feet.

Now, scientists at MIT’s McGovern Institute have determined that animals with very different bodies likely use a shared strategy to balance themselves when they walk.

McGovern Associate Investigator Nidhi Seethapathi and K. Lisa Yang ICoN Center Fellow Antoine De Comite found that humans, mice, and fruit flies all use an error-correction process to guide foot placement and maintain stability while walking. Their findings, published October 21, 2025, in the journal PNAS, could inform future studies exploring how the brain achieves stability during locomotion – bridging the gap between animal models and human balance.

Corrective action

Information must be integrated by the brain to keep us upright when we walk or run. Our steps must be continually adjusted according to the terrain, our desired speed, and our body’s current velocity and position in space.

“We rely on a combination of vestibular, proprioceptive, and visual information to build an estimate of our body’s state, determining if we are about to fall. Once we know the body’s state, we can decide which corrective actions to take,” explains Seethapathi, who is also the Frederick A. (1971) and Carole J. Middleton Career Development Assistant Professor in Brain and Cognitive Sciences and Electrical Engineering and Computer Science at MIT.

While humans are known to adjust where they place their feet to correct for errors, it is not known whether animals whose bodies are more stable do this, too.

Antoine De Comite is a K. Lisa Yang ICoN Postdoctoral Fellow in Nidhi Seethapathi’s lab at the McGovern Institute. Photo: Steph Stevens

To find out, Seethapathi and De Comite, who is a postdoctoral researcher in both Seethapathi’s and Guoping Feng’s labs, turned to locomotion data from mice, fruit flies, and humans shared by other labs, enabling an analysis across species that would otherwise be challenging. Importantly, Seethapathi notes, all the animals they studied were walking in everyday natural environments, such as around a room—not on a treadmill or over unusual terrain.

Even in these ordinary circumstances, missteps and minor imbalances are common, and the team’s analysis showed that these errors predicted where all of the animals placed their feet in subsequent steps, regardless of whether they had two, four, or six legs.

By tracking the animals’ bodies and the step-by-step placement of their feet, Seethapathi and De Comite were able to find a measure of error that informs each animal’s next step. “By taking this comparative approach, we’ve forced ourselves to come up with a definition of error that generalizes across species,” Seethapathi says. “An animal moves with an expected body state for a particular speed. If it deviates from that ideal state, that deviation—at any given moment—is the error.”

“It was surprising to find similarities across these three species, which, at first sight, look very different,” says De Comite.

“The methods themselves are surprising because we now have a pipeline to analyze foot placement and locomotion stability in any legged species,” explains De Comite, “which could lead to similar analyses in even more species in the future.”

The team’s data suggest that in all of the species in the study, placement of the feet is guided both by an error-correction process and the speed at which an animal is traveling. Steps tend to lengthen and feet spend less time on the ground as animals pick up their pace, while the width of each step seems to change largely to compensate for body-state errors.
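
A minimal version of this kind of analysis, with simulated data and hypothetical variable names rather than the paper’s pipeline, would regress each step’s width on the preceding body-state error and the walking speed:

```python
# Illustrative regression: step width as a function of body-state error and speed.
import numpy as np

rng = np.random.default_rng(1)
n_steps = 500
speed = rng.uniform(0.2, 1.0, n_steps)          # forward speed (arbitrary units)
body_error = rng.normal(0.0, 0.05, n_steps)     # deviation from the expected state
step_width = 0.8 * body_error + 0.1 * speed + rng.normal(0.0, 0.01, n_steps)

# Ordinary least squares: step_width ~ body_error + speed + intercept
X = np.column_stack([body_error, speed, np.ones(n_steps)])
coef, *_ = np.linalg.lstsq(X, step_width, rcond=None)
print("error gain:", round(coef[0], 2), "| speed gain:", round(coef[1], 2))
```

In real stepping data, a reliably nonzero error term of this kind would be the signature of the error-correction process described above.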

Now, Seethapathi says, we can look forward to future studies to explore how the dual control systems might be generated and integrated in the brain to keep moving bodies stable.

Studying how brains help animals move stably may also guide the development of more targeted strategies to help people improve their balance and, ultimately, prevent falls.

“In elderly individuals and individuals with sensorimotor disorders, minimizing fall risk is one of the major functional targets of rehabilitation,” says Seethapathi. “A fundamental understanding of the error correction process that helps us remain stable will provide insight into why this process falls short in populations with neural deficits.”

Identifying kids who need help learning to read isn’t as easy as A, B, C

In most states, schools are required to screen students as they enter kindergarten — a process that is meant to identify students who may need extra help learning to read. However, a new study by MIT researchers suggests that these screenings may not be working as intended in all schools.

The researchers’ survey of about 250 teachers found that many felt they did not receive adequate training to perform the tests, and about half reported that they were not confident that children who need extra instruction in reading end up receiving it.

When performed successfully, these screens can be essential tools to make sure children get the extra help they need to learn to read. However, the new findings suggest that many school districts may need to tweak how they implement the screenings and analyze the results, the researchers say.

“This result demonstrates the need to have a systematic approach for how the basic science on how children learn to read is translated into educational opportunity,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

Gabrieli is the senior author of the new open-access study, which appears today in Annals of Dyslexia. Ola Ozernov-Palchik, an MIT research scientist who is also a research assistant professor at Boston University Wheelock College of Education and Human Development, is the lead author of the study.

Boosting literacy

Over the past 20 years, national reading proficiency scores in the United States have trended up, but only slightly. In 2022, 33 percent of fourth-graders achieved reading proficiency, compared to 29 percent in 1992, according to the National Assessment of Educational Progress reading report card. (The highest level achieved in the past 20 years was 37 percent, in 2017.)

In hopes of boosting those rates, most states have passed laws requiring students to be screened for potential reading struggles early in elementary school. In most cases, the screenings are required two or three times per year, in kindergarten, first grade, and second grade.

These tests are designed to identify students who have difficulty with skills such as identifying letters and the sounds they make, blending sounds to make words, and recognizing words that rhyme. Students with low scores in these measures can then be offered extra interventions designed to help them catch up.

“The indicators of future reading disability or dyslexia are present as early as within the first few months of kindergarten,” Ozernov-Palchik says. “And there’s also an overwhelming body of evidence showing that interventions are most effective in the earliest grades.”

In the new study, the researchers wanted to evaluate how effectively these screenings are being implemented in schools. With help from the National Center for Improving Literacy, they posted on social media sites seeking classroom teachers and reading specialists who are responsible for administering literacy screening tests.

The survey respondents came from 39 states and represented public and private schools, located in urban, suburban, and rural areas. The researchers asked those teachers dozens of questions about their experience with the literacy screenings, including questions about their training, the testing process itself, and the results of the screenings.

One of the significant challenges reported by the respondents was a lack of training. About 75 percent reported that they received fewer than three hours of training on how to perform the screens, and 44 percent received no training at all or less than an hour of training.

“Under ideal conditions, there is an expert who trains the educators, they provide practice opportunities, they provide feedback, and they observe the educators administer the assessment,” Ozernov-Palchik says. “None of this was done in many of the cases.”

Instead, many educators reported that they spent their own time figuring out how to give the evaluations, sometimes working with colleagues. And, new hires who arrived at a school after the initial training was given were often left on their own to figure it out.

Another major challenge was suboptimal conditions for administering the tests. About 80 percent of teachers reported interruptions during the screenings, and 40 percent had to do the screens in noisy locations such as a school hallway. More than half of the teachers also reported technical difficulties in administering the tests, and that rate was higher among teachers who worked at schools with a higher percentage of students from low socioeconomic status (SES) backgrounds.

Teachers also reported difficulties when it came to evaluating students categorized as English language learners (ELL). Many teachers relayed that they hadn’t been trained on how to distinguish students who were having trouble reading from those who struggled on the tests because they didn’t speak English well.

“The study reveals that there’s a lot of difficulty understanding how to handle English language learners in the context of screening,” Ozernov-Palchik says. “Overall, those kids tend to be either over-identified or under-identified as needing help, but they’re not getting the support that they need.”

Unrealized potential

Most concerning, the researchers say, is that in many schools, the results of the screening tests are not being used to get students the extra help that they need. Only 44 percent of the teachers surveyed said that their schools had a formal process for creating intervention plans for students after the screening was performed.

“Even though most educators said they believe that screening is important to do, they’re not feeling that it has the potential to drive change the way that it’s currently implemented,” Ozernov-Palchik says.

In the study, the researchers recommended several steps that state legislatures or individual school districts can take to make the screening process run more smoothly and successfully.

“Implementation is the key here,” Ozernov-Palchik says. “Teachers need more support and professional development. There needs to be systematic support as they administer the screening. They need to have designated spaces for screening, and explicit instruction in how to handle children who are English language learners.”

The researchers also recommend that school districts train an individual to take charge of interpreting the screening results and analyzing the data, to make sure that the screenings are leading to improved success in reading.

In addition to advocating for those changes, the researchers are also working on a technology platform that uses artificial intelligence to provide more individualized instruction in reading, which could help students receive help in the areas where they struggle the most.

The research was funded by Schmidt Futures, the Chan Zuckerberg Initiative for the Reach Every Reader project, and the Halis Family Foundation.

New MIT initiative seeks to transform rare brain disorders research

More than 300 million people worldwide are living with rare disorders — many of which have a genetic cause and affect the brain and nervous system — yet the vast majority of these conditions lack an approved therapy. Because each rare disorder affects fewer than 65 out of every 100,000 people, studying these disorders and creating new treatments for them is especially challenging.

Thanks to a generous philanthropic gift from Ana Méndez ’91 and Rajeev Jayavant ’86, EE ’88, SM ’88, MIT is now poised to fill the gaps in this research landscape. By establishing the Rare Brain Disorders Nexus — or RareNet — at MIT’s McGovern Institute, the alumni aim to convene leaders in neuroscience research, clinical medicine, patient advocacy, and industry to streamline the lab-to-clinic pipeline for rare brain disorder treatments.

“Ana and Rajeev’s commitment to MIT will form crucial partnerships to propel the translation of scientific discoveries into promising therapeutics and expand the Institute’s impact on the rare brain disorders community,” says MIT President Sally Kornbluth. “We are deeply grateful for their pivotal role in advancing such critical science and bringing attention to conditions that have long been overlooked.”

Building new coalitions

Several hurdles have slowed the lab-to-clinic pipeline for rare brain disorder research. It is difficult to secure a sufficient number of patients per study, and current research efforts are fragmented since each study typically focuses on a single disorder (there are more than 7,000 known rare disorders, according to the World Health Organization). Pharmaceutical companies are often reluctant to invest in emerging treatments due to a limited market size and the high costs associated with preparing drugs for commercialization.

Méndez and Jayavant envision that RareNet will finally break down these barriers. “Our hope is that RareNet will allow leaders in the field to come together under a shared framework and ignite scientific breakthroughs across multiple conditions. A discovery for one rare brain disorder could unlock new insights that are relevant to another,” says Jayavant. “By congregating the best minds in the field, we are confident that MIT will create the right scientific climate to produce drug candidates that may benefit a spectrum of uncommon conditions.”

Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor in Neuroscience and associate director of the McGovern Institute for Brain Research at MIT, will serve as RareNet’s inaugural faculty director. Feng holds a strong record of advancing studies on therapies for neurodevelopmental disorders, including autism spectrum disorders, Williams syndrome, and uncommon forms of epilepsy. His team’s gene therapy for Phelan-McDermid syndrome, a rare and profound autism spectrum disorder, has been licensed to Jaguar Gene Therapy and is currently undergoing clinical trials. “RareNet pioneers a unique model for biomedical research — one that is reimagining the role academia can play in developing therapeutics,” says Feng.

Image of SHANK3 therapy correctly finding its way to dendrites. Image: Guoping Feng
An early version of a gene therapy for SHANK3 mutations — linked to a rare brain disorder called Phelan-McDermid syndrome — correctly finds its way to neurons. Image: Feng lab

RareNet plans to deploy two major initiatives: a global consortium and a therapeutic pipeline accelerator. The consortium will form an international network of researchers, clinicians, and patient groups from the outset. It seeks to connect siloed research efforts, secure more patient samples, promote data sharing, and drive a strong sense of trust and goal alignment across the RareNet community. Partnerships within the consortium will support the aim of the therapeutic pipeline accelerator: to de-risk early lab discoveries and expedite their translation to clinic. By fostering more targeted collaborations — especially between academia and industry — the accelerator will prepare potential treatments for clinical use as efficiently as possible.

MIT labs are focusing on four uncommon conditions in the first wave of RareNet projects: Rett syndrome, prion disease, disorders linked to SYNGAP1 mutations, and Sturge-Weber syndrome. The teams are working to develop novel therapies that can slow, halt, or reverse dysfunctions in the brain and nervous system.

These efforts will build new bridges to connect key stakeholders across the rare brain disorders community and disrupt conventional research approaches. “Rajeev and I are motivated to seed powerful collaborations between MIT researchers, clinicians, patients, and industry,” says Méndez. “Guoping Feng clearly understands our goal to create an environment where foundational studies can thrive and seamlessly move toward clinical impact.”

“Patient and caregiver experiences, and our foreseeable impact on their lives, will guide us and remain at the forefront of our work,” Feng adds. “For far too long the rare brain disorders community has been deprived of life-changing treatments — and, importantly, hope. RareNet gives us the opportunity to transform how we study these conditions and to do so at a moment when it’s needed more than ever.”

MIT cognitive scientists reveal why some sentences stand out from others

“You still had to prove yourself.”

“Every cloud has a blue lining!”

Which of those sentences are you most likely to remember a few minutes from now? If you guessed the second, you’re probably correct.

According to a new study from MIT cognitive scientists, sentences that stick in your mind longer are those that have distinctive meanings, making them stand out from sentences you’ve previously seen. They found that meaning, not any other trait, is the most important feature when it comes to memorability.

Greta Tuckute, a former graduate student in the Fedorenko lab. Photo: Caitlin Cunningham

“One might have thought that when you remember sentences, maybe it’s all about the visual features of the sentence, but we found that that was not the case. A big contribution of this paper is pinning down that it is the meaning-related space that makes sentences memorable,” says Greta Tuckute PhD ’25, who is now a research fellow at Harvard University’s Kempner Institute.

The findings support the hypothesis that sentences with distinctive meanings — like “Does olive oil work for tanning?” — are stored in brain space that is not cluttered with sentences that mean almost the same thing. Sentences with similar meanings end up densely packed together and are therefore more difficult to recognize confidently later on, the researchers believe.

“When you encode sentences that have a similar meaning, there’s feature overlap in that space. Therefore, a particular sentence you’ve encoded is not linked to a unique set of features, but rather to a whole bunch of features that may overlap with other sentences,” says Evelina Fedorenko, an MIT associate professor of brain and cognitive sciences (BCS), a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Tuckute and Thomas Clark, an MIT graduate student, are the lead authors of the paper, which appears in the Journal of Memory and Language. MIT graduate student Bryan Medina is also an author.

Distinctive sentences

What makes certain things more memorable than others is a longstanding question in cognitive science and neuroscience. In a 2011 study, Aude Oliva, now a senior research scientist at MIT and MIT director of the MIT-IBM Watson AI Lab, showed that not all items are created equal: Some types of images are much easier to remember than others, and people are remarkably consistent in what images they remember best.

In that study, Oliva and her colleagues found that, in general, images with people in them are the most memorable, followed by images of human-scale space and close-ups of objects. Least memorable are natural landscapes.

As a follow-up to that study, Fedorenko and Oliva, along with Ted Gibson, another faculty member in BCS, teamed up to determine if words also vary in their memorability. In a study published earlier this year, co-led by Tuckute and Kyle Mahowald, a former PhD student in BCS, the researchers found that the most memorable words are those that have the most distinctive meanings.

Words are categorized as more distinctive if they have a single meaning and few or no synonyms — for example, words like “pineapple” or “avalanche,” which were found to be very memorable. On the other hand, words that can have multiple meanings, such as “light,” or words that have many synonyms, like “happy,” were more difficult for people to recognize accurately.

In the new study, the researchers expanded their scope to analyze the memorability of sentences. Just like words, some sentences have very distinctive meanings, while others communicate similar information in slightly different ways.

To do the study, the researchers assembled a collection of 2,500 sentences drawn from publicly available databases that compile text from novels, news articles, movie dialogues, and other sources. Each sentence that they chose contained exactly six words.

The researchers then presented a random selection of about 1,000 of these sentences to each study participant, including repeats of some sentences. Each of the 500 participants in the study was asked to press a button when they saw a sentence that they remembered seeing earlier.

The most memorable sentences — the ones where participants accurately and quickly indicated that they had seen them before — included strings such as “Homer Simpson is hungry, very hungry,” and “These mosquitoes are — well, guinea pigs.”

Those memorable sentences overlapped significantly with the sentences judged to have the most distinctive meanings, as estimated in the high-dimensional vector space of a language model known as Sentence BERT. That model generates a single vector representing each sentence’s meaning, which can be used for tasks such as judging how similar two sentences are in meaning. From these representations, the researchers derived a distinctness score for each sentence based on its semantic similarity to the other sentences.
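
The article does not spell out the exact scoring procedure, but a minimal sketch of the general approach might look like the following, assuming the open-source sentence-transformers package and an off-the-shelf SBERT-family checkpoint ("all-MiniLM-L6-v2", chosen purely for illustration), with distinctness defined here as one minus a sentence's average cosine similarity to the other sentences in the pool:

```python
# Illustrative sketch only: the study's exact model and scoring formula may differ.
from sentence_transformers import SentenceTransformer
import numpy as np

sentences = [
    "Homer Simpson is hungry, very hungry.",
    "Does olive oil work for tanning?",
    "You still had to prove yourself.",
    "You still needed to prove yourself.",   # near-duplicate meaning, added for illustration
]

model = SentenceTransformer("all-MiniLM-L6-v2")                  # illustrative SBERT-family checkpoint
embeddings = model.encode(sentences, normalize_embeddings=True)  # one unit-length vector per sentence

similarity = embeddings @ embeddings.T      # cosine similarities (vectors are normalized)
np.fill_diagonal(similarity, np.nan)        # ignore each sentence's similarity to itself
distinctness = 1 - np.nanmean(similarity, axis=1)   # higher score = fewer close semantic neighbors

for sentence, score in sorted(zip(sentences, distinctness), key=lambda pair: -pair[1]):
    print(f"{score:.3f}  {sentence}")
```

On a toy pool like this, the near-duplicate pair should come out with the lowest distinctness scores, mirroring the intuition that sentences with many close semantic neighbors are harder to recognize later.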

The researchers also evaluated the sentences using a model that predicts memorability based on the average memorability of the individual words in the sentence. This model performed fairly well at predicting overall sentence memorability, but not as well as Sentence BERT. This suggests that the meaning of a sentence as a whole — above and beyond the contributions from individual words — determines how memorable it will be, the researchers say.
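
For comparison, a toy version of that word-averaging baseline could be as simple as the sketch below; the per-word scores here are invented for illustration, whereas the study drew on empirically measured word memorability:

```python
# Hypothetical per-word memorability scores in [0, 1]; the real study estimated
# these from its earlier word-memorability experiments.
word_memorability = {
    "homer": 0.92, "simpson": 0.88, "is": 0.10, "hungry": 0.55, "very": 0.12,
}

def word_average_score(sentence: str) -> float:
    """Predict sentence memorability as the mean memorability of its known words."""
    words = [w.strip(".,!?\"'").lower() for w in sentence.split()]
    known = [word_memorability[w] for w in words if w in word_memorability]
    return sum(known) / len(known) if known else 0.0

print(word_average_score("Homer Simpson is hungry, very hungry."))  # -> about 0.52
```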

Noisy memories

While cognitive scientists have long hypothesized that the brain’s memory banks have a limited capacity, the findings of the new study support an alternative hypothesis that would help to explain how the brain can continue forming new memories without losing old ones.

This alternative, known as the noisy representation hypothesis, says that when the brain encodes a new memory, be it an image, a word, or a sentence, it is represented in a noisy way — that is, this representation is not identical to the stimulus, and some information is lost. For example, for an image, you may not encode the exact viewing angle at which an object is shown, and for a sentence, you may not remember the exact construction used.

Under this theory, a new sentence would be encoded in a similar part of the memory space as sentences that carry similar meanings, whether they were encountered recently or sometime across a lifetime of language experience. This jumbling of similar meanings together increases the amount of noise and can make it much harder, later on, to remember the exact sentence you have seen before.

“The representation is gradually going to accumulate some noise. As a result, when you see an image or a sentence for a second time, your accuracy at judging whether you’ve seen it before will be affected, and it’ll be less than 100 percent in most cases,” Clark says.

However, if a sentence has a unique meaning that is encoded in a less densely crowded space, it will be easier to pick out later on.

“Your memory may still be noisy, but your ability to make judgments based on the representations is less affected by that noise because the representation is so distinctive to begin with,” Clark says.
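
To make that logic concrete, here is a small, self-contained simulation of the noisy representation idea, with every detail (dimensionality, noise level, decision threshold, cluster sizes) chosen arbitrarily rather than taken from the study. Sentence meanings are points in a feature space, each studied item is stored as its point plus Gaussian noise, and a test probe is called "old" if it falls close enough to any stored trace:

```python
# Toy simulation of the noisy representation hypothesis; all parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
DIM, NOISE, THRESHOLD = 5, 0.5, 1.5

def make_items(n_clusters, per_cluster, spread):
    """Sentence meanings as points; one cluster = a neighborhood of similar-meaning sentences."""
    centers = rng.normal(0, 4, size=(n_clusters, DIM))
    return np.concatenate(
        [c + rng.normal(0, spread, size=(per_cluster, DIM)) for c in centers]
    )

def recognition(items):
    """Study every other item, then test studied items (old) and their unstudied neighbors (new)."""
    studied, lures = items[::2], items[1::2]
    traces = studied + rng.normal(0, NOISE, studied.shape)   # noisy encoding of each studied sentence

    def says_old(probes):
        # Respond "old" if the probe lies within THRESHOLD of any stored memory trace.
        dists = np.linalg.norm(probes[:, None, :] - traces[None, :, :], axis=-1)
        return dists.min(axis=1) < THRESHOLD

    return says_old(studied).mean(), says_old(lures).mean()  # hit rate, false-alarm rate

# Crowded meanings: few neighborhoods, each densely packed with similar sentences.
crowded = make_items(n_clusters=30, per_cluster=10, spread=0.5)
# Distinctive meanings: many neighborhoods, each sparse and widely separated.
distinctive = make_items(n_clusters=150, per_cluster=2, spread=3.0)

for label, items in [("crowded", crowded), ("distinctive", distinctive)]:
    hits, fas = recognition(items)
    print(f"{label:12s} hits={hits:.2f}  false alarms={fas:.2f}  hits-FA={hits - fas:.2f}")
```

With settings like these, both conditions should recognize most studied sentences, but the crowded condition should wrongly "recognize" many unstudied neighbors, so its hits-minus-false-alarms score drops well below that of the distinctive condition, which is the pattern the hypothesis predicts.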

The researchers now plan to study whether other features of sentences, such as more vivid and descriptive language, might also contribute to making them more memorable, and how the language system may interact with the hippocampal memory structures during the encoding and retrieval of memories.

The research was funded, in part, by the National Institutes of Health, the McGovern Institute, the Department of Brain and Cognitive Sciences, the Simons Center for the Social Brain, and the MIT Quest Initiative for Intelligence.