Learning from each other

Experience is a powerful teacher—and not every experience has to be our own to help us understand the world. What happens to others is instructive, too. That’s true for humans as well as for other social animals. New research from scientists at the McGovern Institute shows what happens in the brains of monkeys as they integrate their observations of others with knowledge gleaned from their own experience.

“The study shows how you use observation to update your assumptions about the world,” explains McGovern Institute Investigator Mehrdad Jazayeri, who led the research. His team’s findings, published in the January 7 issue of the journal Nature, also help explain why we tend to weigh information gleaned from observation and direct experience differently when we make decisions. Jazayeri is also a professor of brain and cognitive sciences at MIT and an investigator at the Howard Hughes Medical Institute.

“As humans, we do a large part of our learning through observing other people’s experiences and what they go through and what decisions they make,” says Setayesh Radkani, a graduate student in Jazayeri’s lab. For example, she says, if you get sick after eating out, you might wonder if the food at the restaurant was to blame. As you consider whether it’s safe to return, you’ll likely take into account whether the friends you’d dined with got sick too. Your experiences as well as those of your friends will inform your understanding of what happened.

The research team wanted to know how this works: When we make decisions that draw on both direct experience and observation, how does the brain combine the two kinds of evidence? Are the two kinds of information handled differently?

Social experiment

It is hard to tease out the factors that influence social learning. “When you’re trying to compare experiential learning versus observational learning, there are a ton of things that can be different,” Radkani says. For example, people may draw different conclusions about someone else’s experiences than their own, because they know less about that person’s motivations and beliefs. Factors like social status, individual differences, and emotional states can further complicate these situations and be hard to control for, even in a lab.

To create a carefully controlled scenario in which they could focus on how observation changes our understanding of the world, Radkani and postdoctoral fellow Michael Yoo devised a computer game that would allow two players to learn from one another through their experiences. They taught this game to both humans and monkeys.

Their approach, Jazayeri says, goes far beyond the kinds of tasks that are typically studied in a neuroscience lab. “I think it might be one of the most sophisticated tasks monkeys have been trained to perform in a lab,” he says.

Both monkeys and humans played the game in pairs. The object was to collect enough tokens to earn a reward. Players could choose to enter either of two virtual arenas to play—but in one of the two arenas, tokens had no value. In that arena, no matter how many tokens a player collected, they could not win. Players were not told which arena was which, and the winnable and unwinnable arenas sometimes swapped without warning.

Only one individual played at a time, but regardless of who was playing, both individuals watched all of the games. So as either player collected tokens and either did or did not receive a reward, both the player and the observer got the same information. They could use that information to decide which arena to choose in their next round.

Experience outweighs observation

Humans and monkeys have sophisticated social intelligence and both clearly took their partners’ experiences into account as they played the game. But the researchers found that the outcomes of a player’s own games had a stronger influence on each individual’s choice of arena than the outcomes of their partner’s games. “They seem to learn less efficiently from observation, suggesting they tend to devalue the observational evidence,” Radkani says. That distinction was reflected in the patterns of neural activity that the team detected in the brains of the monkeys.

Postdoctoral fellow Ruidong Chen and research assistant Neelima Valluru recorded signals from a part of the brain’s frontal lobe called the anterior cingulate cortex (ACC) as the monkeys played the game. The ACC is known to be involved in social processing. It also integrates information gained through multiple experiences, and seems to use this to update an animal’s beliefs about the world. Prior to the Jazayeri lab’s experiments, this integrative function had only been linked to animals’ direct experiences—not their observations of others.

Consistent with earlier studies, neurons in the ACC changed their activity patterns both when the monkeys played the game and when they watched their partner take a turn. But these signals were complex and variable, making it hard to discern the underlying logic. To tackle this challenge, Chen recorded neural activity from large groups of neurons in both animals across dozens of experiments. “We also had to devise new analysis methods to crack the code and tease out the logic of the computation,” Chen says.

One of the researchers’ central questions was how information about self and other makes its way to the ACC. The team reasoned that there were two possibilities: either the ACC receives a single input on each trial specifying who is acting, or it receives separate input streams for self and other. To test these alternatives, they built artificial neural network models organized both ways and analyzed how well each model matched their neural data. The results suggested that the ACC receives two distinct inputs, one reflecting evidence acquired through direct experience and one reflecting evidence acquired through observation.
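The contrast between the two schemes is easiest to see in miniature. The sketch below is purely illustrative (it is not the team’s actual network models, and the encoding is invented); it only shows how the same sequence of trials could be packaged either as a single evidence stream with an agent flag or as two separate streams for self and other.

```python
import numpy as np

# Hypothetical trial encoding, for illustration only: each trial carries one
# unit of evidence (+1 reward, -1 no reward) plus a record of who acted.
rng = np.random.default_rng(0)
n_trials = 8
outcome = rng.choice([-1.0, 1.0], size=n_trials)  # reward outcome per trial
actor = rng.choice([0, 1], size=n_trials)         # 0 = self played, 1 = partner played

# Scheme A: a single evidence stream plus a flag saying who is acting;
# the network must learn to route or weight evidence using the flag.
single_stream = np.stack([outcome, actor], axis=1)           # shape (n_trials, 2)

# Scheme B: two separate input streams, one carrying only self-experienced
# evidence and one carrying only observed evidence (zeros elsewhere).
self_stream = np.where(actor == 0, outcome, 0.0)
other_stream = np.where(actor == 1, outcome, 0.0)
two_streams = np.stack([self_stream, other_stream], axis=1)  # shape (n_trials, 2)

print(single_stream)
print(two_streams)
```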

The team also found a tantalizing clue about why the brain tends to trust firsthand experiences more than observations. Their analysis showed that the integration process in the ACC was biased toward direct experience. As a result, both humans and monkeys cared more about their own experiences than the experiences of their partner.
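One way to picture that bias is as a simple weighted accumulator. The toy below is not the study’s model, and the weights are made up; it just shows how down-weighting observed outcomes yields a belief that leans on direct experience.

```python
# Toy biased accumulator: observed outcomes are down-weighted relative to
# direct experience by a factor below 1. Both weights are invented.
w_self, w_obs = 1.0, 0.6

belief = 0.0  # running evidence that the current arena is winnable
for outcome, actor in [(+1, "self"), (+1, "other"), (-1, "self")]:
    weight = w_self if actor == "self" else w_obs
    belief += weight * outcome

print(belief)  # 1.0 + 0.6 - 1.0 = 0.6: the partner's win counts, but for less
```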

Jazayeri says the study paves the way to deeper investigations of how the brain drives social behavior. Now that his team has examined one of the most fundamental features of social learning, they plan to add additional nuance to their studies, potentially exploring how different abilities or the social relationships between animals influence learning.

“Under the broad umbrella of social cognition, this is like step zero,” he says. “But it’s a really important step, because it begins to provide a basis for understanding how the brain represents and uses social information in shaping the mind.”

This research was supported in part by the Yang Tan Collective at MIT.

New study suggests a way to rejuvenate the immune system

As people age, their immune system function declines. T cell populations become smaller and can’t react to pathogens as quickly, making people more susceptible to a variety of infections.

In an effort to overcome that decline, researchers at MIT and the Broad Institute have found a way to temporarily program cells in the liver to improve T-cell function. This reprogramming can compensate for the age-related decline of the thymus, where T cells normally mature.

Using mRNA to deliver three key factors that usually promote T-cell survival, the researchers were able to rejuvenate the immune systems of mice. Aged mice that received the treatment showed much larger and more diverse T cell populations in response to vaccination, and they also responded better to cancer immunotherapy treatments. Their findings are published in the December 17 issue of the journal Nature.

If developed for use in patients, this type of treatment could help people lead healthier lives as they age, the researchers say.

“If we can restore something essential like the immune system, hopefully we can help people stay free of disease for a longer span of their life,” says Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT, who has joint appointments in the departments of Brain and Cognitive Sciences and Biological Engineering.

Zhang, who is also an investigator at the McGovern Institute for Brain Research at MIT, a core institute member at the Broad Institute of MIT and Harvard, an investigator at the Howard Hughes Medical Institute, and co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT, is the senior author of the new study. Former MIT postdoc Mirco Friedrich is the paper’s lead author.

A temporary factory

The thymus, a small organ located in front of the heart, plays a critical role in T-cell development. Within the thymus, immature T cells go through a checkpoint process that ensures a diverse repertoire of T cells. The thymus also secretes cytokines and growth factors that help T cells to survive.

However, starting in early adulthood, the thymus begins to shrink. This process, known as thymic involution, leads to a decline in the production of new T cells. By approximately age 75, little functional thymic tissue remains.

“As we get older, the immune system begins to decline. We wanted to think about how can we maintain this kind of immune protection for a longer period of time, and that’s what led us to think about what we can do to boost immunity,” Friedrich says.

Previous work on rejuvenating the immune system has focused on delivering T cell growth factors into the bloodstream, but that can have harmful side effects. Researchers are also exploring the possibility of using transplanted stem cells to help regrow functional tissue in the thymus.

The MIT team took a different approach: They wanted to see if they could create a temporary “factory” in the body that would generate the T-cell-stimulating signals that are normally produced by the thymus.

“Our approach is more of a synthetic approach,” Zhang says. “We’re engineering the body to mimic thymic factor secretion.”

For their factory location, they settled on the liver, for several reasons. First, the liver has a high capacity for producing proteins, even in old age. Also, it’s easier to deliver mRNA to the liver than to most other organs of the body. The liver was also an appealing target because all of the body’s circulating blood has to flow through it, including T cells.

To create their factory, the researchers identified three immune cues that are important for T-cell maturation. They encoded these three factors into mRNA sequences that could be delivered by lipid nanoparticles. When injected into the bloodstream, these particles accumulate in the liver and the mRNA is taken up by hepatocytes, which begin to manufacture the proteins encoded by the mRNA.

The factors that the researchers delivered are DLL1, FLT-3, and IL-7, which help immature progenitor T cells mature into fully differentiated T cells.

Immune rejuvenation

Tests in mice revealed a variety of beneficial effects. First, the researchers injected the mRNA particles into 18-month-old mice, roughly equivalent to humans in their 50s. Because mRNA is short-lived, the researchers gave the mice multiple injections over four weeks to maintain steady production of the factors by the liver.

After this treatment, T cell populations showed significant increases in size and function.

The researchers then tested whether the treatment could enhance the animals’ response to vaccination. They vaccinated the mice with ovalbumin, a protein found in egg whites that is commonly used to study how the immune system responds to a specific antigen. In 18-month-old mice that received the mRNA treatment before vaccination, the researchers found that the population of cytotoxic T-cells specific to ovalbumin doubled, compared to mice of the same age that did not receive the mRNA treatment.

The mRNA treatment can also boost the immune system’s response to cancer immunotherapy, the researchers found. They delivered the mRNA treatment to 18-month-old mice, which were then implanted with tumors and treated with a checkpoint inhibitor drug. This drug, which targets the protein PD-L1, is designed to help take the brakes off the immune system and stimulate T cells to attack tumor cells.

Mice that received the treatment showed much higher survival rates and longer lifespans than those that received the checkpoint inhibitor drug but not the mRNA treatment.

The researchers found that all three factors were necessary to induce this immune enhancement; no single factor could achieve the full effect on its own. They now plan to study the treatment in other animal models and to identify additional signaling factors that may further enhance immune system function. They also hope to study how the treatment affects other immune cells, including B cells.

Other authors of the paper include Julie Pham, Jiakun Tian, Hongyu Chen, Jiahao Huang, Niklas Kehl, Sophia Liu, Blake Lash, Fei Chen, Xiao Wang, and Rhiannon Macrae.

The research was funded, in part, by the Howard Hughes Medical Institute, the K. Lisa Yang Brain-Body Center, part of the Yang Tan Collective at MIT, Broad Institute Programmable Therapeutics Gift Donors, the Pershing Square Foundation, J. and P. Poitras, and an EMBO Postdoctoral Fellowship.

All the connections

Neuroscientists today have the most spectacular views of brains that the field has ever seen. Modern microscopes can reveal extraordinary levels of detail, offering scientists another piece of the vast and intricate puzzle of how neurons interconnect.

A comprehensive wiring diagram of the brain — its connectome — is an atlas for neuroscientists, guiding investigations into how neural circuitry works. Microscope images are the raw data for generating that atlas, but it takes powerful computers and shrewd scientists, like the McGovern Institute’s newest investigator, Sven Dorkenwald, to make sense of it all.

All 139,255 neurons in the brain of an adult fruit fly reconstructed by the FlyWire Consortium, with each neuron uniquely color-coded. Render by Tyler Sloan. Image: Sven Dorkenwald

A monumental task

Many disorders of the human brain are related to breakdowns that affect the connections of neurons with one another. An atlas will help researchers identify and study the function of those connections — down to the level of synapses — and explore what happens when things go wrong. When researchers understand which brain cells interact with one another, they can ask more sophisticated questions about how those cells work together to process information, store memories, or modulate our emotions.

Until recently, generating a complete connectome for any animal was nearly impossible. Electron microscopes capture fine details of cellular structures, down to the slender branches and tiny protrusions that neurons use to reach out and communicate with one another. But to see those features clearly, microscopes have to zoom way in, focusing solely on a thin slice of one small part of the brain at a time.

Isolated images like these don’t reveal much on their own. They are a jumble of bits and pieces of cells — a cross-section removed from the context of its surroundings. Neurons’ paths must be traced through millions of images to reconstruct the brain’s three-dimensional networks and, ultimately, reveal how its individual cells connect with one another. This is a monumental task, because even the poppy seed-sized brain of a fruit fly contains more than 50 million synapses.

The fly connectome
The 50 largest neurons in the adult fruit fly reconstructed by the FlyWire Consortium, spearheaded by Dorkenwald. Image: Sven Dorkenwald, Tyler Sloan

Remarkably, all of those connections in the fruit fly’s tiny brain are now mapped, thanks in large part to Dorkenwald’s efforts as a PhD student at Princeton University. Together with professors Sebastian Seung and Mala Murthy, Dorkenwald spearheaded FlyWire, a consortium of hundreds of scientists who charted the circuitry, following the fly’s neurons through 21 million microscope images. Neuroscientists around the world now use that connectome, which was completed in 2024, to understand how information flows through the fruit fly brain and shed light on parallel processes in our own brains.

AI tools and teamwork

McGovern Investigator Sven Dorkenwald. Photo: Steph Stevens

Getting from millions of microscope images to a complete wiring diagram of the fly brain required the development of innovative new tools and an extraordinary level of teamwork. Dorkenwald, who was recently named one of STAT’s 2025 Wunderkinds, an award that celebrates outstanding early-career scientists, was instrumental in both.

Dorkenwald’s first experience mapping neural circuits was as a physics undergraduate at Heidelberg University, tracing neurons in a targeted area of a zebra finch brain. The lab wanted a map to help them understand how birds learn and repeat their courtship songs. Tracing neurons was, at the time, painstaking work. Dorkenwald and his fellow students would manually follow the path of a single cell as it passed across adjacent microscope images, noting each branch point to return to for further mapping.

Today, the process has accelerated greatly, with artificial intelligence (AI) tools taking over most of the work. But those tools make mistakes, and it’s up to humans to find and correct them.

Dorkenwald encountered this obstacle as a graduate student in Seung’s lab at Princeton, where he studied computer science and neuroscience. Before FlyWire, the lab was part of a collaborative effort called the MICrONS consortium, which included teams at the Allen Institute and Baylor College of Medicine and aimed to map all the connections within a cubic millimeter of the mouse visual cortex. Size alone made this a daunting task: a cubic millimeter of a mouse brain is ten times the size of a fly brain. Dorkenwald and colleagues developed the infrastructure the team needed to proofread and analyze this shared dataset.

Their system, which they call CAVE (Connectome Annotation Versioning Engine), allowed the team to expand its proofreading community far beyond the three labs that drove the project, involving many neuroscientists who were interested in different parts of the circuitry. “We basically opened up this dataset to anybody who wanted to join,” Dorkenwald says. When they later deployed CAVE to enable community-wide proofreading for the fly connectome, citizen scientists got involved, and paid proofreaders joined the mix to fill in gaps in the map. It has since become an essential tool in the connectomics field.

The MICrONS consortium ultimately reconstructed more than a half billion synapses in that cubic millimeter of mouse tissue. What’s more, researchers added another level of information to the map, incorporating data on neuronal activity recorded from the very mouse whose brain had been imaged for the project, enabling new studies that relate circuit structure to function. These results, published earlier this year, represent another milestone for the field.

A single neuron reconstructed from thousands of serial section electron microscope images of the mouse visual cortex for the MICrONS consortium. Image: Sven Dorkenwald

Dorkenwald says this newly mapped piece of the mouse connectome is large enough that scientists can begin to see and analyze neural circuits. Still, zeroing in on a cubic millimeter within the mouse’s pea-sized brain means most of what’s visible is parts of cells, which can leave scientists struggling to identify exactly what they’re looking at. Dorkenwald says bits of cells can reveal their identities with their particular shapes and ultrastructural contents, such as vesicles and mitochondria. However, humans can’t necessarily make sense of these subtle features on their own. An AI tool that he developed called SegCLR (segmentation-guided contrastive learning of representations) decodes these clues.

SegCLR is one way Dorkenwald is applying his computational expertise to make sense of connectomes and integrate new kinds of information into the maps — work that he continued as a fellow at the Allen Institute after earning his PhD at Princeton.

“A connectome alone is not enough,” he says. “If you would just look at a connectome of a brain, it would look like white noise at first. You have to put order into the system to understand its parts.”

Searching for meaning

In January 2026, Dorkenwald will join MIT as an assistant professor of brain and cognitive sciences and an investigator at the McGovern Institute. He will be digging into the connectomes he has helped produce, developing new computational approaches to look for organizational principles within the circuitry. “We will be asking hard questions about the circuits we reconstruct,” he says. “The connections that we are seeing contribute to interesting and important computations. What are the circuit motifs that allow them to do that? What’s the architecture of the circuit within layers, across layers, and ultimately, across regions? That is what I want to get at.”

An infographic comparing the fruit fly brain to the mouse brain.

While there’s plenty of data to work with, he’s also eager to continue scaling up connectomics. He thinks a complete connectome of the mouse brain is achievable within 10 to 15 years — but it’s going to require a lot of collaboration. “The area we’re working in is still very new,” he says. “There’s a lot of room to approach things in new ways and solve problems that are very large, in ways that move an entire field forward.”

As the technology advances, Dorkenwald plans to compare connectomes across individuals to better understand variations in circuitry, including the changes that occur in individuals with neurological or psychiatric disorders.

To help make that possible, he plans to design new AI approaches to automate proofreading, which remains a bottleneck for connectomics: even a community-wide effort would be too slow to manually proofread a map of the entire mouse brain. For this, Dorkenwald will turn to data from past proofreaders, who have already made millions of manual edits to connectomes, and train AI tools to mimic their work.

Dorkenwald says his career in connectomics began with a sense of wonder, back when he was tracing neurons through images of the zebra finch brain. “Every time you asked about what is in there, and nobody knew, there was so much that felt undiscovered,” he remembers. Now, he’s making all the information hidden within those images more accessible: “If we can just extract it, I think we can make sense of it.”

Celebrating worm science

For decades, scientists with big questions about biology have found answers in a tiny worm. That worm, a millimeter-long creature called Caenorhabditis elegans, has helped researchers uncover fundamental features of how cells and organisms work. The impact of that work is enormous: Discoveries made using C. elegans have been recognized with four Nobel Prizes and have led to the development of new treatments for human disease.

McGovern Investigator Robert Horvitz shared the 2002 Nobel Prize in Medicine with colleagues Sydney Brenner and John Sulston for discoveries that helped explain how genes regulate programmed cell death and organ development. Photo: AP Images/Aynsley Floyd

In a perspective piece published in the November 2025 issue of the journal PNAS, eleven biologists, including Robert Horvitz, the David H. Koch (1962) Professor of Biology at MIT, celebrate Nobel Prize-winning advances made through research in C. elegans. The authors discuss how that work has led to advances for human health and highlight how a uniquely collaborative community among worm researchers has fueled the field.

MIT scientists are well represented in that community: The prominent worm biologists who coauthored the PNAS paper include former MIT graduate students Andy Fire and Paul Sternberg, now at Stanford University and the California Institute of Technology, and two past postdoctoral researchers in Horvitz’s lab, University of Massachusetts Medical School professor Victor Ambros and Massachusetts General Hospital investigator Gary Ruvkun. Ann Rougvie at the University of Minnesota is the paper’s corresponding author.

Early worm discoveries

“This tiny worm is beautiful—elegant both in its appearance and in its many contributions to our understanding of the biological universe in which we live,” says Horvitz, who in 2002 was awarded the Nobel Prize in Medicine along with colleagues Sydney Brenner and John Sulston for discoveries that helped explain how genes regulate programmed cell death and organ development. Horvitz is also a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research as well as an investigator at the Howard Hughes Medical Institute.

Those discoveries were among the early successes in C. elegans research, made by pioneering scientists who recognized the power of the microscopic roundworm. C. elegans offers many advantages for researchers: The worms are easy to grow and maintain in labs; their transparent bodies make cells and internal processes readily visible under a microscope; they are cellularly very simple (e.g., they have only 302 nerve cells, compared with about 100 billion in a human); and their genomes can be readily manipulated to study gene function.

Microscopic image of C. elegans roundworm with cells highlighted in pink and green.
Caenorhabditis elegans, a transparent roundworm only 1mm in length, has provided answers to many fundamental questions in biology. Image: Robert Horvitz

Most importantly, many of the molecules and processes that operate in C. elegans have been retained throughout evolution, meaning discoveries made using the worm can have direct relevance to other organisms, including humans. “Many aspects of biology are ancient and evolutionarily conserved,” Horvitz explains. “Such shared mechanisms can be most readily revealed by analyzing organisms that are highly tractable in the laboratory.”

In the 1960s, Brenner, a molecular biologist who was curious about how animals’ nervous systems develop and function, recognized that C. elegans offered unique opportunities to study these processes. Once he began developing the worm into a model for laboratory studies, it did not take long for other biologists to join him to take advantage of the new system.

In the 1970s, the unique features of the worm allowed Sulston to track the transformation of a fertilized egg into an adult animal, tracing the origins of each of the adult worm’s 959 cells. His studies revealed that in every developing worm, cells divide and mature in predictable ways. He also learned that some of the cells created during development do not survive into adulthood and are instead eliminated by a process termed programmed cell death.


By seeking mutations that perturbed the process of programmed cell death, Horvitz and his colleagues identified key regulators of that process, which is sometimes referred to as apoptosis. These regulators, which both promote and oppose apoptosis, turned out to be vital for programmed cell death across the animal kingdom.

In humans, apoptosis shapes developing organs, refines brain circuits, and optimizes other tissue structures. It also modulates our immune systems and eliminates cells that are in danger of becoming cancerous. The human version of CED-9, the anti-apoptotic regulator that Horvitz’s team discovered in worms, is BCL-2. Researchers have shown that activating apoptotic cell death by blocking BCL-2 is an effective treatment for certain blood cancers. Today, researchers are also exploring new ways of treating immune disorders and neurodegenerative disease by manipulating apoptosis pathways.

Collaborative worm community

Horvitz and his colleagues’ discoveries about apoptosis helped demonstrate that understanding C. elegans biology has direct relevance to human biology and disease. Since then, a vibrant and closely connected community of worm biologists—including many who trained in Horvitz’s lab—has continued to carry out impactful work. In their PNAS article, Horvitz and his coauthors highlight that early work, as well as the Nobel Prize-winning work of:

  • Andrew Fire and Craig Mello, whose discovery of an RNA-based system of gene silencing led to powerful new tools to manipulate gene activity. The innate process they discovered in worms, known as RNA interference, is now used as the basis of six FDA-approved therapeutics for genetic disorders, silencing faulty genes to stop their harmful effects.
  • Martin Chalfie, who used a fluorescent protein made by jellyfish to visualize and track specific cells in C. elegans, helping launch the development of a set of tools that transformed biologists’ ability to observe molecules and processes that are important for both health and disease.
  • Victor Ambros and Gary Ruvkun, who discovered a class of molecules called microRNAs that regulate gene activity not just in worms, but in all multicellular organisms. This prize-winning work was started when Ambros and Ruvkun were postdoctoral researchers in Horvitz’s lab. Humans rely on more than 1,000 microRNAs to ensure our genes are used at the right times and places. Disruptions to microRNAs have been linked to neurological disorders, cancer, cardiovascular disease, and autoimmune disease, and researchers are now exploring how these small molecules might be used for diagnosis or treatment.

Horvitz and his coauthors stress that while the worm itself made these discoveries possible, so too did a host of resources that facilitate collaboration within the worm community and enable its scientists to build upon the work of others. Scientists who study C. elegans have embraced this open, collaborative spirit since the field’s earliest days, Horvitz says, citing the Worm Breeder’s Gazette, an early newsletter where scientists shared their observations, methods, and ideas.

Today, scientists who study C. elegans—whether the organism is the centerpiece of their lab or they are looking to supplement studies of other systems—contribute to and rely on online resources like WormAtlas and WormBase, as well as the Caenorhabditis Genetics Center, to share data and genetic tools. Horvitz says these resources have been crucial to his own lab’s work; his team uses them every day.

WormAtlas provides users with numerous anatomical resources including tools to view electron microscopy slices of the same cell. Image: WormAtlas.org

Just as molecules and processes discovered in C. elegans have pointed researchers toward important pathways in human cells, the worm has also been a vital proving ground for developing methods and approaches later deployed to study more complex organisms. For example, C. elegans, with its 302 neurons, was the first animal for which neuroscientists successfully mapped all of the connections of the nervous system. The resulting wiring diagram, or connectome, has guided countless experiments exploring how neurons work together to process information and control behavior. Informed by both the power and limitations of the C. elegans connectome, scientists are now mapping more complex circuitry, such as the 139,000-neuron brain of the fruit fly, whose connectome was completed in 2024.

C. elegans remains a mainstay of biological research, including in neuroscience. Scientists worldwide are using the worm to explore new questions about neural circuits, neurodegeneration, development, and disease. Horvitz’s lab continues to turn to C. elegans to investigate the genes that control animal development and behavior. His team is now using the worm to explore how animals develop a sense of time and transmit that information to their offspring.

Also at MIT, Steven Flavell’s team in the Department of Brain and Cognitive Sciences and the Picower Institute for Learning and Memory is using the worm to investigate how neural connectivity, activity, and modulation integrate internal states, such as hunger, with sensory information, such as the smell of food, to produce sometimes long-lasting behaviors. Flavell is Horvitz’s academic grandson, having trained with one of Horvitz’s postdoctoral trainees. As new technologies accelerate the pace of scientific discovery, Horvitz and his colleagues are confident that the humble worm will bring more unexpected insights.


Who discovered neurons?

A self-portrait of Santiago Ramón y Cajal looking through a microscope. Image: CC 2.0

On this day, December 10th, nearly 120 years ago, Santiago Ramón y Cajal received a Nobel Prize for capturing and interpreting the very first images of the brain’s most essential components — neurons.

“Many scientists consider Cajal the progenitor of neuroscience because he was the first to really see the brain for what it was: a computational engine made up of individual units,” says Mark Harnett, an investigator at the McGovern Institute and an associate professor in the Department of Brain and Cognitive Sciences. His lab explores how the biophysical features of neurons enable them to perform complex computations that drive thought and behavior.

For Harnett, Cajal is one of the greatest scientific minds to have helped us understand ourselves and our place in the world. Cajal was the first to uncover what neurons look like and propose how they function — equipping the field to solve a slew of the mind’s mysteries. Scientists built on this framework to learn how these remarkable cells relay information — by zapping electrical signals to each other — so we can think, feel, move, communicate, and create.

From art to science and back again

Cajal was born on May 1, 1852, in a small village nestled in the Spanish countryside. It was there Cajal fell deeply and madly in love with … art. But his father was a physician, and urged him to trade his sketches for a scalpel. Begrudgingly, Cajal eventually did. After graduating from medical school in 1873, he worked as an army doctor, but around 1880, he turned his attention to studying the nervous system.

A Purkinje neuron from the human cerebellum. Image: Cajal Institute (CSIC), Madrid

Nineteenth-century scientists didn’t think of the brain as a network of cells but more as plumbing, like the blood vessels in the circulatory system — a series of hollow tubes through which information somehow flowed. Cajal and others were skeptical of this perspective, yet had no way of visualizing the brain at a detailed, cellular level to confirm their suspicions. Scientists at the time stained thin slices of tissue to make cells visible under a microscope, but even the most sophisticated methods stained all cells at once, leaving an indecipherable mass under the microscope’s lens.

This changed in 1887 when Cajal encountered a technique devised by Camillo Golgi that stained only some cells. “Rather than seeing all the cells simultaneously, you saw one at a time,” Harnett explains, making it easier to view a cell’s precise form (Golgi shared the 1906 Nobel Prize with Cajal for this method). If he could refine Golgi’s approach and apply it to neural tissue, Cajal thought, he might finally determine the brain’s architecture.

When he did, a remarkable landscape appeared — black bulbs with sprawling branches, each casting a stringy silhouette. The scene awakened a prior passion. While viewing brain slices under a microscope, Cajal drew what he saw, with surgical precision and an artist’s eye. He had captured — for the first time — the mind’s timberland of cells.

A new theory of the mind

Cajal’s illustrations revealed that brain cells did not form a singular plumbing network, but were distinctly separate, with small gaps between them. “This completely upended what people at the time thought about the brain,” Harnett explains. “It wasn’t made up of connected tubes, but individual cells,” which a few years later in 1891 would be called neurons. Over nearly five decades Cajal created around 2,900 drawings — a collage of neurons from humans and a menagerie of fauna: mice, pigeons, lizards, newts, and fish — spanning a host of cell types, from Purkinje cells to basket and chandelier interneurons.

“Part of Cajal’s genius was that he proposed what the incredible anatomical diversity among neurons meant. He reasoned that maybe one part of the cell could work like an antenna to take in signals, and another might be a cable to send signals out. Cajal was already thinking about input and output at neurons, and synapses as points of contact between them,” Harnett notes. “Each neuron becomes a very complex engine for computation, as opposed to tube-based things that can’t really compute.”

Cajal’s notion that the brain was a network of individual cells would come to be known as the neuron doctrine, a bedrock principle that underlies all of neuroscience today. In his autobiography, Cajal describes neurons as “the mysterious butterflies of the soul, the beating of whose wings may someday – who knows? – clarify the secret of mental life.” And in many ways, they have.

One of thousands of neuron illustrations created by Santiago Ramón y Cajal. Image: CC 2.0

One scientist’s enduring influence

Much of scientists’ current approach to studying the brain is guided by Cajal’s blueprint. This is certainly true for the Harnett lab. “As many in the field do, we share Cajal’s aspiration to apply cutting-edge imaging to reveal hidden aspects of the brain and hypothesize about their function,” Harnett says. “Thankfully, unlike Cajal, we now have the advantage of functional tests to try to validate our hypotheses.”

An ultra-high-resolution image of a neuron taken by the Harnett lab. Image: Mark Harnett

In a study published in 2022, the Harnett lab used a super-resolution imaging tool to find that filopodia — tiny structures that protrude from dendrites (the signal-receiving “antennas” of neurons) — were far more abundant in the brain than previously thought. Through a battery of tests, they found that these “silent synapses” can become active to facilitate new neural connections. Such pliable sites were believed to be present only very early in life, but the researchers observed filopodia in adult mice, suggesting that they support continuous learning and computational flexibility over the lifespan.

Harnett explains that Cajal’s impact extends beyond neuroscience. “Where does the power of artificial intelligence (AI) come from? It comes, originally, from Cajal.” It’s no wonder, he says, that AI uses neural networks — a mimicry of one of nature’s most powerful designs, first described by Cajal. “The idea that neurons are computational units is really critical to the power and complexity you can achieve within a network. Cajal even hypothesized that changing the strength of signaling between neurons was how learning worked, an idea that was later validated and became one of the critical insights for revolutionizing deep learning in AI.”

By unveiling what’s really happening beneath our skulls, Cajal’s work would both motivate and guide studies of the brain for over a hundred years to come. “Many of his early hypotheses have proven to be true decades and decades later,” Harnett says. “He has inspired, and continues to inspire, generations of neuroscientists.”



When it comes to language, context matters

In everyday conversation, it’s critical to understand not just the words that are spoken, but the context in which they are said. If it’s pouring rain and someone remarks on the “lovely weather,” you won’t understand their meaning unless you realize that they’re being sarcastic.

Making inferences about what someone really means when it doesn’t match the literal meaning of their words is a skill known as pragmatic language ability. This includes not only interpreting sarcasm but also understanding metaphors and white lies, among many other conversational subtleties.

McGovern Investigator Evelina Fedorenko. Photo: Alexandra Sokhina

“Pragmatics is trying to reason about why somebody might say something, and what is the message they’re trying to convey given that they put it in this particular way,” says Evelina Fedorenko, an MIT associate professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

New research from Fedorenko and her colleagues has revealed that these abilities can be grouped together based on what types of inferences they require. In a study of 800 people, the researchers identified three clusters of pragmatic skills that are based on the same kinds of inferences and may have similar underlying neural processes.

One of these clusters includes inferences that are based on our knowledge of social conventions and rules. Another depends on knowledge of how the physical world works, while the last requires the ability to interpret differences in tone, which can indicate emphasis or emotion.

Fedorenko and Edward Gibson, an MIT professor of brain and cognitive sciences, are the senior authors of the study, which appears today in the Proceedings of the National Academy of Sciences. The paper’s lead authors are Sammy Floyd, a former MIT postdoc who is now an assistant professor of psychology at Sarah Lawrence College, and Olessia Jouravlev, a former MIT postdoc who is now an associate professor of cognitive science at Carleton University.

The importance of context

Much past research on how people understand language has focused on processing the literal meanings of words and how they fit together. To really understand what someone is saying, however, we need to interpret those meanings based on context.

“Language is about getting meanings across, and that often requires taking into account many different kinds of information — such as the social context, the visual context, or the present topic of the conversation,” Fedorenko says.

As one example, the phrase “people are leaving” can mean different things depending on the context, Gibson points out. If it’s late at night and someone asks you how a party is going, you may say “people are leaving,” to convey that the party is ending and everyone’s going home.

“However, if it’s early, and I say ‘people are leaving,’ then the implication is that the party isn’t very good,” Gibson says. “When you say a sentence, there’s a literal meaning to it, but how you interpret that literal meaning depends on the context.”

About 10 years ago, with support from the Simons Center for the Social Brain at MIT, Fedorenko and Gibson decided to explore whether it might be possible to precisely distinguish the types of processing that go into pragmatic language skills.

One way that neuroscientists can approach a question like this is to use functional magnetic resonance imaging (fMRI) to scan the brains of participants as they perform different tasks. This allows them to link brain activity in different locations to different functions. However, the tasks that the researchers designed for this study didn’t easily lend themselves to being performed in a scanner, so they took an alternative approach.

This approach, known as “individual differences,” involves studying a large number of people as they perform a variety of tasks. This technique allows researchers to determine whether the same underlying brain processes may be responsible for performance on different tasks.

To do this, the researchers evaluate whether each participant tends to perform similarly on certain groups of tasks. For example, some people might perform well on tasks that require an understanding of social conventions, such as interpreting indirect requests and irony. The same people might do only so-so on tasks that require understanding how the physical world works, and poorly on tasks that require distinguishing meanings based on changes in intonation — the melody of speech. This would suggest that separate brain processes are being recruited for each set of tasks.
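In miniature, the logic looks like this. The sketch below uses synthetic data and hypothetical task labels, not the study’s battery; it shows how correlating scores across participants and then clustering tasks on those correlations recovers groups of tasks that share a latent ability.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic example: 400 people, six tasks, two latent abilities.
rng = np.random.default_rng(1)
n_people = 400
social = rng.normal(size=n_people)    # latent ability: social conventions
physical = rng.normal(size=n_people)  # latent ability: physical-world knowledge

# Each task loads on one latent ability plus task-specific noise.
scores = np.column_stack([
    ability + 0.5 * rng.normal(size=n_people)
    for ability in (social, social, social, physical, physical, physical)
])

# Cluster tasks by how strongly performance correlates across participants.
corr = np.corrcoef(scores.T)                # 6 x 6 task-by-task correlations
dist = 1.0 - corr[np.triu_indices(6, k=1)]  # condensed distance matrix
labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print(labels)  # the first three tasks and the last three land in separate clusters
```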

The first phase of the study was led by Jouravlev, who assembled existing tasks that require pragmatic skills and created many more, for a total of 20. These included tasks that require people to understand humor and sarcasm, as well as tasks where changes in intonation can affect the meaning of a sentence. For example, someone who says “I wanted blue and black socks,” with emphasis on the word “black,” is implying that the black socks were forgotten.

“People really find ways to communicate creatively and indirectly and non-literally, and this battery of tasks captures that,” Floyd says.

Components of pragmatic ability

The researchers recruited study participants from an online crowdsourcing platform to perform the tasks, which took about eight hours to complete. From this first set of 400 participants, the researchers found that the tasks formed three clusters, related to social context, general knowledge of the world, and intonation. To test the robustness of the findings, the researchers continued the study with another set of 400 participants, with this second half run by Floyd after Jouravlev had left MIT.

With the second set of participants, the researchers found that tasks clustered into the same three groups. They also confirmed that differences in general intelligence, or in auditory processing ability (which is important for the processing of intonation), did not affect the outcomes that they observed.

In future work, the researchers hope to use brain imaging to explore whether the pragmatic components they identified are correlated with activity in different brain regions. Previous work has found that brain imaging often mirrors the distinctions identified in individual difference studies, but can also help link the relevant abilities to specific neural systems, such as the core language system or the theory of mind system.

This set of tests could also be used to study people with autism, who sometimes have difficulty understanding certain social cues. Such studies could determine more precisely the nature and extent of these difficulties. Another possibility could be studying people who were raised in different cultures, which may have different norms around speaking directly or indirectly.

“In Russian, which happens to be my native language, people are more direct. So perhaps there might be some differences in how native speakers of Russian process indirect requests compared to speakers of English,” Jouravlev says.

The research was funded by the Simons Center for the Social Brain at MIT, the National Institutes of Health, and the National Science Foundation.

Season’s Greetings from the McGovern Institute

This winter, may our connections spark new possibilities for the year ahead.

What makes us who we are? How do billions of neurons working together become our thoughts, feelings, and memories? How do they spark imagination and creativity? By tracing these connections, mapping how each neuron links to another, McGovern scientists are carving a path to uncover how these patterns generate the human experience. Because the intricate networks of neurons we’re working to understand are the very ones that make understanding possible, empowering us to learn, discover, and create. And by exploring them, we see that being human at every level is about connection.

Happy holidays from your friends at the McGovern Institute!

Video credits:

Glass Ink Media and Julie Pryor (video)
Shepherd + Maudsleigh Studio | Megan Cascella (woodcut artist)

Astrocyte diversity across space and time

McGovern Investigator Guoping Feng. Photo: Justin Knight

When it comes to brain function, neurons get a lot of the glory. But healthy brains depend on the cooperation of many kinds of cells. The most abundant of the brain’s non-neuronal cells are astrocytes, star-shaped cells with a lot of responsibilities. Astrocytes help shape neural circuits, participate in information processing, and provide nutrient and metabolic support to neurons. Individual cells can take on new roles throughout their lifetimes, and at any given time, the astrocytes in one part of the brain will look and behave differently from astrocytes elsewhere.

After an extensive analysis by scientists at MIT’s McGovern Institute, neuroscientists now have an atlas detailing astrocytes’ dynamic diversity. Its maps depict the regional specialization of astrocytes across the brains of both mice and marmosets—two powerful models for neuroscience research—and show how their populations shift as brains develop, mature, and age. The study, reported in the November 20 issue of the journal Neuron, was led by Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences at MIT. This work was supported by the Hock E. Tan and K. Lisa Yang Center for Autism Research, part of the Yang Tan Collective at MIT, and the National Institutes of Health’s BRAIN Initiative.

Probing the unknown

“It’s really important for us to pay attention to non-neuronal cells’ role in health and disease,” says Feng, who is also the associate director of the McGovern Institute, the director of the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, and a member of the Broad Institute of MIT and Harvard. And indeed, these cells—once seen as merely supporting players—have gained more of the spotlight in recent years. Astrocytes are known to play vital roles in the brain’s development and function, and their dysfunction seems to contribute to many psychiatric disorders and neurodegenerative diseases. “But compared to neurons, we know a lot less—especially during development,” Feng adds.

Feng and Margaret Schroeder, a former graduate student in his lab, thought it was important to understand astrocyte diversity across three axes: space, time, and species. They knew from earlier work in the lab, done in collaboration with Steve McCarroll’s lab at Harvard and led by Fenna Krienen, then a member of the McCarroll group, that in adult animals, different parts of the brain have distinctive sets of astrocytes.

“The natural question was, how early in development do we think this regional patterning of astrocytes starts?” Schroeder says.

To find out, she and her colleagues collected brain cells from mice and marmosets at six stages of life, spanning embryonic development to old age. For each animal, they sampled cells from four different brain regions: the prefrontal cortex, the motor cortex, the striatum, and the thalamus.

Then, working with Krienen, who is now an assistant professor at Princeton University, they analyzed the molecular contents of those cells, creating a profile of genetic activity for each one. That profile was based on the mRNA copies of genes found inside the cell, which are known collectively as the cell’s transcriptome. Determining which genes a cell is using and how active those genes are gives researchers insight into a cell’s function and is one way of defining its identity.
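As a rough illustration of how such profiles are used, the sketch below (synthetic counts, not the study’s pipeline) clusters simulated cells by their expression profiles after standard log-normalization and dimensionality reduction.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Synthetic counts: each row is a cell, each column one gene's mRNA count.
rng = np.random.default_rng(0)
n_cells, n_genes = 1000, 200
region_a = rng.poisson(lam=2.0, size=(n_cells // 2, n_genes)).astype(float)
region_b = rng.poisson(lam=2.0, size=(n_cells // 2, n_genes)).astype(float)
region_b[:, :20] += 3.0  # region B upregulates a block of 20 genes
counts = np.vstack([region_a, region_b])

# Standard steps: log-normalize, reduce dimensionality, then cluster.
logged = np.log1p(counts)
embedding = PCA(n_components=10).fit_transform(logged)
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(embedding)
print(np.bincount(clusters))  # the two simulated populations separate
```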

Dynamic diversity

After assessing the transcriptomes of about 1.4 million brain cells, the group focused in on the astrocytes, analyzing and comparing their patterns of gene expression. At every life stage, from before birth to old age, the team found regional specialization: Astrocytes from different brain regions had similar patterns of gene expression, which were distinct from those of astrocytes in other brain regions.

This regional specialization was also apparent in the distinct shapes of astrocytes in different parts of the brain, which the team was able to see with expansion microscopy, a high-resolution imaging method developed by McGovern colleague Edward Boyden that reveals fine cellular features.

Notably, the astrocytes in each region changed as animals matured. “When we looked at our late embryonic time point, the astrocytes were already regionally patterned. But when we compare that to the adult profiles, they had completely shifted again,” Schroeder says. “So there’s something happening over postnatal development.” The most dramatic changes the team detected occurred between birth and early adolescence, a period during which brains rapidly rewire as animals begin to interact with the world and learn from their experiences.


Feng and Schroeder suspect that the changes they observed may be driven by the neural circuits that are sculpted and refined as the brain matures. “What we think they’re doing is kind of adapting to their local neuronal niche,” Schroeder says. “The types of genes that they are upregulating and changing during development point to their interaction with neurons.” Feng adds that astrocytes may change their genetic programs in response to nearby neurons, or alternatively, they might help direct the development or function of local circuits as they adopt identities best suited to support particular neurons.

Both mouse and marmoset brains exhibited regional specialization of astrocytes and changes in those populations over time. But when the researchers looked at the specific genes whose activity defined various astrocyte populations, the data from the two species diverged. Schroeder calls this a note of caution for scientists who study astrocytes in animal models, and adds that the new atlas will help researchers assess the potential relevance of findings across species.

Beyond astrocytes

With a new understanding of astrocyte diversity, Feng says his team will pay close attention to how these cells are impacted by the disease-related genes they study and how those effects change during development. He also notes that the gene expression data in the atlas can be used to predict interactions between astrocytes and neurons. “This will really guide future experiments: how these cells’ interactions can shift with changes in the neurons or changes in the astrocytes,” he says.

The Feng lab is eager for other researchers to take advantage of the massive amounts of data they generated as they produced their atlas. Schroeder points out that the team analyzed the transcriptomes of all kinds of cells in the brain regions they studied, not just astrocytes. They are sharing their findings so researchers can use them to understand when and where specific genes are used in the brain, or dig in more deeply to further explore the brain’s cellular diversity.


The cost of thinking

Large language models (LLMs) like ChatGPT can write an essay or plan a menu almost instantly. But until recently, it was also easy to stump them. The models, which rely on language patterns to respond to users’ queries, often failed at math problems and were not good at complex reasoning. Suddenly, however, they’ve gotten a lot better at these things.

A new generation of LLMs known as reasoning models are being trained to solve complex problems. Like humans, they need some time to think through problems like these—and remarkably, scientists at MIT’s McGovern Institute have found that the kinds of problems that require the most processing from reasoning models are the very same problems that people need to take their time with. In other words, they report in the November 18 issue of the journal PNAS, the “cost of thinking” for a reasoning model is similar to the cost of thinking for a human.

The researchers, who were led by McGovern Institute Investigator Evelina Fedorenko, conclude that in at least one important way, reasoning models have a human-like approach to thinking. That, they note, is not by design. “People who build these models don’t care if they do it like humans. They just want a system that will robustly perform under all sorts of conditions and produce correct responses,” Fedorenko says.

“The fact that there’s some convergence is really quite striking.” — Evelina Fedorenko

Reasoning models

Like many forms of artificial intelligence, the new reasoning models are artificial neural networks: computational tools that learn how to process information when they are given data and a problem to solve. Artificial neural networks have been very successful at many of the tasks that the brain’s own neural networks do well—and in some cases, neuroscientists have discovered that those that perform best do share certain aspects of information processing in the brain. Still, some scientists argued that artificial intelligence was not ready to take on more sophisticated aspects of human intelligence.

“Up until recently, I was among the people saying, ‘These models are really good at things like perception and language, but it’s still going to be a long ways off until we have neural network models that can do reasoning,’” says Fedorenko, who is also an associate professor of brain and cognitive sciences at MIT. “Then these large reasoning models emerged and they seem to do much better at a lot of these thinking tasks, like solving math problems and writing pieces of computer code.”

Computational neuroscientist Andrea Gregor de Varda is a K. Lisa Yang ICoN Center Fellow and a postdoctoral researcher in Evelina Fedorenko’s lab. Photo: Steph Stevens

Andrea Gregor de Varda, a K. Lisa Yang ICoN Center Fellow and a postdoctoral researcher in Fedorenko’s lab, explains that reasoning models work out problems step by step. “At some point, people realized that models needed to have more space to perform the actual computations that are needed to solve complex problems,” he says. “The performance started becoming way, way stronger if you let the models break down the problems into parts.”

To encourage models to work through complex problems in steps that lead to correct solutions, engineers can use reinforcement learning. During their training, the models are rewarded for correct answers and penalized for wrong ones. “The models explore the problem space themselves,” de Varda says. “The actions that lead to positive rewards are reinforced, so that they produce correct solutions more often.”
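To make the mechanism concrete, here is a minimal sketch in Python of outcome-based reinforcement. It is not the training code for any real reasoning model: the three candidate “strategies” and their success rates are invented for illustration. A toy policy samples a strategy, earns a reward only when the answer comes out correct, and shifts probability toward whatever earned reward.

import math
import random
from collections import defaultdict

STRATEGIES = ["guess", "answer_directly", "step_by_step"]
SUCCESS_RATE = {"guess": 0.1, "answer_directly": 0.4, "step_by_step": 0.9}  # invented

weights = defaultdict(float)  # learned preference per strategy

def sample_strategy():
    # softmax over the current preferences
    zs = [math.exp(weights[s]) for s in STRATEGIES]
    r = random.random() * sum(zs)
    for s, z in zip(STRATEGIES, zs):
        r -= z
        if r <= 0:
            return s
    return STRATEGIES[-1]

LEARNING_RATE = 0.1
BASELINE = 0.5  # centers the update so failures push a strategy's weight down

for _ in range(5000):
    s = sample_strategy()
    reward = 1.0 if random.random() < SUCCESS_RATE[s] else 0.0
    weights[s] += LEARNING_RATE * (reward - BASELINE)

print({s: round(weights[s], 2) for s in STRATEGIES})  # step_by_step should dominate

After many episodes, the stepwise strategy accumulates the highest weight, which mirrors, in miniature, why trained reasoning models learn to break problems into parts.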

Models trained in this way are much more likely than their predecessors to arrive at the same answers a human would when they are given a reasoning task. Their stepwise problem solving does mean reasoning models can take a bit longer to find an answer than the LLMs that came before—but since they’re getting right answers where the previous models would have failed, their responses are worth the wait.

The models’ need to take some time to work through complex problems already hints at a parallel to human thinking: if you demand that a person solve a hard problem instantaneously, they’d probably fail too. De Varda wanted to examine this relationship more systematically. So he gave reasoning models and human volunteers the same set of problems, and tracked not just whether they got the answers right, but also how much time or effort it took them to get there.

Time vs. tokens

This meant measuring how long it took people to respond to each question, down to the millisecond. For the models, de Varda used a different metric. It didn’t make sense to measure processing time, since that depends more on computer hardware than on the effort the model puts into solving a problem. So instead, he tracked tokens, which make up a model’s internal chain of thought. “They produce tokens that are not meant for the user to see and work on, but just to have some track of the internal computation that they’re doing,” de Varda explains.

“It’s as if they were talking to themselves.” — Andrea Gregor de Varda

Both humans and reasoning models were asked to solve seven different types of problems, like numeric arithmetic and intuitive reasoning. For each problem class, they were given many problems. The harder a given problem was, the longer it took people to solve it—and the longer it took people to solve a problem, the more tokens a reasoning model generated as it came to its own solution.

Likewise, the classes of problems that humans took longest to solve were the same classes that required the most tokens from the models: arithmetic problems were the least demanding, whereas problems from the “ARC challenge,” in which pairs of colored grids represent a transformation that must be inferred and then applied to a new object, were the most costly for both people and models.
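The shape of that comparison can be captured in a few lines of Python. The numbers below are invented placeholders, not the study’s measurements: one median human response time and one model token count for each of seven problem types, with a plain Pearson correlation standing in for the paper’s statistics.

# invented per-problem-type costs, ordered from easy to hard
human_seconds = [2.1, 3.4, 5.0, 8.2, 12.7, 20.3, 31.0]
model_tokens = [150, 220, 410, 700, 1300, 2400, 4100]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# a value near 1 means the two "costs of thinking" rise and fall together
print(f"correlation: {pearson(human_seconds, model_tokens):.2f}")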

De Varda and Fedorenko say the striking match in the costs of thinking demonstrates one way in which reasoning models are thinking like humans. That doesn’t mean the models are recreating human intelligence, though. The researchers still want to know whether the models use similar representations of information to the human brain, and how those representations are transformed into solutions to problems. They’re also curious whether the models will be able to handle problems that require world knowledge that is not spelled out in the texts that are used for model training.

The researchers point out that even though reasoning models generate internal monologues as they solve problems, they are not necessarily using language to think. “If you look at the output that these models produce while reasoning, it often contains errors or some nonsensical bits, even if the model ultimately arrives at a correct answer. So the actual internal computations likely take place in an abstract, non-linguistic representation space, similar to how humans don’t use language to think,” de Varda says.

Different bodies, similar strategies to maintain balance

Nidhi Seethapathi is an associate investigator at the McGovern Institute as well as the Frederick A. (1971) and Carole J. Middleton Career Development Assistant Professor in Brain and Cognitive Sciences and Electrical Engineering and Computer Science at MIT.

With every step we take, our brains are already thinking about the next one. If a bump in the terrain or a minor misstep has thrown us off balance, our stride may need to be altered to prevent a fall. Our two-legged posture makes maintaining stability particularly complex, a challenge our brains solve in part by continually monitoring our bodies and adjusting where we place our feet.

Now, scientists at MIT’s McGovern Institute have determined that animals with very different bodies likely use a shared strategy to balance themselves when they walk.

McGovern Associate Investigator Nidhi Seethapathi and K. Lisa Yang ICoN Center Fellow Antoine De Comite found that humans, mice, and fruit flies all use an error-correction process to guide foot placement and maintain stability while walking. Their findings, published October 21, 2025, in the journal PNAS, could inform future studies exploring how the brain achieves stability during locomotion, bridging the gap between animal models and human balance.

Corrective action

The brain must integrate information to keep us upright when we walk or run. Our steps must be continually adjusted according to the terrain, our desired speed, and our body’s current velocity and position in space.

“We rely on a combination of vestibular, proprioceptive, and visual information to build an estimate of our body’s state, determining if we are about to fall. Once we know the body’s state, we can decide which corrective actions to take,” explains Seethapathi.

While humans are known to adjust where they place their feet to correct for errors, it was unclear whether animals with more stable bodies do the same.

Antoine De Comite is a K. Lisa Yang ICoN Postdoctoral Fellow in Nidhi Seethapathi’s lab at the McGovern Institute. Photo: Steph Stevens

To find out, Seethapathi and De Comite, who is a postdoctoral researcher in both Seethapathi’s and Guoping Feng’s labs, turned to locomotion data from mice, fruit flies, and humans shared by other labs, enabling a cross-species analysis that would otherwise be challenging. Importantly, Seethapathi notes, all the animals they studied were walking in everyday natural environments, such as around a room—not on a treadmill or over unusual terrain.

Even in these ordinary circumstances, missteps and minor imbalances are common, and the team’s analysis showed that these errors predicted where all of the animals placed their feet in subsequent steps, regardless of whether they had two, four, or six legs.

By tracking the animals’ bodies and the step-by-step placement of their feet, Seethapathi and De Comite were able to find a measure of error that informs each animal’s next step. “By taking this comparative approach, we’ve forced ourselves to come up with a definition of error that generalizes across species,” Seethapathi says. “An animal moves with an expected body state for a particular speed. If it deviates from that ideal state, that deviation—at any given moment—is the error.”

“It was surprising to find similarities across these three species, which, at first sight, look very different,” says De Comite.

“The methods themselves are surprising because we now have a pipeline to analyze foot placement and locomotion stability in any legged species,” explains De Comite, “which could lead to similar analyses in even more species in the future.”

The team’s data suggest that in all of the species in the study, foot placement is guided by both an error-correction process and the speed at which an animal is traveling. Steps tend to lengthen and feet spend less time on the ground as animals pick up their pace, while the width of each step seems to change largely to compensate for body-state errors.
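A rough sketch of that kind of analysis, using simulated rather than real locomotion data: the code below invents a walker whose next step width partially corrects the current body-state error, defined as the deviation from the expected state at the current speed, and then recovers that correction gain by ordinary least squares. Every number, including the true gain of 0.8, is made up for illustration.

import numpy as np

rng = np.random.default_rng(0)
n_steps = 500

speed = rng.uniform(0.2, 1.0, n_steps)        # per-step walking speed
expected_state = 0.5 * speed                  # hypothetical speed-conditioned expectation
body_state = expected_state + rng.normal(0.0, 0.05, n_steps)
error = body_state - expected_state           # deviation from the expected state

# simulated walker: step width partially corrects the error (true gain 0.8)
step_width = 0.1 + 0.8 * error + rng.normal(0.0, 0.01, n_steps)

# recover the error-correction gain with ordinary least squares
X = np.column_stack([np.ones(n_steps), error])
coef, *_ = np.linalg.lstsq(X, step_width, rcond=None)
print(f"estimated error-correction gain: {coef[1]:.2f}")  # close to 0.8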

Now, Seethapathi says, we can look forward to future studies exploring how these dual control systems might be generated and integrated in the brain to keep moving bodies stable.

Studying how brains help animals move stably may also guide the development of more targeted strategies to help people improve their balance and, ultimately, prevent falls.

“In elderly individuals and individuals with sensorimotor disorders, minimizing fall risk is one of the major functional targets of rehabilitation,” says Seethapathi. “A fundamental understanding of the error-correction process that helps us remain stable will provide insight into why this process falls short in populations with neural deficits.”