How the brain handles the “cocktail party problem”

MIT neuroscientists have figured out how the brain is able to focus on a single voice among a cacophony of many voices, shedding light on a longstanding question in neuroscience known as the cocktail party problem.

This attentional focus becomes necessary when you’re in any crowded environment, such as a cocktail party, with many conversations going on at once. Somehow, your brain is able to follow the voice of the person you’re talking to, despite all the other voices that you’re hearing in the background.

Using a computational model of the auditory system, the MIT team found that amplifying the activity of the neural processing units that respond to features of a target voice, such as its pitch, allows that voice to be boosted to the forefront of attention.

“That simple motif is enough to cause much of the phenotype of human auditory attention to emerge, and the model ends up reproducing a very wide range of human attentional behaviors for sound,” says Josh McDermott, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study.

The findings are consistent with previous studies showing that when people or animals focus on a specific auditory input, neurons in the auditory cortex that respond to features of the target stimulus amplify their activity. This is the first study to show that this extra boost is enough to explain how the brain solves the cocktail party problem.

Ian Griffith, a graduate student in the Harvard Program in Speech and Hearing Biosciences and Technology, who is advised by McDermott, is the lead author of the paper. MIT graduate student R. Preston Hess is also an author of the paper, which appears today in Nature Human Behaviour.

Modeling attention

Neuroscientists have been studying the phenomenon of selective attention for decades. Many studies in people and animals have shown that when focusing on a particular stimulus like the sound of someone’s voice, neurons that are tuned to features of that voice — for example, high pitch — amplify their activity.

When this amplification occurs, neurons’ firing rates are scaled upward, as though multiplied by a number greater than one. It has been proposed that these “multiplicative gains” allow the brain to focus its attention on certain stimuli. Neurons that aren’t tuned to the target feature exhibit a corresponding reduction in activity.

“The responses of neurons tuned to features that are in the target of attention get scaled up,” Griffith says. “Those effects have been known for a very long time, but what’s been unclear is whether that effect is sufficient to explain what happens when you’re trying to pay attention to a voice or selectively attend to one object.”

This question has remained unanswered because computational models of perception haven’t been able to perform attentional tasks such as picking one voice out of many. Such models can readily perform auditory tasks when there is an unambiguous target sound to identify, but they haven’t been able to perform those tasks when other stimuli are competing for their attention.

“None of our models has had the ability that humans have, to be cued to a particular object or a particular sound and then to base their response on that object or that sound. That’s been a real limitation,” McDermott says.

In this study, the MIT team wanted to see if they could train models to perform those types of tasks by enabling the model to produce neuronal activity boosts like those seen in the human brain.

To do that, they began with a neural network that they and other researchers have used to model audition, and then modified the model to allow each of its stages to implement multiplicative gains. Under this architecture, the activation of processing units within the model can be boosted up or down depending on the specific features they represent, such as pitch.

To train the model, on each trial the researchers first fed it a “cue”: an audio clip of the voice that they wanted the model to pay attention to. The unit activations produced by the cue then determined the multiplicative gains that were applied when the model heard a subsequent stimulus.

“Imagine the cue is an excerpt of a voice that has a low pitch. Then, the units in the model that represent low pitch would get multiplied by a large gain, whereas the units that represent high pitch would get attenuated,” Griffith says.

Then, the model was given clips featuring a mix of voices, including the target voice, and asked to identify the second word said by the target voice. The model's activations in response to this mixture were multiplied by the gains derived from the cue stimulus. This was expected to cause the target voice to be "amplified" within the model, but it was not clear whether this effect would be enough to yield human-like attentional behavior.
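The cue-then-gain procedure described above can be sketched in a few lines. Everything in this toy example is invented for illustration: the gain mapping (`feature_gains`), the number of units, and the activation values are placeholders, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_gains(cue_activations, strength=1.0):
    """Map cue-driven unit activations to multiplicative gains.

    Units that responded strongly to the cue get gains > 1; weakly
    responding units get gains < 1 (attenuation). This particular
    mapping is a hypothetical illustration, not the trained function
    from the study.
    """
    centered = cue_activations - cue_activations.mean()
    return np.exp(strength * centered / (cue_activations.std() + 1e-8))

# Toy "pitch-tuned" units: activations of 8 units to a low-pitch cue,
# ordered from low-pitch-tuned to high-pitch-tuned.
cue = np.array([0.9, 0.8, 0.6, 0.3, 0.2, 0.1, 0.1, 0.05])
gains = feature_gains(cue)

# Activations of the same units to a subsequent mixture of voices.
mixture = rng.uniform(0.2, 1.0, size=8)

# Attention = elementwise multiplicative gain applied to the mixture response.
attended = gains * mixture

# Units matching the cue (low pitch) are boosted; the rest are attenuated.
assert gains[0] > 1.0 and gains[-1] < 1.0
```

The key design point, mirroring the paper's setup, is that the gains are computed once from the cue and then reused when the mixture arrives, so the same circuitry selects whichever voice was cued.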

The researchers found that under a variety of conditions, the model performed very similarly to humans, and it tended to make errors similar to those that humans make. For example, like humans, it sometimes made mistakes when trying to focus on one of two male voices or one of two female voices, which are more likely to have similar pitches.

“We did experiments measuring how well people can select voices across a pretty wide range of conditions, and the model reproduces the pattern of behavior pretty well,” Griffith says.

Effects of location

Previous research has shown that in addition to pitch, spatial location is a key factor that helps people focus on a particular voice or sound. The MIT team found that the model also learned to use spatial location for attentional selection, performing better when the target voice was at a different location from distractor voices.

The researchers then used the model to discover new properties of human spatial attention. Because the model can be run at scale, they were able to test every possible combination of target and distractor locations, an undertaking that would be hugely time-consuming with human subjects.

“You can use the model as a way to screen large numbers of conditions to look for interesting patterns, and then once you find something interesting, you can go and do the experiment in humans,” McDermott says.
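The kind of condition screening McDermott describes amounts to an exhaustive sweep over location pairs. The sketch below is purely schematic; `evaluate_model` is a hypothetical stub standing in for running the trained attention model on spatialized audio, and the angle grids are invented.

```python
import itertools

# Candidate sound-source locations on a coarse grid (degrees).
azimuths = range(-90, 91, 30)      # 7 horizontal angles
elevations = range(-30, 61, 30)    # 4 vertical angles
locations = list(itertools.product(azimuths, elevations))

def evaluate_model(target_loc, distractor_loc):
    """Stub: would return word-recognition accuracy for this condition."""
    return float(target_loc != distractor_loc)  # placeholder value

# Exhaustively pair every target location with every distractor location.
results = {
    (t, d): evaluate_model(t, d)
    for t, d in itertools.product(locations, locations)
}

# 28 locations yield 784 pairs -- trivial for a model, but prohibitive
# as a human psychophysics design.
assert len(results) == len(locations) ** 2
```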

These experiments revealed that the model was much better at correctly selecting the target voice when the target and distractor were at different locations in the horizontal plane. When the sounds were instead separated in the vertical plane, this task became much more difficult. When the researchers ran a similar experiment with human subjects, they observed the same result.

“That was just one example where we were able to use the model as an engine for discovery, which I think is an exciting application for this kind of model,” McDermott says.

Another application the researchers are pursuing is using this kind of model to simulate listening through a cochlear implant. They hope these studies will lead to improved implant designs that help users focus their attention more successfully in noisy environments.

The research was funded by the National Institutes of Health.


Liqun Luo named winner of the 2026 Scolnick Prize in Neuroscience

Today, Stanford University neuroscientist Liqun Luo was announced as the recipient of the 2026 Edward M. Scolnick Prize in Neuroscience by the McGovern Institute for Brain Research at MIT. Luo is the Ann and Bill Swindells Professor in the School of Humanities and Sciences, Professor of Biology, and Professor of Neurobiology by courtesy at Stanford University, and a Howard Hughes Medical Institute Investigator. The McGovern Institute presents the Scolnick Prize annually to recognize outstanding achievements in neuroscience.

“Liqun Luo’s development of first-in-kind genetic tools and detailed, innovative experimentation has succeeded in defining rules that govern how transient cell-cell contacts ultimately establish functional neural circuits in the developing brain,” says McGovern Institute Director Robert Desimone, who is also chair of the selection committee. “Luo’s methodologies for visualizing specific subsets of neurons based on their developmental trajectory or their activity are widely used in the field and have driven the identification of neurons responsible for a range of behaviors, including sleep and social interactions.”

Liqun Luo was born in Shanghai, China and attained his bachelor’s degree in molecular biology from the University of Science and Technology of China in 1986. He moved to the US for graduate studies at Brandeis University with Kalpana White, where he characterized the homolog of the Alzheimer’s amyloid precursor protein in the fruit fly Drosophila. After receiving a PhD in 1992, he moved to the University of California, San Francisco for postdoctoral training with Lily Jan and Yuh-Nung Jan where he published a number of papers about how small GTPase proteins regulate cellular morphology. Luo descends from a line of mentors trained by his scientific hero Seymour Benzer, who is widely known for founding the field of neurogenetics.

In 1996, Luo joined the faculty at Stanford University and established his own research group to focus on the molecular mechanisms of neuronal morphogenesis in the brain. Luo’s laboratory developed groundbreaking techniques—including Mosaic Analysis with a Repressible Cell Marker (MARCM) in fruit flies and Mosaic Analysis with Double Markers (MADM) in mice—that allowed the labeling and genetic manipulation of individual neurons within otherwise normal brains. These innovations gave researchers the ability to image genetically defined and altered neurons as they grow, connect, and change over time. Luo and his colleagues used these tools to reveal how neurons sculpt their branching structures, prune away unnecessary connections, and find the precise partners they need to form functional circuits. His work illuminated the molecular choreography that ensures each neuron wires into the correct network—an essential step in building circuits for sensation, movement, memory, and emotion. Another impactful innovation from Luo’s group, known as TRAP (Targeted Recombination in Active Populations), allows for the genetic tagging of neurons that are active during specific experiences. This technique has helped reveal how neural populations encode thirst, motivation, and long-term memories.

Most recently, Luo and his group have wholly defined the molecular codes that neurons use to recognize their correct partners in the olfactory system of fruit flies. His research demonstrated that a combinatorial pattern of cell-surface proteins precisely guides neurons to connect to one another and form a functional network. His team then succeeded in genetically altering the molecular cues that govern synaptic connections to rewire a neural circuit and produce a predicted change in the fly’s mating behavior.

Colleagues emphasize that Luo’s influence extends far beyond his own discoveries. Many of the molecular principles he has uncovered in simple model organisms have since proven to be conserved across species, underscoring their fundamental importance. His genetic tracing methods have been adopted by laboratories worldwide and applied not only in neuroscience but also in fields such as cancer biology, where tracing cell lineage is critical. He has also trained a generation of neuroscientists who have gone on to lead major research programs of their own, amplifying his impact across the field.

Luo has received numerous honors, including election to the National Academy of Sciences, the NAS Award in the Neurosciences, the Pradel Research Award, and the Society for Neuroscience’s Award for Education in Neuroscience. He has been a Howard Hughes Medical Institute Investigator since 2005. He is also the author of Principles of Neurobiology, a widely used textbook that has been translated into Chinese, Japanese, and Italian.

The Scolnick Prize recognizes discoveries that advance the understanding of the brain and its disorders. Luo’s work exemplifies this mission, providing tools and conceptual frameworks for understanding how neural circuits form and are refined to become functional, and how mutations disrupt these processes. As neuroscience enters an era defined by increasingly precise control over brain circuits, Liqun Luo’s contributions stand as both enabling and visionary.

The McGovern Institute will award the Scolnick Prize to Luo on June 16, 2026. At 4:00 pm he will deliver a lecture titled “Wiring Specificity of Neural Circuits” to be followed by a reception at the McGovern Institute, 43 Vassar Street (building 46, room 3002) in Cambridge. The event is free and open to the public.

Neurons receive precisely tailored teaching signals as we learn

McGovern Investigator Mark Harnett. Photo: Adam Glanzman

When we learn a new skill, the brain has to decide—cell by cell—what to change. New research from MIT suggests it can do that with surprising precision, sending targeted feedback to individual neurons so each one can adjust its activity in the right direction.

The finding echoes a key idea from modern artificial intelligence. Many AI systems learn by comparing their output to a target, computing an “error” signal, and using it to fine-tune connections within the network. A longstanding question has been whether the brain also uses that kind of individualized feedback. In a study published in the February 25 issue of the journal Nature, MIT researchers report evidence that it does.

A research team led by Mark Harnett, a McGovern Institute investigator and associate professor in the Department of Brain and Cognitive Sciences at MIT, discovered these instructive signals in mice by training animals to control the activity of specific neurons using a brain-computer interface (BCI). Their approach, the researchers say, can be used to further study the relationships between artificial neural networks and real brains, in ways that are expected to both improve understanding of biological learning and enable better brain-inspired artificial intelligence.

The changing brain

Our brains are constantly changing as we interact with the world, modifying their circuitry as we learn and adapt. “We know a lot from 50 years of studies that there are many ways to change the strength of connections between neurons,” Harnett says. “What the field really lacks is a way of understanding how those changes are orchestrated to actually produce efficient learning.”

Some actions—and the neural connections that enable them—are reinforced with the release of neuromodulators like dopamine or norepinephrine in the brain. But those signals are broadcast to large groups of neurons, without discriminating between cells’ individual contributions to a failure or a success. “Reinforcement learning via neuromodulators works, but it’s inefficient, because all the neurons and all the synapses basically get only one signal,” Harnett says.

Machine learning uses an alternative, and extremely powerful, way to learn from mistakes. Using a method called backpropagation, artificial neural networks compute an error signal and use it to adjust their individual connections. They do this over and over, learning from experience how to fine-tune their networks for success. “It works really well and it’s computationally very effective,” Harnett says.
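The difference between a broadcast reinforcement signal and backpropagation's tailored updates can be made concrete with a toy one-layer network. This is a generic textbook illustration, not the study's model; the learning rates and the deliberately crude scalar-reward update rule are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=(3, 2))          # weights of a tiny linear network
x = np.array([1.0, -0.5, 0.25])      # input
target = np.array([0.5, -1.0])       # desired output

y = x @ w                            # network output
error = y - target                   # vector error, one entry per output unit

# Backpropagation: each weight gets its own signed, error-driven update.
grad = np.outer(x, error)            # dL/dw for L = 0.5 * ||y - target||^2
w_backprop = w - 0.1 * grad

# Scalar reinforcement: every weight shares one broadcast performance signal.
reward = -np.sum(error ** 2)         # a single number summarizing success
w_reinforce = w + 0.1 * reward * np.sign(w)  # undifferentiated nudge

# The backprop update differs per weight; the scalar update cannot
# distinguish which weights caused the error.
assert grad.shape == w.shape
```

This is the sense in which backpropagation is "vectorized": the error signal has one component per unit, whereas the neuromodulator-style signal is the same scalar everywhere.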

It seemed likely that brains might use similar error signals for learning. But neuroscientists were skeptical that brains would have the precision to send tailored signals to individual neurons due to the constraints imposed by using living cells and circuits instead of software and equations. A major problem for testing this idea was how to find the signals that provide personalized instructions to neurons, which are called vectorized instructive signals. The challenge, explains Valerio Francioni, first author of the Nature paper and a former postdoctoral researcher in Harnett’s lab, is that scientists don’t know how individual neurons contribute to specific behaviors.

“If I were recording your brain activity while you were learning to play piano,” Francioni explains, “I would learn that there is a correlation between the changes happening in your brain and you learning piano. But if you asked me to make you a better piano player by manipulating your brain activity, I would not be able to do that, because we don’t know how the activity of individual neurons maps to that ultimate performance.”

Without knowing which neurons need to become more active and which ones should be reined in, it is impossible to look for signals directing those changes.

Brain-computer interface

To get around this problem, Harnett’s team developed a brain-computer interface task to directly link neural activity and reward outcome – akin to linking the keys of the piano directly to the activity of single neurons. To succeed at the task, certain neurons needed to increase their activity, whereas others were required to decrease their activity.

They set up a BCI to directly link activity in those neurons—just eight to ten of the millions of neurons in a mouse’s brain—to a visual readout, providing sensory feedback to the mice about their performance. Success was accompanied by delivery of a sugary reward.

“Now if you ask me, ‘How does the mouse get more rewards? Which neuron do you have to activate and which neuron do you have to inhibit?’ I know exactly what the answer to that question is,” says Francioni, whose work was supported by a Y. Eva Tan Fellowship from the Yang Tan Collective at MIT.

The scientists didn’t know the exact function of the particular neurons they linked to the BCI, but the cells were active enough that mice received occasional rewards whenever the signals happened to be right. Within a week, mice learned to switch on the right neurons while leaving the other set of neurons inactive, earning themselves more rewards.

Francioni monitored the target neurons daily during this learning process using a powerful microscope to visualize fluorescent indicators of neural activity. He zeroed in on the neurons’ branching dendrites, where the appropriate feedback signals have long been suspected to arrive. At the same time, he tracked activity in the parent cell bodies of those neurons. The team used these data to examine the relationship between signals received at a neuron’s dendrites and its activity, as well as how these changed when mice were rewarded for activating the right neurons or when they failed at their task.

Vectorized neural signals

They concluded that the two groups of neurons whose activity controlled the BCI in opposite ways also received opposing error signals at their dendrites as the mice learned. Some were told to ramp up their activity during the task, while others were instructed to dial it down. What’s more, when the team manipulated the dendrites to inhibit these instructive signals, mice failed to learn the task. “This is the first biological evidence that vectorized [neuron-specific] signal-based instructive learning is taking place in the cortex,” Harnett says.

The discovery of vectorized signals in the brain—and the team’s ability to find them—should promote more back and forth between neuroscientists and machine learning researchers, says postdoctoral researcher Vincent Tang. “It provides further incentive for the machine learning community to keep developing models and proposing new hypotheses along this direction,” he says. “Then we can come back and test them.”

The researchers say they are just as excited about applying their approach to future experiments as they are about their current discovery.

“Machine learning offers a robust, mathematically tractable way to really study learning. The fact that we can now translate at least some of this directly into the brain is very powerful,” Francioni says.

Harnett says the approach opens new opportunities to investigate possible parallels between the brain and machine learning. “Now we can go after figuring out, how does cortex learn? How do other brain regions learn? How similar or how different is it to this particular algorithm? Can we figure out how to build better, more brain-inspired models from what we learn from the biology?” he says. “This feels like a really big new beginning.”

Feng Zhang inducted into the National Inventors Hall of Fame

Fifteen innovation pioneers, including McGovern Investigator Feng Zhang, have been inducted into the 2026 class of the National Inventors Hall of Fame. Zhang is being recognized for his innovations in gene editing and for sharing his resources and expertise broadly with the global scientific community.

In addition to his appointment at the McGovern Institute, Zhang is the James and Patricia Poitras Professor of Neuroscience at MIT and has joint appointments in the departments of Brain and Cognitive Sciences and Biological Engineering. He is also an investigator in the Howard Hughes Medical Institute and co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT.

“The National Inventors Hall of Fame is committed to illuminating the legacies of world-changing inventors and creating opportunities for the next generation to learn from these innovative role models,” said Monica Jones, Chief Executive Officer of the National Inventors Hall of Fame. “The inventors in our 2026 class have made contributions in fields as varied as semiconductor technology and portable inhalers. Induction into the Hall of Fame honors the significance of these advances, which have enhanced our daily lives and well-being.”

Zhang has invented transformative technologies to improve human health, including first demonstrating the use of engineered CRISPR-Cas9 systems for genome editing in human cells. He has co-founded several companies to commercialize these technologies. Through the nonprofit repository Addgene, by 2023 over 75,000 samples of Zhang’s reagents had been shared with researchers in more than 79 countries. He also has trained scientists from around the world in online research forums, in his workshops and in his lab.

“My mother would always emphasize that I should choose to do something useful for the world; to live a life that is meaningful and is adding something to the world, rather than just consuming from the world,” Zhang says. “That has been one of the strongest guiding factors for me.”

In partnership with the United States Patent and Trademark Office (USPTO), the Hall of Fame will honor Zhang and the other 2026 inductees on May 7 at an event in Washington DC.

Language processing beyond the neocortex

The cerebellum, highlighted in red. Image: Anatomography maintained by Life Science Databases (LSDB).

The ability to use language to communicate is one of the things that makes us human. At MIT’s McGovern Institute, scientists led by Evelina Fedorenko have defined an entire network of areas within the brain dedicated to this ability, which work together when we speak, listen, read, write, or sign.

Much of the language network lies within the brain’s neocortex, where many of our most sophisticated cognitive functions are carried out. Now, Fedorenko’s lab, which is part of MIT’s Department of Brain and Cognitive Sciences, has identified language-processing regions within the cerebellum, extending the language network to a part of the brain better known for helping to coordinate the body’s movements. Their findings are reported January 21, 2026, in the journal Neuron.

“It’s like there’s this region in the cerebellum that we’ve been forgetting about for a long time,” says Colton Casto, a graduate student at Harvard and MIT who works in Fedorenko’s lab. “If you’re a language researcher, you should be paying attention to the cerebellum.”

Imaging the language network

There have been hints that the cerebellum makes important contributions to language. Some functional imaging studies detected activity in this area during language use, and people who suffer damage to the cerebellum sometimes experience language impairments. But no one had been able to pin down exactly which parts of the cerebellum were involved or tease out their roles in language processing.

To get some answers, Fedorenko’s lab took a systematic approach, using methods they have used to map the language network in the neocortex. For 15 years, the lab has captured functional brain imaging data as volunteers carried out various tasks inside an MRI scanner. By monitoring brain activity as people engaged in different kinds of language tasks, like reading sentences or listening to spoken words, as well as non-linguistic tasks, like listening to noise or memorizing spatial patterns, the team has been able to identify parts of the brain that are exclusively dedicated to language processing.

Their work shows that everyone’s language network uses the same neocortical regions. The precise anatomical location of these regions varies, however, so to study the language network in any individual, Fedorenko and her team must map that person’s network inside an MRI scanner using their language-localizer tasks.
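The logic of a functional localizer of this kind reduces to a per-voxel contrast between language and control conditions. The sketch below illustrates that logic with entirely synthetic data and an arbitrary threshold; it is not the Fedorenko lab's actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
n_runs, n_voxels = 10, 1000

# Simulated per-run responses: most voxels respond equally to sentences
# and to a control condition (e.g. a nonword list); a small set responds
# more strongly to sentences. All numbers here are synthetic.
control = rng.normal(0.0, 1.0, size=(n_runs, n_voxels))
sentences = control + rng.normal(0.0, 0.3, size=(n_runs, n_voxels))
sentences[:, :50] += 1.5            # 50 "language-selective" voxels

# Paired t-statistic per voxel for the sentences > control contrast.
diff = sentences - control
t = diff.mean(axis=0) / (diff.std(axis=0, ddof=1) / np.sqrt(n_runs))

# Threshold chosen arbitrarily for the sketch.
language_voxels = np.flatnonzero(t > 3.0)
assert set(range(50)) <= set(language_voxels)
```

Because the surviving voxels differ from person to person, the contrast has to be computed anew for each individual, which is why each participant's network must be mapped separately.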

Satellite language network

While the Fedorenko lab has largely focused on how the neocortex contributes to language processing, their brain scans also capture activity in the cerebellum. So Casto revisited those scans, analyzing cerebellar activity from more than 800 people to look for regions involved in language processing. Fedorenko points out that teasing out the individual anatomy of the language network turned out to be particularly vital in the cerebellum, where neurons are densely packed and areas with different functional specializations sit very close to one another. Ultimately, Casto was able to identify four cerebellar areas that were consistently engaged during language use.

Three of these regions were clearly involved in language use, but also reliably became engaged during certain kinds of non-linguistic tasks. Casto says this was a surprise, because all the core language areas in the neocortex are dedicated exclusively to language processing. The researchers speculate that the cerebellum may be integrating information from different parts of the cortex—a function that could be important for many cognitive tasks.

“We’ve found that language is distinct from many, many other things—but at some point, complex cognition requires everything to work together,” Fedorenko says. “How do these different kinds of information get connected? Maybe parts of the cerebellum serve that function.”

The researchers also found a spot in the right posterior cerebellum with activity patterns that more closely echoed those of the language network in the neocortex. This region stayed silent during non-linguistic tasks, but became active during language use. For all of the linguistic activities that Casto analyzed, this region exhibited patterns of activity that were very similar to what the lab has seen in neocortical components of the language network. “Its contribution to language seems pretty similar,” Casto says. The team describes this area as a “cerebellar satellite” of the language network.

Still, the researchers think it’s unlikely that neurons in the cerebellum, which are organized very differently than those in the neocortex, replicate the precise function of other parts of the language network. Fedorenko’s team plans to explore the function of this satellite region more deeply, investigating whether it may participate in different kinds of tasks.

The researchers are also exploring the possibility that the cerebellum is particularly important for language learning—playing an outsized role during development or when people learn languages later in life.

Fedorenko says the discovery may also have implications for treating language impairments caused when an injury or disease damages the brain’s neocortical language network. “This area may provide a very interesting potential target to help recovery from aphasia,” Fedorenko says. Currently, researchers are exploring the possibility that non-invasively stimulating language-associated parts of the brain might promote language recovery. “This right cerebellar region may be just the right thing to potentially stimulate to up-regulate some of that function that’s lost,” Fedorenko says.

Unpacking social intelligence

Experience is a powerful teacher—and not every experience has to be our own to help us understand the world. What happens to others is instructive, too. That’s true for humans as well as for other social animals. New research from scientists at the McGovern Institute shows what happens in the brains of monkeys as they integrate their observations of others with knowledge gleaned from their own experience.

“The study shows how you use observation to update your assumptions about the world,” explains McGovern Institute Investigator Mehrdad Jazayeri, who led the research. His team’s findings, published in the January 7 issue of the journal Nature, also help explain why we tend to weigh information gleaned from observation and direct experience differently when we make decisions. Jazayeri is also a professor of brain and cognitive sciences at MIT and an investigator at the Howard Hughes Medical Institute.

“As humans, we do a large part of our learning through observing other people’s experiences and what they go through and what decisions they make,” says Setayesh Radkani, a graduate student in Jazayeri’s lab. For example, she says, if you get sick after eating out, you might wonder if the food at the restaurant was to blame. As you consider whether it’s safe to return, you’ll likely take into account whether the friends you’d dined with got sick too. Your experiences as well as those of your friends will inform your understanding of what happened.

The research team wanted to know how this works: When we make decisions that draw on both direct experience and observation, how does the brain combine the two kinds of evidence? Are the two kinds of information handled differently?

Social experiment

It is hard to tease out the factors that influence social learning. “When you’re trying to compare experiential learning versus observational learning, there are a ton of things that can be different,” Radkani says. For example, people may draw different conclusions about someone else’s experiences than their own, because they know less about that person’s motivations and beliefs. Factors like social status, individual differences, and emotional states can further complicate these situations and be hard to control for, even in a lab.

To create a carefully controlled scenario in which they could focus on how observation changes our understanding of the world, Radkani and postdoctoral fellow Michael Yoo devised a computer game that would allow two players to learn from one another through their experiences. They taught this game to both humans and monkeys.

Their approach, Jazayeri says, goes far beyond the kinds of tasks that are typically studied in a neuroscience lab. “I think it might be one of the most sophisticated tasks monkeys have been trained to perform in a lab,” he says.

Both monkeys and humans played the game in pairs. The object was to collect enough tokens to earn a reward. Players could choose to enter either of two virtual arenas to play—but in one of the two arenas, tokens had no value. In that arena, no matter how many tokens a player collected, they could not win. Players were not told which arena was which, and the winnable and unwinnable arenas sometimes swapped without warning.

Only one individual played at a time, but regardless of who was playing, both individuals watched all of the games. So as either player collected tokens and either did or did not receive a reward, both the player and the observer got the same information. They could use that information to decide which arena to choose in their next round.

Experience outweighs observation

Humans and monkeys have sophisticated social intelligence and both clearly took their partners’ experiences into account as they played the game. But the researchers found that the outcomes of a player’s own games had a stronger influence on each individual’s choice of arena than the outcomes of their partner’s games. “They seem to learn less efficiently from observation, suggesting they tend to devalue the observational evidence,” Radkani says. That distinction was reflected in the patterns of neural activity that the team detected in the brains of the monkeys.

Postdoctoral fellow Ruidong Chen and research assistant Neelima Valluru recorded signals from a part of the brain’s frontal lobe called the anterior cingulate cortex (ACC) as the monkeys played the game. The ACC is known to be involved in social processing. It also integrates information gained through multiple experiences, and seems to use this to update an animal’s beliefs about the world. Prior to the Jazayeri lab’s experiments, this integrative function had only been linked to animals’ direct experiences—not their observations of others.

Consistent with earlier studies, neurons in the ACC changed their activity patterns both when the monkeys played the game and when they watched their partner take a turn. But these signals were complex and variable, making it hard to discern the underlying logic. To tackle this challenge, Chen recorded neural activity from large groups of neurons in both animals across dozens of experiments. “We also had to devise new analysis methods to crack the code and tease out the logic of the computation,” Chen says.

One of the researchers’ central questions was how information about self and other makes its way to the ACC. The team reasoned that there were two possibilities: either the ACC receives a single input on each trial specifying who is acting, or it receives separate input streams for self and other. To test these alternatives, they built artificial neural network models organized both ways and analyzed how well each model matched their neural data. The results suggested that the ACC receives two distinct inputs, one reflecting evidence acquired through direct experience and one reflecting evidence acquired through observation.

The team also found a tantalizing clue about why the brain tends to trust firsthand experiences more than observations. Their analysis showed that the integration process in the ACC was biased toward direct experience. As a result, both humans and monkeys cared more about their own experiences than the experiences of their partner.
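A minimal way to picture this bias, purely as an illustration rather than the study's model, is a delta-rule learner that applies a smaller learning rate to observed outcomes than to firsthand ones. The specific rates here are made up.

```python
def update_belief(belief, outcome, is_self, lr_self=0.5, lr_other=0.2):
    """Toy delta-rule update on the belief that arena A is winnable.
    `outcome` is the evidence from the latest round (e.g., 1.0 for a
    reward in arena A). Using lr_other < lr_self models the bias
    toward evidence acquired through direct experience."""
    lr = lr_self if is_self else lr_other
    return belief + lr * (outcome - belief)

# Identical evidence moves the belief more when experienced firsthand.
b_self = update_belief(0.0, 1.0, is_self=True)    # larger update
b_other = update_belief(0.0, 1.0, is_self=False)  # smaller update
```

In this sketch the same outcome shifts the belief by 0.5 when experienced but only by 0.2 when observed, mirroring the asymmetry the team measured behaviorally and in the ACC.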

Jazayeri says the study paves the way to deeper investigations of how the brain drives social behavior. Now that his team has examined one of the most fundamental features of social learning, they plan to add additional nuance to their studies, potentially exploring how different abilities or the social relationships between animals influence learning.

“Under the broad umbrella of social cognition, this is like step zero,” he says. “But it’s a really important step, because it begins to provide a basis for understanding how the brain represents and uses social information in shaping the mind.”

This research was supported in part by the Yang Tan Collective at MIT.

New study suggests a way to rejuvenate the immune system

As people age, their immune system function declines. T cell populations become smaller and can’t react to pathogens as quickly, making people more susceptible to a variety of infections.

To try to overcome that decline, researchers at MIT and the Broad Institute have found a way to temporarily program cells in the liver to improve T-cell function. This reprogramming can compensate for the age-related decline of the thymus, where T cell maturation normally occurs.

Using mRNA to deliver three key factors that usually promote T-cell survival, the researchers were able to rejuvenate the immune systems of mice. Aged mice that received the treatment showed much larger and more diverse T cell populations in response to vaccination, and they also responded better to cancer immunotherapy treatments. Their findings are published in the December 17 issue of the journal Nature.

If developed for use in patients, this type of treatment could help people lead healthier lives as they age, the researchers say.

“If we can restore something essential like the immune system, hopefully we can help people stay free of disease for a longer span of their life,” says Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT, who has joint appointments in the departments of Brain and Cognitive Sciences and Biological Engineering.

Zhang, who is also an investigator at the McGovern Institute for Brain Research at MIT, a core institute member at the Broad Institute of MIT and Harvard, an investigator in the Howard Hughes Medical Institute, and co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT, is the senior author of the new study. Former MIT postdoc Mirco Friedrich is the lead author of the paper.

A temporary factory

The thymus, a small organ located in front of the heart, plays a critical role in T-cell development. Within the thymus, immature T cells go through a checkpoint process that ensures a diverse repertoire of T cells. The thymus also secretes cytokines and growth factors that help T cells to survive.

However, starting in early adulthood, the thymus begins to shrink. This process, known as thymic involution, leads to a decline in the production of new T cells. By the age of approximately 75, the thymus is greatly reduced.

“As we get older, the immune system begins to decline. We wanted to think about how can we maintain this kind of immune protection for a longer period of time, and that’s what led us to think about what we can do to boost immunity,” Friedrich says.

Previous work on rejuvenating the immune system has focused on delivering T cell growth factors into the bloodstream, but that can have harmful side effects. Researchers are also exploring the possibility of using transplanted stem cells to help regrow functional tissue in the thymus.

The MIT team took a different approach: They wanted to see if they could create a temporary “factory” in the body that would generate the T-cell-stimulating signals that are normally produced by the thymus.

“Our approach is more of a synthetic approach,” Zhang says. “We’re engineering the body to mimic thymic factor secretion.”

For their factory location, they settled on the liver, for several reasons. First, the liver has a high capacity for producing proteins, even in old age. Also, it’s easier to deliver mRNA to the liver than to most other organs of the body. The liver was also an appealing target because all of the body’s circulating blood has to flow through it, including T cells.

To create their factory, the researchers identified three immune cues that are important for T-cell maturation. They encoded these three factors into mRNA sequences that could be delivered by lipid nanoparticles. When injected into the bloodstream, these particles accumulate in the liver and the mRNA is taken up by hepatocytes, which begin to manufacture the proteins encoded by the mRNA.

The factors that the researchers delivered are DLL1, FLT-3, and IL-7, which help immature progenitor T cells mature into fully differentiated T cells.

Immune rejuvenation

Tests in mice revealed a variety of beneficial effects. First, the researchers injected the mRNA particles into 18-month-old mice, roughly equivalent to humans in their 50s. Because mRNA is short-lived, they gave the mice multiple injections over four weeks to maintain steady production of the factors by the liver.

After this treatment, T cell populations showed significant increases in size and function.

The researchers then tested whether the treatment could enhance the animals’ response to vaccination. They vaccinated the mice with ovalbumin, a protein found in egg whites that is commonly used to study how the immune system responds to a specific antigen. In 18-month-old mice that received the mRNA treatment before vaccination, the researchers found that the population of cytotoxic T-cells specific to ovalbumin doubled, compared to mice of the same age that did not receive the mRNA treatment.

The mRNA treatment can also boost the immune system’s response to cancer immunotherapy, the researchers found. They delivered the mRNA treatment to 18-month-old mice, which were then implanted with tumors and treated with a checkpoint inhibitor drug. This drug, which targets the protein PD-L1, is designed to help take the brakes off the immune system and stimulate T cells to attack tumor cells.

Mice that received the treatment showed much higher survival rates and longer lifespans than those that received the checkpoint inhibitor drug but not the mRNA treatment.

The researchers found that all three factors were necessary to induce this immune enhancement; none could achieve all aspects of it on their own. They now plan to study the treatment in other animal models and to identify additional signaling factors that may further enhance immune system function. They also hope to study how the treatment affects other immune cells, including B cells.

Other authors of the paper include Julie Pham, Jiakun Tian, Hongyu Chen, Jiahao Huang, Niklas Kehl, Sophia Liu, Blake Lash, Fei Chen, Xiao Wang, and Rhiannon Macrae.

The research was funded, in part, by the Howard Hughes Medical Institute, the K. Lisa Yang Brain-Body Center, part of the Yang Tan Collective at MIT, Broad Institute Programmable Therapeutics Gift Donors, the Pershing Square Foundation, J. and P. Poitras, and an EMBO Postdoctoral Fellowship.

All the connections

Neuroscientists today have the most spectacular views of brains that the field has ever seen. Modern microscopes can reveal extraordinary levels of detail, offering scientists another piece of the vast and intricate puzzle of how neurons interconnect.

A comprehensive wiring diagram of the brain — its connectome — is an atlas for neuroscientists, guiding investigations into how neural circuitry works. Microscope images are the raw data for generating that atlas, but it takes powerful computers and shrewd scientists, like the McGovern Institute’s newest investigator, Sven Dorkenwald, to make sense of it all.

All 139,255 neurons in the brain of an adult fruit fly reconstructed by the FlyWire Consortium, with each neuron uniquely color-coded. Render by Tyler Sloan. Image: Sven Dorkenwald

A monumental task

Many disorders of the human brain are related to breakdowns that affect the connections of neurons with one another. An atlas will help researchers identify and study the function of those connections — down to the level of synapses — and explore what happens when things go wrong. When researchers understand which brain cells interact with one another, they can ask more sophisticated questions about how those cells work together to process information, store memories, or modulate our emotions.

Until recently, generating a complete connectome for any animal was nearly impossible. Electron microscopes capture fine details of cellular structures, down to the slender branches and tiny protrusions that neurons use to reach out and communicate with one another. But to see those features clearly, microscopes have to zoom way in, focusing solely on a thin slice of one small part of the brain at a time.

Isolated images like these don’t reveal much on their own. They are a jumble of bits and pieces of cells — a cross-section removed from the context of its surroundings. Neurons’ paths must be traced through millions of images to reconstruct the brain’s three-dimensional networks and, ultimately, reveal how its individual cells connect with one another. This is a monumental task, because even the poppy-seed-sized brain of a fruit fly contains more than 50 million synapses.

The fly connectome
The 50 largest neurons in the adult fruit fly reconstructed by the FlyWire Consortium, spearheaded by Dorkenwald. Image: Sven Dorkenwald, Tyler Sloan

Remarkably, all of those connections in the fruit fly’s tiny brain are now mapped, thanks in large part to Dorkenwald’s efforts as a PhD student at Princeton University. Together with professors Sebastian Seung and Mala Murthy, Dorkenwald spearheaded FlyWire, a consortium of hundreds of scientists who charted the circuitry, following the fly’s neurons through 21 million microscope images. Neuroscientists around the world now use that connectome, which was completed in 2024, to understand how information flows through the fruit fly brain and shed light on parallel processes in our own brains.

AI tools and teamwork

Portrait of Sven Dorkenwald
McGovern Investigator Sven Dorkenwald. Photo: Steph Stevens

Getting from millions of microscope images to a complete wiring diagram of the fly brain required the development of innovative new tools and an extraordinary level of teamwork. Dorkenwald, who was recently named one of STAT’s 2025 Wunderkinds, an award that celebrates outstanding early-career scientists, was instrumental in both.

Dorkenwald’s first experience mapping neural circuits was as a physics undergraduate at Heidelberg University, tracing neurons in a targeted area of a zebra finch brain. The lab wanted a map to help them understand how birds learn and repeat their courtship songs. Tracing neurons was, at the time, painstaking work. Dorkenwald and his fellow students would manually follow the path of a single cell as it passed across adjacent microscope images, noting each branch point to return to for further mapping.

Today, the process has accelerated greatly, with artificial intelligence (AI) tools taking over most of the work. But those tools make mistakes, and it’s up to humans to find and correct them.

Dorkenwald encountered this obstacle as a graduate student in Seung’s lab at Princeton, where he studied computer science and neuroscience. Before FlyWire, the lab was part of the MICrONS consortium, a collaborative effort with teams at the Allen Institute and Baylor College of Medicine that aimed to map all the connections within a cubic millimeter of the mouse visual cortex. Size alone made this a daunting task: a cubic millimeter of a mouse brain is ten times the size of an entire fly brain. Dorkenwald and colleagues developed the infrastructure the consortium needed to proofread and analyze the shared dataset.

Their system, which they call CAVE (Connectome Annotation Versioning Engine), allowed the team to expand its proofreading community far beyond the three labs that drove the project, involving many neuroscientists who were interested in different parts of the circuitry. “We basically opened up this dataset to anybody who wanted to join,” Dorkenwald says. When they later deployed CAVE to enable community-wide proofreading for the fly connectome, citizen scientists got involved, and paid proofreaders joined the mix to fill in gaps in the map. It has since become an essential tool in the connectomics field.
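The core bookkeeping behind a proofreading edit, recording that two over-segmented fragments belong to the same neuron, can be sketched with a union-find structure. This is only an illustrative toy with made-up fragment IDs; the real CAVE system additionally handles versioning, concurrent multi-user edits, and split operations.

```python
class SegmentMerger:
    """Minimal union-find sketch of one proofreading operation:
    merging over-segmented fragments (supervoxels) into one neuron."""

    def __init__(self):
        self.parent = {}

    def find(self, sv):
        """Return the representative fragment for sv's neuron."""
        self.parent.setdefault(sv, sv)
        while self.parent[sv] != sv:
            self.parent[sv] = self.parent[self.parent[sv]]  # path halving
            sv = self.parent[sv]
        return sv

    def merge(self, a, b):
        """Record a proofreader's decision that a and b are one neuron."""
        self.parent[self.find(a)] = self.find(b)

    def same_neuron(self, a, b):
        return self.find(a) == self.find(b)

m = SegmentMerger()
m.merge(101, 102)  # hypothetical supervoxel IDs
m.merge(102, 103)  # transitively links 101, 102, and 103
```

Because merges compose transitively, millions of small edits from many proofreaders can accumulate into whole reconstructed neurons without anyone coordinating directly.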

The MICrONS consortium ultimately reconstructed more than half a billion synapses in that cubic millimeter of mouse tissue. What’s more, researchers added another level of information to the map, incorporating neuronal activity recorded from the very mouse whose brain had been imaged for the project, enabling new studies that relate a circuit’s structure to its function. These results, published earlier this year, represent another milestone for the field.

An image of an orange neuron emerging from black and white brain slices.
A single neuron reconstructed from thousands of serial section electron microscope images of the mouse visual cortex for the MICrONS consortium. Image: Sven Dorkenwald

Dorkenwald says this newly mapped piece of the mouse connectome is large enough that scientists can begin to see and analyze neural circuits. Still, zeroing in on a cubic millimeter within the mouse’s pea-sized brain means most of what’s visible is parts of cells, which can leave scientists struggling to identify exactly what they’re looking at. Dorkenwald says bits of cells can reveal their identities with their particular shapes and ultrastructural contents, such as vesicles and mitochondria. However, humans can’t necessarily make sense of these subtle features on their own. An AI tool that he developed called SegCLR (segmentation-guided contrastive learning of representations) decodes these clues.

SegCLR is one way Dorkenwald is applying his computational expertise to make sense of connectomes and integrate new kinds of information into the maps — work that he continued as a fellow at the Allen Institute after earning his PhD at Princeton.

“A connectome alone is not enough,” he says. “If you would just look at a connectome of a brain, it would look like white noise at first. You have to put order into the system to understand its parts.”

Searching for meaning

In January 2026, Dorkenwald will join MIT as an assistant professor of brain and cognitive sciences and an investigator at the McGovern Institute. He will be digging into the connectomes he has helped produce, developing new computational approaches to look for organizational principles within the circuitry. “We will be asking hard questions about the circuits we reconstruct,” he says. “The connections that we are seeing contribute to interesting and important computations. What are the circuit motifs that allow them to do that? What’s the architecture of the circuit within layers, across layers, and ultimately, across regions? That is what I want to get at.”
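One of the best-known circuit motifs is the feedforward loop (A projects to B, B projects to C, and A also projects directly to C). As a rough illustration of motif-counting on a wiring diagram, here is a sketch over a toy directed graph with made-up neuron IDs; real connectome analyses do this at vastly larger scale and with statistical controls.

```python
def count_feedforward_loops(edges):
    """Count feedforward loops (a->b, b->c, and a->c over three
    distinct nodes) in a directed wiring diagram given as edge pairs."""
    targets = {}
    for src, dst in edges:
        targets.setdefault(src, set()).add(dst)
    count = 0
    for a in targets:
        for b in targets[a]:
            for c in targets.get(b, ()):
                if c in targets[a] and len({a, b, c}) == 3:
                    count += 1
    return count

toy_wiring = [(1, 2), (2, 3), (1, 3), (3, 4)]  # hypothetical neuron IDs
n_ffl = count_feedforward_loops(toy_wiring)
```

In this toy diagram neurons 1, 2, and 3 form the single feedforward loop; asking how often such motifs occur, and whether more often than chance, is one way of putting "order into the system."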

An infographic comparing the fruit fly brain to the mouse brain.

While there’s plenty of data to work with, he’s also eager to continue scaling up connectomics. He thinks a complete connectome of the mouse brain is achievable within 10 to 15 years — but it’s going to require a lot of collaboration. “The area we’re working in is still very new,” he says. “There’s a lot of room to approach things in new ways and solve problems that are very large, in ways that move an entire field forward.”

As the technology advances, Dorkenwald plans to compare connectomes across individuals to better understand variations in circuitry, including the changes that occur in individuals with neurological or psychiatric disorders.

To help make that possible, he plans to design new AI approaches to automate proofreading, which remains a bottleneck for connectomics: even a community-wide effort would be too slow to manually proofread a map of the entire mouse brain. For this, Dorkenwald will turn to data from past proofreaders, who have already made millions of manual edits to connectomes, and train AI tools to mimic their work.

Dorkenwald says his career in connectomics began with a sense of wonder, back when he was tracing neurons through images of the zebra finch brain. “Every time you asked about what is in there, and nobody knew, there was so much that felt undiscovered,” he remembers. Now, he’s making all the information hidden within those images more accessible: “If we can just extract it, I think we can make sense of it.”

Celebrating worm science

For decades, scientists with big questions about biology have found answers in a tiny worm. That worm, a millimeter-long creature called Caenorhabditis elegans, has helped researchers uncover fundamental features of how cells and organisms work. The impact of that work is enormous: Discoveries made using C. elegans have been recognized with four Nobel prizes and have led to the development of new treatments for human disease.

Portrait of Robert Horvitz at a computer.
McGovern Investigator Robert Horvitz shared the 2002 Nobel Prize in Medicine with colleagues Sydney Brenner and John Sulston for discoveries that helped explain how genes regulate programmed cell death and organ development. Photo: AP Images/Aynsley Floyd

In a perspective piece published in the November 2025 issue of the journal PNAS, eleven biologists including Robert Horvitz, the David H. Koch (1962) Professor of Biology at MIT, celebrate Nobel Prize-winning advances made through research in C. elegans. The authors discuss how that work has led to advances for human health and highlight how a uniquely collaborative community among worm researchers has fueled the field.

MIT scientists are well represented in that community: The prominent worm biologists who coauthored the PNAS paper include former MIT graduate students Andy Fire and Paul Sternberg, now at Stanford University and the California Institute of Technology, and two past postdoctoral researchers in Horvitz’s lab, University of Massachusetts Medical School professor Victor Ambros and Massachusetts General Hospital investigator Gary Ruvkun. Ann Rougvie at the University of Minnesota is the paper’s corresponding author.

Early worm discoveries

“This tiny worm is beautiful—elegant both in its appearance and in its many contributions to our understanding of the biological universe in which we live,” says Horvitz, who in 2002 was awarded the Nobel Prize in Medicine along with colleagues Sydney Brenner and John Sulston for discoveries that helped explain how genes regulate programmed cell death and organ development. Horvitz is also a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research as well as an investigator at the Howard Hughes Medical Institute.

Those discoveries were among the early successes in C. elegans research, made by pioneering scientists who recognized the power of the microscopic roundworm. C. elegans offers many advantages for researchers: The worms are easy to grow and maintain in labs; their transparent bodies make cells and internal processes readily visible under a microscope; they are cellularly very simple (e.g., they have only 302 nerve cells, compared with about 100 billion in a human); and their genomes can be readily manipulated to study gene function.

Microscopic image of C. elegans roundworm with cells highlighted in pink and green.
Caenorhabditis elegans, a transparent roundworm only 1mm in length, has provided answers to many fundamental questions in biology. Image: Robert Horvitz

Most importantly, many of the molecules and processes that operate in C. elegans have been retained throughout evolution, meaning discoveries made using the worm can have direct relevance to other organisms, including humans. “Many aspects of biology are ancient and evolutionarily conserved,” Horvitz explains. “Such shared mechanisms can be most readily revealed by analyzing organisms that are highly tractable in the laboratory.”

In the 1960s, Brenner, a molecular biologist who was curious about how animals’ nervous systems develop and function, recognized that C. elegans offered unique opportunities to study these processes. Once he began developing the worm into a model for laboratory studies, it did not take long for other biologists to join him to take advantage of the new system.

In the 1970s, the unique features of the worm allowed Sulston to track the transformation of a fertilized egg into an adult animal, tracing the origins of each of the adult worm’s 959 cells. His studies revealed that in every developing worm, cells divide and mature in predictable ways. He also learned that some of the cells created during development do not survive into adulthood and are instead eliminated by a process termed programmed cell death.

By seeking mutations that perturbed the process of programmed cell death, Horvitz and his colleagues identified key regulators of that process, which is sometimes referred to as apoptosis. These regulators, which both promote and oppose apoptosis, turned out to be vital for programmed cell death across the animal kingdom.

In humans, apoptosis shapes developing organs, refines brain circuits, and optimizes other tissue structures. It also modulates our immune systems and eliminates cells that are in danger of becoming cancerous. The human version of CED-9, the anti-apoptotic regulator that Horvitz’s team discovered in worms, is BCL-2. Researchers have shown that activating apoptotic cell death by blocking BCL-2 is an effective treatment for certain blood cancers. Today, researchers are also exploring new ways of treating immune disorders and neurodegenerative disease by manipulating apoptosis pathways.

Collaborative worm community

Horvitz and his colleagues’ discoveries about apoptosis helped demonstrate that understanding C. elegans biology has direct relevance to human biology and disease. Since then, a vibrant and closely connected community of worm biologists—including many who trained in Horvitz’s lab—has continued to carry out impactful work. In their PNAS article, Horvitz and his coauthors highlight that early work, as well as the Nobel Prize-winning work of:

  • Andrew Fire and Craig Mello, whose discovery of an RNA-based system of gene silencing led to powerful new tools to manipulate gene activity. The innate process they discovered in worms, known as RNA interference, is now used as the basis of six FDA-approved therapeutics for genetic disorders, silencing faulty genes to stop their harmful effects.
  • Martin Chalfie, who used a fluorescent protein made by jellyfish to visualize and track specific cells in C. elegans, helping launch the development of a set of tools that transformed biologists’ ability to observe molecules and processes that are important for both health and disease.
  • Victor Ambros and Gary Ruvkun, who discovered a class of molecules called microRNAs that regulate gene activity not just in worms, but in all multicellular organisms. This prize-winning work was started when Ambros and Ruvkun were postdoctoral researchers in Horvitz’s lab. Humans rely on more than 1,000 microRNAs to ensure our genes are used at the right times and places. Disruptions to microRNAs have been linked to neurological disorders, cancer, cardiovascular disease, and autoimmune disease, and researchers are now exploring how these small molecules might be used for diagnosis or treatment.

Horvitz and his coauthors stress that while the worm itself made these discoveries possible, so too did a host of resources that facilitate collaboration within the worm community and enable its scientists to build upon the work of others. Scientists who study C. elegans have embraced this open, collaborative spirit since the field’s earliest days, Horvitz says, citing the Worm Breeder’s Gazette, an early newsletter where scientists shared their observations, methods, and ideas.

Today, scientists who study C. elegans—whether the organism is the centerpiece of their lab or they are looking to supplement studies of other systems—contribute to and rely on online resources like WormAtlas and WormBase, as well as the Caenorhabditis Genetics Center, to share data and genetic tools. Horvitz says these resources have been crucial to his own lab’s work; his team uses them every day.

WormAtlas provides users with numerous anatomical resources including tools to view electron microscopy slices of the same cell. Image: WormAtlas.org

Just as molecules and processes discovered in C. elegans have pointed researchers toward important pathways in human cells, the worm has also been a vital proving ground for developing methods and approaches later deployed to study more complex organisms. For example, C. elegans, with its 302 neurons, was the first animal for which neuroscientists successfully mapped all of the connections of the nervous system. The resulting wiring diagram, or connectome, has guided countless experiments exploring how neurons work together to process information and control behavior. Informed by both the power and limitations of the C. elegans connectome, scientists are now mapping more complex circuitry, such as the 139,000-neuron brain of the fruit fly, whose connectome was completed in 2024.

C. elegans remains a mainstay of biological research, including in neuroscience. Scientists worldwide are using the worm to explore new questions about neural circuits, neurodegeneration, development, and disease. Horvitz’s lab continues to turn to C. elegans to investigate the genes that control animal development and behavior. His team is now using the worm to explore how animals develop a sense of time and transmit that information to their offspring.

Also at MIT, Steven Flavell’s team in the Department of Brain and Cognitive Sciences and the Picower Institute for Learning and Memory is using the worm to investigate how neural connectivity, activity, and modulation integrate internal states, such as hunger, with sensory information, such as the smell of food, to produce sometimes long-lasting behaviors. Flavell is Horvitz’s academic grandson: he trained with one of Horvitz’s former postdoctoral trainees. As new technologies accelerate the pace of scientific discovery, Horvitz and his colleagues are confident that the humble worm will bring more unexpected insights.

 

Who discovered neurons?

A self-portrait of Santiago Ramón y Cajal looking through a microscope.
A self-portrait of Santiago Ramón y Cajal looking through a microscope. Image: CC 2.0

On this day, December 10th, nearly 120 years ago, Santiago Ramón y Cajal received a Nobel Prize for capturing and interpreting the very first images of the brain’s most essential components — neurons.

“Many scientists consider Cajal the progenitor of neuroscience because he was the first to really see the brain for what it was: a computational engine made up of individual units,” says Mark Harnett, an investigator at the McGovern Institute and an associate professor in the Department of Brain and Cognitive Sciences. His lab explores how the biophysical features of neurons enable them to perform complex computations that drive thought and behavior.

For Harnett, Cajal is one of the greatest scientific minds to have helped us understand ourselves and our place in the world. Cajal was the first to uncover what neurons look like and propose how they function — equipping the field to solve a slew of the mind’s mysteries. Scientists built on this framework to learn how these remarkable cells relay information — by zapping electrical signals to each other — so we can think, feel, move, communicate, and create.

From art to science and back again

Cajal was born on May 1, 1852, in a small village nestled in the Spanish countryside. It was there Cajal fell deeply and madly in love with … art. But his father was a physician, and urged him to trade his sketches for a scalpel. Begrudgingly, Cajal eventually did. After graduating from medical school in 1873, he worked as an army doctor, but around 1880, he turned his attention to studying the nervous system.

An illustration of a brain cell.
A Purkinje neuron from the human cerebellum. Image: Cajal Institute (CSIC), Madrid

Nineteenth-century scientists didn’t think of the brain as a network of cells but more as plumbing, like the blood vessels in the circulatory system — a series of hollow tubes through which information somehow flowed. Cajal and others were skeptical of this perspective, yet had no way of visualizing the brain at a detailed, cellular level to confirm their suspicions. Scientists at the time stained thin slices of tissue to make cells visible under a microscope, but even the most sophisticated methods stained all cells at once, leaving an indecipherable mass under the microscope’s lens.

This changed in 1887 when Cajal encountered a technique devised by Camillo Golgi that stained only some cells. “Rather than seeing all the cells simultaneously, you saw one at a time,” Harnett explains, making it easier to view a cell’s precise form (Golgi shared the 1906 Nobel Prize with Cajal for this method). If he could refine Golgi’s approach and apply it to neural tissue, Cajal thought, he might finally determine the brain’s architecture.

When he did, a remarkable landscape appeared — black bulbs with sprawling branches, each casting a stringy silhouette. The scene awakened a prior passion. While viewing brain slices under a microscope, Cajal drew what he saw, with surgical precision and an artist’s eye. He had captured — for the first time — the mind’s timberland of cells.

A new theory of the mind

Cajal’s illustrations revealed that brain cells did not form a singular plumbing network, but were distinctly separate, with small gaps between them. “This completely upended what people at the time thought about the brain,” Harnett explains. “It wasn’t made up of connected tubes, but individual cells,” which, a few years later in 1891, would be named neurons. Over nearly five decades, Cajal created around 2,900 drawings — a collage of neurons from humans and a menagerie of fauna: mice, pigeons, lizards, newts, and fish — spanning a host of cell types, from Purkinje cells to basket and chandelier interneurons.

“Part of Cajal’s genius was that he proposed what the incredible anatomical diversity among neurons meant. He reasoned that maybe one part of the cell could work like an antenna to take in signals, and another might be a cable to send signals out. Cajal was already thinking about input and output at neurons, and synapses as points of contact between them,” Harnett notes. “Each neuron becomes a very complex engine for computation, as opposed to tube-based things that can’t really compute.”

Cajal’s notion that the brain was a network of individual cells would come to be known as the neuron doctrine, a bedrock principle that underlies all of neuroscience today. In his autobiography, Cajal describes neurons as “the mysterious butterflies of the soul, the beating of whose wings may someday — who knows? — clarify the secret of mental life.” And in many ways, they have.

One of thousands of neuron illustrations created by Santiago Ramón y Cajal. Image: CC 2.0

One scientist’s enduring influence

Much of scientists’ current approach to studying the brain is guided by Cajal’s blueprint. This is certainly true for the Harnett lab. “As many in the field do, we share Cajal’s aspiration to apply cutting-edge imaging to reveal hidden aspects of the brain and hypothesize about their function,” Harnett says. “Thankfully, unlike Cajal, we now have the advantage of functional tests to try to validate our hypotheses.”

An ultra high resolution image of a neuron taken by the Harnett lab. Image: Mark Harnett

In a study published in 2022, the Harnett lab used a super-resolution imaging tool to find that filopodia — tiny structures that protrude from dendrites (the signal-receiving “antennas” of neurons) — were far more abundant in the brain than previously thought. Through a battery of tests, they found that these “silent synapses” can become active to facilitate new neural connections. Such pliable sites were believed to only be present very early in life, but the researchers observed filopodia in adult mice, suggesting that they support continuous learning and computational flexibility over the lifespan.

Harnett explains that Cajal’s impact extends beyond neuroscience. “Where does the power of artificial intelligence (AI) come from? It comes, originally, from Cajal.” It’s no wonder, he says, that AI uses neural networks — a mimicry of one of nature’s most powerful designs, first described by Cajal. “The idea that neurons are computational units is really critical to the power and complexity you can achieve within a network. Cajal even hypothesized that changing the strength of signaling between neurons was how learning worked, an idea that was later validated and became one of the critical insights for revolutionizing deep learning in AI.”
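The lineage Harnett describes can be made concrete. Below is a minimal sketch, in Python with purely illustrative numbers (not drawn from the article), of an artificial neuron of the kind used in modern neural networks: inputs arrive over weighted connections (Cajal’s “antenna”), are summed, and yield a single output (his “cable”), while a simple Hebbian-style rule strengthens connections that were active when the cell fired — echoing Cajal’s hypothesis that changing signaling strength is how learning works.

```python
# Illustrative sketch only: a single artificial neuron and a Hebbian-style
# weight update. Numbers are arbitrary, chosen for demonstration.

def neuron(inputs, weights, bias):
    """Sum weighted inputs (the 'antenna'), then fire or stay silent."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if total > 0 else 0.0  # the 'cable' carries the output

def hebbian_update(inputs, weights, output, rate=0.1):
    """Strengthen connections whose inputs were active when the cell fired —
    a simple form of 'cells that fire together wire together'."""
    return [w + rate * x * output for x, w in zip(inputs, weights)]

weights = [0.2, -0.4, 0.1]
inputs = [1.0, 0.0, 1.0]
out = neuron(inputs, weights, bias=0.0)         # total = 0.3, so the cell fires
weights = hebbian_update(inputs, weights, out)  # active connections strengthen
```

Deep learning replaces the Hebbian rule with gradient-based weight updates, but the core idea is the same one Harnett credits to Cajal: computation lives in the units, and learning lives in the strength of the connections between them.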

By unveiling what’s really happening beneath our skulls, Cajal’s work would both motivate and guide studies of the brain for over a hundred years to come. “Many of his early hypotheses have proven to be true decades and decades later,” Harnett says. “He has inspired, and continues to inspire, generations of neuroscientists.”