The National Academy of Sciences (NAS) announced today that McGovern Investigator Evelina Fedorenko will receive a 2025 Troland Research Award for her groundbreaking contributions towards understanding the language network in the human brain.
The Troland Research Award is given annually to recognize unusual achievement by early-career researchers within the broad spectrum of experimental psychology.
Fedorenko, who is an associate professor of brain and cognitive sciences at MIT, is interested in how minds and brains create language. Her lab is unpacking the internal architecture of the brain’s language system and exploring the relationship between language and various cognitive, perceptual, and motor systems. Her novel methods combine precise measures of an individual’s brain organization with innovative computational modeling to make fundamental discoveries about the computations that underlie the uniquely human ability for language.
Fedorenko has shown that the language network is selective for language processing over diverse non-linguistic processes that have been argued to share computational demands with language, such as math, music, and social reasoning. Her work has also demonstrated that syntactic processing is not localized to a particular region within the language network, and every brain region that responds to syntactic processing is at least as sensitive to word meanings.
She has also shown that representations from neural network language models, such as ChatGPT, are similar to those in the language areas of the human brain. Fedorenko has also highlighted that although language models can master linguistic rules and patterns, they are less effective at using language in real-world situations. In the human brain, that kind of functional competence is distinct from formal language competence, she says, requiring not just language-processing circuits but also brain areas that store knowledge of the world, reason, and interpret social interactions. Contrary to a prominent view that language is essential for thinking, Fedorenko argues that language is not the medium of thought and is primarily a tool for communication.
Ultimately, Fedorenko’s cutting-edge work is uncovering the computations and representations that fuel language processing in the brain. She will receive the Troland Award this April, during the annual meeting of the NAS in Washington DC.
Nearly 50 years ago, neuroscientists discovered cells within the brain’s hippocampus that store memories of specific locations. These cells also play an important role in storing memories of events, known as episodic memories. While the mechanism of how place cells encode spatial memory has been well-characterized, it has remained a puzzle how they encode episodic memories.
A new model developed by MIT researchers explains how those place cells can be recruited to form episodic memories, even when there’s no spatial component. According to this model, place cells, along with grid cells found in the entorhinal cortex, act as a scaffold that can be used to anchor memories as a linked series.
“This model is a first-draft model of the entorhinal-hippocampal episodic memory circuit. It’s a foundation to build on to understand the nature of episodic memory. That’s the thing I’m really excited about,” says Ila Fiete, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.
The model accurately replicates several features of biological memory systems, including the large storage capacity, gradual degradation of older memories, and the ability of people who compete in memory competitions to store enormous amounts of information in “memory palaces.”
MIT Research Scientist Sarthak Chandra and Sugandha Sharma PhD ’24 are the lead authors of the study, which appears today in Nature. Rishidev Chaudhuri, an assistant professor at the University of California at Davis, is also an author of the paper.
An index of memories
To encode spatial memory, place cells in the hippocampus work closely with grid cells — a special type of neuron that fires at many different locations, arranged geometrically in a regular pattern of repeating triangles. Together, a population of grid cells forms a lattice of triangles representing a physical space.
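The triangular firing lattice described above is often idealized computationally as a sum of three plane waves oriented 60 degrees apart. The sketch below is a standard textbook idealization of a single grid cell's firing field, not the model from any particular study; the spacing and phase parameters are illustrative:

```python
import numpy as np

def grid_rate(x, y, spacing=1.0, phase=(0.0, 0.0)):
    """Idealized grid-cell firing rate at position (x, y): the sum of
    three cosine plane waves 60 degrees apart, which tiles the plane
    with a repeating triangular (hexagonal) pattern of firing peaks."""
    k = 4 * np.pi / (np.sqrt(3) * spacing)  # wave number for the chosen spacing
    total = 0.0
    for theta in (0.0, np.pi / 3, 2 * np.pi / 3):
        kx, ky = k * np.cos(theta), k * np.sin(theta)
        total += np.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    # The raw sum ranges over [-1.5, 3]; rescale to a rate in [0, 1]
    return (total + 1.5) / 4.5

# Firing peaks sit on a triangular lattice; the peak nearest the phase
# offset has the maximum rate of 1.0
```

Shifting `phase` slides the whole lattice, which is how a population of such cells can jointly cover a space.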
In addition to helping us recall places where we’ve been, these hippocampal-entorhinal circuits also help us navigate new locations. From human patients, it’s known that these circuits are also critical for forming episodic memories, which might have a spatial component but mainly consist of events, such as how you celebrated your last birthday or what you had for lunch yesterday.
“The same hippocampal and entorhinal circuits are used not just for spatial memory, but also for general episodic memory,” says Fiete, who is also the director of the K. Lisa Yang ICoN Center at MIT. “The question you can ask is what is the connection between spatial and episodic memory that makes them live in the same circuit?”
Two hypotheses have been proposed to account for this overlap in function. One is that the circuit is specialized to store spatial memories because those types of memories — remembering where food was located or where predators were seen — are important to survival. Under this hypothesis, this circuit encodes episodic memories as a byproduct of spatial memory.
An alternative hypothesis suggests that the circuit is specialized to store episodic memories, but also encodes spatial memory because location is one aspect of many episodic memories.
In this work, Fiete and her colleagues proposed a third option: that the peculiar tiling structure of grid cells and their interactions with hippocampus are equally important for both types of memory — episodic and spatial. To develop their new model, they built on computational models that her lab has been developing over the past decade, which mimic how grid cells encode spatial information.
“We reached the point where I felt like we understood on some level the mechanisms of the grid cell circuit, so it felt like the time to try to understand the interactions between the grid cells and the larger circuit that includes the hippocampus,” Fiete says.
In the new model, the researchers hypothesized that grid cells interacting with hippocampal cells can act as a scaffold for storing either spatial or episodic memory. Each activation pattern within the grid defines a “well,” and these wells are spaced out at regular intervals. The wells don’t store the content of a specific memory, but each one acts as a pointer to a specific memory, which is stored in the synapses between the hippocampus and the sensory cortex.
When the memory is triggered later from fragmentary pieces, grid and hippocampal cell interactions drive the circuit state into the nearest well, and the state at the bottom of the well connects to the appropriate part of the sensory cortex to fill in the details of the memory. The sensory cortex is much larger than the hippocampus and can store vast amounts of memory.
“Conceptually, we can think about the hippocampus as a pointer network. It’s like an index that can be pattern-completed from a partial input, and that index then points toward sensory cortex, where those inputs were experienced in the first place,” Fiete says. “The scaffold doesn’t contain the content, it only contains this index of abstract scaffold states.”
Furthermore, events that occur in sequence can be linked together: Each well in the grid cell-hippocampal network efficiently stores the information that is needed to activate the next well, allowing memories to be recalled in the right order.
Modeling memory cliffs and palaces
The researchers’ new model replicates several memory-related phenomena much more accurately than existing models that are based on Hopfield networks — a type of neural network that can store and recall patterns.
While Hopfield networks offer insight into how memories can be formed by strengthening connections between neurons, they don’t perfectly model how biological memory works. In Hopfield models, every memory is recalled in perfect detail until capacity is reached. At that point, no new memories can form, and worse, attempting to add more memories erases all prior ones. This “memory cliff” doesn’t accurately mimic what happens in the biological brain, which tends to gradually forget the details of older memories while new ones are continually added.
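The memory cliff is easy to reproduce in a toy Hopfield network. In this minimal sketch (the network size, noise level, and pattern counts are arbitrary choices for illustration), recall from a corrupted cue is near-perfect well below the network's theoretical capacity of roughly 0.14 times the number of neurons, and collapses once that capacity is exceeded:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200  # number of binary (+1/-1) neurons

def train(patterns):
    # Hebbian outer-product rule, with self-connections zeroed out
    W = sum(np.outer(p, p) for p in patterns) / N
    np.fill_diagonal(W, 0)
    return W

def recall(W, cue, steps=20):
    # Iterate the network dynamics from the cue toward an attractor
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

def recall_quality(n_patterns):
    """Store n_patterns random memories, then cue the first one
    with 10% of its bits flipped and measure recall accuracy."""
    patterns = [rng.choice([-1.0, 1.0], N) for _ in range(n_patterns)]
    W = train(patterns)
    noisy = patterns[0].copy()
    flipped = rng.choice(N, N // 10, replace=False)
    noisy[flipped] *= -1
    return np.mean(recall(W, noisy) == patterns[0])

# Below capacity (~0.14 * N, about 28 patterns here) recall is essentially
# perfect; far above it, every memory becomes unretrievable at once -- the cliff.
```

This all-or-nothing failure contrasts with the graceful forgetting that the article describes in biological memory.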
The new MIT model captures findings from decades of recordings of grid and hippocampal cells made in rodents as the animals explore and forage in various environments. It also helps to explain the underlying mechanisms of a memorization strategy known as a memory palace. One common task in memory competitions is to memorize the shuffled sequence of cards in one or several decks. Competitors usually do this by assigning each card to a particular spot in a memory palace — a memory of a childhood home or another environment they know well. When they need to recall the cards, they mentally stroll through the house, visualizing each card in its spot as they go along. Counterintuitively, adding the memory burden of associating cards with locations makes recall stronger and more reliable.
The MIT team’s computational model was able to perform such tasks very well, suggesting that memory palaces take advantage of the memory circuit’s own strategy of associating inputs with a scaffold in the hippocampus, but one level down: Long-acquired memories reconstructed in the larger sensory cortex can now be pressed into service as a scaffold for new memories. This allows for the storage and recall of many more items in a sequence than would otherwise be possible.
The researchers now plan to build on their model to explore how episodic memories could become converted to cortical “semantic” memory, or the memory of facts dissociated from the specific context in which they were acquired (for example, Paris is the capital of France), how episodes are defined, and how brain-like memory models could be integrated into modern machine learning.
The research was funded by the U.S. Office of Naval Research, the National Science Foundation under the Robust Intelligence program, the ARO-MURI award, the Simons Foundation, and the K. Lisa Yang ICoN Center.
As we navigate the world, we adapt our movement in response to changes in the environment. From rocky terrain to moving escalators, we seamlessly modify our movements to maximize energy efficiency and reduce our risk of falling. The computational principles underlying this phenomenon, however, are not well understood.
In a recent paper published in the journal Nature Communications, MIT researchers proposed a model that explains how humans continuously adapt yet remain stable during complex tasks like walking.
“Much of our prior theoretical understanding of adaptation has been limited to episodic tasks, such as reaching for an object in a novel environment,” says senior author Nidhi Seethapathi, the Frederick A. (1971) and Carole J. Middleton Career Development Assistant Professor of Brain and Cognitive Sciences at MIT. “This new theoretical model captures adaptation phenomena in continuous long-horizon tasks in multiple locomotor settings.”
Barrett Clark, a robotics software engineer at Bright Minds Inc, and Manoj Srinivasan, an associate professor in the Department of Mechanical and Aerospace Engineering at Ohio State University, are also authors on the paper.
Principles of locomotor adaptation
In episodic tasks, like reaching for an object, errors during one episode do not affect the next episode. In tasks like locomotion, errors can have a cascade of short-term and long-term consequences for stability unless they are controlled. This makes the challenge of adapting locomotion to a new environment more complex.
To build the model, the researchers identified general principles of locomotor adaptation across a variety of task settings, and developed a unified modular and hierarchical model of locomotor adaptation, with each component having its own unique mathematical structure.
The resulting model successfully encapsulates how humans adapt their walking in novel settings such as on a split-belt treadmill with each foot at a different speed, wearing asymmetric leg weights, and wearing an exoskeleton. The authors report that the model successfully reproduced human locomotor adaptation phenomena across novel settings in 10 prior studies and correctly predicted the adaptation behavior observed in two new experiments conducted as part of the study.
The model has potential applications in sensorimotor learning, rehabilitation, and wearable robotics.
“Having a model that can predict how a person will adapt to a new environment has immense utility for engineering better rehabilitation paradigms and wearable robot control,” says Seethapathi, who is also an associate investigator at MIT’s McGovern Institute. “You can think of a wearable robot itself as a new environment for the person to move in, and our model can be used to predict how a person will adapt for different robot settings. Understanding such human-robot adaptation is currently an experimentally intensive process, and our model could help speed up the process by narrowing the search space.”
When soundwaves reach the inner ear, neurons there pick up the vibrations and alert the brain. Encoded in their signals is a wealth of information that enables us to follow conversations, recognize familiar voices, appreciate music, and quickly locate a ringing phone or crying baby.
Neurons send signals by emitting spikes, also known as action potentials—brief changes in voltage that propagate along nerve fibers. Remarkably, auditory neurons can fire hundreds of spikes per second, and time their spikes with exquisite precision to match the oscillations of incoming soundwaves.
With powerful new models of human hearing, scientists at MIT’s McGovern Institute have determined that this precise timing is vital for some of the most important ways we make sense of auditory information, including recognizing voices and localizing sounds.
The findings, reported December 4, 2024, in the journal Nature Communications, show how machine learning can help neuroscientists understand how the brain uses auditory information in the real world. McGovern Investigator Josh McDermott, who led the research, explains that his team’s models better equip researchers to study the consequences of different types of hearing impairment and devise more effective interventions.
Science of sound
Because the nervous system’s auditory signals are so precisely timed, researchers have long suspected that timing is important to our perception of sound. Soundwaves oscillate at rates that determine their pitch: low-pitched sounds travel in slow waves, whereas high-pitched sound waves oscillate more frequently. The auditory nerve, which relays information from sound-detecting hair cells in the ear to the brain, generates electrical spikes that correspond to the frequency of these oscillations. “The action potentials in an auditory nerve get fired at very particular points in time relative to the peaks in the stimulus waveform,” explains McDermott, who is also an associate professor of brain and cognitive sciences at MIT.
This relationship, known as phase-locking, requires neurons to time their spikes with sub-millisecond precision. But scientists haven’t really known how informative these temporal patterns are to the brain. Beyond being scientifically intriguing, McDermott says, the question has important clinical implications: “If you want to design a prosthesis that provides electrical signals to the brain to reproduce the function of the ear, it’s arguably pretty important to know what kinds of information in the normal ear actually matter,” he says.
This has been difficult to study experimentally: Animal models can’t offer much insight into how the human brain extracts structure in language or music, and the auditory nerve is inaccessible for study in humans. So McDermott and graduate student Mark Saddler turned to artificial neural networks.
Artificial hearing
Neuroscientists have long used computational models to explore how sensory information might be decoded by the brain, but until recent advances in computing power and machine learning methods, these models were limited to simulating simple tasks. “One of the problems with these prior models is that they’re often way too good,” says Saddler, who is now at the Technical University of Denmark. For example, a computational model tasked with identifying the higher pitch in a pair of simple tones is likely to perform better than people who are asked to do the same thing. “This is not the kind of task that we do every day in hearing,” Saddler points out. “The brain is not optimized to solve this very artificial task.” This mismatch limited the insights that could be drawn from this prior generation of models.
To better understand the brain, Saddler and McDermott wanted to challenge a hearing model to do things that people use their hearing for in the real world, like recognizing words and voices. That meant developing an artificial neural network to simulate the parts of the brain that receive input from the ear. The network was given input from some 32,000 simulated sound-detecting sensory neurons and then optimized for various real-world tasks.
The researchers showed that their model replicated human hearing well—better than any previous model of auditory behavior, McDermott says. In one test, the artificial neural network was asked to recognize words and voices within dozens of types of background noise, from the hum of an airplane cabin to enthusiastic applause. Under every condition, the model performed very similarly to humans.
“The ability to link patterns of firing in the auditory nerve with behavior opens a lot of doors.” – Josh McDermott
When the team degraded the timing of the spikes in the simulated ear, however, their model could no longer match humans’ ability to recognize voices or identify the locations of sounds. For example, while McDermott’s team had previously shown that people use pitch to help them identify people’s voices, the model revealed that this ability is lost without precisely timed signals. “You need quite precise spike timing in order to both account for human behavior and to perform well on the task,” Saddler says. That suggests that the brain uses precisely timed auditory signals because they aid these practical aspects of hearing.
The team’s findings demonstrate how artificial neural networks can help neuroscientists understand how the information extracted by the ear influences our perception of the world, both when hearing is intact and when it is impaired. “The ability to link patterns of firing in the auditory nerve with behavior opens a lot of doors,” McDermott says.
“Now that we have these models that link neural responses in the ear to auditory behavior, we can ask, ‘If we simulate different types of hearing loss, what effect is that going to have on our auditory abilities?’” McDermott says. “That will help us better diagnose hearing loss, and we think there are also extensions of that to help us design better hearing aids or cochlear implants.” For example, he says, “The cochlear implant is limited in various ways—it can do some things and not others. What’s the best way to set up that cochlear implant to enable you to mediate behaviors? You can, in principle, use the models to tell you that.”
One of the brain’s most celebrated qualities is its adaptability. Changes to neural circuits, whose connections are continually adjusted as we experience and interact with the world, are key to how we learn. But to keep knowledge and memories intact, some parts of the circuitry must be resistant to this constant change.
“Brains have figured out how to navigate this landscape of balancing between stability and flexibility, so that you can have new learning and you can have lifelong memory,” says neuroscientist Mark Harnett, an investigator at MIT’s McGovern Institute.
In the August 27, 2024, issue of the journal Cell Reports, Harnett and his team show how individual neurons can contribute to both parts of this vital duality. By studying the synapses through which pyramidal neurons in the brain’s sensory cortex communicate, they have learned how the cells preserve their understanding of some of the world’s most fundamental features, while also maintaining the flexibility they need to adapt to a changing world.
Visual connections
Pyramidal neurons receive input from other neurons via thousands of connection points. Early in life, these synapses are extremely malleable; their strength can shift as a young animal takes in visual information and learns to interpret it. Most remain adaptable into adulthood, but Harnett’s team discovered that some of the cells’ synapses lose their flexibility when the animals are less than a month old. Having both stable and flexible synapses means these neurons can combine input from different sources to use visual information in flexible ways.
Postdoctoral fellow Courtney Yaeger took a close look at these unusually stable synapses, which cluster together along a narrow region of the elaborately branched pyramidal cells. She was interested in the connections through which the cells receive primary visual information, so she traced their connections with neurons in a vision-processing center of the brain’s thalamus called the dorsal lateral geniculate nucleus (dLGN).
The long extensions through which a neuron receives signals from other cells are called dendrites, and they branch off from the main body of the cell into a tree-like structure. Spiny protrusions along the dendrites form the synapses that connect pyramidal neurons to other cells. Yaeger’s experiments showed that connections from the dLGN all led to a defined region of the pyramidal cells—a tight band within what she describes as the trunk of the dendritic tree.
Yaeger found several ways in which synapses in this region — formally known as the apical oblique dendrite domain — differ from other synapses on the same cells. “They’re not actually that far away from each other, but they have completely different properties,” she says.
Stable synapses
In one set of experiments, Yaeger activated synapses on the pyramidal neurons and measured the effect on the cells’ electrical potential. Changes to a neuron’s electrical potential generate the impulses the cells use to communicate with one another. It is common for a synapse’s electrical effects to amplify when synapses nearby are also activated. But when signals were delivered to the apical oblique dendrite domain, each one had the same effect, no matter how many synapses were stimulated. Synapses there don’t interact with one another at all, Harnett says. “They just do what they do. No matter what their neighbors are doing, they all just do kind of the same thing.”
The team was also able to visualize the molecular contents of individual synapses. This revealed a surprising lack of a certain kind of neurotransmitter receptor, called NMDA receptors, in the apical oblique dendrites. That was notable because of NMDA receptors’ role in mediating changes in the brain. “Generally when we think about any kind of learning and memory and plasticity, it’s NMDA receptors that do it,” Harnett says. “That is by far the most common substrate of learning and memory in all brains.”
When Yaeger stimulated the apical oblique synapses with electricity, generating patterns of activity that would strengthen most synapses, the team discovered a consequence of the limited presence of NMDA receptors. The synapses’ strength did not change. “There’s no activity-dependent plasticity going on there, as far as we have tested,” Yaeger says.
That makes sense, the researchers say, because the cells’ connections from the thalamus relay primary visual information detected by the eyes. It is through these connections that the brain learns to recognize basic visual features like shapes and lines.
“These synapses are basically a robust, high fidelity readout of this visual information,” Harnett explains. “That’s what they’re conveying, and it’s not context sensitive. So it doesn’t matter how many other synapses are active, they just do exactly what they’re going to do, and you can’t modify them up and down based on activity. So they’re very, very stable.”
“You actually don’t want those to be plastic,” adds Yaeger.
“Can you imagine going to sleep and then forgetting what a vertical line looks like? That would be disastrous.” – Courtney Yaeger
By conducting the same experiments in mice of different ages, the researchers determined that the synapses that connect pyramidal neurons to the thalamus become stable a few weeks after young mice first open their eyes. By that point, Harnett says, they have learned everything they need to learn. On the other hand, if mice spend the first weeks of their lives in the dark, the synapses never stabilize—further evidence that the transition depends on visual experience.
The team’s findings not only help explain how the brain balances flexibility and stability, they could help researchers teach artificial intelligence how to do the same thing. Harnett says artificial neural networks are notoriously bad at this: When an artificial neural network that does something well is trained to do something new, it almost always experiences “catastrophic forgetting” and can no longer perform its original task. Harnett’s team is exploring how they can use what they’ve learned about real brains to overcome this problem in artificial networks.
When you arrive in a new city, every outing can be an exploration. You may know your way to a few places, but only if you follow a specific route. As you wander around a bit, get lost a few times, and familiarize yourself with some landmarks and where they are relative to each other, your brain develops a cognitive map of the space. You learn how things are laid out, and navigating gets easier.
It takes a lot to generate a useful mental map. “You have to understand the structure of relationships in the world,” says McGovern Investigator Mehrdad Jazayeri. “You need learning and experience to construct clever representations. The advantage is that when you have them, the world is an easier place to deal with.”
Indeed, Jazayeri says, internal models like these are the core of intelligent behavior.
Many McGovern scientists see these cognitive maps as windows into their biggest questions about the brain: how it represents the external world, how it lets us learn and adapt, and how it forms and reconstructs memories. Researchers are learning that the cells and strategies the brain uses to understand the layout of a space also help it track other kinds of structure in the world — from variations in sound to sequences of events. By studying how neurons behave as animals navigate their environments, McGovern researchers expect to deepen their understanding of other important cognitive functions as well.
Decoding spatial maps
McGovern Investigator Ila Fiete builds theoretical models that help explain how spatial maps are formed in the brain. Previous research has shown that “place cells” and “grid cells” are place-sensitive neurons in the brain’s hippocampus and entorhinal cortex whose firing patterns help an animal map out a space. As an animal becomes familiar with its environment, subsets of these cells become tied to specific locations, firing only when the animal is in them.
Fiete’s models have shown how these circuits can integrate information about movement, like signals from the muscles and vestibular system that change as an animal moves around, to calculate and update its estimate of an animal’s position in space. Fiete suspects the cells that do this can use the same strategy to keep track of other kinds of movement or change.
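The position-updating computation described above is often called path integration, and its core logic can be illustrated with a toy example. This is a deliberate simplification — real grid circuits are thought to implement it with attractor dynamics, not explicit arithmetic — but it shows the key property: the only input is self-motion, and the position estimate is the running sum of velocity.

```python
import numpy as np

def integrate_path(velocities, dt=0.1, start=(0.0, 0.0)):
    """Toy path integration: maintain a 2D position estimate purely by
    accumulating self-motion (velocity) signals, with no direct
    observation of location along the way."""
    pos = np.array(start, dtype=float)
    path = [pos.copy()]
    for v in velocities:
        pos = pos + np.asarray(v, dtype=float) * dt
        path.append(pos.copy())
    return np.array(path)

# Walk east for 10 time steps, then north for 10 time steps
vels = [(1.0, 0.0)] * 10 + [(0.0, 1.0)] * 10
trajectory = integrate_path(vels)
# The final estimate is reached without ever sensing position directly
```

Because errors in the velocity signal accumulate, a real navigator must periodically correct the estimate with landmarks — one reason these circuits integrate movement and sensory cues.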
Mapping a space is about understanding where things are in relationship to one another, says Jazayeri, and tracking relationships is useful for modeling many kinds of structure in the world. For example, the hippocampus and entorhinal cortex are also closely linked to episodic memory, which keeps track of the connections between events and experiences.
“These brain areas are thought to be critical for learning relationships,” Jazayeri says.
Navigating virtual worlds
A key feature of cognitive maps is that they enable us to make predictions and respond to new situations without relying on immediate sensory cues. In a study published in Nature this June, Jazayeri and Fiete saw evidence of the brain’s ability to call up an internal model of an abstract domain: they watched neurons in the brain’s entorhinal cortex register a sequence of images, even when they were hidden from view.
We can remember the layout of our home from far away or plan a walk through the neighborhood without stepping outside — so it may come as no surprise that the brain can call up its internal model in the absence of movement or sensory inputs. Indeed, previous research has shown that the circuits that encode physical space also encode abstract spaces, such as sequences of sounds. But those experiments were performed in the presence of the stimuli, and Jazayeri and his team wanted to know whether simply imagining movement through an abstract domain would also evoke the same cognitive maps.
To test the entorhinal cortex’s ability to do this, Jazayeri and his team designed an experiment in which animals had to “mentally” navigate through a previously explored, but now invisible, sequence of images. Working with Fiete, they found that the neurons that had become responsive to particular images in the visible sequence would also fire when the animal mentally navigated the sequence while the images were hidden from view — suggesting the animal was conjuring a representation of the image in its mind.
“You see these neurons in the entorhinal cortex undergo very clear dynamic patterns that are in correspondence with what we think the animal might be thinking at the time,” Jazayeri says. “They are updating themselves without any change out there in the world.”
The team then incorporated their data into a computational model to explore how neural circuits might form a mental model of abstract sequences. Their artificial circuit showed that external inputs (e.g., image sequences) become associated with internal models through a simple associative learning rule in which neurons that fire together wire together. This model suggests that imagined movement can update the internal representations, and that the learned association of those representations with external inputs can enable recall of the corresponding inputs even when they are absent.
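The fire-together-wire-together rule in the team's artificial circuit can be sketched as a one-shot Hebbian association. The dimensions and random patterns below are illustrative stand-ins, not the study's actual parameters: internal scaffold states are bound to the sensory inputs they co-occur with, after which traversing the scaffold alone reads those inputs back out.

```python
import numpy as np

rng = np.random.default_rng(1)
n_scaffold, n_sensory, n_items = 64, 256, 5  # illustrative sizes

# Stand-ins for internal scaffold states and the sensory patterns
# (e.g., images in a sequence) each one co-occurs with
scaffold = rng.choice([-1.0, 1.0], (n_items, n_scaffold))
sensory = rng.choice([-1.0, 1.0], (n_items, n_sensory))

# "Fire together, wire together": one-shot Hebbian outer-product learning
# sums the co-activations of every scaffold/sensory pair into one matrix
W = sensory.T @ scaffold / n_scaffold

# Traversing the scaffold internally -- with no external input -- recalls
# the sensory pattern each scaffold state was associated with
recalled = np.sign(scaffold @ W.T)
accuracy = np.mean(recalled == sensory)
```

With many more items than these weights can hold, cross-talk between associations would degrade recall — gradually, rather than at a cliff.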
More broadly, Fiete’s research on cognitive mapping in the hippocampus is leading to some interesting predictions: “One of the conclusions we’re coming to in my group is that when you reconstruct a memory, the area that’s driving that reconstruction is the entorhinal cortex and hippocampus but the reconstruction may happen in the sensory periphery, using the representations that played a role in experiencing that stimulus in the first place,” Fiete explains. “So when I reconstruct an image, I’m likely using my visual cortex to do that reconstruction, driven by the hippocampal complex.” Signals from the entorhinal cortex to the visual cortex during navigation could help an animal visualize landmarks and find its way, even when those landmarks are not visible in the external world.
Landmark coding
Near the entorhinal cortex is the retrosplenial cortex, another brain area that seems to be important for navigation. It is positioned to integrate visual signals with information about the body’s position and movement through space. Both the retrosplenial cortex and the entorhinal cortex are among the first areas impacted by Alzheimer’s disease; spatial disorientation and navigation difficulties may be consequences of their degeneration.
Researchers suspect the retrosplenial cortex may be key to letting an animal know not just where something is, but also how to get there. McGovern Investigator Mark Harnett explains that to generate a cognitive map that can be used to navigate, an animal must understand not just where objects or other cues are in relationship to itself, but also where they are in relationship to each other.
In a study reported in eLife in 2020, Harnett and colleagues may have glimpsed both of these kinds of representations of space inside the brain. They watched neurons in the retrosplenial cortex light up as mice ran on a treadmill through a virtual environment. As the mice became familiar with the landscape and learned where they were likely to find a reward, activity in the retrosplenial cortex changed.
“What we found was this representation started off sort of crude and mostly about what the animal was doing. And then eventually it became more about the task, the landscape, and the reward,” Harnett says.
Harnett’s team has since begun investigating how the retrosplenial cortex enables more complex spatial reasoning. They designed an experiment in which mice must understand many spatial relationships to access a treat. The experimental setup requires mice to consider the location of reward ports, the center of their environment, and their own viewing angle. Most of the time, they succeed. “They have to really do some triangulation, and the retrosplenial cortex seems to be critical for that,” Harnett says.
When the team monitored neural activity during the task, they found evidence that when an animal wasn’t quite sure where to go, its brain held on to multiple spatial hypotheses at the same time, until new information ruled one out.
Fiete, who has worked with Harnett to explore how neural circuits can execute this kind of spatial reasoning, points out that Jazayeri’s team has observed similar reasoning in animals that must make decisions based on temporarily ambiguous auditory cues. “In both cases, animals are able to hold multiple hypotheses in mind and do the inference,” she says. “Mark’s found that the retrosplenial cortex contains all the signals necessary to do that reasoning.”
Beyond spatial reasoning
As his team learns more about how the brain creates and uses cognitive maps, Harnett hopes activity in the retrosplenial cortex will shed light on a fundamental aspect of the brain’s organization. The retrosplenial cortex doesn’t just receive information from the brain’s vision-processing center; it also sends signals back. He suspects these may direct the visual cortex to relay information that is particularly pertinent to forming or using a meaningful cognitive map.
“The brain’s navigation system is a beautiful playground.” – Ila Fiete
This kind of connectivity, where parts of the brain that carry out complex cognitive processing send signals back to regions that handle simpler functions, is common in the brain. Figuring out why is a key pursuit in Harnett’s lab. “I want to use that as a model for thinking about the larger cortical computations, because you see this kind of motif repeated in a lot of ways, and it’s likely key for understanding how learning works,” he says.
Fiete is particularly interested in unpacking the common set of principles that allow cell circuits to generate maps of both our physical environment and our abstract experiences. What is it about this set of brain areas and circuits that, on the one hand, permits specific map-building computations, and, on the other hand, generalizes across physical space and abstract experience?
“The brain’s navigation system is a beautiful playground,” she says, “and an amazing system in which to investigate all of these questions.”
Using functional magnetic resonance imaging (fMRI), neuroscientists have identified several regions of the brain that are responsible for processing language. However, discovering the specific functions of neurons in those regions has proven difficult because fMRI, which measures changes in blood flow, doesn’t have high enough resolution to reveal what small populations of neurons are doing.
Now, using a more precise technique that involves recording electrical activity directly from the brain, MIT neuroscientists have identified different clusters of neurons that appear to process different amounts of linguistic context. These “temporal windows” range from just one word up to about six words.
The temporal windows may reflect different functions for each population, the researchers say. Populations with shorter windows may analyze the meanings of individual words, while those with longer windows may interpret more complex meanings created when words are strung together.
“This is the first time we see clear heterogeneity within the language network,” says Evelina Fedorenko, an associate professor of neuroscience at MIT. “Across dozens of fMRI experiments, these brain areas all seem to do the same thing, but it’s a large, distributed network, so there’s got to be some structure there. This is the first clear demonstration that there is structure, but the different neural populations are spatially interleaved so we can’t see these distinctions with fMRI.”
Fedorenko, who is also a member of MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears today in Nature Human Behaviour. MIT postdoc Tamar Regev and Harvard University graduate student Colton Casto are the lead authors of the paper.
Temporal windows
Functional MRI, which has helped scientists learn a great deal about the roles of different parts of the brain, works by measuring changes in blood flow in the brain. These measurements act as a proxy of neural activity during a particular task. However, each “voxel,” or three-dimensional chunk, of an fMRI image represents hundreds of thousands to millions of neurons and sums up activity across about two seconds, so it can’t reveal fine-grained detail about what those neurons are doing.
One way to get more detailed information about neural function is to record electrical activity using electrodes implanted in the brain. These data are hard to come by because this procedure is done only in patients who are already undergoing surgery for a neurological condition such as severe epilepsy.
“It can take a few years to get enough data for a task because these patients are relatively rare, and in a given patient electrodes are implanted in idiosyncratic locations based on clinical needs, so it takes a while to assemble a dataset with sufficient coverage of some target part of the cortex. But these data, of course, are the best kind of data we can get from human brains: You know exactly where you are spatially and you have very fine-grained temporal information,” Fedorenko says.
In a 2016 study, Fedorenko reported using this approach to study the language processing regions of six people. Electrical activity was recorded while the participants read four different types of language stimuli: complete sentences, lists of words, lists of non-words, and “jabberwocky” sentences — sentences that have grammatical structure but are made of nonsense words.
Those data showed that in some neural populations in language processing regions, activity would gradually build up over a period of several words as the participants read sentences. However, this buildup did not occur when they read lists of words, lists of nonwords, or jabberwocky sentences.
In the new study, Regev and Casto went back to those data and analyzed the temporal response profiles in greater detail. In their original dataset, they had recordings of electrical activity from 177 language-responsive electrodes across the six patients. By conservative estimates, each electrode records the averaged activity of about 200,000 neurons. They also obtained new data from a second set of 16 patients, which included recordings from another 362 language-responsive electrodes.
When the researchers analyzed these data, they found that in some of the neural populations, activity would fluctuate up and down with each word. In others, however, activity would build up over multiple words before falling again, and yet others would show a steady buildup of neural activity over longer spans of words.
By comparing their data with predictions made by a computational model that the researchers designed to process stimuli with different temporal windows, the researchers found that neural populations from language processing areas could be divided into three clusters. These clusters represent temporal windows of either one, four, or six words.
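The idea of populations that integrate over different temporal windows can be sketched with a toy model. This is our simplification, not the researchers' actual model: assume each word contributes one unit of drive, and a population sums the drive from the last k words.

```python
import numpy as np

# A hypothetical word-by-word drive: one unit of input per word
# in an eight-word sentence (this toy encoding is an assumption).
words = np.ones(8)

def windowed_response(drive, k):
    """Response of a population that integrates over the last k words."""
    padded = np.concatenate([np.zeros(k - 1), drive])
    return np.array([padded[i:i + k].sum() for i in range(len(drive))])

for k in (1, 4, 6):
    print(k, windowed_response(words, k))
```

The k=1 population responds identically to every word, fluctuating word by word, while the k=4 and k=6 populations build up over several words before plateauing — qualitatively the three response profiles the study describes.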
“It really looks like these neural populations integrate information across different timescales along the sentence,” Regev says.
Processing words and meaning
These differences in temporal window size would have been impossible to see using fMRI, the researchers say.
“At the resolution of fMRI, we don’t see much heterogeneity within language-responsive regions. If you localize in individual participants the voxels in their brain that are most responsive to language, you find that their responses to sentences, word lists, jabberwocky sentences and non-word lists are highly similar,” Casto says.
The researchers were also able to determine the anatomical locations where these clusters were found. Neural populations with the shortest temporal window were found predominantly in the posterior temporal lobe, though some were also found in the frontal or anterior temporal lobes. Neural populations from the two other clusters, with longer temporal windows, were spread more evenly throughout the temporal and frontal lobes.
Fedorenko’s lab now plans to study whether these timescales correspond to different functions. One possibility is that the shortest timescale populations may be processing the meanings of a single word, while those with longer timescales interpret the meanings represented by multiple words.
“We already know that in the language network, there is sensitivity to how words go together and to the meanings of individual words,” Regev says. “So that could potentially map to what we’re finding, where the longest timescale is sensitive to things like syntax or relationships between words, and maybe the shortest timescale is more sensitive to features of single words or parts of them.”
The research was funded by the Zuckerman-CHE STEM Leadership Program, the Poitras Center for Psychiatric Disorders Research, the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, the U.S. National Institutes of Health, an American Epilepsy Society Research and Training Fellowship, the McDonnell Center for Systems Neuroscience, Fondazione Neurone, the McGovern Institute, MIT’s Department of Brain and Cognitive Sciences, and the Simons Center for the Social Brain.
The U.S. Department of Defense (DoD) has announced three MIT professors among the members of the 2024 class of the Vannevar Bush Faculty Fellowship (VBFF). The fellowship is the DoD’s flagship single-investigator award for research, inviting the nation’s most talented researchers to pursue ambitious ideas that defy conventional boundaries.
Domitilla Del Vecchio, professor of mechanical engineering and the Grover M. Hermann Professor in Health Sciences & Technology; Mehrdad Jazayeri, professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research; and Themistoklis Sapsis, the William I. Koch Professor of Mechanical Engineering and director of the Center for Ocean Engineering, are among the 11 university scientists and engineers chosen for this year’s fellowship class. They join an elite group of approximately 50 fellows from previous class years.
“The Vannevar Bush Faculty Fellowship is more than a prestigious program,” said Bindu Nair, director of the Basic Research Office in the Office of the Under Secretary of Defense for Research and Engineering, in a press release. “It’s a beacon for tenured faculty embarking on groundbreaking ‘blue sky’ research.”
Research topics
Each fellow receives up to $3 million over a five-year term to pursue cutting-edge projects. Research topics in this year’s class span a range of disciplines, including materials science, cognitive neuroscience, quantum information sciences, and applied mathematics. While pursuing individual research endeavors, Fellows also leverage the unique opportunity to collaborate directly with DoD laboratories, fostering a valuable exchange of knowledge and expertise.
Del Vecchio, whose research interests include control and dynamical systems theory and systems and synthetic biology, will investigate the molecular underpinnings of analog epigenetic cell memory, then use what her team learns to “establish unprecedented engineering capabilities for creating self-organizing and reconfigurable multicellular systems with graded cell fates.”
“With this fellowship, we will be able to explore the limits to which we can leverage analog memory to create multicellular systems that autonomously organize in permanent, but reprogrammable, gradients of cell fates and can be used for creating next-generation tissues and organoids with dramatically increased sophistication,” says Del Vecchio, who adds that she is honored to have been selected.
Jazayeri wants to understand how the brain gives rise to cognitive and emotional intelligence. The engineering systems being built today lack the hallmarks of human intelligence, explains Jazayeri. They neither learn quickly nor generalize their knowledge flexibly. They don’t feel emotions or have emotional intelligence.
Jazayeri plans to use the VBFF award to integrate ideas from cognitive science, neuroscience, and machine learning with experimental data in humans, animals, and computer models to develop a computational understanding of cognitive and emotional intelligence.
“I’m honored and humbled to be selected and excited to tackle some of the most challenging questions at the intersection of neuroscience and AI,” he says.
“I am humbled to be included in such a select group,” echoes Sapsis, who will use the grant to research new algorithms and theory designed for the efficient computation of extreme event probabilities and precursors, and for the design of mitigation strategies in complex dynamical systems.
Examples of Sapsis’s work include risk quantification for extreme events in human-made systems; climate events, such as heat waves, and their effect on interconnected systems like food supply chains; and also “mission-critical algorithmic problems such as search and path planning operations for extreme anomalies,” he explains.
VBFF impact
Named for Vannevar Bush PhD 1916, an influential inventor, engineer, former professor, and dean of the School of Engineering at MIT, the highly competitive fellowship, formerly known as the National Security Science and Engineering Faculty Fellowship, aims to advance transformative, university-based fundamental research. Bush served as the director of the U.S. Office of Scientific Research and Development, and organized and led American science and technology during World War II.
“The outcomes of VBFF-funded research have transformed entire disciplines, birthed novel fields, and challenged established theories and perspectives,” said Nair. “By contributing their insights to DoD leadership and engaging with the broader national security community, they enrich collective understanding and help the United States leap ahead in global technology competition.”
There are many drugs that anesthesiologists can use to induce unconsciousness in patients. Exactly how these drugs cause the brain to lose consciousness has been a longstanding question, but MIT neuroscientists have now answered that question for one commonly used anesthesia drug.
Using a novel technique for analyzing neuron activity, the researchers discovered that the drug propofol induces unconsciousness by disrupting the brain’s normal balance between stability and excitability. The drug causes brain activity to become increasingly unstable, until the brain loses consciousness.
“The brain has to operate on this knife’s edge between excitability and chaos.” – Earl K. Miller
“It’s got to be excitable enough for its neurons to influence one another, but if it gets too excitable, it spins off into chaos. Propofol seems to disrupt the mechanisms that keep the brain in that narrow operating range,” says Earl K. Miller, the Picower Professor of Neuroscience and a member of MIT’s Picower Institute for Learning and Memory.
The new findings, reported today in Neuron, could help researchers develop better tools for monitoring patients as they undergo general anesthesia.
Miller and Ila Fiete, a professor of brain and cognitive sciences, the director of the K. Lisa Yang Integrative Computational Neuroscience Center (ICoN), and a member of MIT’s McGovern Institute for Brain Research, are the senior authors of the new study. MIT graduate student Adam Eisen and MIT postdoc Leo Kozachkov are the lead authors of the paper.
Losing consciousness
Propofol is a drug that binds to GABA receptors in the brain, inhibiting neurons that have those receptors. Other anesthesia drugs act on different types of receptors, and the mechanism for how all of these drugs produce unconsciousness is not fully understood.
Miller, Fiete, and their students hypothesized that propofol, and possibly other anesthesia drugs, interfere with a brain state known as “dynamic stability.” In this state, neurons have enough excitability to respond to new input, but the brain is able to quickly regain control and prevent them from becoming overly excited.
Previous studies of how anesthesia drugs affect this balance have found conflicting results: Some suggested that during anesthesia, the brain shifts toward becoming too stable and unresponsive, which leads to loss of consciousness. Others found that the brain becomes too excitable, leading to a chaotic state that results in unconsciousness.
Part of the reason for these conflicting results is that it has been difficult to accurately measure dynamic stability in the brain. Measuring dynamic stability as consciousness is lost would help researchers determine if unconsciousness results from too much stability or too little stability.
In this study, the researchers analyzed electrical recordings made in the brains of animals that received propofol over an hour-long period, during which they gradually lost consciousness. The recordings were made in four areas of the brain that are involved in vision, sound processing, spatial awareness, and executive function.
These recordings covered only a tiny fraction of the brain’s overall activity, so to compensate, the researchers used a technique called delay embedding. This technique characterizes a dynamical system from limited measurements by augmenting each measurement with copies of the same signal recorded at earlier time points.
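A minimal sketch of the delay-embedding construction: the helper name `delay_embed`, the embedding parameters, and the toy sine signal are our assumptions, not details from the study. The point is that stacking time-delayed copies of a single measurement can recover structure (here, the phase of an oscillation) that no single sample reveals.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Augment a scalar time series with dim-1 delayed copies,
    spaced tau samples apart (a standard delay-embedding construction)."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Toy example: a 1-D measurement of a 2-D oscillation. One sample hides
# the phase, but the embedded points trace out a loop, acting as a proxy
# for the full underlying state.
t = np.linspace(0, 20, 500)
x = np.sin(t)                      # limited scalar measurement
X = delay_embed(x, dim=2, tau=25)  # each row ~ (sin t, sin(t - delay))
print(X.shape)  # (475, 2)
```

With the state reconstructed this way, one can ask how quickly trajectories return to baseline after a perturbation, which is the kind of stability question the study poses.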
Using this method, the researchers were able to quantify how the brain responds to sensory inputs, such as sounds, or to spontaneous perturbations of neural activity.
In the normal, awake state, neural activity spikes after any input, then returns to its baseline activity level. However, once propofol dosing began, the brain started taking longer to return to its baseline after these inputs, remaining in an overly excited state. This effect became more and more pronounced until the animals lost consciousness.
This suggests that propofol’s inhibition of neuron activity leads to escalating instability, which causes the brain to lose consciousness, the researchers say.
Better anesthesia control
To see if they could replicate this effect in a computational model, the researchers created a simple neural network. When they increased the inhibition of certain nodes in the network, as propofol does in the brain, network activity became destabilized, similar to the unstable activity the researchers saw in the brains of animals that received propofol.
“We looked at a simple circuit model of interconnected neurons, and when we turned up inhibition in that, we saw a destabilization. So, one of the things we’re suggesting is that an increase in inhibition can generate instability, and that is subsequently tied to loss of consciousness,” Eisen says.
As Fiete explains, “This paradoxical effect, in which boosting inhibition destabilizes the network rather than silencing or stabilizing it, occurs because of disinhibition. When propofol boosts the inhibitory drive, this drive inhibits other inhibitory neurons, and the result is an overall increase in brain activity.”
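Fiete's disinhibition account can be sketched with a toy two-population linear rate model, dr/dt = (W − I)r, with one excitatory (E) and one inhibitory (I) population. This is our illustration, not the model from the paper, and the weights are arbitrary: increasing the inhibition that falls on the inhibitory population pushes the network's leading eigenvalue above zero, i.e., destabilizes it.

```python
import numpy as np

def leading_eigenvalue(w_ii):
    """Real part of the leading eigenvalue of the linearized dynamics
    as inhibition ONTO the inhibitory population (w_ii) is increased."""
    W = np.array([[1.2, -1.0],    # E: self-excitation, inhibited by I
                  [1.0, -w_ii]])  # I: driven by E, suppressed by w_ii
    A = W - np.eye(2)             # dynamics matrix of dr/dt = (W - I) r
    return np.linalg.eigvals(A).real.max()

# Weak inhibition of the inhibitory population: the network is stable.
print(leading_eigenvalue(1.0) < 0)  # True
# Strong inhibition of inhibitory neurons (disinhibition): unstable.
print(leading_eigenvalue(5.0) > 0)  # True
```

The paradox lives in the sign structure: suppressing the I population removes the brake on E, so "more inhibition" at the synaptic level yields less effective inhibition of excitatory activity overall.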
The researchers suspect that other anesthetic drugs, which act on different types of neurons and receptors, may converge on the same effect through different mechanisms — a possibility that they are now exploring.
If this turns out to be true, it could be helpful to the researchers’ ongoing efforts to develop ways to more precisely control the level of anesthesia that a patient is experiencing. These systems, which Miller is working on with Emery Brown, the Edward Hood Taplin Professor of Medical Engineering at MIT, work by measuring the brain’s dynamics and then adjusting drug dosages accordingly, in real time.
“If you find common mechanisms at work across different anesthetics, you can make them all safer by tweaking a few knobs, instead of having to develop safety protocols for all the different anesthetics one at a time,” Miller says. “You don’t want a different system for every anesthetic they’re going to use in the operating room. You want one that’ll do it all.”
The researchers also plan to apply their technique for measuring dynamic stability to other brain states, including neuropsychiatric disorders.
“This method is pretty powerful, and I think it’s going to be very exciting to apply it to different brain states, different types of anesthetics, and also other neuropsychiatric conditions like depression and schizophrenia,” Fiete says.
The research was funded by the Office of Naval Research, the National Institute of Mental Health, the National Institute of Neurological Disorders and Stroke, the National Science Foundation Directorate for Computer and Information Science and Engineering, the Simons Center for the Social Brain, the Simons Collaboration on the Global Brain, the JPB Foundation, the McGovern Institute, and the Picower Institute.
Language is a defining feature of humanity, and for centuries, philosophers and scientists have contemplated its true purpose. We use language to share information and exchange ideas—but is it more than that? Do we use language not just to communicate, but to think?
In the June 19, 2024, issue of the journal Nature, McGovern Institute neuroscientist Evelina Fedorenko and colleagues argue that we do not. Language, they say, is primarily a tool for communication.
Fedorenko acknowledges that there is an intuitive link between language and thought. Many people experience an inner voice that seems to narrate their own thoughts. And it’s not unreasonable to expect that well-spoken, articulate individuals are also clear thinkers. But as compelling as these associations can be, they are not evidence that we actually use language to think.
“I think there are a few strands of intuition and confusions that have led people to believe very strongly that language is the medium of thought,” she says.
“But when they are pulled apart thread by thread, they don’t really hold up to empirical scrutiny.”
Separating language and thought
For centuries, language’s potential role in facilitating thinking was nearly impossible to evaluate scientifically.
But neuroscientists and cognitive scientists now have tools that enable a more rigorous consideration of the idea. Evidence from both fields, which Fedorenko, MIT cognitive scientist and linguist Edward Gibson, and University of California, Berkeley cognitive scientist Steven Piantadosi review in their Nature Perspective, supports the idea that language is a tool for communication, not for thought.
“What we’ve learned by using methods that actually tell us about the engagement of the linguistic processing mechanisms is that those mechanisms are not really engaged when we think,” Fedorenko says. Also, she adds, “you can take those mechanisms away, and it seems that thinking can go on just fine.”
Over the past 20 years, Fedorenko and other neuroscientists have advanced our understanding of what happens in the brain as it generates and understands language. Now, using functional MRI to find parts of the brain that are specifically engaged when someone reads or listens to sentences or passages, they can reliably identify an individual’s language-processing network. Then they can monitor those brain regions while the person performs other tasks, from solving a sudoku puzzle to reasoning about other people’s beliefs.
“Your language system is basically silent when you do all sorts of thinking.” – Ev Fedorenko
“Pretty much everything we’ve tested so far, we don’t see any evidence of the engagement of the language mechanisms,” Fedorenko says. “Your language system is basically silent when you do all sorts of thinking.”
That’s consistent with observations from people who have lost the ability to process language due to an injury or stroke. Severely affected patients can be completely unable to process words, yet this does not interfere with their ability to solve math problems, play chess, or plan for future events. “They can do all the things that they could do before their injury. They just can’t take those mental representations and convert them into a format which would allow them to talk about them with others,” Fedorenko says. “If language gives us the core representations that we use for reasoning, then…destroying the language system should lead to problems in thinking as well, and it really doesn’t.”
Conversely, intellectual impairments do not always associate with language impairment; people with intellectual disability disorders or neuropsychiatric disorders that limit their ability to think and reason do not necessarily have problems with basic linguistic functions. Just as language does not appear to be necessary for thought, Fedorenko and colleagues conclude that it is also not sufficient to produce clear thinking.
Language optimization
In addition to arguing that language is unlikely to be used for thinking, the scientists considered its suitability as a communication tool, drawing on findings from linguistic analyses. Analyses across dozens of diverse languages, both spoken and signed, have found recurring features that make them easy to produce and understand. “It turns out that pretty much any property you look at, you can find evidence that languages are optimized in a way that makes information transfer as efficient as possible,” Fedorenko says.
That’s not a new idea, but it has held up as linguists analyze larger corpora across more diverse sets of languages, which has become possible in recent years as the field has assembled corpora that are annotated for various linguistic features. Such studies find that across languages, sounds and words tend to be pieced together in ways that minimize effort for the language producer without muddling the message. For example, commonly used words tend to be short, while words whose meanings depend on one another tend to cluster close together in sentences. Likewise, linguists have noted features that help languages convey meaning despite potential “signal distortions,” whether due to attention lapses or ambient noise.
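The "frequent words are short" regularity (Zipf's law of abbreviation) can be illustrated on a toy word list. The frequency counts below are invented for the example; real corpus studies use counts over millions of tokens, but the negative correlation between frequency and length looks the same.

```python
import numpy as np

# Made-up counts for a handful of English words (an assumption for
# illustration only; not real corpus frequencies).
counts = {"the": 500, "a": 400, "of": 300, "and": 280, "language": 12,
          "information": 5, "understanding": 4, "communication": 3}

freq = np.array(list(counts.values()), dtype=float)
length = np.array([len(w) for w in counts])

# Log-frequency and word length are strongly negatively correlated:
# the most common words are the shortest.
r = np.corrcoef(np.log(freq), length)[0, 1]
print(round(r, 2))
```

Short codes for frequent messages minimize the average effort of producing an utterance, which is exactly the efficiency argument the authors make.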
“All of these features seem to suggest that the forms of languages are optimized to make communication easier,” Fedorenko says, pointing out that such features would be irrelevant if language was primarily a tool for internal thought.
“Given that languages have all these properties, it’s likely that we use language for communication,” she says. She and her coauthors conclude that as a powerful tool for transmitting knowledge, language reflects the sophistication of human cognition—but does not give rise to it.