Finding some stability in adaptable brains

One of the brain’s most celebrated qualities is its adaptability. Changes to neural circuits, whose connections are continually adjusted as we experience and interact with the world, are key to how we learn. But to keep knowledge and memories intact, some parts of the circuitry must be resistant to this constant change.

“Brains have figured out how to navigate this landscape of balancing between stability and flexibility, so that you can have new learning and you can have lifelong memory,” says neuroscientist Mark Harnett, an investigator at MIT’s McGovern Institute.

In the August 27, 2024, issue of the journal Cell Reports, Harnett and his team show how individual neurons can contribute to both parts of this vital duality. By studying the synapses through which pyramidal neurons in the brain’s sensory cortex communicate, they have learned how the cells preserve their understanding of some of the world’s most fundamental features, while also maintaining the flexibility they need to adapt to a changing world.

McGovern Institute Investigator Mark Harnett. Photo: Adam Glanzman

Visual connections

Pyramidal neurons receive input from other neurons via thousands of connection points. Early in life, these synapses are extremely malleable; their strength can shift as a young animal takes in visual information and learns to interpret it. Most remain adaptable into adulthood, but Harnett’s team discovered that some of the cells’ synapses lose their flexibility when the animals are less than a month old. Having both stable and flexible synapses means these neurons can combine input from different sources to use visual information in flexible ways.

A confocal image of a mouse brain showing dLGN neurons in pink. Image: Courtney Yaeger, Mark Harnett.

Postdoctoral fellow Courtney Yaeger took a close look at these unusually stable synapses, which cluster together along a narrow region of the elaborately branched pyramidal cells. She was interested in the connections through which the cells receive primary visual information, so she traced their connections with neurons in a vision-processing center of the brain’s thalamus called the dorsal lateral geniculate nucleus (dLGN).

The long extensions through which a neuron receives signals from other cells are called dendrites, and they branch off from the main body of the cell into a tree-like structure. Spiny protrusions along the dendrites form the synapses that connect pyramidal neurons to other cells. Yaeger’s experiments showed that connections from the dLGN all led to a defined region of the pyramidal cells—a tight band within what she describes as the trunk of the dendritic tree.

Yaeger found several ways in which synapses in this region—formally known as the apical oblique dendrite domain—differ from other synapses on the same cells. “They’re not actually that far away from each other, but they have completely different properties,” she says.

Stable synapses

In one set of experiments, Yaeger activated synapses on the pyramidal neurons and measured the effect on the cells’ electrical potential. Changes to a neuron’s electrical potential generate the impulses the cells use to communicate with one another. It is common for a synapse’s electrical effects to amplify when synapses nearby are also activated. But when signals were delivered to the apical oblique dendrite domain, each one had the same effect, no matter how many synapses were stimulated. Synapses there don’t interact with one another at all, Harnett says. “They just do what they do. No matter what their neighbors are doing, they all just do kind of the same thing.”

Representative oblique (top) and basal (bottom) dendrites from the same Layer 5 pyramidal neuron imaged across 7 days. Transient spines are labeled with yellow arrowheads the day before disappearance. Image: Courtney Yaeger, Mark Harnett.

The team was also able to visualize the molecular contents of individual synapses. This revealed a surprising scarcity of a particular kind of neurotransmitter receptor, the NMDA receptor, in the apical oblique dendrites. That was notable because of NMDA receptors’ role in mediating changes in the brain. “Generally when we think about any kind of learning and memory and plasticity, it’s NMDA receptors that do it,” Harnett says. “That is by far the most common substrate of learning and memory in all brains.”

When Yaeger stimulated the apical oblique synapses with electricity, generating patterns of activity that would strengthen most synapses, the team discovered a consequence of the limited presence of NMDA receptors. The synapses’ strength did not change. “There’s no activity-dependent plasticity going on there, as far as we have tested,” Yaeger says.

That makes sense, the researchers say, because the cells’ connections from the thalamus relay primary visual information detected by the eyes. It is through these connections that the brain learns to recognize basic visual features like shapes and lines.

“These synapses are basically a robust, high fidelity readout of this visual information,” Harnett explains. “That’s what they’re conveying, and it’s not context sensitive. So it doesn’t matter how many other synapses are active, they just do exactly what they’re going to do, and you can’t modify them up and down based on activity. So they’re very, very stable.”

“You actually don’t want those to be plastic,” adds Yaeger.

“Can you imagine going to sleep and then forgetting what a vertical line looks like? That would be disastrous.” – Courtney Yaeger

By conducting the same experiments in mice of different ages, the researchers determined that the synapses that connect pyramidal neurons to the thalamus become stable a few weeks after young mice first open their eyes. By that point, Harnett says, they have learned everything they need to learn. On the other hand, if mice spend the first weeks of their lives in the dark, the synapses never stabilize—further evidence that the transition depends on visual experience.

The team’s findings not only help explain how the brain balances flexibility and stability, they could help researchers teach artificial intelligence how to do the same thing. Harnett says artificial neural networks are notoriously bad at this: When an artificial neural network that does something well is trained to do something new, it almost always experiences “catastrophic forgetting” and can no longer perform its original task. Harnett’s team is exploring how they can use what they’ve learned about real brains to overcome this problem in artificial networks.

Finding the way

This story also appears in the Fall 2024 issue of BrainScan.

___

When you arrive in a new city, every outing can be an exploration. You may know your way to a few places, but only if you follow a specific route. As you wander around a bit, get lost a few times, and familiarize yourself with some landmarks and where they are relative to each other, your brain develops a cognitive map of the space. You learn how things are laid out, and navigating gets easier.

It takes a lot to generate a useful mental map. “You have to understand the structure of relationships in the world,” says McGovern Investigator Mehrdad Jazayeri. “You need learning and experience to construct clever representations. The advantage is that when you have them, the world is an easier place to deal with.”

Indeed, Jazayeri says, internal models like these are the core of intelligent behavior.

Mehrdad Jazayeri (right) and graduate student Jack Gabel sit inside a rig designed to probe the brain’s ability to solve real-world problems with internal models. Photo: Steph Stevens

Many McGovern scientists see these cognitive maps as windows into their biggest questions about the brain: how it represents the external world, how it lets us learn and adapt, and how it forms and reconstructs memories. Researchers are learning that the cells and strategies the brain uses to understand the layout of a space also help track other kinds of structure in the world, from variations in sound to sequences of events. By studying how neurons behave as animals navigate their environments, McGovern researchers expect to deepen their understanding of other important cognitive functions as well.

Decoding spatial maps

McGovern Investigator Ila Fiete builds theoretical models that help explain how spatial maps are formed in the brain. Previous research has shown that “place cells” in the brain’s hippocampus and “grid cells” in its entorhinal cortex are place-sensitive neurons whose firing patterns help an animal map out a space. As an animal becomes familiar with its environment, subsets of these cells become tied to specific locations, firing only when the animal is in them.

The brain’s ability to navigate the world is made possible by a brain circuit that includes the hippocampus (above), entorhinal cortex, and retrosplenial cortex. The firing patterns of “grid cells” and “place cells” in this circuit help form mental representations, or cognitive maps, of the external world. These brain regions are also among the first areas to be affected in people with Alzheimer’s, who often have trouble navigating. Image: Qian Chen, Guoping Feng

Fiete’s models have shown how these circuits can integrate information about movement, like signals from the muscles and vestibular system that change as an animal moves around, to calculate and update its estimate of an animal’s position in space. Fiete suspects the cells that do this can use the same strategy to keep track of other kinds of movement or change.
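
To make the mechanism concrete, here is a minimal sketch of path integration: a position estimate updated by accumulating noisy self-motion signals. It is a Python illustration with invented noise parameters and trajectory, not Fiete’s actual model.

```python
import numpy as np

# Minimal sketch of path integration: a circuit's position estimate is
# updated by integrating self-motion (velocity) signals. Illustrative
# toy only; the noise level and trajectory are invented.

rng = np.random.default_rng(0)
dt = 0.01                       # seconds per step
true_pos = np.zeros(2)          # the animal's actual location
est_pos = np.zeros(2)           # the circuit's internal estimate

for step in range(1000):
    velocity = np.array([np.cos(step * dt), np.sin(step * dt)])   # true self-motion
    noisy_velocity = velocity + rng.normal(scale=0.1, size=2)     # vestibular/motor noise
    true_pos += velocity * dt
    est_pos += noisy_velocity * dt      # integrate the noisy movement signal

# Without corrective landmark input, integration error accumulates.
print("drift of the estimate:", np.linalg.norm(est_pos - true_pos))
```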

Mapping a space is about understanding where things are in relationship to one another, says Jazayeri, and tracking relationships is useful for modeling many kinds of structure in the world. For example, the hippocampus and entorhinal cortex are also closely linked to episodic memory, which keeps track of the connections between events and experiences.

“These brain areas are thought to be critical for learning relationships,” Jazayeri says.

Navigating virtual worlds

A key feature of cognitive maps is that they enable us to make predictions and respond to new situations without relying on immediate sensory cues. In a study published in Nature this June, Jazayeri and Fiete saw evidence of the brain’s ability to call up an internal model of an abstract domain: they watched neurons in the brain’s entorhinal cortex register a sequence of images even when the images were hidden from view.

Ila Fiete and postdoc Sarthak Chandra (right) develop theoretical models to study the brain. Photo: Steph Stevens

We can remember the layout of our home from far away or plan a walk through the neighborhood without stepping outside — so it may come as no surprise that the brain can call up its internal model in the absence of movement or sensory inputs. Indeed, previous research has shown that the circuits that encode physical space also encode abstract spaces like sound sequences. But those experiments were performed in the presence of the stimuli, and Jazayeri and his team wanted to know whether simply imagining movement through an abstract domain might evoke the same cognitive maps.

To test the entorhinal cortex’s ability to do this, Jazayeri and his team designed an experiment in which animals had to “mentally” navigate through a previously explored, but now invisible, sequence of images. Working with Fiete, they found that neurons that had become responsive to particular images in the visible sequence also fired when the animal mentally navigated the sequence with the images hidden from view — suggesting the animal was conjuring a representation of each image in its mind.

Ila Fiete has shown that the brain generates a one-dimensional ring of neural activity that acts as a compass. Here, head direction is indicated by color. Image: Ila Fiete

“You see these neurons in the entorhinal cortex undergo very clear dynamic patterns that are in correspondence with what we think the animal might be thinking at the time,” Jazayeri says. “They are updating themselves without any change out there in the world.”

The team then incorporated their data into a computational model to explore how neural circuits might form a mental model of abstract sequences. Their artificial circuit showed that external inputs (e.g., image sequences) become associated with internal models through a simple associative learning rule in which neurons that fire together, wire together. This model suggests that imagined movement could update the internal representations, and the learned association of these internal representations with external inputs might enable recall of the corresponding inputs even when they are absent.
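
A toy version of that learning rule, written in Python, conveys the idea: one-hot internal states stand in for positions along the sequence, Hebbian outer products bind them to external patterns, and the hidden inputs can then be recalled from the internal states alone. This illustrates the “fire together, wire together” principle, not the paper’s actual circuit.

```python
import numpy as np

# Toy "fire together, wire together" association. One-hot internal states
# (positions along a sequence) are bound to external image patterns by
# Hebbian outer products; the images can then be recalled from the
# internal states alone. Illustration only, not the paper's circuit.

rng = np.random.default_rng(1)
n_states, n_pixels = 6, 20
internal = np.eye(n_states)                                  # one internal state per position
images = rng.choice([0.0, 1.0], size=(n_states, n_pixels))   # external inputs

W = np.zeros((n_pixels, n_states))
for state, image in zip(internal, images):
    W += np.outer(image, state)          # Hebbian: co-active units strengthen

# "Mental navigation": step through internal states with images hidden
# and recall each associated input from the learned weights.
for state, image in zip(internal, images):
    recalled = (W @ state > 0.5).astype(float)
    assert np.array_equal(recalled, image)
print("all hidden images recalled from internal states alone")
```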

More broadly, Fiete’s research on cognitive mapping in the hippocampus is leading to some interesting predictions: “One of the conclusions we’re coming to in my group is that when you reconstruct a memory, the area that’s driving that reconstruction is the entorhinal cortex and hippocampus but the reconstruction may happen in the sensory periphery, using the representations that played a role in experiencing that stimulus in the first place,” Fiete explains. “So when I reconstruct an image, I’m likely using my visual cortex to do that reconstruction, driven by the hippocampal complex.” Signals from the entorhinal cortex to the visual cortex during navigation could help an animal visualize landmarks and find its way, even when those landmarks are not visible in the external world.

Landmark coding

Near the entorhinal cortex is the retrosplenial cortex, another brain area that seems to be important for navigation. It is positioned to integrate visual signals with information about the body’s position and movement through space. Both the retrosplenial cortex and the entorhinal cortex are among the first areas impacted by Alzheimer’s disease; spatial disorientation and navigation difficulties may be consequences of their degeneration.

Researchers suspect the retrosplenial cortex may be key to letting an animal know not just where something is, but also how to get there. McGovern Investigator Mark Harnett explains that to generate a cognitive map that can be used to navigate, an animal must understand not just where objects or other cues are in relationship to itself, but also where they are in relationship to each other.

In a study reported in eLife in 2020, Harnett and colleagues may have glimpsed both of these kinds of representations of space inside the brain. They watched neurons in the retrosplenial cortex light up as mice ran on a treadmill and tracked the passage of a virtual environment. As the mice became familiar with the landscape and learned where they were likely to find a reward, activity in the retrosplenial cortex changed.

Lukas Fischer, a Harnett lab postdoc, operates a rig designed to study how mice navigate a virtual environment. Photo: Justin Knight

“What we found was this representation started off sort of crude and mostly about what the animal was doing. And then eventually it became more about the task, the landscape, and the reward,” Harnett says.

Harnett’s team has since begun investigating how the retrosplenial cortex enables more complex spatial reasoning. They designed an experiment in which mice must understand many spatial relationships to access a treat. The experimental setup requires mice to consider the location of reward ports, the center of their environment, and their own viewing angle. Most of the time, they succeed. “They have to really do some triangulation, and the retrosplenial cortex seems to be critical for that,” Harnett says.

When the team monitored neural activity during the task, they found evidence that when an animal wasn’t quite sure where to go, its brain held on to multiple spatial hypotheses at the same time, until new information ruled one out.

Fiete, who has worked with Harnett to explore how neural circuits can execute this kind of spatial reasoning, points out that Jazayeri’s team has observed similar reasoning in animals that must make decisions based on temporarily ambiguous auditory cues. “In both cases, animals are able to hold multiple hypotheses in mind and do the inference,” she says. “Mark’s found that the retrosplenial cortex contains all the signals necessary to do that reasoning.”
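
One way to picture this kind of reasoning is Bayesian updating over a handful of candidate locations. The sketch below uses invented likelihood numbers purely for illustration; it is not the labs’ analysis.

```python
import numpy as np

# Sketch of holding multiple spatial hypotheses at once: a posterior over
# candidate goal locations is updated with each ambiguous cue until the
# evidence singles one out. The likelihood numbers are invented.

hypotheses = ["port A", "port B", "port C"]
posterior = np.ones(3) / 3          # start fully uncertain

cues = np.array([
    [0.5, 0.4, 0.1],   # first cue: ambiguous between A and B
    [0.6, 0.1, 0.3],   # second cue: favors A
])

for likelihood in cues:
    posterior *= likelihood          # Bayes' rule, unnormalized
    posterior /= posterior.sum()
    print(dict(zip(hypotheses, posterior.round(3))))
```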

Beyond spatial reasoning

As his team learns more about how the brain creates and uses cognitive maps, Harnett hopes activity in the retrosplenial cortex will shed light on a fundamental aspect of the brain’s organization. The retrosplenial cortex doesn’t just receive information from the brain’s vision-processing center; it also sends signals back. He suspects these may direct the visual cortex to relay information that is particularly pertinent to forming or using a meaningful cognitive map.

“The brain’s navigation system is a beautiful playground.” – Ila Fiete

This kind of connectivity, where parts of the brain that carry out complex cognitive processing send signals back to regions that handle simpler functions, is common in the brain. Figuring out why is a key pursuit in Harnett’s lab. “I want to use that as a model for thinking about the larger cortical computations, because you see this kind of motif repeated in a lot of ways, and it’s likely key for understanding how learning works,” he says.

Fiete is particularly interested in unpacking the common set of principles that allow cell circuits to generate maps of both our physical environment and our abstract experiences. What is it about this set of brain areas and circuits that, on the one hand, permits specific map-building computations, and, on the other hand, generalizes across physical space and abstract experience?

“The brain’s navigation system is a beautiful playground,” she says, “and an amazing system in which to investigate all of these questions.”

Scientists find neurons that process language on different timescales

Using functional magnetic resonance imaging (fMRI), neuroscientists have identified several regions of the brain that are responsible for processing language. However, discovering the specific functions of neurons in those regions has proven difficult because fMRI, which measures changes in blood flow, doesn’t have high enough resolution to reveal what small populations of neurons are doing.

Now, using a more precise technique that involves recording electrical activity directly from the brain, MIT neuroscientists have identified different clusters of neurons that appear to process different amounts of linguistic context. These “temporal windows” range from just one word up to about six words.

The temporal windows may reflect different functions for each population, the researchers say. Populations with shorter windows may analyze the meanings of individual words, while those with longer windows may interpret more complex meanings created when words are strung together.

“This is the first time we see clear heterogeneity within the language network,” says Evelina Fedorenko, an associate professor of neuroscience at MIT. “Across dozens of fMRI experiments, these brain areas all seem to do the same thing, but it’s a large, distributed network, so there’s got to be some structure there. This is the first clear demonstration that there is structure, but the different neural populations are spatially interleaved so we can’t see these distinctions with fMRI.”

Fedorenko, who is also a member of MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears today in Nature Human Behaviour. MIT postdoc Tamar Regev and Harvard University graduate student Colton Casto are the lead authors of the paper.

Temporal windows

Functional MRI, which has helped scientists learn a great deal about the roles of different parts of the brain, works by measuring changes in blood flow in the brain. These measurements act as a proxy for neural activity during a particular task. However, each “voxel,” or three-dimensional chunk, of an fMRI image represents hundreds of thousands to millions of neurons and sums up activity across about two seconds, so it can’t reveal fine-grained detail about what those neurons are doing.

One way to get more detailed information about neural function is to record electrical activity using electrodes implanted in the brain. These data are hard to come by because this procedure is done only in patients who are already undergoing surgery for a neurological condition such as severe epilepsy.

“It can take a few years to get enough data for a task because these patients are relatively rare, and in a given patient electrodes are implanted in idiosyncratic locations based on clinical needs, so it takes a while to assemble a dataset with sufficient coverage of some target part of the cortex. But these data, of course, are the best kind of data we can get from human brains: You know exactly where you are spatially and you have very fine-grained temporal information,” Fedorenko says.

In a 2016 study, Fedorenko reported using this approach to study the language processing regions of six people. Electrical activity was recorded while the participants read four different types of language stimuli: complete sentences, lists of words, lists of non-words, and “jabberwocky” sentences — sentences that have grammatical structure but are made of nonsense words.

Those data showed that in some neural populations in language-processing regions, activity would gradually build up over a period of several words as the participants read sentences. This did not happen when they read lists of words, lists of non-words, or jabberwocky sentences.

In the new study, Regev and Casto went back to those data and analyzed the temporal response profiles in greater detail. Their original dataset contained recordings of electrical activity from 177 language-responsive electrodes across the six patients. Conservative estimates suggest that each electrode represents the average activity of about 200,000 neurons. They also obtained new data from a second set of 16 patients, which included recordings from another 362 language-responsive electrodes.

When the researchers analyzed these data, they found that in some of the neural populations, activity would fluctuate up and down with each word. In others, however, activity would build up over multiple words before falling again, and yet others would show a steady buildup of neural activity over longer spans of words.

By comparing their data with predictions made by a computational model that the researchers designed to process stimuli with different temporal windows, the researchers found that neural populations from language processing areas could be divided into three clusters. These clusters represent temporal windows of either one, four, or six words.
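
The logic of that model comparison can be sketched with synthetic data: simulate responses that integrate a word stream over windows of one, four, or six words, then assign each recording to the window it matches best. The snippet below is a stand-in for the researchers’ actual model and data.

```python
import numpy as np

# Sketch of the model-comparison logic: simulate responses that integrate
# a word stream over windows of 1, 4, or 6 words, then assign a recording
# to the window it matches best. Synthetic data stands in for the real
# electrode recordings.

rng = np.random.default_rng(2)
words = rng.random(40)               # per-word drive along a sentence

def windowed_response(signal, k):
    """Response that integrates (averages) the most recent k words."""
    padded = np.concatenate([np.zeros(k - 1), signal])
    return np.array([padded[i:i + k].mean() for i in range(len(signal))])

templates = {k: windowed_response(words, k) for k in (1, 4, 6)}

# A fake electrode that truly integrates over 4 words, plus noise.
electrode = windowed_response(words, 4) + rng.normal(scale=0.05, size=40)

best = max(templates, key=lambda k: np.corrcoef(electrode, templates[k])[0, 1])
print("best-fitting temporal window:", best, "words")
```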

“It really looks like these neural populations integrate information across different timescales along the sentence,” Regev says.

Processing words and meaning

These differences in temporal window size would have been impossible to see using fMRI, the researchers say.

“At the resolution of fMRI, we don’t see much heterogeneity within language-responsive regions. If you localize in individual participants the voxels in their brain that are most responsive to language, you find that their responses to sentences, word lists, jabberwocky sentences and non-word lists are highly similar,” Casto says.

The researchers were also able to determine the anatomical locations where these clusters were found. Neural populations with the shortest temporal window were found predominantly in the posterior temporal lobe, though some were also found in the frontal or anterior temporal lobes. Neural populations from the two other clusters, with longer temporal windows, were spread more evenly throughout the temporal and frontal lobes.

Fedorenko’s lab now plans to study whether these timescales correspond to different functions. One possibility is that the shortest timescale populations may be processing the meanings of a single word, while those with longer timescales interpret the meanings represented by multiple words.

“We already know that in the language network, there is sensitivity to how words go together and to the meanings of individual words,” Regev says. “So that could potentially map to what we’re finding, where the longest timescale is sensitive to things like syntax or relationships between words, and maybe the shortest timescale is more sensitive to features of single words or parts of them.”

The research was funded by the Zuckerman-CHE STEM Leadership Program, the Poitras Center for Psychiatric Disorders Research, the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, the U.S. National Institutes of Health, an American Epilepsy Society Research and Training Fellowship, the McDonnell Center for Systems Neuroscience, Fondazione Neurone, the McGovern Institute, MIT’s Department of Brain and Cognitive Sciences, and the Simons Center for the Social Brain.

Three MIT professors named 2024 Vannevar Bush Fellows

The U.S. Department of Defense (DoD) has announced three MIT professors among the members of the 2024 class of the Vannevar Bush Faculty Fellowship (VBFF). The fellowship is the DoD’s flagship single-investigator award for research, inviting the nation’s most talented researchers to pursue ambitious ideas that defy conventional boundaries.

Domitilla Del Vecchio, professor of mechanical engineering and the Grover M. Hermann Professor in Health Sciences & Technology; Mehrdad Jazayeri, professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research; and Themistoklis Sapsis, the William I. Koch Professor of Mechanical Engineering and director of the Center for Ocean Engineering, are among the 11 university scientists and engineers chosen for this year’s fellowship class. They join an elite group of approximately 50 fellows from previous class years.

“The Vannevar Bush Faculty Fellowship is more than a prestigious program,” said Bindu Nair, director of the Basic Research Office in the Office of the Under Secretary of Defense for Research and Engineering, in a press release. “It’s a beacon for tenured faculty embarking on groundbreaking ‘blue sky’ research.”

Research topics

Each fellow receives up to $3 million over a five-year term to pursue cutting-edge projects. Research topics in this year’s class span a range of disciplines, including materials science, cognitive neuroscience, quantum information sciences, and applied mathematics. While pursuing individual research endeavors, fellows also leverage the unique opportunity to collaborate directly with DoD laboratories, fostering a valuable exchange of knowledge and expertise.

Del Vecchio, whose research interests include control and dynamical systems theory and systems and synthetic biology, will investigate the molecular underpinnings of analog epigenetic cell memory, then use what her team learns to “establish unprecedented engineering capabilities for creating self-organizing and reconfigurable multicellular systems with graded cell fates.”

“With this fellowship, we will be able to explore the limits to which we can leverage analog memory to create multicellular systems that autonomously organize in permanent, but reprogrammable, gradients of cell fates and can be used for creating next-generation tissues and organoids with dramatically increased sophistication,” says Del Vecchio, who adds that she is honored to have been selected.

Jazayeri wants to understand how the brain gives rise to cognitive and emotional intelligence. The engineering systems being built today lack the hallmarks of human intelligence, explains Jazayeri. They neither learn quickly nor generalize their knowledge flexibly. They don’t feel emotions or have emotional intelligence.

Jazayeri plans to use the VBFF award to integrate ideas from cognitive science, neuroscience, and machine learning with experimental data in humans, animals, and computer models to develop a computational understanding of cognitive and emotional intelligence.

“I’m honored and humbled to be selected and excited to tackle some of the most challenging questions at the intersection of neuroscience and AI,” he says.

“I am humbled to be included in such a select group,” echoes Sapsis, who will use the grant to research new algorithms and theory designed for the efficient computation of extreme event probabilities and precursors, and for the design of mitigation strategies in complex dynamical systems.

Examples of Sapsis’s work include risk quantification for extreme events in human-made systems; climate events, such as heat waves, and their effect on interconnected systems like food supply chains; and also “mission-critical algorithmic problems such as search and path planning operations for extreme anomalies,” he explains.

VBFF impact

Named for Vannevar Bush PhD 1916, an influential inventor, engineer, former professor, and dean of the School of Engineering at MIT, the highly competitive fellowship, formerly known as the National Security Science and Engineering Faculty Fellowship, aims to advance transformative, university-based fundamental research. Bush served as the director of the U.S. Office of Scientific Research and Development, and organized and led American science and technology during World War II.

“The outcomes of VBFF-funded research have transformed entire disciplines, birthed novel fields, and challenged established theories and perspectives,” said Nair. “By contributing their insights to DoD leadership and engaging with the broader national security community, they enrich collective understanding and help the United States leap ahead in global technology competition.”

Study reveals how an anesthesia drug induces unconsciousness

There are many drugs that anesthesiologists can use to induce unconsciousness in patients. Exactly how these drugs cause the brain to lose consciousness has been a longstanding question, but MIT neuroscientists have now answered that question for one commonly used anesthesia drug.

Using a novel technique for analyzing neuron activity, the researchers discovered that the drug propofol induces unconsciousness by disrupting the brain’s normal balance between stability and excitability. The drug causes brain activity to become increasingly unstable, until the brain loses consciousness.

“The brain has to operate on this knife’s edge between excitability and chaos.” – Earl K. Miller

“It’s got to be excitable enough for its neurons to influence one another, but if it gets too excitable, it spins off into chaos. Propofol seems to disrupt the mechanisms that keep the brain in that narrow operating range,” says Earl K. Miller, the Picower Professor of Neuroscience and a member of MIT’s Picower Institute for Learning and Memory.

The new findings, reported today in Neuron, could help researchers develop better tools for monitoring patients as they undergo general anesthesia.

Miller and Ila Fiete, a professor of brain and cognitive sciences, the director of the K. Lisa Yang Integrative Computational Neuroscience Center (ICoN), and a member of MIT’s McGovern Institute for Brain Research, are the senior authors of the new study. MIT graduate student Adam Eisen and MIT postdoc Leo Kozachkov are the lead authors of the paper.

Losing consciousness

Propofol is a drug that binds to GABA receptors in the brain, inhibiting neurons that have those receptors. Other anesthesia drugs act on different types of receptors, and the mechanism for how all of these drugs produce unconsciousness is not fully understood.

Miller, Fiete, and their students hypothesized that propofol, and possibly other anesthesia drugs, interfere with a brain state known as “dynamic stability.” In this state, neurons have enough excitability to respond to new input, but the brain is able to quickly regain control and prevent them from becoming overly excited.

Ila Fiete in her lab at the McGovern Institute. Photo: Steph Stevens

Previous studies of how anesthesia drugs affect this balance have found conflicting results: Some suggested that during anesthesia, the brain shifts toward becoming too stable and unresponsive, which leads to loss of consciousness. Others found that the brain becomes too excitable, leading to a chaotic state that results in unconsciousness.

Part of the reason for these conflicting results is that it has been difficult to accurately measure dynamic stability in the brain. Measuring dynamic stability as consciousness is lost would help researchers determine if unconsciousness results from too much stability or too little stability.

In this study, the researchers analyzed electrical recordings made in the brains of animals that received propofol over an hour-long period, during which they gradually lost consciousness. The recordings were made in four areas of the brain that are involved in vision, sound processing, spatial awareness, and executive function.

These recordings covered only a tiny fraction of the brain’s overall activity, so to overcome that, the researchers used a technique called delay embedding. This technique allows researchers to characterize dynamical systems from limited measurements by augmenting each measurement with measurements that were recorded previously.
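
In outline, delay embedding works as follows; the parameters here are illustrative rather than the study’s settings.

```python
import numpy as np

# Sketch of delay embedding: each measurement is augmented with lagged
# copies of itself, so the state of a dynamical system can be
# characterized from only a few recorded channels. Parameters are
# illustrative, not the study's settings.

def delay_embed(x, n_delays, lag):
    """Stack x(t), x(t-lag), ..., x(t-(n_delays-1)*lag) into rows."""
    rows = []
    for t in range((n_delays - 1) * lag, len(x)):
        rows.append([x[t - d * lag] for d in range(n_delays)])
    return np.array(rows)

# One noisy channel recorded from an oscillatory "circuit".
t = np.linspace(0, 20, 2000)
x = np.sin(t) + 0.05 * np.random.default_rng(3).normal(size=t.size)

embedded = delay_embed(x, n_delays=5, lag=40)
print(embedded.shape)    # each row is a delay vector describing the state
```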

Using this method, the researchers were able to quantify how the brain responds to sensory inputs, such as sounds, or to spontaneous perturbations of neural activity.

In the normal, awake state, neural activity spikes after any input, then returns to its baseline activity level. However, once propofol dosing began, the brain started taking longer to return to its baseline after these inputs, remaining in an overly excited state. This effect became more and more pronounced until the animals lost consciousness.

This suggests that propofol’s inhibition of neuron activity leads to escalating instability, which causes the brain to lose consciousness, the researchers say.

Better anesthesia control

To see if they could replicate this effect in a computational model, the researchers created a simple neural network. When they increased the inhibition of certain nodes in the network, as propofol does in the brain, network activity became destabilized, similar to the unstable activity the researchers saw in the brains of animals that received propofol.

“We looked at a simple circuit model of interconnected neurons, and when we turned up inhibition in that, we saw a destabilization. So, one of the things we’re suggesting is that an increase in inhibition can generate instability, and that is subsequently tied to loss of consciousness,” Eisen says.

As Fiete explains, “This paradoxical effect, in which boosting inhibition destabilizes the network rather than silencing or stabilizing it, occurs because of disinhibition. When propofol boosts the inhibitory drive, this drive inhibits other inhibitory neurons, and the result is an overall increase in brain activity.”
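
A two-population linear rate model is enough to reproduce this paradox. In the sketch below, which uses illustrative weights rather than values fit to the data, strengthening inhibition onto the inhibitory population pushes the leading eigenvalue of the circuit’s dynamics past zero, the signature of instability.

```python
import numpy as np

# Two-population (excitatory/inhibitory) linear rate model. Increasing
# inhibition onto the inhibitory population (w_ii), a stand-in for
# propofol's effect, disinhibits the excitatory population and pushes
# the circuit past the edge of stability. Weights are illustrative.

w_ee, w_ei, w_ie = 1.5, 1.0, 2.0     # fixed couplings

def leading_eigenvalue(w_ii):
    """Largest real part of the linearized E/I dynamics."""
    J = np.array([[-1 + w_ee, -w_ei],
                  [w_ie,      -1 - w_ii]])
    return np.linalg.eigvals(J).real.max()

for w_ii in (0.5, 2.0, 3.5):
    status = "stable" if leading_eigenvalue(w_ii) < 0 else "unstable"
    print(f"w_ii = {w_ii:3.1f} -> {status}")
# For these weights the crossing happens near w_ii = 3: more inhibition,
# less stability.
```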

The researchers suspect that other anesthetic drugs, which act on different types of neurons and receptors, may converge on the same effect through different mechanisms — a possibility that they are now exploring.

If this turns out to be true, it could be helpful to the researchers’ ongoing efforts to develop ways to more precisely control the level of anesthesia that a patient is experiencing. These systems, which Miller is working on with Emery Brown, the Edward Hood Taplin Professor of Medical Engineering at MIT, work by measuring the brain’s dynamics and then adjusting drug dosages accordingly, in real time.

“If you find common mechanisms at work across different anesthetics, you can make them all safer by tweaking a few knobs, instead of having to develop safety protocols for all the different anesthetics one at a time,” Miller says. “You don’t want a different system for every anesthetic they’re going to use in the operating room. You want one that’ll do it all.”

The researchers also plan to apply their technique for measuring dynamic stability to other brain states, including neuropsychiatric disorders.

“This method is pretty powerful, and I think it’s going to be very exciting to apply it to different brain states, different types of anesthetics, and also other neuropsychiatric conditions like depression and schizophrenia,” Fiete says.

The research was funded by the Office of Naval Research, the National Institute of Mental Health, the National Institute of Neurological Disorders and Stroke, the National Science Foundation Directorate for Computer and Information Science and Engineering, the Simons Center for the Social Brain, the Simons Collaboration on the Global Brain, the JPB Foundation, the McGovern Institute, and the Picower Institute.

What is language for?

Language is a defining feature of humanity, and for centuries, philosophers and scientists have contemplated its true purpose. We use language to share information and exchange ideas—but is it more than that? Do we use language not just to communicate, but to think?

In the June 19, 2024, issue of the journal Nature, McGovern Institute neuroscientist Evelina Fedorenko and colleagues argue that we do not. Language, they say, is primarily a tool for communication.

Fedorenko acknowledges that there is an intuitive link between language and thought. Many people experience an inner voice that seems to narrate their own thoughts. And it’s not unreasonable to expect that well-spoken, articulate individuals are also clear thinkers. But as compelling as these associations can be, they are not evidence that we actually use language to think.

“I think there are a few strands of intuition and confusions that have led people to believe very strongly that language is the medium of thought,” she says. “But when they are pulled apart thread by thread, they don’t really hold up to empirical scrutiny.”

Separating language and thought

For centuries, language’s potential role in facilitating thinking was nearly impossible to evaluate scientifically.

McGovern Investigator Ev Fedorenko in the Martinos Imaging Center at MIT. Photo: Caitlin Cunningham

But neuroscientists and cognitive scientists now have tools that enable a more rigorous consideration of the idea. Evidence from both fields, which Fedorenko, MIT cognitive scientist and linguist Edward Gibson, and University of California Berkeley cognitive scientist Steven Piantadosi review in their Nature Perspective, supports the idea that language is a tool for communication, not for thought.

“What we’ve learned by using methods that actually tell us about the engagement of the linguistic processing mechanisms is that those mechanisms are not really engaged when we think,” Fedorenko says. Also, she adds, “you can take those mechanisms away, and it seems that thinking can go on just fine.”

Over the past 20 years, Fedorenko and other neuroscientists have advanced our understanding of what happens in the brain as it generates and understands language. Now, using functional MRI to find parts of the brain that are specifically engaged when someone reads or listens to sentences or passages, they can reliably identify an individual’s language-processing network. Then they can monitor those brain regions while the person performs other tasks, from solving a sudoku puzzle to reasoning about other people’s beliefs.

“Your language system is basically silent when you do all sorts of thinking.” – Ev Fedorenko

“Pretty much everything we’ve tested so far, we don’t see any evidence of the engagement of the language mechanisms,” Fedorenko says. “Your language system is basically silent when you do all sorts of thinking.”

That’s consistent with observations from people who have lost the ability to process language due to an injury or stroke. Severely affected patients can be completely unable to process words, yet this does not interfere with their ability to solve math problems, play chess, or plan for future events. “They can do all the things that they could do before their injury. They just can’t take those mental representations and convert them into a format which would allow them to talk about them with others,” Fedorenko says. “If language gives us the core representations that we use for reasoning, then…destroying the language system should lead to problems in thinking as well, and it really doesn’t.”

Conversely, intellectual impairments do not always associate with language impairment; people with intellectual disability disorders or neuropsychiatric disorders that limit their ability to think and reason do not necessarily have problems with basic linguistic functions. Just as language does not appear to be necessary for thought, Fedorenko and colleagues conclude that it is also not sufficient to produce clear thinking.

Language optimization

In addition to arguing that language is unlikely to be used for thinking, the scientists considered its suitability as a communication tool, drawing on findings from linguistic analyses. Analyses across dozens of diverse languages, both spoken and signed, have found recurring features that make them easy to produce and understand. “It turns out that pretty much any property you look at, you can find evidence that languages are optimized in a way that makes information transfer as efficient as possible,” Fedorenko says.

That’s not a new idea, but it has held up as linguists analyze larger corpora across more diverse sets of languages, which has become possible in recent years as the field has assembled corpora that are annotated for various linguistic features. Such studies find that across languages, sounds and words tend to be pieced together in ways that minimize effort for the language producer without muddling the message. For example, commonly used words tend to be short, while words whose meanings depend on one another tend to cluster close together in sentences. Likewise, linguists have noted features that help languages convey meaning despite potential “signal distortions,” whether due to attention lapses or ambient noise.
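
The first of those tendencies, often called Zipf’s law of abbreviation, can be illustrated on even a toy corpus; the snippet below uses an invented text, whereas real analyses of this kind use the large annotated corpora described above.

```python
from collections import Counter

# Toy check of "frequent words tend to be short" (Zipf's law of
# abbreviation). The corpus here is invented; real analyses use large
# annotated corpora across many languages.

corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "while the cat watched the dog with considerable suspicion"
).split()

counts = Counter(corpus)
frequent = [len(w) for w, c in counts.items() if c > 1]
rare = [len(w) for w, c in counts.items() if c == 1]

print("frequent words average", sum(frequent) / len(frequent), "letters")
print("rare words average", sum(rare) / len(rare), "letters")
```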

“All of these features seem to suggest that the forms of languages are optimized to make communication easier,” Fedorenko says, pointing out that such features would be irrelevant if language were primarily a tool for internal thought.

“Given that languages have all these properties, it’s likely that we use language for communication,” she says. She and her coauthors conclude that as a powerful tool for transmitting knowledge, language reflects the sophistication of human cognition—but does not give rise to it.

What is consciousness?

In the hit T.V. show “Westworld,” Dolores Abernathy, a golden-tressed belle, lives in the days when Manifest Destiny still echoed in America. She begins to notice unusual stirrings shaking up her quaint western town—and soon discovers that her skin is synthetic, and her mind, metal. She’s a cyborg meant to entertain humans. The key to her autonomy lies in reaching consciousness.

Shows like “Westworld” and other media probe the idea of consciousness, attempting to nail down a definition of the concept. However, though humans have ruminated on consciousness for centuries, we still don’t have a solid definition (even the Merriam-Webster dictionary lists five). One framework suggests that consciousness is any experience, from eating a candy bar to heartbreak. Another argues that it is how certain stimuli influence one’s behavior.

MIT graduate student Adam Eisen.

While some search for a philosophical explanation, MIT graduate student Adam Eisen seeks a scientific one.

Eisen studies consciousness in the labs of Ila Fiete, an associate investigator at the McGovern Institute, and Earl Miller, an investigator at the Picower Institute for Learning and Memory. His work melds seemingly opposite fields, using mathematical models to quantitatively explain, and thereby ground, the loftiness of consciousness.

In the Fiete lab, Eisen leverages computational methods to compare the brain’s electrical signals in an awake, conscious state to those in an unconscious state via anesthesia—which dampens communication between neurons so people feel no pain or become unconscious.

“What’s nice about anesthesia is that we have a reliable way of turning off consciousness,” says Eisen. “So we’re now able to ask: What’s the fluctuation of electrical activity in a conscious versus unconscious brain? By characterizing how these states vary—with the precision enabled by computational models—we can start to build a better intuition for what underlies consciousness.”

Theories of consciousness

How are scientists thinking about consciousness? Eisen says that there are four major theories circulating in the neuroscience sphere. These theories are outlined below.

Global workspace theory

Consider the placement of your tongue in your mouth. This sensory information is always there, but you only notice the sensation when you make the effort to think about it. How does this happen?

“Global workspace theory seeks to explain how information becomes available to our consciousness,” he says. “This is called access consciousness—the kind that stores information in your mind and makes it available for verbal report. In this view, sensory information is broadcast to higher-level regions of the brain by a process called ignition.” The theory proposes that widespread jolts of neuronal activity, or “spiking,” are essential for ignition, much as a few claps can build into an audience-wide applause. It’s through ignition that we reach consciousness.

Eisen’s research in anesthesia suggests, though, that not just any spiking will do. There needs to be a balance: enough activity to spark ignition, but also enough stability such that the brain doesn’t lose its ability to respond to inputs and produce reliable computations to reach consciousness.

Higher order theories

Let’s say you’re listening to “Here Comes The Sun” by The Beatles. Your brain processes the medley of auditory stimuli; you hear the bouncy guitar, upbeat drums, and George Harrison’s perky vocals. You’re having a musical experience—what it’s like to listen to music. According to higher-order theories, such an experience unlocks consciousness.

“Higher-order theories posit that a conscious mental state involves having higher-order mental representations of stimuli—usually in the higher levels of the brain responsible for cognition—to experience the world,” Eisen says.

Integrated information theory

“Imagine jumping into a lake on a warm summer day. All components of that experience—the feeling of the sun on your skin and the coolness of the water as you submerge—come together to form your ‘phenomenal consciousness,’” Eisen says. If the day was slightly less sunny or the water a fraction warmer, he explains, the experience would be different.

“Integrated information theory suggests that phenomenal consciousness involves an experience that is irreducible, meaning that none of the components of that experience can be separated or altered without changing the experience itself,” he says.

Attention schema theory

Attention schema theory, Eisen explains, says ‘attention’ is the information that we are focused on in the world, while ‘awareness’ is the model we have of our attention. He cites an interesting psychology study to disentangle attention and awareness.

In the study, the researchers showed human subjects a mixed sequence of two numbers and six letters on a computer. The participants were asked to report back what the numbers were. While they were doing this task, faintly detectable dots moved across the screen in the background. The interesting part, Eisen notes, is that people weren’t aware of the dots—that is, they didn’t report that they saw them. But despite saying they didn’t see the dots, people performed worse on the task when the dots were present.

“This suggests that some of the subjects’ attention was allocated towards the dots, limiting their available attention for the actual task,” he says. “In this case, people’s awareness didn’t track their attention. The subjects were not aware of the dots, even though the study shows that the dots did indeed affect their attention.”

The science behind consciousness

Eisen notes that a solid understanding of the neural basis of consciousness has yet to be cemented. However, he and his research team are advancing in this quest. “In our work, we found that brain activity is more ‘unstable’ under anesthesia, meaning that it lacks the ability to recover from disturbances—like distractions or random fluctuations in activity—and regain a normal state,” he says.

He and his fellow researchers believe this is because the unconscious brain can’t reliably engage in computations like the conscious brain does, and sensory information gets lost in the noise. This crucial finding points to how the brain’s stability may be a cornerstone of consciousness.

There’s still more work to do, Eisen says. But eventually, he hopes that this research can help crack the enduring mystery of how consciousness shapes human existence. “There is so much complexity and depth to human experience, emotion, and thought. Through rigorous research, we may one day reveal the machinery that gives us our common humanity.”

A new computational technique could make it easier to engineer useful proteins

To engineer proteins with useful functions, researchers usually begin with a natural protein that has a desirable function, such as emitting fluorescent light, and put it through many rounds of random mutation that eventually generate an optimized version of the protein.

This process has yielded optimized versions of many important proteins, including green fluorescent protein (GFP). However, for other proteins, it has proven difficult to generate an optimized version. MIT researchers have now developed a computational approach that makes it easier to predict mutations that will lead to better proteins, based on a relatively small amount of data.

Using this model, the researchers generated proteins with mutations that were predicted to lead to improved versions of GFP and a protein from adeno-associated virus (AAV), which is used to deliver DNA for gene therapy. They hope it could also be used to develop additional tools for neuroscience research and medical applications.

MIT Professor of Brain and Cognitive Sciences Ila Fiete in her lab at the McGovern Institute. Photo: Steph Stevens

“Protein design is a hard problem because the mapping from DNA sequence to protein structure and function is really complex. There might be a great protein 10 changes away in the sequence, but each intermediate change might correspond to a totally nonfunctional protein. It’s like trying to find your way to the river basin in a mountain range, when there are craggy peaks along the way that block your view. The current work tries to make the riverbed easier to find,” says Ila Fiete, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, director of the K. Lisa Yang Integrative Computational Neuroscience Center, and one of the senior authors of the study.

Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health at MIT, and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science at MIT, are also senior authors of an open-access paper on the work, which will be presented at the International Conference on Learning Representations in May. MIT graduate students Andrew Kirjner and Jason Yim are the lead authors of the study. Other authors include Shahar Bracha, an MIT postdoc, and Raman Samusevich, a graduate student at Czech Technical University.

Optimizing proteins

Many naturally occurring proteins have functions that could make them useful for research or medical applications, but they need a little extra engineering to optimize them. In this study, the researchers were originally interested in developing proteins that could be used in living cells as voltage indicators. These proteins, produced by some bacteria and algae, emit fluorescent light when an electric potential is detected. If engineered for use in mammalian cells, such proteins could allow researchers to measure neuron activity without using electrodes.

While decades of research have gone into engineering these proteins to produce a stronger fluorescent signal, on a faster timescale, they haven’t become effective enough for widespread use. Bracha, who works in Edward Boyden’s lab at the McGovern Institute, reached out to Fiete’s lab to see if they could work together on a computational approach that might help speed up the process of optimizing the proteins.

“This work exemplifies the human serendipity that characterizes so much science discovery,” Fiete says.

“This work grew out of the Yang Tan Collective retreat, a scientific meeting of researchers from multiple centers at MIT with distinct missions unified by the shared support of K. Lisa Yang. We learned that some of our interests and tools in modeling how brains learn and optimize could be applied in the totally different domain of protein design, as being practiced in the Boyden lab.”

For any given protein that researchers might want to optimize, there is a nearly infinite number of possible sequences that could be generated by swapping in different amino acids at each point within the sequence. With so many possible variants, it is impossible to test all of them experimentally, so researchers have turned to computational modeling to try to predict which ones will work best.

In this study, the researchers set out to overcome those challenges, using data from GFP to develop and test a computational model that could predict better versions of the protein.

They began by training a type of model known as a convolutional neural network (CNN) on experimental data consisting of GFP sequences and their brightness — the feature that they wanted to optimize.

The model was able to create a “fitness landscape” — a three-dimensional map that depicts the fitness of a given protein and how much it differs from the original sequence — based on a relatively small amount of experimental data (from about 1,000 variants of GFP).
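
In outline, that setup might look like the sketch below, written in PyTorch with a placeholder architecture and random stand-in data; the paper’s exact model will differ.

```python
import torch
import torch.nn as nn

# Sketch of the modeling setup: one-hot encode protein sequences and train
# a small 1-D CNN to predict a fitness score (e.g., GFP brightness).
# Architecture, sizes, and data here are placeholders.

N_AMINO_ACIDS, SEQ_LEN = 20, 238      # 238 is roughly GFP's length

class FitnessCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_AMINO_ACIDS, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over sequence positions
            nn.Flatten(),
            nn.Linear(32, 1),          # scalar fitness prediction
        )

    def forward(self, x):              # x: (batch, 20, seq_len) one-hot
        return self.net(x).squeeze(-1)

model = FitnessCNN()
seqs = torch.randint(0, N_AMINO_ACIDS, (8, SEQ_LEN))            # fake variants
onehot = nn.functional.one_hot(seqs, N_AMINO_ACIDS).float().transpose(1, 2)
brightness = torch.rand(8)                                      # fake labels
loss = nn.functional.mse_loss(model(onehot), brightness)
loss.backward()                        # gradients for one training step
print(float(loss))
```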

These landscapes contain peaks that represent fitter proteins and valleys that represent less fit proteins. Predicting the path that a protein needs to follow to reach the peaks of fitness can be difficult, because often a protein will need to undergo a mutation that makes it less fit before it reaches a nearby peak of higher fitness. To overcome this problem, the researchers used an existing computational technique to “smooth” the fitness landscape.

Once these small bumps in the landscape were smoothed, the researchers retrained the CNN model and found that it was able to reach greater fitness peaks more easily. The model was able to predict optimized GFP sequences that differed by as many as seven amino acids from the protein sequence they started with, and the best of these proteins were estimated to be about 2.5 times fitter than the original.

“Once we have this landscape that represents what the model thinks is nearby, we smooth it out and then we retrain the model on the smoother version of the landscape,” Kirjner says. “Now there is a smooth path from your starting point to the top, which the model is now able to reach by iteratively making small improvements. The same is often impossible for unsmoothed landscapes.”
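
One simple way to convey the smoothing intuition is to blend each sequence’s predicted fitness with the average over its one-mutation neighbors, as in the toy sketch below; the researchers’ actual technique is more sophisticated.

```python
import itertools, random

# Toy illustration of landscape smoothing: blend each sequence's fitness
# with the average over its one-mutation neighbors, shrinking the local
# bumps that trap hill-climbing. The paper's technique is more
# sophisticated; this only conveys the intuition.

ALPHABET = "ACDE"                     # tiny amino-acid alphabet for the demo
random.seed(0)
fitness = {"".join(s): random.random()
           for s in itertools.product(ALPHABET, repeat=3)}   # rugged landscape

def neighbors(seq):
    for i, aa in itertools.product(range(len(seq)), ALPHABET):
        if aa != seq[i]:
            yield seq[:i] + aa + seq[i + 1:]

def smooth(landscape, weight=0.5):
    # Each 3-letter sequence over a 4-letter alphabet has 3 * 3 = 9 neighbors.
    return {s: (1 - weight) * f
               + weight * sum(landscape[n] for n in neighbors(s)) / 9
            for s, f in landscape.items()}

smoothed = smooth(fitness)
# The spread of values shrinks as local bumps are averaged away.
print(max(fitness.values()) - min(fitness.values()),
      max(smoothed.values()) - min(smoothed.values()))
```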

Proof-of-concept

The researchers also showed that this approach worked well in identifying new sequences for the viral capsid of adeno-associated virus (AAV), a viral vector that is commonly used to deliver DNA. In that case, they optimized the capsid for its ability to package a DNA payload.

“We used GFP and AAV as a proof-of-concept to show that this is a method that works on data sets that are very well-characterized, and because of that, it should be applicable to other protein engineering problems,” Bracha says.

The researchers now plan to use this computational technique on data that Bracha has been generating on voltage indicator proteins.

“Dozens of labs have been working on that for two decades, and still there isn’t anything better,” she says. “The hope is that now, with the generation of a smaller data set, we could train a model in silico and make predictions that could be better than the past two decades of manual testing.”

The research was funded, in part, by the U.S. National Science Foundation, the Machine Learning for Pharmaceutical Discovery and Synthesis consortium, the Abdul Latif Jameel Clinic for Machine Learning in Health, the DTRA Discovery of Medical Countermeasures Against New and Emerging Threats program, the DARPA Accelerated Molecular Discovery program, the Sanofi Computational Antibody Design grant, the U.S. Office of Naval Research, the Howard Hughes Medical Institute, the National Institutes of Health, the K. Lisa Yang ICoN Center, and the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT.

Researchers reveal roadmap for AI innovation in brain and language learning

One of the hallmarks of humanity is language, but now, powerful new artificial intelligence tools also compose poetry, write songs, and have extensive conversations with human users. Tools like ChatGPT and Gemini are widely available at the tap of a button — but just how smart are these AIs?

A new multidisciplinary research effort co-led by Anna (Anya) Ivanova, assistant professor in the School of Psychology at Georgia Tech, alongside Kyle Mahowald, an assistant professor in the Department of Linguistics at the University of Texas at Austin, is working to uncover just that.

Their results could lead to innovative AIs that are more similar to the human brain than ever before — and also help neuroscientists and psychologists who are unearthing the secrets of our own minds.

The study, “Dissociating Language and Thought in Large Language Models,” is published this week in the scientific journal Trends in Cognitive Sciences. The work is already making waves in the scientific community: an earlier preprint of the paper, released in January 2023, has been cited more than 150 times by fellow researchers, and the team has continued to refine the work for this final journal publication.

“ChatGPT became available while we were finalizing the preprint,” explains Ivanova, who conducted the research while a postdoctoral researcher at MIT’s McGovern Institute. “Over the past year, we’ve had an opportunity to update our arguments in light of this newer generation of models, now including ChatGPT.”

Form versus function

The study focuses on large language models (LLMs), which include AIs like ChatGPT. LLMs are text prediction models that create writing by predicting the next word in a sentence — much as a cell phone or email service like Gmail might suggest the next word you want to type. However, while this type of language learning is extremely effective at creating coherent sentences, that doesn’t necessarily signify intelligence.
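
To see that next-word loop in miniature, the sketch below greedily extends a sentence with the openly available GPT-2 model through the Hugging Face transformers library; the choice of model and library is ours for illustration, not the researchers’:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("The cat sat on the", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(5):                          # extend the text by five tokens
        logits = model(input_ids).logits        # a score for every vocabulary token
        next_id = logits[0, -1].argmax()        # greedily take the most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)
print(tokenizer.decode(input_ids[0]))           # the prompt plus the model's guesses
```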

Ivanova’s team argues that formal competence — creating a well-structured, grammatically correct sentence — should be differentiated from functional competence — answering the right question, conveying the correct information, or otherwise communicating appropriately. They also found that while LLMs trained on text prediction are often very good at formal skills, they still struggle with functional skills.

“We humans have the tendency to conflate language and thought,” Ivanova says. “I think that’s an important thing to keep in mind as we’re trying to figure out what these models are capable of, because using that ability to be good at language, to be good at formal competence, leads many people to assume that AIs are also good at thinking — even when that’s not the case.

“It’s a heuristic that we developed when interacting with other humans over thousands of years of evolution, but now, in some respects, that heuristic is broken,” Ivanova explains.

The distinction between formal and functional competence is also vital in rigorously testing an AI’s capabilities, Ivanova adds. Evaluations often don’t distinguish formal and functional competence, making it difficult to assess what factors are determining a model’s success or failure. The need to develop distinct tests is one of the team’s more widely accepted findings, and one that some researchers in the field have already begun to implement.

Creating a modular system

While the human tendency to conflate functional and formal competence may have hindered understanding of LLMs in the past, our human brains could also be the key to unlocking more powerful AIs.

During her postdoctoral work at MIT, Ivanova and her team leveraged the tools of cognitive neuroscience, studying brain activity in neurotypical individuals via fMRI and using behavioral assessments of individuals with brain damage to test the causal role of brain regions in language and cognition — both conducting new research and drawing on previous studies. The team’s results showed that human brains use different regions for functional and formal competence, further supporting this distinction in AIs.

“Our research shows that in the brain, there is a language processing module and separate modules for reasoning,” Ivanova says. This modularity could also serve as a blueprint for how to develop future AIs.

“Building on insights from human brains — where the language processing system is sharply distinct from the systems that support our ability to think — we argue that the language-thought distinction is conceptually important for thinking about, evaluating, and improving large language models, especially given recent efforts to imbue these models with human-like intelligence,” says Ivanova’s former advisor and study co-author Evelina Fedorenko, a professor of brain and cognitive sciences at MIT and a member of the McGovern Institute for Brain Research.

Developing AIs in the pattern of the human brain could help create more powerful systems — while also helping them dovetail more naturally with human users. “Generally, differences in a mechanism’s internal structure affect behavior,” Ivanova says. “Building a system that has a broad macroscopic organization similar to that of the human brain could help ensure that it might be more aligned with humans down the road.”

In the rapidly developing world of AI, these systems are ripe for experimentation. After the team’s preprint was published, OpenAI announced their intention to add plug-ins to their GPT models.

“That plug-in system is actually very similar to what we suggest,” Ivanova adds. “It takes a modularity approach where the language model can be an interface to another specialized module within a system.”

While the OpenAI plug-in system will include features like booking flights and ordering food, rather than cognitively inspired features, it demonstrates that “the approach has a lot of potential,” Ivanova says.

The future of AI — and what it can tell us about ourselves

While our own brains might be the key to unlocking better, more powerful AIs, these AIs might also help us better understand ourselves. “When researchers try to study the brain and cognition, it’s often useful to have some smaller system where you can actually go in and poke around and see what’s going on before you get to the immense complexity,” Ivanova explains.

However, since human language is unique, animal and other model systems are difficult to relate to it. That’s where LLMs come in.

“There are lots of surprising similarities between how one would approach the study of the brain and the study of an artificial neural network” like a large language model, she adds. “They are both information processing systems that have biological or artificial neurons to perform computations.”

In many ways, the human brain is still a black box, but openly available AIs offer a unique opportunity to see a synthetic system’s inner workings, modify its variables, and explore these corresponding systems like never before.

“It’s a really wonderful model that we have a lot of control over,” Ivanova says. “Neural networks — they are amazing.”

Along with Anna (Anya) Ivanova, Kyle Mahowald, and Evelina Fedorenko, the research team also includes Idan Blank (University of California, Los Angeles), as well as Nancy Kanwisher and Joshua Tenenbaum (Massachusetts Institute of Technology).

For people who speak many languages, there’s something special about their native tongue

A new study of people who speak many languages has found that there is something special about how the brain processes their native language.

In the brains of these polyglots — people who speak five or more languages — the same language regions light up when they listen to any of the languages that they speak. In general, this network responds more strongly to languages in which the speaker is more proficient, with one notable exception: the speaker’s native language. When listening to one’s native language, language network activity drops off significantly.

The findings suggest there is something unique about the first language one acquires, which allows the brain to process it with minimal effort, the researchers say.

“Something makes it a little bit easier to process — maybe it’s that you’ve spent more time using that language — and you get a dip in activity for the native language compared to other languages that you speak proficiently,” says Evelina Fedorenko, an associate professor of neuroscience at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Saima Malik-Moraleda, a graduate student in the Speech and Hearing Bioscience and Technology Program at Harvard University, and Olessia Jouravlev, a former MIT postdoc who is now an associate professor at Carleton University, are the lead authors of the paper, which appears today in the journal Cerebral Cortex.

Many languages, one network

McGovern Investigator Ev Fedorenko in the Martinos Imaging Center at MIT. Photo: Caitlin Cunningham

The brain’s language processing network, located primarily in the left hemisphere, includes regions in the frontal and temporal lobes. In a 2021 study, Fedorenko’s lab found that the language network in polyglots’ brains was less active when they listened to their native language than the language networks of people who speak only one language.

In the new study, the researchers wanted to expand on that finding and explore what happens in the brains of polyglots as they listen to languages in which they have varying levels of proficiency. Studying polyglots can help researchers learn more about the functions of the language network, and how languages learned later in life might be represented differently than a native language or languages.

“With polyglots, you can do all of the comparisons within one person. You have languages that vary along a continuum, and you can try to see how the brain modulates responses as a function of proficiency,” Fedorenko says.

For the study, the researchers recruited 34 polyglots, each of whom had at least some degree of proficiency in five or more languages but were not bilingual or multilingual from infancy. Sixteen of the participants spoke 10 or more languages, including one who spoke 54 languages with at least some proficiency.

Each participant was scanned with functional magnetic resonance imaging (fMRI) as they listened to passages read in eight different languages. These included their native language, a language they were highly proficient in, a language they were moderately proficient in, and a language in which they described themselves as having low proficiency.

They were also scanned while listening to four languages they didn’t speak at all. Two of these were languages from the same family (such as Romance languages) as a language they could speak, and two were languages completely unrelated to any languages they spoke.

The passages used for the study came from two different sources, which the researchers had previously developed for other language studies. One was a set of Bible stories recorded in many different languages, and the other consisted of passages from “Alice in Wonderland” translated into many languages.

Brain scans revealed that the language network lit up the most when participants listened to languages in which they were the most proficient. However, that did not hold true for the participants’ native languages, which activated the language network much less than non-native languages in which they had similar proficiency. This suggests that people are so proficient in their native language that the language network doesn’t need to work very hard to interpret it.

“As you increase proficiency, you can engage linguistic computations to a greater extent, so you get these progressively stronger responses. But then if you compare a really high-proficiency language and a native language, it may be that the native language is just a little bit easier, possibly because you’ve had more experience with it,” Fedorenko says.

Brain engagement

The researchers saw a similar phenomenon when polyglots listened to languages that they don’t speak: Their language network was more engaged when they listened to languages related to one they could understand than when they listened to completely unfamiliar languages.

“Here we’re getting a hint that the response in the language network scales up with how much you understand from the input,” Malik-Moraleda says. “We didn’t quantify the level of understanding here, but in the future we’re planning to evaluate how much people are truly understanding the passages that they’re listening to, and then see how that relates to the activation.”

The researchers also found that a brain network known as the multiple demand network, which turns on whenever the brain is performing a cognitively demanding task, also becomes activated when listening to languages other than one’s native language.

“What we’re seeing here is that the language regions are engaged when we process all these languages, and then there’s this other network that comes in for non-native languages to help you out because it’s a harder task,” Malik-Moraleda says.

In this study, most of the polyglots began studying their non-native languages as teenagers or adults, but in future work, the researchers hope to study people who learned multiple languages from a very young age. They also plan to study people who learned one language from infancy but moved to the United States at a very young age and began speaking English as their dominant language, while becoming less proficient in their native language, to help disentangle the effects of proficiency versus age of acquisition on brain responses.

The research was funded by the McGovern Institute for Brain Research, MIT’s Department of Brain and Cognitive Sciences, and the Simons Center for the Social Brain.