Finding some stability in adaptable brains

One of the brain’s most celebrated qualities is its adaptability. Changes to neural circuits, whose connections are continually adjusted as we experience and interact with the world, are key to how we learn. But to keep knowledge and memories intact, some parts of the circuitry must be resistant to this constant change.

“Brains have figured out how to navigate this landscape of balancing between stability and flexibility, so that you can have new learning and you can have lifelong memory,” says neuroscientist Mark Harnett, an investigator at MIT’s McGovern Institute.

In the August 27, 2024, issue of the journal Cell Reports, Harnett and his team show how individual neurons can contribute to both parts of this vital duality. By studying the synapses through which pyramidal neurons in the brain’s sensory cortex communicate, they have learned how the cells preserve their understanding of some of the world’s most fundamental features, while also maintaining the flexibility they need to adapt to a changing world.

McGovern Institute Investigator Mark Harnett. Photo: Adam Glanzman

Visual connections

Pyramidal neurons receive input from other neurons via thousands of connection points. Early in life, these synapses are extremely malleable; their strength can shift as a young animal takes in visual information and learns to interpret it. Most remain adaptable into adulthood, but Harnett’s team discovered that some of the cells’ synapses lose their flexibility when the animals are less than a month old. Having both stable and flexible synapses means these neurons can combine input from different sources to use visual information in flexible ways.

Microscopic image of a mouse brain.
A confocal image of a mouse brain showing dLGN neurons in pink. Image: Courtney Yaeger, Mark Harnett.

Postdoctoral fellow Courtney Yaeger took a close look at these unusually stable synapses, which cluster together along a narrow region of the elaborately branched pyramidal cells. She was interested in the connections through which the cells receive primary visual information, so she traced their connections with neurons in a vision-processing center of the brain’s thalamus called the dorsal lateral geniculate nucleus (dLGN).

The long extensions through which a neuron receives signals from other cells are called dendrites, and they branch off from the main body of the cell into a tree-like structure. Spiny protrusions along the dendrites form the synapses that connect pyramidal neurons to other cells. Yaeger’s experiments showed that connections from the dLGN all led to a defined region of the pyramidal cells—a tight band within what she describes as the trunk of the dendritic tree.

Yaeger found several ways in which synapses in this region—formally known as the apical oblique dendrite domain—differ from other synapses on the same cells. “They’re not actually that far away from each other, but they have completely different properties,” she says.

Stable synapses

In one set of experiments, Yaeger activated synapses on the pyramidal neurons and measured the effect on the cells’ electrical potential. Changes to a neuron’s electrical potential generate the impulses the cells use to communicate with one another. It is common for a synapse’s electrical effects to amplify when synapses nearby are also activated. But when signals were delivered to the apical oblique dendrite domain, each one had the same effect, no matter how many synapses were stimulated. Synapses there don’t interact with one another at all, Harnett says. “They just do what they do. No matter what their neighbors are doing, they all just do kind of the same thing.”
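The contrast Harnett describes—neighboring synapses that amplify one another versus synapses that contribute a fixed amount regardless of their neighbors—can be illustrated with a toy model. This is a schematic sketch of the two integration modes, not the paper’s model; the specific gain term is an invented stand-in for cooperative amplification.

```python
# Toy comparison of two dendritic integration modes (illustrative only).
# "Typical" synapses sum supralinearly: co-active neighbors amplify each other.
# "Apical oblique" synapses sum linearly: each input has the same fixed effect.

def supralinear_sum(inputs, gain=0.2):
    """Each active input contributes 1 unit plus a cooperative bonus."""
    n = sum(inputs)
    return n + gain * n * (n - 1)  # bonus grows as more neighbors are co-active

def linear_sum(inputs):
    """Each active input contributes exactly 1 unit, regardless of neighbors."""
    return float(sum(inputs))

for k in (1, 2, 4, 8):
    active = [1] * k
    print(k, supralinear_sum(active), linear_sum(active))
```

With a single active synapse the two modes are identical; the difference only appears as more synapses are stimulated together, which is exactly the manipulation Yaeger used.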

Two rows of seven confocal microscope images of dendrites.
Representative oblique (top) and basal (bottom) dendrites from the same Layer 5 pyramidal neuron imaged across 7 days. Transient spines are labeled with yellow arrowheads the day before disappearance. Image: Courtney Yaeger, Mark Harnett.

The team was also able to visualize the molecular contents of individual synapses. This revealed a surprising lack of a certain kind of neurotransmitter receptor, the NMDA receptor, in the apical oblique dendrites. That was notable because of NMDA receptors’ role in mediating changes in the brain. “Generally when we think about any kind of learning and memory and plasticity, it’s NMDA receptors that do it,” Harnett says. “That is the by far most common substrate of learning and memory in all brains.”

When Yaeger stimulated the apical oblique synapses with electricity, generating patterns of activity that would strengthen most synapses, the team discovered a consequence of the limited presence of NMDA receptors. The synapses’ strength did not change. “There’s no activity-dependent plasticity going on there, as far as we have tested,” Yaeger says.

That makes sense, the researchers say, because the cells’ connections from the thalamus relay primary visual information detected by the eyes. It is through these connections that the brain learns to recognize basic visual features like shapes and lines.

“These synapses are basically a robust, high fidelity readout of this visual information,” Harnett explains. “That’s what they’re conveying, and it’s not context sensitive. So it doesn’t matter how many other synapses are active, they just do exactly what they’re going to do, and you can’t modify them up and down based on activity. So they’re very, very stable.”

“You actually don’t want those to be plastic,” adds Yaeger.

“Can you imagine going to sleep and then forgetting what a vertical line looks like? That would be disastrous.” – Courtney Yaeger

By conducting the same experiments in mice of different ages, the researchers determined that the synapses that connect pyramidal neurons to the thalamus become stable a few weeks after young mice first open their eyes. By that point, Harnett says, they have learned everything they need to learn. On the other hand, if mice spend the first weeks of their lives in the dark, the synapses never stabilize—further evidence that the transition depends on visual experience.

The team’s findings not only help explain how the brain balances flexibility and stability, they could help researchers teach artificial intelligence how to do the same thing. Harnett says artificial neural networks are notoriously bad at this: When an artificial neural network that does something well is trained to do something new, it almost always experiences “catastrophic forgetting” and can no longer perform its original task. Harnett’s team is exploring how they can use what they’ve learned about real brains to overcome this problem in artificial networks.

Harnessing the power of placebo for pain relief

Placebos are inert treatments, generally not expected to impact biological pathways or improve a person’s physical health. But time and again, some patients report that they feel better after taking a placebo. Increasingly, doctors and scientists are recognizing that rather than dismissing placebos as mere trickery, they may be able to help patients by harnessing their power.

To maximize the impact of the placebo effect and design reliable therapeutic strategies, researchers need a better understanding of how it works. Now, with a new animal model developed by scientists at the McGovern Institute, they will be able to investigate the neural circuits that underlie placebos’ ability to elicit pain relief.

“The brain and body interaction has a lot of potential, in a way that we don’t fully understand,” says McGovern investigator Fan Wang. “I really think there needs to be more of a push to understand placebo effect, in pain and probably in many other conditions. Now we have a strong model to probe the circuit mechanism.”

Context-dependent placebo effect

McGovern Investigator Fan Wang. Photo: Caitlin Cunningham

In the September 5, 2024, issue of the journal Current Biology, Wang and her team report that they have elicited strong placebo pain relief in mice by activating pain-suppressing neurons in the brain while the mice are in a specific environment—thereby teaching the animals that they feel better when they are in that context. Following their training, placing the mice in that environment alone is enough to suppress pain. The team’s experiments, which were funded by the National Institutes of Health, the K. Lisa Yang Brain-Body Center, and the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics within MIT’s Yang Tan Collective, show that this context-dependent placebo effect relieves both acute and chronic pain.

Context is critical for the placebo effect. While a pill can help a patient feel better when they expect it to, even if it is made only of sugar or starch, it seems to be not just the pill that sets up those expectations, but the entire scenario in which the pill is taken. For example, being in a hospital and interacting with doctors can contribute to a patient’s perception of care, and these social and environmental factors can make a placebo effect more probable.

Postdoctoral fellows Bin Chen and Nitsan Goldstein used visual and textural cues to define a specific place. Then they activated pain-suppressing neurons in the brain while the animals were in this “pain-relief box.” Those pain-suppressing neurons, which Wang’s lab discovered a few years ago, are located in an emotion-processing center of the brain called the central amygdala. By expressing light-sensitive channels in these neurons, the researchers were able to suppress pain with light in the pain-relief box and leave the neurons inactive when mice were in a control box.

Animals learned to prefer the pain-relief box to other environments. And when the researchers tested their response to potentially painful stimuli after they had made that association, they found the mice were less sensitive while they were there. “Just by being in the context that they had associated with pain suppression, we saw that reduced pain—even though we weren’t actually activating those [pain-suppressing] neurons,” Goldstein explains.

Acute and chronic pain relief

Some scientists have been able to elicit placebo pain relief in rodents by treating the animals with morphine, linking environmental cues to the pain suppression caused by the drug, similar to the way Wang’s team did by directly activating pain-suppressing neurons. This drug-based approach works best for setting up expectations of relief for acute pain; its placebo effect is short-lived and mostly ineffective against chronic pain. So Wang, Chen, and Goldstein were particularly pleased to find that their engineered placebo effect was effective for relieving both acute and chronic pain.

In their experiments, animals with chemotherapy-induced hypersensitivity to touch preferred the pain-relief box as strongly as animals exposed to a chemical that induces acute pain, even days after their initial conditioning. Once there, their chemotherapy-induced pain sensitivity was eliminated; they exhibited no more sensitivity to painful stimuli than they had prior to receiving chemotherapy.

One of the biggest surprises came when the researchers turned their attention back to the pain-suppressing neurons in the central amygdala that they had used to trigger pain relief. They suspected that those neurons might be reactivated when mice returned to the pain-relief box. Instead, they found that after the initial conditioning period, those neurons remained quiet. “These neurons are not reactivated, yet the mice appear to be no longer in pain,” Wang says. “So it suggests this memory of feeling well is transferred somewhere else.”

Goldstein adds that there must be a pain-suppressing neural circuit somewhere that is activated by pain-relief-associated contexts—and the team’s new placebo model sets researchers up to investigate those pathways. A deeper understanding of that circuitry could enable clinicians to deploy the placebo effect—alone or in combination with active treatments—to better manage patients’ pain in the future.

Finding the way

This story also appears in the Fall 2024 issue of BrainScan.

___

When you arrive in a new city, every outing can be an exploration. You may know your way to a few places, but only if you follow a specific route. As you wander around a bit, get lost a few times, and familiarize yourself with some landmarks and where they are relative to each other, your brain develops a cognitive map of the space. You learn how things are laid out, and navigating gets easier.

It takes a lot to generate a useful mental map. “You have to understand the structure of relationships in the world,” says McGovern Investigator Mehrdad Jazayeri. “You need learning and experience to construct clever representations. The advantage is that when you have them, the world is an easier place to deal with.”

Indeed, Jazayeri says, internal models like these are the core of intelligent behavior.

Mehrdad Jazayeri (right) and graduate student Jack Gabel sit inside a rig designed to probe the brain’s ability to solve real-world problems with internal models. Photo: Steph Stevens

Many McGovern scientists see these cognitive maps as windows into their biggest questions about the brain: how it represents the external world, how it lets us learn and adapt, and how it forms and reconstructs memories. Researchers are learning that the cells and strategies the brain uses to understand the layout of a space also help it track other kinds of structure in the world, from variations in sound to sequences of events. By studying how neurons behave as animals navigate their environments, McGovern researchers expect to deepen their understanding of other important cognitive functions as well.

Decoding spatial maps

McGovern Investigator Ila Fiete builds theoretical models that help explain how spatial maps are formed in the brain. Previous research has shown that “place cells” and “grid cells” are place-sensitive neurons in the brain’s hippocampus and entorhinal cortex whose firing patterns help an animal map out a space. As an animal becomes familiar with its environment, subsets of these cells become tied to specific locations, firing only when the animal is in them.

Microscopic image of the mouse hippocampus
The brain’s ability to navigate the world is made possible by a brain circuit that includes the hippocampus (above), entorhinal cortex, and retrosplenial cortex. The firing pattern of “grid cells” and “place cells” in this circuit help form mental representations, or cognitive maps, of the external world. These brain regions are also among the first areas to be affected in people with Alzheimer’s, who often have trouble navigating. Image: Qian Chen, Guoping Feng

Fiete’s models have shown how these circuits can integrate information about movement, like signals from the muscles and vestibular system that change as an animal moves around, to calculate and update its estimate of an animal’s position in space. Fiete suspects the cells that do this can use the same strategy to keep track of other kinds of movement or change.
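The integration Fiete describes—accumulating self-motion signals to keep a running estimate of position—is often called path integration, and its core arithmetic can be sketched in a few lines. This is a minimal schematic, not Fiete’s continuous-attractor model; the step format (speed, heading) is an assumed stand-in for the movement signals described above.

```python
import math

def path_integrate(start, steps):
    """Update a 2D position estimate by accumulating self-motion signals.

    Each step is a (speed, heading_radians) pair, standing in for the
    velocity and head-direction signals an animal gets from its muscles
    and vestibular system as it moves.
    """
    x, y = start
    for speed, heading in steps:
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return x, y

# One unit of travel east, then one unit north.
pos = path_integrate((0.0, 0.0), [(1.0, 0.0), (1.0, math.pi / 2)])
print(pos)  # approximately (1.0, 1.0)
```

Because the estimate is updated purely from movement signals, small errors accumulate over time—one reason real circuits are thought to also anchor the estimate to landmarks.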

Mapping a space is about understanding where things are in relationship to one another, says Jazayeri, and tracking relationships is useful for modeling many kinds of structure in the world. For example, the hippocampus and entorhinal cortex are also closely linked to episodic memory, which keeps track of the connections between events and experiences.

“These brain areas are thought to be critical for learning relationships,” Jazayeri says.

Navigating virtual worlds

A key feature of cognitive maps is that they enable us to make predictions and respond to new situations without relying on immediate sensory cues. In a study published in Nature this June, Jazayeri and Fiete saw evidence of the brain’s ability to call up an internal model of an abstract domain: they watched neurons in the brain’s entorhinal cortex register a sequence of images, even when they were hidden from view.

Two scientists write equations on a glass wall with a marker.
Ila Fiete and postdoc Sarthak Chandra (right) develop theoretical models to study the brain. Photo: Steph Stevens

We can remember the layout of our home from far away or plan a walk through the neighborhood without stepping outside — so it may come as no surprise that the brain can call up its internal model in the absence of movement or sensory inputs. Indeed, previous research has shown that the circuits that encode physical space also encode abstract spaces like auditory sound sequences. But these experiments were performed in the presence of the stimuli, and Jazayeri and his team wanted to know whether simply imagining movement through an abstract domain may also evoke the same cognitive maps.

To test the entorhinal cortex’s ability to do this, Jazayeri and his team designed an experiment where animals had to “mentally” navigate through a previously explored, but now invisible, sequence of images. Working with Fiete, they found that the neurons that had become responsive to particular images in the visible sequence would also fire when mentally navigating the sequence in which images were hidden from view — suggesting the animal was conjuring a representation of the image in its mind.

Colored dots in the shape of a ring.
Ila Fiete has shown that the brain generates a one-dimensional ring of neural activity that acts as a compass. Here, head direction is indicated by color. Image: Ila Fiete

“You see these neurons in the entorhinal cortex undergo very clear dynamic patterns that are in correspondence with what we think the animal might be thinking at the time,” Jazayeri says. “They are updating themselves without any change out there in the world.”

The team then incorporated their data into a computational model to explore how neural circuits might form a mental model of abstract sequences. Their artificial circuit showed that external inputs (e.g., image sequences) become associated with internal models through a simple associative learning rule in which neurons that fire together, wire together. This model suggests that imagined movement could update the internal representations, and that the learned association between these internal representations and external inputs might enable recall of the corresponding inputs even when they are absent.
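The “fire together, wire together” rule can be written compactly as an outer-product weight update. The sketch below is a generic Hebbian association between a toy “external input” and a toy “internal state”—an illustration of the learning rule named above, not the paper’s actual circuit model.

```python
import numpy as np

def hebbian_update(W, pre, post, lr=0.1):
    """Strengthen connections between co-active pre- and postsynaptic units."""
    return W + lr * np.outer(post, pre)

# Associate an "external input" pattern with an "internal state" pattern.
external = np.array([1.0, 0.0, 1.0, 0.0])   # e.g., one image in a sequence
internal = np.array([0.0, 1.0, 1.0])        # internal-model units active with it

W = np.zeros((3, 4))
for _ in range(10):                          # repeated pairing during "training"
    W = hebbian_update(W, external, internal)

# After learning, the internal pattern alone recalls the associated input:
recalled = W.T @ internal
print(recalled)  # largest where the external pattern was active
```

The readout step mirrors the paper’s suggestion: once the association is learned, activating the internal representation alone is enough to reconstruct which external input it was paired with.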

More broadly, Fiete’s research on cognitive mapping in the hippocampus is leading to some interesting predictions: “One of the conclusions we’re coming to in my group is that when you reconstruct a memory, the area that’s driving that reconstruction is the entorhinal cortex and hippocampus but the reconstruction may happen in the sensory periphery, using the representations that played a role in experiencing that stimulus in the first place,” Fiete explains. “So when I reconstruct an image, I’m likely using my visual cortex to do that reconstruction, driven by the hippocampal complex.” Signals from the entorhinal cortex to the visual cortex during navigation could help an animal visualize landmarks and find its way, even when those landmarks are not visible in the external world.

Landmark coding

Near the entorhinal cortex is the retrosplenial cortex, another brain area that seems to be important for navigation. It is positioned to integrate visual signals with information about the body’s position and movement through space. Both the retrosplenial cortex and the entorhinal cortex are among the first areas impacted by Alzheimer’s disease; spatial disorientation and navigation difficulties may be consequences of their degeneration.

Researchers suspect the retrosplenial cortex may be key to letting an animal know not just where something is, but also how to get there. McGovern Investigator Mark Harnett explains that to generate a cognitive map that can be used to navigate, an animal must understand not just where objects or other cues are in relationship to itself, but also where they are in relationship to each other.

In a study reported in eLife in 2020, Harnett and colleagues may have glimpsed both of these kinds of spatial representations in the brain. They watched neurons in the retrosplenial cortex light up as mice ran on a treadmill while a virtual environment passed by on a screen. As the mice became familiar with the landscape and learned where they were likely to find a reward, activity in the retrosplenial cortex changed.

A scientist looks at a computer monitor and adjusts a small wheel.
Lukas Fischer, a Harnett lab postdoc, operates a rig designed to study how mice navigate a virtual environment. Photo: Justin Knight

“What we found was this representation started off sort of crude and mostly about what the animal was doing. And then eventually it became more about the task, the landscape, and the reward,” Harnett says.

Harnett’s team has since begun investigating how the retrosplenial cortex enables more complex spatial reasoning. They designed an experiment in which mice must understand many spatial relationships to access a treat. The experimental setup requires mice to consider the location of reward ports, the center of their environment, and their own viewing angle. Most of the time, they succeed. “They have to really do some triangulation, and the retrosplenial cortex seems to be critical for that,” Harnett says.

When the team monitored neural activity during the task, they found evidence that when an animal wasn’t quite sure where to go, its brain held on to multiple spatial hypotheses at the same time, until new information ruled one out.

Fiete, who has worked with Harnett to explore how neural circuits can execute this kind of spatial reasoning, points out that Jazayeri’s team has observed similar reasoning in animals that must make decisions based on temporarily ambiguous auditory cues. “In both cases, animals are able to hold multiple hypotheses in mind and do the inference,” she says. “Mark’s found that the retrosplenial cortex contains all the signals necessary to do that reasoning.”

Beyond spatial reasoning

As his team learns more about how the brain creates and uses cognitive maps, Harnett hopes activity in the retrosplenial cortex will shed light on a fundamental aspect of the brain’s organization. The retrosplenial cortex doesn’t just receive information from the brain’s vision-processing center, it also sends signals back. He suspects these may direct the visual cortex to relay information that is particularly pertinent to forming or using a meaningful cognitive map.

“The brain’s navigation system is a beautiful playground.” – Ila Fiete

This kind of connectivity, where parts of the brain that carry out complex cognitive processing send signals back to regions that handle simpler functions, is common in the brain. Figuring out why is a key pursuit in Harnett’s lab. “I want to use that as a model for thinking about the larger cortical computations, because you see this kind of motif repeated in a lot of ways, and it’s likely key for understanding how learning works,” he says.

Fiete is particularly interested in unpacking the common set of principles that allow cell circuits to generate maps of both our physical environment and our abstract experiences. What is it about this set of brain areas and circuits that, on the one hand, permits specific map-building computations, and, on the other hand, generalizes across physical space and abstract experience?

“The brain’s navigation system is a beautiful playground,” she says, “and an amazing system in which to investigate all of these questions.”

Scientists find neurons that process language on different timescales

Using functional magnetic resonance imaging (fMRI), neuroscientists have identified several regions of the brain that are responsible for processing language. However, discovering the specific functions of neurons in those regions has proven difficult because fMRI, which measures changes in blood flow, doesn’t have high enough resolution to reveal what small populations of neurons are doing.

Now, using a more precise technique that involves recording electrical activity directly from the brain, MIT neuroscientists have identified different clusters of neurons that appear to process different amounts of linguistic context. These “temporal windows” range from just one word up to about six words.

The temporal windows may reflect different functions for each population, the researchers say. Populations with shorter windows may analyze the meanings of individual words, while those with longer windows may interpret more complex meanings created when words are strung together.

“This is the first time we see clear heterogeneity within the language network,” says Evelina Fedorenko, an associate professor of neuroscience at MIT. “Across dozens of fMRI experiments, these brain areas all seem to do the same thing, but it’s a large, distributed network, so there’s got to be some structure there. This is the first clear demonstration that there is structure, but the different neural populations are spatially interleaved so we can’t see these distinctions with fMRI.”

Fedorenko, who is also a member of MIT’s McGovern Institute for Brain Research, is the senior author of the study, which appears today in Nature Human Behaviour. MIT postdoc Tamar Regev and Harvard University graduate student Colton Casto are the lead authors of the paper.

Temporal windows

Functional MRI, which has helped scientists learn a great deal about the roles of different parts of the brain, works by measuring changes in blood flow in the brain. These measurements act as a proxy of neural activity during a particular task. However, each “voxel,” or three-dimensional chunk, of an fMRI image represents hundreds of thousands to millions of neurons and sums up activity across about two seconds, so it can’t reveal fine-grained detail about what those neurons are doing.

One way to get more detailed information about neural function is to record electrical activity using electrodes implanted in the brain. These data are hard to come by because this procedure is done only in patients who are already undergoing surgery for a neurological condition such as severe epilepsy.

“It can take a few years to get enough data for a task because these patients are relatively rare, and in a given patient electrodes are implanted in idiosyncratic locations based on clinical needs, so it takes a while to assemble a dataset with sufficient coverage of some target part of the cortex. But these data, of course, are the best kind of data we can get from human brains: You know exactly where you are spatially and you have very fine-grained temporal information,” Fedorenko says.

In a 2016 study, Fedorenko reported using this approach to study the language processing regions of six people. Electrical activity was recorded while the participants read four different types of language stimuli: complete sentences, lists of words, lists of non-words, and “jabberwocky” sentences — sentences that have grammatical structure but are made of nonsense words.

Those data showed that in some neural populations in language-processing regions, activity gradually built up over a period of several words when the participants read sentences. This did not happen when they read lists of words, lists of non-words, or jabberwocky sentences.

In the new study, Regev and Casto went back to those data and analyzed the temporal response profiles in greater detail. In their original dataset, they had recordings of electrical activity from 177 language-responsive electrodes across the six patients. Conservative estimates suggest that each electrode represents the averaged activity of about 200,000 neurons. They also obtained new data from a second set of 16 patients, which included recordings from another 362 language-responsive electrodes.

When the researchers analyzed these data, they found that in some of the neural populations, activity would fluctuate up and down with each word. In others, however, activity would build up over multiple words before falling again, and yet others would show a steady buildup of neural activity over longer spans of words.

By comparing their data with predictions made by a computational model that the researchers designed to process stimuli with different temporal windows, the researchers found that neural populations from language processing areas could be divided into three clusters. These clusters represent temporal windows of either one, four, or six words.
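A simple way to capture what “temporal windows of one, four, or six words” might mean is a unit that averages the word-by-word signal over its last k words. The sketch below is a generic moving-window integrator of this kind—an assumed simplification, not the authors’ published model.

```python
def windowed_response(word_signal, window):
    """Response of a unit that integrates the signal over the last `window` words."""
    out = []
    for i in range(len(word_signal)):
        lo = max(0, i - window + 1)      # clip the window at sentence start
        chunk = word_signal[lo : i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

signal = [1, 0, 1, 1, 0, 1]              # toy per-word activity along a sentence
for k in (1, 4, 6):                      # the three cluster sizes from the study
    print(k, windowed_response(signal, k))
```

A one-word window tracks each word’s signal exactly—fluctuating up and down word by word—while longer windows build up and smooth over multiple words, echoing the response profiles the team observed.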

“It really looks like these neural populations integrate information across different timescales along the sentence,” Regev says.

Processing words and meaning

These differences in temporal window size would have been impossible to see using fMRI, the researchers say.

“At the resolution of fMRI, we don’t see much heterogeneity within language-responsive regions. If you localize in individual participants the voxels in their brain that are most responsive to language, you find that their responses to sentences, word lists, jabberwocky sentences and non-word lists are highly similar,” Casto says.

The researchers were also able to determine the anatomical locations where these clusters were found. Neural populations with the shortest temporal window were found predominantly in the posterior temporal lobe, though some were also found in the frontal or anterior temporal lobes. Neural populations from the two other clusters, with longer temporal windows, were spread more evenly throughout the temporal and frontal lobes.

Fedorenko’s lab now plans to study whether these timescales correspond to different functions. One possibility is that the shortest timescale populations may be processing the meanings of a single word, while those with longer timescales interpret the meanings represented by multiple words.

“We already know that in the language network, there is sensitivity to how words go together and to the meanings of individual words,” Regev says. “So that could potentially map to what we’re finding, where the longest timescale is sensitive to things like syntax or relationships between words, and maybe the shortest timescale is more sensitive to features of single words or parts of them.”

The research was funded by the Zuckerman-CHE STEM Leadership Program, the Poitras Center for Psychiatric Disorders Research, the Kempner Institute for the Study of Natural and Artificial Intelligence at Harvard University, the U.S. National Institutes of Health, an American Epilepsy Society Research and Training Fellowship, the McDonnell Center for Systems Neuroscience, Fondazione Neurone, the McGovern Institute, MIT’s Department of Brain and Cognitive Sciences, and the Simons Center for the Social Brain.

Three MIT professors named 2024 Vannevar Bush Fellows

The U.S. Department of Defense (DoD) has announced three MIT professors among the members of the 2024 class of the Vannevar Bush Faculty Fellowship (VBFF). The fellowship is the DoD’s flagship single-investigator award for research, inviting the nation’s most talented researchers to pursue ambitious ideas that defy conventional boundaries.

Domitilla Del Vecchio, professor of mechanical engineering and the Grover M. Hermann Professor in Health Sciences & Technology; Mehrdad Jazayeri, professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research; and Themistoklis Sapsis, the William I. Koch Professor of Mechanical Engineering and director of the Center for Ocean Engineering, are among the 11 university scientists and engineers chosen for this year’s fellowship class. They join an elite group of approximately 50 fellows from previous class years.

“The Vannevar Bush Faculty Fellowship is more than a prestigious program,” said Bindu Nair, director of the Basic Research Office in the Office of the Under Secretary of Defense for Research and Engineering, in a press release. “It’s a beacon for tenured faculty embarking on groundbreaking ‘blue sky’ research.”

Research topics

Each fellow receives up to $3 million over a five-year term to pursue cutting-edge projects. Research topics in this year’s class span a range of disciplines, including materials science, cognitive neuroscience, quantum information sciences, and applied mathematics. While pursuing individual research endeavors, Fellows also leverage the unique opportunity to collaborate directly with DoD laboratories, fostering a valuable exchange of knowledge and expertise.

Del Vecchio, whose research interests include control and dynamical systems theory and systems and synthetic biology, will investigate the molecular underpinnings of analog epigenetic cell memory, then use what her team learns to “establish unprecedented engineering capabilities for creating self-organizing and reconfigurable multicellular systems with graded cell fates.”

“With this fellowship, we will be able to explore the limits to which we can leverage analog memory to create multicellular systems that autonomously organize in permanent, but reprogrammable, gradients of cell fates and can be used for creating next-generation tissues and organoids with dramatically increased sophistication,” says Del Vecchio, who adds that she is honored to have been selected.

Jazayeri wants to understand how the brain gives rise to cognitive and emotional intelligence. The engineering systems being built today lack the hallmarks of human intelligence, explains Jazayeri. They neither learn quickly nor generalize their knowledge flexibly. They don’t feel emotions or have emotional intelligence.

Jazayeri plans to use the VBFF award to integrate ideas from cognitive science, neuroscience, and machine learning with experimental data in humans, animals, and computer models to develop a computational understanding of cognitive and emotional intelligence.

“I’m honored and humbled to be selected and excited to tackle some of the most challenging questions at the intersection of neuroscience and AI,” he says.

“I am humbled to be included in such a select group,” echoes Sapsis, who will use the grant to research new algorithms and theory designed for the efficient computation of extreme event probabilities and precursors, and for the design of mitigation strategies in complex dynamical systems.

Examples of Sapsis’s work include risk quantification for extreme events in human-made systems; climate events, such as heat waves, and their effect on interconnected systems like food supply chains; and “mission-critical algorithmic problems such as search and path planning operations for extreme anomalies,” he explains.

VBFF impact

Named for Vannevar Bush PhD 1916, an influential inventor, engineer, former professor, and dean of the School of Engineering at MIT, the highly competitive fellowship, formerly known as the National Security Science and Engineering Faculty Fellowship, aims to advance transformative, university-based fundamental research. Bush served as the director of the U.S. Office of Scientific Research and Development, and organized and led American science and technology during World War II.

“The outcomes of VBFF-funded research have transformed entire disciplines, birthed novel fields, and challenged established theories and perspectives,” said Nair. “By contributing their insights to DoD leadership and engaging with the broader national security community, they enrich collective understanding and help the United States leap ahead in global technology competition.”

Four MIT faculty named 2024 HHMI Investigators

The Howard Hughes Medical Institute (HHMI) today announced its 2024 investigators, four of whom hail from the School of Science at MIT: Steven Flavell, Mary Gehring, Mehrdad Jazayeri, and Gene-Wei Li.

Four others with MIT ties were also honored: Jonathan Abraham, a graduate of the Harvard/MIT MD-PhD Program; Dmitriy Aronov PhD ’10; Vijay Sankaran, a graduate of the Harvard/MIT MD-PhD Program; and Steven McCarroll, an institute member of the Broad Institute of MIT and Harvard.

Every three years, HHMI selects roughly two dozen new investigators who have significantly impacted their chosen disciplines to receive a substantial and completely discretionary grant, which can be renewed indefinitely following review. The award, which totals roughly $11 million per investigator over the next seven years, enables scientists to continue working at their current institution; it covers their full salary and gives them the financial flexibility to go wherever their scientific inquiries take them.

Of the almost 1,000 applicants this year, 26 investigators were selected for their ability to push the boundaries of science and for their efforts to create highly inclusive and collaborative research environments.

“When scientists create environments in which others can thrive, we all benefit,” says HHMI president Erin O’Shea. “These newest HHMI Investigators are extraordinary, not only because of their outstanding research endeavors but also because they mentor and empower the next generation of scientists to work alongside them at the cutting edge.”

Steven Flavell

Steven Flavell, associate professor of brain and cognitive sciences and investigator in the Picower Institute for Learning and Memory, seeks to uncover the neural mechanisms that generate the internal states of the brain, for example, different motivational and arousal states. Working in the model organism C. elegans, the lab has used genetic, systems, and computational approaches to relate neural activity across the brain to precise features of the animal’s behavior. In addition, they have mapped the anatomical and functional organization of the serotonin system, showing how it modulates the internal state of C. elegans. As a newly named HHMI Investigator, Flavell will pursue research that he hopes will build a foundational understanding of how internal states arise and influence behavior in nervous systems in general. The work will employ brain-wide neural recordings, computational modeling, expansive research on neuromodulatory system organization, and studies of how the synaptic wiring of the nervous system constrains an animal’s ability to generate different internal states.

“I think that it should be possible to define the basis of internal states in C. elegans in concrete terms,” Flavell says. “If we can build a thread of understanding from the molecular architecture of neuromodulatory systems, to changes in brain-wide activity, to state-dependent changes in behavior, then I think we’ll be in a much better place as a field to think about the basis of brain states in more complex animals.”

Mary Gehring

Mary Gehring, professor of biology and core member and David Baltimore Chair in Biomedical Research at the Whitehead Institute for Biomedical Research, studies how plant epigenetics modulates plant growth and development, with a long-term goal of uncovering the essential genetic and epigenetic elements of plant seed biology. Ultimately, the Gehring Lab’s work provides the scientific foundations for engineering alternative modes of seed development and improving plant resiliency at a time when worldwide agriculture is in a uniquely precarious position due to climate changes.

The Gehring Lab uses genetic, genomic, computational, synthetic, and evolutionary approaches to explore heritable traits by investigating repetitive sequences, DNA methylation, and chromatin structure. The lab primarily uses the model plant A. thaliana, a member of the mustard family and the first plant to have its genome sequenced.

“I’m pleased that HHMI has been expanding its support for plant biology, and gratified that our lab will benefit from its generous support,” Gehring says. “The appointment gives us the freedom to step back, take a fresh look at the scientific opportunities before us, and pursue the ones that most interest us. And that’s a very exciting prospect.”

Mehrdad Jazayeri

Mehrdad Jazayeri, a professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research, studies how physiological processes in the brain give rise to the abilities of the mind. Work in the Jazayeri Lab brings together ideas from cognitive science, neuroscience, and machine learning with experimental data in humans, animals, and computer models to develop a computational understanding of how the brain creates internal representations, or models, of the external world.

Before coming to MIT in 2013, Jazayeri received his BS in electrical engineering, majoring in telecommunications, from Sharif University of Technology in Tehran, Iran. He completed his MS in physiology at the University of Toronto and his PhD in neuroscience at New York University.

With his appointment to HHMI, Jazayeri plans to explore how the brain enables rapid learning and flexible behavior — central aspects of intelligence that have been difficult to study using traditional neuroscience approaches.

“This is a recognition of my lab’s past accomplishments and the promise of the exciting research we want to embark on,” he says. “I am looking forward to engaging with this wonderful community and making new friends and colleagues while we elevate our science to the next level.”

Gene-Wei Li

Gene-Wei Li, associate professor of biology, has been working on quantifying the amount of proteins cells produce and how protein synthesis is orchestrated within the cell since opening his lab at MIT in 2015.

Li, whose background is in physics, credits the lab’s findings to the skills and communication among his research team, allowing them to explore the unexpected questions that arise in the lab.

For example, two of his graduate student researchers found that the coordination between transcription and translation fundamentally differs between the model organisms E. coli and B. subtilis. In B. subtilis, the ribosome lags far behind RNA polymerase, a process the lab termed “runaway transcription.” The discovery revealed that this kind of uncoupling between transcription and translation is widespread across many species of bacteria, a finding that contradicted the long-standing dogma of molecular biology that the machinery of protein synthesis and RNA polymerase work side-by-side in all bacteria.

The support from HHMI enables Li and his team the flexibility to pursue the basic research that leads to discoveries at their discretion.

“Having this award allows us to be bold and to do things at a scale that wasn’t possible before,” Li says. “The discovery of runaway transcription is a great example. We didn’t have a traditional grant for that.”

Mehrdad Jazayeri selected as an HHMI investigator

The Howard Hughes Medical Institute (HHMI) has named McGovern Institute neuroscientist Mehrdad Jazayeri as one of 26 new HHMI investigators—a group of visionary scientists whom HHMI will support with more than $300 million over the next seven years.

Support from HHMI is intended to give its investigators, who work at institutions across the United States, the time and resources they need to push the boundaries of the biological sciences. Jazayeri, whose work integrates neurobiology with cognitive science and machine learning, plans to use that support to explore how the brain enables rapid learning and flexible behavior—central aspects of intelligence that have been difficult to study using traditional neuroscience approaches.

Jazayeri says he is delighted and honored by the news. “This is a recognition of my lab’s past accomplishments and the promise of the exciting research we want to embark on,” he says. “I am looking forward to engaging with this wonderful community and making new friends and colleagues while we elevate our science to the next level.”

An unexpected path

Jazayeri, who has been an investigator at the McGovern Institute since 2013, has already made a series of groundbreaking discoveries about how physiological processes in the brain give rise to the abilities of the mind. “That’s what we do really well,” he says. “We expose the computational link between abstract mental concepts, like belief, and electrical signals in the brain.”

Jazayeri’s expertise and enthusiasm for this work grew out of a curiosity that was sparked unexpectedly several years after he’d abandoned university education. He’d pursued his undergraduate studies in electrical engineering, a path with good job prospects in Iran where he lived. But the program at Sharif University of Technology in Tehran left him disenchanted. “It was an uninspiring experience,” he says. “It’s a top university and I went there excited, but I lost interest as I couldn’t think of a personally meaningful application for my engineering skills. So, after my undergrad, I started a string of random jobs, perhaps to search for my passion.”

A few years later, Jazayeri was trying something new, happily living and working at a banana farm near the Caspian Sea. The farm schedule allowed for leisure in the evenings, which he took advantage of by delving into boxes full of books that an uncle regularly sent him from London. The books were an unpredictable, eclectic mix. Jazayeri read them all—and it was those that talked about the brain that most captured his imagination.

Until then, he had never had much interest in biology. But when he read about neurological disorders and how scientists were studying the brain, he was captivated. The subject seemed to merge his inherent interest in philosophy with an analytical approach that he also loved. “These books made me think that you actually can understand this system at a more concrete level…you can put electrodes in the brain and listen to what neurons say,” he says. “It had never even occurred to me to think about those things.”

He wanted to know more. It took time to find a graduate program in neuroscience that would accept a student with his unconventional background, but eventually the University of Toronto accepted him into a master’s program after he crammed for and passed an undergraduate exam testing his knowledge of physiology. From there, he went on to earn a PhD in neuroscience from New York University studying visual perception, followed by a postdoctoral fellowship at the University of Washington where he studied time perception.

In 2013, Jazayeri joined MIT’s Department of Brain and Cognitive Sciences. At MIT, conversations with new colleagues quickly enriched the way he thought about the brain. “It is fascinating to listen to cognitive scientists’ ideas about the mind,” he says. “They have a rich and deep understanding of the mind but the language they use to describe the mind is not the language of the brain. Bridging this gap in language between neuroscience and cognitive science is at the core of research in my lab.”

His lab’s general approach has been to collect data on neural activity from humans and animals as they perform tasks that call on specific aspects of the mind. “We design tasks that are as simple as possible but get at the crux of the problems in cognitive science,” he explains. “Then we build models that help us connect abstract concepts and theories in cognitive science to signals and dynamics of neural activity in the brain.”

It’s an interdisciplinary approach that even calls on many of the engineering approaches that had failed to inspire him as a student. Students and postdocs in the lab bring a diverse set of knowledge and skills, and together the team has made significant contributions to neuroscience, cognitive science, and computational science.

With animals trained to reproduce a rhythm, they’ve shown how neurons adjust the speed of their signals to predict when something will occur, and what happens when the actual timing of a stimulus deviates from the brain’s expectations.

Studies of time interval predictions have also helped the team learn how the brain weighs different pieces of information as it assesses situations and makes decisions. This process, called Bayesian integration, shapes our beliefs and our confidence in those beliefs. “These are really fundamental concepts in cognitive sciences, and we can now say how neurons exactly do that,” he says.
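The inverse-variance weighting at the heart of Bayesian integration can be seen in a toy example. This is an illustrative sketch, not the lab's model: it combines a prior belief about an interval with a noisy measurement, weighting each by its reliability, so the final estimate is pulled toward whichever source is more trustworthy. All numbers are hypothetical.

```python
# Toy sketch of Bayesian integration (cue combination).
# Two Gaussian sources of evidence are merged by weighting each
# with its reliability, i.e. the inverse of its variance.

def combine(mu_prior, var_prior, mu_obs, var_obs):
    """Posterior mean and variance for two Gaussian evidence sources."""
    w_prior = 1.0 / var_prior  # reliability of the prior belief
    w_obs = 1.0 / var_obs      # reliability of the new measurement
    mu_post = (w_prior * mu_prior + w_obs * mu_obs) / (w_prior + w_obs)
    var_post = 1.0 / (w_prior + w_obs)  # combining evidence reduces uncertainty
    return mu_post, var_post

# Prior: intervals tend to be near 800 ms (sd 100 ms).
# Measurement: this interval looked like 700 ms (sd 50 ms).
mu, var = combine(mu_prior=800.0, var_prior=100.0**2,
                  mu_obs=700.0, var_obs=50.0**2)
print(mu)  # -> 720.0: between the two, closer to the more reliable cue
```

Because the measurement here is four times more reliable than the prior, the estimate lands at 720 ms, much nearer the measurement; with a noisier measurement it would drift back toward the prior.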

More recently, by teaching animals to navigate a virtual environment, Jazayeri’s team has found activity in the brain that appears to call up a cognitive map of a space even when its features are not visible. The discovery helps reveal how the brain builds internal models and uses them to interact with the world.

A new paradigm

Jazayeri is proud of these achievements. But he knows that when it comes to understanding the power and complexity of cognition, something is missing.

“Two really important hallmarks of cognition are the ability to learn rapidly and generalize flexibly. If somebody can do that, we say they’re intelligent,” he says. It’s an ability we have from an early age. “If you bring a kid a bunch of toys, they don’t need several years of training, they just can play with the toys right away in very creative ways,” he says. In the wild, many animals are similarly adept at problem solving and finding uses for new tools. But when animals are trained for many months on a single task, as typically happens in a lab, they don’t behave as intelligently. “They become like an expert that does one thing well, but they’re no longer very flexible,” he says.

Figuring out how the brain adapts and acts flexibly in real-world situations is going to require a new approach. “What we have done is that we come up with a task, and then change the animal’s brain through learning to match our task,” he says. “What we now want to do is to add a new paradigm to our work, one in which we will devise the task such that it would match the animal’s brain.”

As an HHMI investigator, Jazayeri plans to take advantage of a host of new technologies to study the brain’s involvement in ecologically relevant behaviors. That means moving beyond the virtual scenarios and digital platforms that have been so widespread in neuroscience labs, including his own, and instead letting animals interact with real objects and environments. “The animal will use its eyes and hands to engage with physical objects in the real world,” he says.

To analyze and learn about animals’ behavior, the team plans detailed tracking of hand and eye movements, and even measurements of sensations that are felt through the hands as animals explore objects and work through problems. These activities are expected to engage the entire brain, so the team will broadly record and analyze neural activity.

Designing meaningful experiments and making sense of the data will be a deeply interdisciplinary endeavor, and Jazayeri knows working with a collaborative community of scientists will be essential. He’s looking forward to sharing the enormous amount of relevant data his lab expects to collect with the research community and getting others involved. Likewise, as a dedicated mentor, he is committed to training scientists who will continue and expand the work in the future.

He is enthusiastic about the opportunity to move into these bigger questions about cognition and intelligence, and support from HHMI comes at an opportune moment. “I think we have now built the infrastructure and conceptual frameworks to think about these problems, and technology for recording and tracking animals has developed a great deal, so we can now do more naturalistic experiments,” he says.

His passion for his work is one of many passions in his life. His love for family, friends, and art are just as deep, and making space to experience everything is a lifelong struggle. But he knows his zeal is infectious. “I think my love for science is probably one of the best motivators of people around me,” he says.

License plates of MIT

What does your license plate say about you?

In the United States, more than 9 million vehicles carry personalized “vanity” license plates, in which preferred words, digits, or phrases replace an otherwise random assignment of letters and numbers to identify a vehicle. While each state and the District of Columbia maintains its own rules about appropriate selections, creativity reigns when choosing a unique vanity plate. What’s more, the stories behind them can be just as fascinating as the people who use them.

It might not come as a surprise to learn that quite a few MIT community members have participated in such vehicular whimsy. Read on to meet some of them and learn about the nerdy, artsy, techy, and MIT-related plates that color their rides.

A little piece of tech heaven

One of the most recognized vehicles around campus is Samuel Klein’s 1998 Honda Civic. More than just the holder of a vanity plate, it’s an art car — a vehicle that’s been custom-designed as a way to express an artistic idea or theme. Klein’s Civic is covered with hundreds of 3.5-inch floppy disks in various colors, and it sports disks, computer keys, and other techy paraphernalia on the interior. With its double-entendre vanity plate, “DSKDRV” (“disk drive”), the art car initially came into being on the West Coast.

Klein, a longtime affiliate of the MIT Media Lab, MIT Press, and MIT Libraries, first heard about the car from fellow Wikimedian and current MIT librarian Phoebe Ayers. An artistic friend of Ayers’, Lara Wiegand, had designed and decorated the car in Seattle but wanted to find a new owner. Klein was intrigued and decided to fly west to check the Civic out.

“I went out there, spent a whole afternoon seeing how she maintained the car and talking about engineering and mechanisms and the logistics of what’s good and bad,” Klein says. “It had already gone through many iterations.”

Klein quickly decided he was up to the task of becoming the new owner. As he drove the car home across the country, it “got a wide range of really cool responses across different parts of the U.S.”

Back in Massachusetts, Klein made a few adjustments: “We painted the hubcaps, we added racing stripes, we added a new generation of laser-etched glass circuits and, you know, I had my own collection of antiquated technology disks that seemed to fit.”

The vanity plate also required a makeover. In Washington state it was “DISKDRV,” but, Klein says, “we had to shave the license plate a bit because there are fewer letters in Massachusetts.”

Today, the car has about 250,000 miles and an Instagram account. “The biggest challenge is just the disks have to be resurfaced, like a lizard, every few years,” says Klein, whose partner, an MIT research scientist, often parks it around campus. “There’s a small collection of love letters for the car. People leave the car notes. It’s very sweet.”

Marking his place in STEM history

Omar Abudayyeh ’12, PhD ’18, a recent McGovern Fellow at the McGovern Institute for Brain Research at MIT who is now an assistant professor at Harvard Medical School, shares an equally riveting story about his vanity plate, “CRISPR,” which adorns his sport utility vehicle.

The plate refers to the genome-editing technique that has revolutionized biological and medical research by enabling rapid changes to genetic material. As an MIT graduate student in the lab of Professor Feng Zhang, a pioneering contributor to CRISPR technologies, Abudayyeh was highly involved in early CRISPR development for DNA and RNA editing. In fact, he and Jonathan Gootenberg ’13, another recent McGovern Fellow and assistant professor at Harvard Medical School who works closely with Abudayyeh, discovered many novel CRISPR enzymes, such as Cas12 and Cas13, and applied these technologies for both gene therapy and CRISPR diagnostics.

So how did Abudayyeh score his vanity plate? It was all due to his attendance at a genome-editing conference in 2022, where another early-stage CRISPR researcher, Samuel Sternberg, showed up in a car with New York “CRISPR” plates. “It became quite a source of discussion at the conference, and at one of the breaks, Sam and his labmates egged us on to get the Massachusetts license plate,” Abudayyeh explains. “I was sure it must already be taken, but I applied anyway, paying the 70 dollars and then receiving a message that I would get a letter eight to 12 weeks later about whether the plate was available or not. I then returned to Boston and forgot about it until a couple months later when, to my surprise, the plate arrived in the mail.”

While Abudayyeh continues his affiliation with the McGovern Institute, he and Gootenberg recently set up a lab at Harvard Medical School as new faculty members. “We have continued to discover new enzymes, such as Cas7-11, that enable new frontiers, such as programmable proteases for RNA sensing and novel therapeutics, and we’ve applied CRISPR technologies for new efforts in gene editing and aging research,” Abudayyeh notes.

As for his license plate, he says, “I’ve seen instances of people posting about it on Twitter or asking about it in Slack channels. A number of times, people have stopped me to say they read the Walter Isaacson book on CRISPR, asking how I was related to it. I would then explain my story — and describe how I’m actually in the book, in the chapters on CRISPR diagnostics.”

Displaying MIT roots, nerd pride

For some, a connection to MIT is all the reason they need to register a vanity plate — or three. Jeffrey Chambers SM ’06, PhD ’14, a graduate of the Department of Aeronautics and Astronautics, shares that he drives with a Virginia license plate touting his “PHD MIT.” Professor of biology Anthony Sinskey ScD ’67 owns several vehicles sporting vanity plates that honor Course 20, which is today the Department of Biological Engineering but has previously been known as Food Technology, Nutrition and Food Science, and Applied Biological Sciences. Sinskey says he has both “MIT 20” and “MIT XX” plates in Massachusetts and New Hampshire.

At least two MIT couples have had dual vanity plates. Says Laura Kiessling ’83, professor of chemistry: “My plate is ‘SLEX.’ This is the abbreviation for a carbohydrate called sialyl Lewis X. It has many roles, including a role in fertilization (sperm-egg binding). It tends to elicit many different reactions from people asking me what it means. Unless they are scientists, I say that my husband [Ron Raines ’80, professor of biology] gave it to me as an inside joke. My husband’s license plate is ‘PROTEIN.’”

Professor of the practice emerita Marcia Bartusiak of MIT Comparative Media Studies/Writing and her husband, Stephen Lowe PhD ’88, previously shared a pair of related license plates. When the couple lived in Virginia, with Lowe working as a mathematician on the structure of spiral galaxies and Bartusiak a young science writer focused on astronomy, they had “SPIRAL” and “GALAXY” plates. Now retired in Massachusetts, while they no longer have registered vanity plates, they’ve named their current vehicles “Redshift” and “Blueshift.”

Still other community members have plates that make a nod to their hobbies — such as Department of Earth, Atmospheric and Planetary Sciences and AeroAstro Professor Sara Seager’s “ICANOE” — or else playfully connect with fellow drivers. Julianna Mullen, communications director in the Plasma Science and Fusion Center, says of her “OMGWHY” plate: “It’s just an existential reminder of the importance of scientific inquiry, especially in traffic when someone cuts you off so they can get exactly two car lengths ahead. Oh my God, why did they do it?”

Are you an MIT affiliate with a unique vanity plate? We’d love to see it!

Polina Anikeeva named head of the Department of Materials Science and Engineering

Polina Anikeeva PhD ’09, the Matoula S. Salapatas Professor at MIT, has been named the new head of MIT’s Department of Materials Science and Engineering (DMSE), effective July 1.

“Professor Anikeeva’s passion and dedication as both a researcher and educator, as well as her impressive network of connections across the wider Institute, make her incredibly well suited to lead DMSE,” says Anantha Chandrakasan, chief innovation and strategy officer, dean of engineering, and Vannevar Bush Professor of Electrical Engineering and Computer Science.

In addition to serving as a professor in DMSE, Anikeeva is a professor of brain and cognitive sciences, director of the K. Lisa Yang Brain-Body Center, a member of the McGovern Institute for Brain Research, and associate director of MIT’s Research Laboratory of Electronics.

Anikeeva leads the MIT Bioelectronics Group, which focuses on developing magnetic and optoelectronic tools to study neural communication in health and disease. Her team applies magnetic nanomaterials and fiber-based devices to reveal physiological processes underlying brain-organ communication, with particular focus on gut-brain circuits. Their goal is to develop minimally invasive treatments for a range of neurological, psychiatric, and metabolic conditions.

Anikeeva’s research sits at the intersection of materials chemistry, electronics, and neurobiology. By bridging these disciplines, Anikeeva and her team are deepening our understanding and treatment of complex neurological disorders. Her approach has led to the creation of optoelectronic and magnetic devices that can record neural activity and stimulate neurons during behavioral studies.

Throughout her career, Anikeeva has been recognized with numerous awards for her groundbreaking research. Her honors include receiving an NSF CAREER Award, DARPA Young Faculty Award, and the Pioneer Award from the NIH’s High-Risk, High-Reward Research Program. MIT Technology Review named her one of the 35 Innovators Under 35 and the Vilcek Foundation awarded her the Prize for Creative Promise in Biomedical Science.

Her impact extends beyond the laboratory and into the classroom, where her dedication to education has earned her the Junior Bose Teaching Award, the MacVicar Faculty Fellowship, and an MITx Prize for Teaching and Learning in MOOCs. Her entrepreneurial spirit was acknowledged with a $100,000 prize in the inaugural MIT Faculty Founders Initiative Prize Competition, recognizing her pioneering work in neuroprosthetics.

In 2023, Anikeeva co-founded Neurobionics Inc., which develops flexible fibers that can interface with the brain — opening new opportunities for sensing and therapeutics. The team has presented their technologies at MIT delta v Demo Day and won $50,000 worth of lab space at the LabCentral Ignite Golden Ticket pitch competition. Anikeeva serves as the company’s scientific advisor.

Anikeeva earned her bachelor’s degree in physics at St. Petersburg State Polytechnic University in Russia. She continued her education at MIT, where she received her PhD in materials science and engineering. Vladimir Bulović, director of MIT.nano and the Fariborz Maseeh Chair in Emerging Technology, served as Anikeeva’s doctoral advisor. After completing a postdoctoral fellowship at Stanford University, working on devices for optical stimulation and recording of neural activity, Anikeeva returned to MIT as a faculty member in 2011.

Anikeeva succeeds Caroline Ross, the Ford Professor of Engineering, who has served as interim department head since August 2023.

“Thanks to Professor Ross’s steadfast leadership, DMSE has continued to thrive during this period of transition. I’m incredibly grateful for her many contributions and long-standing commitment to strengthening the DMSE community,” adds Chandrakasan.

Study reveals how an anesthesia drug induces unconsciousness

There are many drugs that anesthesiologists can use to induce unconsciousness in patients. Exactly how these drugs cause the brain to lose consciousness has been a longstanding question, but MIT neuroscientists have now answered it for one commonly used anesthesia drug.

Using a novel technique for analyzing neuron activity, the researchers discovered that the drug propofol induces unconsciousness by disrupting the brain’s normal balance between stability and excitability. The drug causes brain activity to become increasingly unstable, until the brain loses consciousness.

“The brain has to operate on this knife’s edge between excitability and chaos.” – Earl K. Miller

“It’s got to be excitable enough for its neurons to influence one another, but if it gets too excitable, it spins off into chaos. Propofol seems to disrupt the mechanisms that keep the brain in that narrow operating range,” says Earl K. Miller, the Picower Professor of Neuroscience and a member of MIT’s Picower Institute for Learning and Memory.

The new findings, reported today in Neuron, could help researchers develop better tools for monitoring patients as they undergo general anesthesia.

Miller and Ila Fiete, a professor of brain and cognitive sciences, the director of the K. Lisa Yang Integrative Computational Neuroscience Center (ICoN), and a member of MIT’s McGovern Institute for Brain Research, are the senior authors of the new study. MIT graduate student Adam Eisen and MIT postdoc Leo Kozachkov are the lead authors of the paper.

Losing consciousness

Propofol is a drug that binds to GABA receptors in the brain, inhibiting neurons that have those receptors. Other anesthesia drugs act on different types of receptors, and the mechanism for how all of these drugs produce unconsciousness is not fully understood.

Miller, Fiete, and their students hypothesized that propofol, and possibly other anesthesia drugs, interfere with a brain state known as “dynamic stability.” In this state, neurons have enough excitability to respond to new input, but the brain is able to quickly regain control and prevent them from becoming overly excited.

Woman gestures with her hand in front of a glass wall with equations written on it.
Ila Fiete in her lab at the McGovern Institute. Photo: Steph Stevens

Previous studies of how anesthesia drugs affect this balance have found conflicting results: Some suggested that during anesthesia, the brain shifts toward becoming too stable and unresponsive, which leads to loss of consciousness. Others found that the brain becomes too excitable, leading to a chaotic state that results in unconsciousness.

Part of the reason for these conflicting results is that it has been difficult to accurately measure dynamic stability in the brain. Measuring dynamic stability as consciousness is lost would help researchers determine whether unconsciousness results from too much or too little stability.

In this study, the researchers analyzed electrical recordings made in the brains of animals that received propofol over an hour-long period, during which they gradually lost consciousness. The recordings were made in four areas of the brain that are involved in vision, sound processing, spatial awareness, and executive function.

These recordings covered only a tiny fraction of the brain’s overall activity. To overcome that limitation, the researchers used a technique called delay embedding, which allows them to characterize a dynamical system from limited measurements by augmenting each measurement with measurements recorded earlier.
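The core of delay embedding can be sketched in a few lines: each sample of a recorded signal is stacked together with copies of itself shifted back in time, so that the history of one measured channel stands in for unobserved state variables. The function below is a generic illustration of this idea, not the authors’ analysis pipeline; the signal, dimension, and lag are arbitrary choices for the example.

```python
import numpy as np

def delay_embed(x, dim, lag=1):
    """Build a delay-coordinate matrix from a 1-D time series.

    Each row pairs a measurement with `dim - 1` earlier measurements,
    so a limited recording can stand in for unobserved state variables
    (Takens-style delay embedding).
    """
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

# A damped oscillation observed through a single coordinate.
t = np.arange(500)
x = np.exp(-0.01 * t) * np.sin(0.2 * t)

emb = delay_embed(x, dim=3, lag=5)
print(emb.shape)  # (490, 3): 490 delay vectors of dimension 3
```

Each row of `emb` is a point in a reconstructed state space; trajectories through that space can then be analyzed for stability even though only one variable was measured.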

Using this method, the researchers were able to quantify how the brain responds to sensory inputs, such as sounds, or to spontaneous perturbations of neural activity.

In the normal, awake state, neural activity spikes after any input, then returns to its baseline activity level. However, once propofol dosing began, the brain started taking longer to return to its baseline after these inputs, remaining in an overly excited state. This effect became more and more pronounced until the animals lost consciousness.
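One simple way to quantify “taking longer to return to baseline” is to model baseline-subtracted activity as a noisy first-order process and estimate its decay rate: a rate near zero means perturbations die out quickly, while a rate near one means they linger. The toy simulation below illustrates this idea only; it is a hypothetical stand-in, with made-up decay rates, not the measure used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def decay_rate(x):
    """Least-squares estimate of a in x[t+1] ~ a * x[t].

    a near 1: perturbations decay slowly (less stable);
    a near 0: rapid return to baseline (more stable).
    """
    return float(np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1]))

def simulate(a, n=5000):
    """AR(1) stand-in for baseline-subtracted neural activity."""
    x = np.zeros(n)
    for t in range(n - 1):
        x[t + 1] = a * x[t] + rng.normal()
    return x

# Hypothetical decay rates for the two conditions.
awake = decay_rate(simulate(0.6))
anesthetized = decay_rate(simulate(0.95))
print(awake, anesthetized)  # the second estimate is larger
```

The estimated rate recovers the simulated one, so a drift of this quantity toward one over the course of dosing would register as escalating instability.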

This suggests that propofol’s inhibition of neuron activity leads to escalating instability, which causes the brain to lose consciousness, the researchers say.

Better anesthesia control

To see if they could replicate this effect in a computational model, the researchers created a simple neural network. When they increased the inhibition of certain nodes in the network, as propofol does in the brain, network activity became destabilized, similar to the unstable activity the researchers saw in the brains of animals that received propofol.

“We looked at a simple circuit model of interconnected neurons, and when we turned up inhibition in that, we saw a destabilization. So, one of the things we’re suggesting is that an increase in inhibition can generate instability, and that is subsequently tied to loss of consciousness,” Eisen says.

As Fiete explains, “This paradoxical effect, in which boosting inhibition destabilizes the network rather than silencing or stabilizing it, occurs because of disinhibition. When propofol boosts the inhibitory drive, this drive inhibits other inhibitory neurons, and the result is an overall increase in brain activity.”
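The disinhibition mechanism Fiete describes can be demonstrated with a toy linear rate model: one excitatory population E, one inhibitory population I1 that inhibits E, and a second inhibitory population I2 that inhibits I1. This is a minimal sketch with invented weights, not the circuit model from the paper. Scaling all inhibitory weights by a gain `g` (the drug’s effect) suppresses I1 via I2, releasing E from inhibition, and the leading eigenvalue of the Jacobian crosses zero:

```python
import numpy as np

def leading_eig(g):
    """Real part of the leading eigenvalue of the circuit Jacobian.

    Units: excitatory E, inhibitory I1 (inhibits E), inhibitory I2
    (inhibits I1).  The drug scales every inhibitory weight by g.
    Weights here are illustrative, chosen so the circuit is stable
    at baseline inhibition but not when inhibition is boosted.
    """
    W = np.array([
        [1.5, -g,   0.0   ],  # E: recurrent excitation, inhibited by I1
        [1.0,  0.0, -0.5*g],  # I1: driven by E, inhibited by I2
        [0.5,  0.0,  0.0  ],  # I2: driven by E
    ])
    J = W - np.eye(3)         # Jacobian of tau * dx/dt = -x + W @ x
    return float(np.max(np.linalg.eigvals(J).real))

print(leading_eig(1.0))  # negative: stable at baseline inhibition
print(leading_eig(4.0))  # positive: boosting inhibition destabilizes
```

Because I2’s inhibition of I1 also grows with `g`, turning inhibition up ultimately weakens the feedback that holds the excitatory population in check, reproducing the paradoxical destabilization in miniature.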

The researchers suspect that other anesthetic drugs, which act on different types of neurons and receptors, may converge on the same effect through different mechanisms — a possibility that they are now exploring.

If this turns out to be true, it could be helpful to the researchers’ ongoing efforts to develop ways to more precisely control the level of anesthesia that a patient is experiencing. These systems, which Miller is working on with Emery Brown, the Edward Hood Taplin Professor of Medical Engineering at MIT, work by measuring the brain’s dynamics and then adjusting drug dosages accordingly in real time.

“If you find common mechanisms at work across different anesthetics, you can make them all safer by tweaking a few knobs, instead of having to develop safety protocols for all the different anesthetics one at a time,” Miller says. “You don’t want a different system for every anesthetic they’re going to use in the operating room. You want one that’ll do it all.”

The researchers also plan to apply their technique for measuring dynamic stability to other brain states, including neuropsychiatric disorders.

“This method is pretty powerful, and I think it’s going to be very exciting to apply it to different brain states, different types of anesthetics, and also other neuropsychiatric conditions like depression and schizophrenia,” Fiete says.

The research was funded by the Office of Naval Research, the National Institute of Mental Health, the National Institute of Neurological Disorders and Stroke, the National Science Foundation Directorate for Computer and Information Science and Engineering, the Simons Center for the Social Brain, the Simons Collaboration on the Global Brain, the JPB Foundation, the McGovern Institute, and the Picower Institute.