Finding the way

This story also appears in the Fall 2024 issue of BrainScan.

___

When you arrive in a new city, every outing can be an exploration. You may know your way to a few places, but only if you follow a specific route. As you wander around a bit, get lost a few times, and familiarize yourself with some landmarks and where they are relative to each other, your brain develops a cognitive map of the space. You learn how things are laid out, and navigating gets easier.

It takes a lot to generate a useful mental map. “You have to understand the structure of relationships in the world,” says McGovern Investigator Mehrdad Jazayeri. “You need learning and experience to construct clever representations. The advantage is that when you have them, the world is an easier place to deal with.”

Indeed, Jazayeri says, internal models like these are the core of intelligent behavior.

Mehrdad Jazayeri (right) and graduate student Jack Gabel sit inside a rig designed to probe the brain’s ability to solve real-world problems with internal models. Photo: Steph Stevens

Many McGovern scientists see these cognitive maps as windows into their biggest questions about the brain: how it represents the external world, how it lets us learn and adapt, and how it forms and reconstructs memories. Researchers are learning that the cells and strategies the brain uses to understand the layout of a space also help it track other kinds of structure in the world — from variations in sound to sequences of events. By studying how neurons behave as animals navigate their environments, McGovern researchers expect to deepen their understanding of other important cognitive functions as well.

Decoding spatial maps

McGovern Investigator Ila Fiete builds theoretical models that help explain how spatial maps are formed in the brain. Previous research has shown that “place cells” in the brain’s hippocampus and “grid cells” in the entorhinal cortex are place-sensitive neurons whose firing patterns help an animal map out a space. As an animal becomes familiar with its environment, subsets of these cells become tied to specific locations, firing only when the animal is in them.

The brain’s ability to navigate the world is made possible by a brain circuit that includes the hippocampus (shown above in a microscopic image from a mouse), entorhinal cortex, and retrosplenial cortex. The firing patterns of “grid cells” and “place cells” in this circuit help form mental representations, or cognitive maps, of the external world. These brain regions are also among the first areas to be affected in people with Alzheimer’s, who often have trouble navigating. Image: Qian Chen, Guoping Feng

Fiete’s models have shown how these circuits can integrate information about movement, like signals from the muscles and vestibular system that change as an animal moves around, to calculate and update an estimate of the animal’s position in space. Fiete suspects the cells that do this can use the same strategy to keep track of other kinds of movement or change.
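Fiete’s published models are far more elaborate, but the core idea of path integration can be sketched in a few lines of code: accumulate self-motion (velocity) signals over time to keep a running estimate of position, an estimate that drifts when those signals are noisy. This is a minimal illustrative sketch; the function and variable names are invented here, not taken from any specific model.

```python
import numpy as np

def path_integrate(start_position, velocities, dt=0.1):
    """Update an estimated 2D position by accumulating self-motion signals.

    start_position : array-like of shape (2,), the initial (x, y) estimate
    velocities     : array of shape (T, 2), velocity signals at each time step
    dt             : duration of each time step, in seconds
    """
    position = np.array(start_position, dtype=float)
    trajectory = [position.copy()]
    for v in velocities:
        position += np.asarray(v) * dt   # dead-reckoning update: x <- x + v * dt
        trajectory.append(position.copy())
    return np.array(trajectory)

# Example: integrating noisy velocity signals; without landmarks to correct it,
# the position estimate gradually drifts away from the true path.
t = np.linspace(0, 2 * np.pi, 200)
true_velocity = np.stack([np.cos(t), np.sin(t)], axis=1)
noisy_velocity = true_velocity + np.random.normal(scale=0.05, size=true_velocity.shape)
estimate = path_integrate([0.0, 0.0], noisy_velocity)
```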

Mapping a space is about understanding where things are in relationship to one another, says Jazayeri, and tracking relationships is useful for modeling many kinds of structure in the world. For example, the hippocampus and entorhinal cortex are also closely linked to episodic memory, which keeps track of the connections between events and experiences.

“These brain areas are thought to be critical for learning relationships,” Jazayeri says.

Navigating virtual worlds

A key feature of cognitive maps is that they enable us to make predictions and respond to new situations without relying on immediate sensory cues. In a study published in Nature this June, Jazayeri and Fiete saw evidence of the brain’s ability to call up an internal model of an abstract domain: they watched neurons in the brain’s entorhinal cortex register a sequence of images, even when the images were hidden from view.

Ila Fiete and postdoc Sarthak Chandra (right) develop theoretical models to study the brain. Photo: Steph Stevens

We can remember the layout of our home from far away or plan a walk through the neighborhood without stepping outside — so it may come as no surprise that the brain can call up its internal model in the absence of movement or sensory inputs. Indeed, previous research has shown that the circuits that encode physical space also encode abstract spaces, such as sequences of sounds. But those experiments were performed in the presence of the stimuli, and Jazayeri and his team wanted to know whether simply imagining movement through an abstract domain would evoke the same cognitive maps.

To test the entorhinal cortex’s ability to do this, Jazayeri and his team designed an experiment in which animals had to “mentally” navigate through a previously explored, but now invisible, sequence of images. Working with Fiete, they found that neurons that had become responsive to particular images in the visible sequence also fired when the animal mentally navigated the sequence while the images were hidden from view — suggesting the animal was conjuring a representation of each image in its mind.

Ila Fiete has shown that the brain generates a one-dimensional ring of neural activity that acts as a compass. Here, head direction is indicated by color. Image: Ila Fiete

“You see these neurons in the entorhinal cortex undergo very clear dynamic patterns that are in correspondence with what we think the animal might be thinking at the time,” Jazayeri says. “They are updating themselves without any change out there in the world.”

The team then incorporated their data into a computational model to explore how neural circuits might form a mental model of abstract sequences. Their artificial circuit showed that external inputs (e.g., image sequences) become associated with internal models through a simple associative learning rule in which neurons that fire together, wire together. This model suggests that imagined movement could update the internal representations, and that the learned association of these internal representations with external inputs might enable recall of the corresponding inputs even when they are absent.
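The article doesn’t give the model’s equations, but the “fire together, wire together” rule it describes is classic Hebbian learning, which can be sketched as an outer-product weight update that ties each internal state to the external input present at the same time. The toy sketch below, with invented names and sizes, shows how such a learned association lets an input pattern be recalled later from the internal state alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n_internal, n_input, n_items = 64, 32, 6

# Internal states (e.g., points along an abstract "position" in a sequence) and
# the external inputs (e.g., image features) experienced at those states.
internal_states = rng.standard_normal((n_items, n_internal))
external_inputs = rng.standard_normal((n_items, n_input))

# Hebbian rule: strengthen the weight between units that are active together.
W = np.zeros((n_input, n_internal))
learning_rate = 0.1
for s, x in zip(internal_states, external_inputs):
    W += learning_rate * np.outer(x, s)   # "fire together, wire together"

# Later, activating an internal state alone (no stimulus shown) retrieves the
# associated input pattern, a toy analogue of recall during mental navigation.
recalled = W @ internal_states[2]
similarity = np.corrcoef(recalled, external_inputs[2])[0, 1]
print(f"correlation between recalled and stored input: {similarity:.2f}")
```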

More broadly, Fiete’s research on cognitive mapping in the hippocampus is leading to some interesting predictions: “One of the conclusions we’re coming to in my group is that when you reconstruct a memory, the area that’s driving that reconstruction is the entorhinal cortex and hippocampus but the reconstruction may happen in the sensory periphery, using the representations that played a role in experiencing that stimulus in the first place,” Fiete explains. “So when I reconstruct an image, I’m likely using my visual cortex to do that reconstruction, driven by the hippocampal complex.” Signals from the entorhinal cortex to the visual cortex during navigation could help an animal visualize landmarks and find its way, even when those landmarks are not visible in the external world.

Landmark coding

Near the entorhinal cortex is the retrosplenial cortex, another brain area that seems to be important for navigation. It is positioned to integrate visual signals with information about the body’s position and movement through space. Both the retrosplenial cortex and the entorhinal cortex are among the first areas impacted by Alzheimer’s disease; spatial disorientation and navigation difficulties may be consequences of their degeneration.

Researchers suspect the retrosplenial cortex may be key to letting an animal know not just where something is, but also how to get there. McGovern Investigator Mark Harnett explains that to generate a cognitive map that can be used to navigate, an animal must understand not just where objects or other cues are in relationship to itself, but also where they are in relationship to each other.

In a study reported in eLife in 2020, Harnett and colleagues may have glimpsed both of these kinds of representations of space inside the brain. They watched neurons in the retrosplenial cortex light up as mice ran on a treadmill and tracked their progress through a virtual environment. As the mice became familiar with the landscape and learned where they were likely to find a reward, activity in the retrosplenial cortex changed.

Lukas Fischer, a Harnett lab postdoc, operates a rig designed to study how mice navigate a virtual environment. Photo: Justin Knight

“What we found was this representation started off sort of crude and mostly about what the animal was doing. And then eventually it became more about the task, the landscape, and the reward,” Harnett says.

Harnett’s team has since begun investigating how the retrosplenial cortex enables more complex spatial reasoning. They designed an experiment in which mice must understand many spatial relationships to access a treat. The experimental setup requires mice to consider the location of reward ports, the center of their environment, and their own viewing angle. Most of the time, they succeed. “They have to really do some triangulation, and the retrosplenial cortex seems to be critical for that,” Harnett says.

When the team monitored neural activity during the task, they found evidence that when an animal wasn’t quite sure where to go, its brain held on to multiple spatial hypotheses at the same time, until new information ruled one out.

Fiete, who has worked with Harnett to explore how neural circuits can execute this kind of spatial reasoning, points out that Jazayeri’s team has observed similar reasoning in animals that must make decisions based on temporarily ambiguous auditory cues. “In both cases, animals are able to hold multiple hypotheses in mind and do the inference,” she says. “Mark’s found that the retrosplenial cortex contains all the signals necessary to do that reasoning.”

Beyond spatial reasoning

As his team learns more about how the brain creates and uses cognitive maps, Harnett hopes activity in the retrosplenial cortex will shed light on a fundamental aspect of the brain’s organization. The retrosplenial cortex doesn’t just receive information from the brain’s vision-processing center; it also sends signals back. He suspects these may direct the visual cortex to relay information that is particularly pertinent to forming or using a meaningful cognitive map.

“The brain’s navigation system is a beautiful playground.” – Ila Fiete

This kind of connectivity, where parts of the brain that carry out complex cognitive processing send signals back to regions that handle simpler functions, is common in the brain. Figuring out why is a key pursuit in Harnett’s lab. “I want to use that as a model for thinking about the larger cortical computations, because you see this kind of motif repeated in a lot of ways, and it’s likely key for understanding how learning works,” he says.

Fiete is particularly interested in unpacking the common set of principles that allow cell circuits to generate maps of both our physical environment and our abstract experiences. What is it about this set of brain areas and circuits that, on the one hand, permits specific map-building computations, and, on the other hand, generalizes across physical space and abstract experience?

“The brain’s navigation system is a beautiful playground,” she says, “and an amazing system in which to investigate all of these questions.”

Three MIT professors named 2024 Vannevar Bush Fellows

The U.S. Department of Defense (DoD) has announced three MIT professors among the members of the 2024 class of the Vannevar Bush Faculty Fellowship (VBFF). The fellowship is the DoD’s flagship single-investigator award for research, inviting the nation’s most talented researchers to pursue ambitious ideas that defy conventional boundaries.

Domitilla Del Vecchio, professor of mechanical engineering and the Grover M. Hermann Professor in Health Sciences & Technology; Mehrdad Jazayeri, professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research; and Themistoklis Sapsis, the William I. Koch Professor of Mechanical Engineering and director of the Center for Ocean Engineering, are among the 11 university scientists and engineers chosen for this year’s fellowship class. They join an elite group of approximately 50 fellows from previous class years.

“The Vannevar Bush Faculty Fellowship is more than a prestigious program,” said Bindu Nair, director of the Basic Research Office in the Office of the Under Secretary of Defense for Research and Engineering, in a press release. “It’s a beacon for tenured faculty embarking on groundbreaking ‘blue sky’ research.”

Research topics

Each fellow receives up to $3 million over a five-year term to pursue cutting-edge projects. Research topics in this year’s class span a range of disciplines, including materials science, cognitive neuroscience, quantum information sciences, and applied mathematics. While pursuing individual research endeavors, Fellows also leverage the unique opportunity to collaborate directly with DoD laboratories, fostering a valuable exchange of knowledge and expertise.

Del Vecchio, whose research interests include control and dynamical systems theory and systems and synthetic biology, will investigate the molecular underpinnings of analog epigenetic cell memory, then use what her team learns to “establish unprecedented engineering capabilities for creating self-organizing and reconfigurable multicellular systems with graded cell fates.”

“With this fellowship, we will be able to explore the limits to which we can leverage analog memory to create multicellular systems that autonomously organize in permanent, but reprogrammable, gradients of cell fates and can be used for creating next-generation tissues and organoids with dramatically increased sophistication,” says Del Vecchio, who adds that she is honored to have been selected.

Jazayeri wants to understand how the brain gives rise to cognitive and emotional intelligence. The engineering systems being built today lack the hallmarks of human intelligence, explains Jazayeri. They neither learn quickly nor generalize their knowledge flexibly. They don’t feel emotions or have emotional intelligence.

Jazayeri plans to use the VBFF award to integrate ideas from cognitive science, neuroscience, and machine learning with experimental data in humans, animals, and computer models to develop a computational understanding of cognitive and emotional intelligence.

“I’m honored and humbled to be selected and excited to tackle some of the most challenging questions at the intersection of neuroscience and AI,” he says.

“I am humbled to be included in such a select group,” echoes Sapsis, who will use the grant to research new algorithms and theory designed for the efficient computation of extreme event probabilities and precursors, and for the design of mitigation strategies in complex dynamical systems.

Examples of Sapsis’s work include risk quantification for extreme events in human-made systems; climate events, such as heat waves, and their effect on interconnected systems like food supply chains; and also “mission-critical algorithmic problems such as search and path planning operations for extreme anomalies,” he explains.

VBFF impact

Named for Vannevar Bush PhD 1916, an influential inventor, engineer, former professor, and dean of the School of Engineering at MIT, the highly competitive fellowship, formerly known as the National Security Science and Engineering Faculty Fellowship, aims to advance transformative, university-based fundamental research. Bush served as the director of the U.S. Office of Scientific Research and Development, and organized and led American science and technology during World War II.

“The outcomes of VBFF-funded research have transformed entire disciplines, birthed novel fields, and challenged established theories and perspectives,” said Nair. “By contributing their insights to DoD leadership and engaging with the broader national security community, they enrich collective understanding and help the United States leap ahead in global technology competition.”

Four MIT faculty named 2024 HHMI Investigators

The Howard Hughes Medical Institute (HHMI) today announced its 2024 investigators, four of whom hail from the School of Science at MIT: Steven Flavell, Mary Gehring, Mehrdad Jazayeri, and Gene-Wei Li.

Four others with MIT ties were also honored: Jonathan Abraham, graduate of the Harvard/MIT MD-PhD Program; Dmitriy Aronov PhD ’10; Vijay Sankaran, graduate of the Harvard/MIT MD-PhD Program; and Steven McCarroll, institute member of the Broad Institute of MIT and Harvard.

Every three years, HHMI selects roughly two dozen new investigators who have significantly impacted their chosen disciplines to receive a substantial and completely discretionary grant. This funding can be reviewed and renewed indefinitely. The award, which totals roughly $11 million per investigator over the next seven years, allows scientists to continue working at their current institution, covers their full salary, and gives them the financial flexibility to follow their scientific inquiries wherever they lead.

Of the almost 1,000 applicants this year, 26 investigators were selected for their ability to push the boundaries of science and for their efforts to create highly inclusive and collaborative research environments.

“When scientists create environments in which others can thrive, we all benefit,” says HHMI president Erin O’Shea. “These newest HHMI Investigators are extraordinary, not only because of their outstanding research endeavors but also because they mentor and empower the next generation of scientists to work alongside them at the cutting edge.”

Steven Flavell

Steven Flavell, associate professor of brain and cognitive sciences and investigator in the Picower Institute for Learning and Memory, seeks to uncover the neural mechanisms that generate the internal states of the brain, for example, different motivational and arousal states. Working in the model organism C. elegans, the lab has used genetic, systems, and computational approaches to relate neural activity across the brain to precise features of the animal’s behavior. In addition, they have mapped out the anatomical and functional organization of the serotonin system, showing how it modulates the internal state of C. elegans. As a newly named HHMI Investigator, Flavell will pursue research that he hopes will build a foundational understanding of how internal states arise and influence behavior in nervous systems in general. The work will employ brain-wide neural recordings, computational modeling, expansive research on neuromodulatory system organization, and studies of how the synaptic wiring of the nervous system constrains an animal’s ability to generate different internal states.

“I think that it should be possible to define the basis of internal states in C. elegans in concrete terms,” Flavell says. “If we can build a thread of understanding from the molecular architecture of neuromodulatory systems, to changes in brain-wide activity, to state-dependent changes in behavior, then I think we’ll be in a much better place as a field to think about the basis of brain states in more complex animals.”

Mary Gehring

Mary Gehring, professor of biology and core member and David Baltimore Chair in Biomedical Research at the Whitehead Institute for Biomedical Research, studies how epigenetics modulates plant growth and development, with a long-term goal of uncovering the essential genetic and epigenetic elements of plant seed biology. Ultimately, the Gehring Lab’s work provides the scientific foundations for engineering alternative modes of seed development and improving plant resiliency at a time when worldwide agriculture is in a uniquely precarious position due to climate change.

The Gehring Lab uses genetic, genomic, computational, synthetic, and evolutionary approaches to explore heritable traits by investigating repetitive sequences, DNA methylation, and chromatin structure. The lab primarily uses the model plant A. thaliana, a member of the mustard family and the first plant to have its genome sequenced.

“I’m pleased that HHMI has been expanding its support for plant biology, and gratified that our lab will benefit from its generous support,” Gehring says. “The appointment gives us the freedom to step back, take a fresh look at the scientific opportunities before us, and pursue the ones that most interest us. And that’s a very exciting prospect.”

Mehrdad Jazayeri

Mehrdad Jazayeri, a professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research, studies how physiological processes in the brain give rise to the abilities of the mind. Work in the Jazayeri Lab brings together ideas from cognitive science, neuroscience, and machine learning with experimental data in humans, animals, and computer models to develop a computational understanding of how the brain creates internal representations, or models, of the external world.

Before coming to MIT in 2013, Jazayeri received his BS in electrical engineering, majoring in telecommunications, from Sharif University of Technology in Tehran, Iran. He completed his MS in physiology at the University of Toronto and his PhD in neuroscience at New York University.

With his appointment to HHMI, Jazayeri plans to explore how the brain enables rapid learning and flexible behavior — central aspects of intelligence that have been difficult to study using traditional neuroscience approaches.

“This is a recognition of my lab’s past accomplishments and the promise of the exciting research we want to embark on,” he says. “I am looking forward to engaging with this wonderful community and making new friends and colleagues while we elevate our science to the next level.”

Gene-Wei Li

Gene-Wei Li, associate professor of biology, has been working since opening his lab at MIT in 2015 on quantifying how much of each protein cells produce and how protein synthesis is orchestrated within the cell.

Li, whose background is in physics, credits the lab’s findings to the skills and communication among his research team, allowing them to explore the unexpected questions that arise in the lab.

For example, two of his graduate student researchers found that the coordination between transcription and translation fundamentally differs between the model organisms E. coli and B. subtilis. In B. subtilis, the ribosome lags far behind RNA polymerase, a process the lab termed “runaway transcription.” The discovery revealed that this kind of uncoupling between transcription and translation is widespread across many species of bacteria, a finding that contradicted the long-standing dogma of molecular biology that the machinery of protein synthesis and RNA polymerase work side-by-side in all bacteria.

The support from HHMI gives Li and his team the flexibility to pursue, at their discretion, the basic research that leads to discoveries.

“Having this award allows us to be bold and to do things at a scale that wasn’t possible before,” Li says. “The discovery of runaway transcription is a great example. We didn’t have a traditional grant for that.”

Mehrdad Jazayeri selected as an HHMI investigator

The Howard Hughes Medical Institute (HHMI) has named McGovern Institute neuroscientist Mehrdad Jazayeri as one of 26 new HHMI investigators—a group of visionary scientists whom HHMI will support with more than $300 million over the next seven years.

Support from HHMI is intended to give its investigators, who work at institutions across the United States, the time and resources they need to push the boundaries of the biological sciences. Jazayeri, whose work integrates neurobiology with cognitive science and machine learning, plans to use that support to explore how the brain enables rapid learning and flexible behavior—central aspects of intelligence that have been difficult to study using traditional neuroscience approaches.

Jazayeri says he is delighted and honored by the news. “This is a recognition of my lab’s past accomplishments and the promise of the exciting research we want to embark on,” he says. “I am looking forward to engaging with this wonderful community and making new friends and colleagues while we elevate our science to the next level.”

An unexpected path

Jazayeri, who has been an investigator at the McGovern Institute since 2013, has already made a series of groundbreaking discoveries about how physiological processes in the brain give rise to the abilities of the mind. “That’s what we do really well,” he says. “We expose the computational link between abstract mental concepts, like belief, and electrical signals in the brain.”

Jazayeri’s expertise and enthusiasm for this work grew out of a curiosity that was sparked unexpectedly several years after he’d abandoned university education. He’d pursued his undergraduate studies in electrical engineering, a path with good job prospects in Iran, where he lived. But an undergraduate program at Sharif University of Technology in Tehran left him disenchanted. “It was an uninspiring experience,” he says. “It’s a top university and I went there excited, but I lost interest as I couldn’t think of a personally meaningful application for my engineering skills. So, after my undergrad, I started a string of random jobs, perhaps to search for my passion.”

A few years later, Jazayeri was trying something new, happily living and working at a banana farm near the Caspian Sea. The farm schedule allowed for leisure in the evenings, which he took advantage of by delving into boxes full of books that an uncle regularly sent him from London. The books were an unpredictable, eclectic mix. Jazayeri read them all—and it was those that talked about the brain that most captured his imagination.

Until then, he had never had much interest in biology. But when he read about neurological disorders and how scientists were studying the brain, he was captivated. The subject seemed to merge his inherent interest in philosophy with an analytical approach that he also loved. “These books made me think that you actually can understand this system at a more concrete level…you can put electrodes in the brain and listen to what neurons say,” he says. “It had never even occurred to me to think about those things.”

He wanted to know more. It took time to find a graduate program in neuroscience that would accept a student with his unconventional background, but eventually the University of Toronto accepted him into a master’s program after he crammed for and passed an undergraduate exam testing his knowledge of physiology. From there, he went on to earn a PhD in neuroscience from New York University studying visual perception, followed by a postdoctoral fellowship at the University of Washington where he studied time perception.

In 2013, Jazayeri joined MIT’s Department of Brain and Cognitive Sciences. At MIT, conversations with new colleagues quickly enriched the way he thought about the brain. “It is fascinating to listen to cognitive scientists’ ideas about the mind,” he says. “They have a rich and deep understanding of the mind but the language they use to describe the mind is not the language of the brain. Bridging this gap in language between neuroscience and cognitive science is at the core of research in my lab.”

His lab’s general approach has been to collect data on neural activity from humans and animals as they perform tasks that call on specific aspects of the mind. “We design tasks that are as simple as possible but get at the crux of the problems in cognitive science,” he explains. “Then we build models that help us connect abstract concepts and theories in cognitive science to signals and dynamics of neural activity in the brain.”

It’s an interdisciplinary approach that even calls on many of the engineering approaches that had failed to inspire him as a student. Students and postdocs in the lab bring a diverse set of knowledge and skills, and together the team has made significant contributions to neuroscience, cognitive science, and computational science.

With animals trained to reproduce a rhythm, they’ve shown how neurons adjust the speed of their signals to predict when something will occur, and what happens when the actual timing of a stimulus deviates from the brain’s expectations.

Studies of time interval predictions have also helped the team learn how the brain weighs different pieces of information as it assesses situations and makes decisions. This process, called Bayesian integration, shapes our beliefs and our confidence in those beliefs. “These are really fundamental concepts in cognitive sciences, and we can now say how neurons exactly do that,” he says.
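As a rough illustration of what Bayesian integration means in a timing task: suppose the brain carries a prior over how long intervals usually are and receives a noisy measurement of the current interval. Treating both as Gaussian, the best estimate is a precision-weighted average that is pulled toward the prior. The numbers below are purely illustrative and are not values from Jazayeri’s experiments.

```python
# Toy Bayesian integration for a timed interval, assuming a Gaussian prior over
# intervals and Gaussian measurement noise (illustrative numbers only).
prior_mean, prior_var = 0.80, 0.02   # seconds: what intervals are typically like
measurement, meas_var = 0.95, 0.05   # a noisy measurement of the current interval

# For Gaussians, the posterior mean is a precision-weighted average of the two.
w_prior = (1 / prior_var) / (1 / prior_var + 1 / meas_var)
posterior_mean = w_prior * prior_mean + (1 - w_prior) * measurement
posterior_var = 1 / (1 / prior_var + 1 / meas_var)

print(f"estimate = {posterior_mean:.3f} s (pulled toward the prior)")
print(f"posterior variance = {posterior_var:.4f}")
```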

More recently, by teaching animals to navigate a virtual environment, Jazayeri’s team has found activity in the brain that appears to call up a cognitive map of a space even when its features are not visible. The discovery helps reveal how the brain builds internal models and uses them to interact with the world.

A new paradigm

Jazayeri is proud of these achievements. But he knows that when it comes to understanding the power and complexity of cognition, something is missing.

“Two really important hallmarks of cognition are the ability to learn rapidly and generalize flexibly. If somebody can do that, we say they’re intelligent,” he says. It’s an ability we have from an early age. “If you bring a kid a bunch of toys, they don’t need several years of training, they just can play with the toys right away in very creative ways,” he says. In the wild, many animals are similarly adept at problem solving and finding uses for new tools. But when animals are trained for many months on a single task, as typically happens in a lab, they don’t behave as intelligently. “They become like an expert that does one thing well, but they’re no longer very flexible,” he says.

Figuring out how the brain adapts and acts flexibly in real-world situations is going to require a new approach. “What we have done is that we come up with a task, and then change the animal’s brain through learning to match our task,” he says. “What we now want to do is to add a new paradigm to our work, one in which we will devise the task such that it would match the animal’s brain.”

As an HHMI investigator, Jazayeri plans to take advantage of a host of new technologies to study the brain’s involvement in ecologically relevant behaviors. That means moving beyond the virtual scenarios and digital platforms that have been so widespread in neuroscience labs, including his own, and instead letting animals interact with real objects and environments. “The animal will use its eyes and hands to engage with physical objects in the real world,” he says.

To analyze and learn about animals’ behavior, the team plans detailed tracking of hand and eye movements, and even measurements of sensations that are felt through the hands as animals explore objects and work through problems. These activities are expected to engage the entire brain, so the team will broadly record and analyze neural activity.

Designing meaningful experiments and making sense of the data will be a deeply interdisciplinary endeavor, and Jazayeri knows working with a collaborative community of scientists will be essential. He’s looking forward to sharing the enormous amount of relevant data his lab expects to collect with the research community and getting others involved. Likewise, as a dedicated mentor, he is committed to training scientists who will continue and expand the work in the future.

He is enthusiastic about the opportunity to move into these bigger questions about cognition and intelligence, and support from HHMI comes at an opportune moment. “I think we have now built the infrastructure and conceptual frameworks to think about these problems, and technology for recording and tracking animals has developed a great deal, so we can now do more naturalistic experiments,” he says.

His passion for his work is one of many passions in his life. His love for family, friends, and art are just as deep, and making space to experience everything is a lifelong struggle. But he knows his zeal is infectious. “I think my love for science is probably one of the best motivators of people around me,” he says.

Just thinking about a location activates mental maps in the brain

As you travel your usual route to work or the grocery store, your brain engages cognitive maps stored in your hippocampus and entorhinal cortex. These maps store information about paths you have taken and locations you have visited before, so you can find your way whenever you return.

New research from MIT has found that such mental maps also are created and activated when you merely think about sequences of experiences, in the absence of any physical movement or sensory input. In an animal study, the researchers found that the entorhinal cortex harbors a cognitive map of what animals experience while they use a joystick to browse through a sequence of images. These cognitive maps are then activated when thinking about these sequences, even when the images are not visible.

This is the first study to show the cellular basis of mental simulation and imagination in a nonspatial domain through activation of a cognitive map in the entorhinal cortex.

“These cognitive maps are being recruited to perform mental navigation, without any sensory input or motor output. We are able to see a signature of this map presenting itself as the animal is going through these experiences mentally,” says Mehrdad Jazayeri, an associate professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

McGovern Institute Research Scientist Sujaya Neupane is the lead author of the paper, which appears today in Nature. Ila Fiete, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, and director of the K. Lisa Yang Integrative Computational Neuroscience Center, is also an author of the paper.

Mental maps

A great deal of work in animal models and humans has shown that representations of physical locations are stored in the hippocampus, a small seahorse-shaped structure, and the nearby entorhinal cortex. These representations are activated whenever an animal moves through a space that it has been in before, just before it traverses the space, or when it is asleep.

“Most prior studies have focused on how these areas reflect the structures and the details of the environment as an animal moves physically through space,” Jazayeri says. “When an animal moves in a room, its sensory experiences are nicely encoded by the activity of neurons in the hippocampus and entorhinal cortex.”

In the new study, Jazayeri and his colleagues wanted to explore whether these cognitive maps are also built and then used during purely mental run-throughs or imagining of movement through nonspatial domains.

To explore that possibility, the researchers trained animals to use a joystick to trace a path through a sequence of images (“landmarks”) spaced at regular temporal intervals. During the training, the animals were shown only a subset of pairs of images but not all the pairs. Once the animals had learned to navigate through the training pairs, the researchers tested if animals could handle the new pairs they had never seen before.

One possibility is that animals do not learn a cognitive map of the sequence, and instead solve the task using a memorization strategy. If so, they would be expected to struggle with the new pairs. Instead, if the animals were to rely on a cognitive map, they should be able to generalize their knowledge to the new pairs.

“The results were unequivocal,” Jazayeri says. “Animals were able to mentally navigate between the new pairs of images from the very first time they were tested. This finding provided strong behavioral evidence for the presence of a cognitive map. But how does the brain establish such a map?”

To address this question, the researchers recorded from single neurons in the entorhinal cortex as the animals performed this task. Neural responses had a striking feature: As the animals used the joystick to navigate between two landmarks, neurons featured distinctive bumps of activity associated with the mental representation of the intervening landmarks.

“The brain goes through these bumps of activity at the expected time when the intervening images would have passed by the animal’s eyes, which they never did,” Jazayeri says. “And the timing between these bumps, critically, was exactly the timing that the animal would have expected to reach each of those, which in this case was 0.65 seconds.”

The researchers also showed that the speed of the mental simulation was related to the animals’ performance on the task: When they were a little late or early in completing the task, their brain activity showed a corresponding change in timing. The researchers also found evidence that the mental representations in the entorhinal cortex don’t encode specific visual features of the images, but rather the ordinal arrangement of the landmarks.

A model of learning

To further explore how these cognitive maps may work, the researchers built a computational model to mimic the brain activity that they found and demonstrate how it could be generated. They used a type of model known as a continuous attractor model, which was originally developed to model how the entorhinal cortex tracks an animal’s position as it moves, based on sensory input.

The researchers customized the model by adding a component that was able to learn the activity patterns generated by sensory input. The model could then use those patterns to reconstruct those experiences later, when there was no sensory input.

“The key element that we needed to add is that this system has the capacity to learn bidirectionally by communicating with sensory inputs. Through the associational learning that the model goes through, it will actually recreate those sensory experiences,” Jazayeri says.
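The published model is a continuous attractor network augmented with learned sensory associations; the highly simplified sketch below captures only the flavor of that idea. A bump of activity on a ring of neurons is advanced by an internal velocity signal, and a Hebbian weight matrix learned during “experience” lets the network read out the sensory pattern it expects at each position, even when no input is present. Every name and parameter here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ring, n_sensory, n_landmarks = 100, 40, 6
prefs = np.linspace(0, 2 * np.pi, n_ring, endpoint=False)

def bump(theta, width=0.3):
    """Activity of the ring of neurons when the bump is centered at angle theta."""
    d = np.angle(np.exp(1j * (prefs - theta)))        # wrapped angular distance
    return np.exp(-d ** 2 / (2 * width ** 2))

# "Landmarks": sensory patterns experienced at evenly spaced points on the ring.
landmark_angles = np.linspace(0, 2 * np.pi, n_landmarks, endpoint=False)
landmarks = rng.standard_normal((n_landmarks, n_sensory))

# Experience phase: Hebbian association between ring activity and sensory input.
W = np.zeros((n_sensory, n_ring))
for theta, pattern in zip(landmark_angles, landmarks):
    W += 0.05 * np.outer(pattern, bump(theta))

# "Mental navigation": advance the bump with an internal velocity signal alone,
# reading out the sensory pattern the network expects at each position.
theta, velocity, dt = 0.0, 0.5, 0.1
recalled = []
for _ in range(200):
    theta = (theta + velocity * dt) % (2 * np.pi)
    recalled.append(W @ bump(theta))                  # expected input, none shown
```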

The researchers now plan to investigate what happens in the brain if the landmarks are not evenly spaced, or if they’re arranged in a ring. They also hope to record brain activity in the hippocampus and entorhinal cortex as the animals first learn to perform the navigation task.

“Seeing the memory of the structure become crystallized in the mind, and how that leads to the neural activity that emerges, is a really valuable way of asking how learning happens,” Jazayeri says.

The research was funded by the Natural Sciences and Engineering Research Council of Canada, the Québec Research Funds, the National Institutes of Health, and the Paul and Lilah Newton Brain Science Award.

The brain runs an internal simulation to keep track of time

Clocks, computers, and metronomes can keep time with exquisite precision. But even in the absence of an external time keeper, we can track time on our own. We know when minutes or hours have elapsed, and we can maintain a rhythm when we dance, sing, or play music. Now, neuroscientists at the National Autonomous University of Mexico and MIT’s McGovern Institute have discovered one way the brain keeps a beat: It runs an internal simulation, mentally recreating the perception of an external rhythm and preparing an appropriately timed response.

The discovery, reported January 10, 2024, in the journal Science Advances, illustrates how animals can think about imaginary events and use an internal model to guide their interactions with the world. “It’s a real indication of mental states as an independent driver of behavior,” says neuroscientist Mehrdad Jazayeri, an investigator at the McGovern Institute and an associate professor of brain and cognitive sciences at MIT.

Predicting the future

Jazayeri teamed up with Victor de Lafuente, a neuroscientist at the National Autonomous University of Mexico, to investigate the brain’s time-keeping ability. De Lafuente, who led the study, says they were motivated by curiosity about how the brain makes predictions and prepares for future states of the world.

De Lafuente and his team used a visual metronome to teach monkeys a simple rhythm, showing them a circle that moved between two positions on a screen to set a steady tempo. Then the metronome stopped. After a variable and unpredictable pause, the monkeys were asked to indicate where the dot would be if the metronome had carried on.

Monkeys do well at this task, successfully keeping time after the metronome stops. After the waiting period, they are usually able to identify the expected position of the circle, which they communicate by reaching towards a touchscreen.

To find out how the animals were keeping track of the metronome’s rhythm, de Lafuente’s group monitored their brain activity. In several key brain regions, they found rhythmic patterns of activity that oscillated at the same frequency as the metronome. This occurred while the monkeys watched the metronome. More remarkably, it continued after the metronome had stopped.

“The animal is seeing things going and then things stop. What we find in the brain is the continuation of that process in the animal’s mind,” Jazayeri says. “An entire network is replicating what it was doing.”

That was true in the visual cortex, where clusters of neurons respond to stimuli in specific spots within the eyes’ field of view. One set of cells in the visual cortex fired when the metronome’s circle was on the left of the screen; another set fired when the dot was on the right. As a monkey followed the visual metronome, the researchers could see these cells’ activity alternating rhythmically, tracking the movement. When the metronome stopped, the back-and-forth neural activity continued, maintaining the rhythm. “Once the stimulus was no longer visible, they were seeing the stimulus within their minds,” de Lafuente says.

They found something similar in the brain’s motor cortex, where movements are prepared and executed. De Lafuente explains that the monkeys are motionless for most of their time-keeping task; only when they are asked to indicate where the metronome’s circle should be do they move a hand to touch the screen. But the motor cortex was engaged even before it was time to move. “Within their brains there is a signal that is switching from the left to the right,” he says. “So the monkeys are thinking ‘left, right, left, right’—even when they are not moving and the world is constant.”

While some scientists have proposed that the brain may have a central time-keeping mechanism, the team’s findings indicate that entire networks can be called on to track the passage of time. The monkeys’ model of the future was surprisingly explicit, de Lafuente says, representing specific sensory stimuli and plans for movement. “This offers a potential solution to mentally tracking the dynamics in the world, which is to basically think about them in terms of how they actually would have happened,” Jazayeri says.

 

The brain may learn about the world the same way some computational models do

To make our way through the world, our brain must develop an intuitive understanding of the physical world around us, which we then use to interpret sensory information coming into the brain.

How does the brain develop that intuitive understanding? Many scientists believe that it may use a process similar to what’s known as “self-supervised learning.” This type of machine learning, originally developed as a way to create more efficient models for computer vision, allows computational models to learn about visual scenes based solely on the similarities and differences between them, with no labels or other information.

A pair of studies from researchers at the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT offers new evidence supporting this hypothesis. The researchers found that when they trained models known as neural networks using a particular type of self-supervised learning, the resulting models generated activity patterns very similar to those seen in the brains of animals that were performing the same tasks as the models.

The findings suggest that these models are able to learn representations of the physical world that they can use to make accurate predictions about what will happen in that world, and that the mammalian brain may be using the same strategy, the researchers say.

“The theme of our work is that AI designed to help build better robots ends up also being a framework to better understand the brain more generally,” says Aran Nayebi, a postdoc in the ICoN Center. “We can’t say if it’s the whole brain yet, but across scales and disparate brain areas, our results seem to be suggestive of an organizing principle.”

Nayebi is the lead author of one of the studies, co-authored with Rishi Rajalingham, a former MIT postdoc now at Meta Reality Labs, and senior authors Mehrdad Jazayeri, an associate professor of brain and cognitive sciences and a member of the McGovern Institute for Brain Research; and Robert Yang, an assistant professor of brain and cognitive sciences and an associate member of the McGovern Institute. Ila Fiete, director of the ICoN Center, a professor of brain and cognitive sciences, and an associate member of the McGovern Institute, is the senior author of the other study, which was co-led by Mikail Khona, an MIT graduate student, and Rylan Schaeffer, a former senior research associate at MIT.

Both studies will be presented at the 2023 Conference on Neural Information Processing Systems (NeurIPS) in December.

Modeling the physical world

Early models of computer vision mainly relied on supervised learning. Using this approach, models are trained to classify images that are each labeled with a name — cat, car, etc. The resulting models work well, but this type of training requires a great deal of human-labeled data.

To create a more efficient alternative, in recent years researchers have turned to models built through a technique known as contrastive self-supervised learning. This type of learning allows an algorithm to learn to classify objects based on how similar they are to each other, with no external labels provided.

“This is a very powerful method because you can now leverage very large modern data sets, especially videos, and really unlock their potential,” Nayebi says. “A lot of the modern AI that you see now, especially in the last couple years with ChatGPT and GPT-4, is a result of training a self-supervised objective function on a large-scale dataset to obtain a very flexible representation.”

These types of models, also called neural networks, consist of thousands or millions of processing units connected to each other. Each node has connections of varying strengths to other nodes in the network. As the network analyzes huge amounts of data, the strengths of those connections change as the network learns to perform the desired task.

As the model performs a particular task, the activity patterns of different units within the network can be measured. Each unit’s activity can be represented as a firing pattern, similar to the firing patterns of neurons in the brain. Previous work from Nayebi and others has shown that self-supervised models of vision generate activity similar to that seen in the visual processing system of mammalian brains.
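As a rough sketch of what a contrastive self-supervised objective looks like (not the specific loss or architecture used in these studies), the widely used InfoNCE-style loss rewards an embedding for being more similar to a “positive” view of the same scene than to “negative” views of unrelated scenes. Everything below is illustrative.

```python
import numpy as np

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss computed on already-extracted embedding vectors.

    anchor, positive : 1-D embeddings of two "views" of the same scene
    negatives        : 2-D array, one embedding per row for unrelated scenes
    """
    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    pos_sim = cosine(anchor, positive) / temperature
    neg_sims = np.array([cosine(anchor, n) for n in negatives]) / temperature
    logits = np.concatenate([[pos_sim], neg_sims])
    # Low loss when the anchor is far more similar to its positive than to negatives.
    return -pos_sim + np.log(np.sum(np.exp(logits)))

rng = np.random.default_rng(0)
embedding = rng.standard_normal(16)
same_scene = embedding + 0.1 * rng.standard_normal(16)   # a similar view
other_scenes = rng.standard_normal((8, 16))              # unrelated views
print(f"loss = {contrastive_loss(embedding, same_scene, other_scenes):.3f}")
```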

In both of the new NeurIPS studies, the researchers set out to explore whether self-supervised computational models of other cognitive functions might also show similarities to the mammalian brain. In the study led by Nayebi, the researchers trained self-supervised models to predict the future state of their environment across hundreds of thousands of naturalistic videos depicting everyday scenarios.

“For the last decade or so, the dominant method to build neural network models in cognitive neuroscience is to train these networks on individual cognitive tasks. But models trained this way rarely generalize to other tasks,” Yang says. “Here we test whether we can build models for some aspect of cognition by first training on naturalistic data using self-supervised learning, then evaluating in lab settings.”

Once the model was trained, the researchers had it generalize to a task they call “Mental-Pong.” This is similar to the video game Pong, where a player moves a paddle to hit a ball traveling across the screen. In the Mental-Pong version, the ball disappears shortly before hitting the paddle, so the player has to estimate its trajectory in order to hit the ball.

The researchers found that the model was able to track the hidden ball’s trajectory with accuracy similar to that of neurons in the mammalian brain, which had been shown in a previous study by Rajalingham and Jazayeri to simulate its trajectory — a cognitive phenomenon known as “mental simulation.” Furthermore, the neural activation patterns seen within the model were similar to those seen in the brains of animals as they played the game — specifically, in a part of the brain called the dorsomedial frontal cortex. No other class of computational model has been able to match the biological data as closely as this one, the researchers say.

“There are many efforts in the machine learning community to create artificial intelligence,” Jazayeri says. “The relevance of these models to neurobiology hinges on their ability to additionally capture the inner workings of the brain. The fact that Aran’s model predicts neural data is really important as it suggests that we may be getting closer to building artificial systems that emulate natural intelligence.”

Navigating the world

The study led by Khona, Schaeffer, and Fiete focused on a type of specialized neurons known as grid cells. These cells, located in the entorhinal cortex, help animals to navigate, working together with place cells located in the hippocampus.

While place cells fire whenever an animal is in a specific location, grid cells fire only when the animal is at one of the vertices of a triangular lattice. Groups of grid cells create overlapping lattices of different sizes, which allows them to encode a large number of positions using a relatively small number of cells.

In recent studies, researchers have trained supervised neural networks to mimic grid cell function by predicting an animal’s next location based on its starting point and velocity, a task known as path integration. However, these models hinged on access to privileged information about absolute space at all times — information that the animal does not have.

Inspired by the striking coding properties of the multiperiodic grid-cell code for space, the MIT team trained a contrastive self-supervised model to both perform this same path integration task and represent space efficiently while doing so. For the training data, they used sequences of velocity inputs. The model learned to distinguish positions based on whether they were similar or different — nearby positions generated similar codes, but further positions generated more different codes.

“It’s similar to training models on images, where if two images are both heads of cats, their codes should be similar, but if one is the head of a cat and one is a truck, then you want their codes to repel,” Khona says. “We’re taking that same idea but applying it to spatial trajectories.”
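One way to picture the setup Khona describes, under the simplifying assumption of a 2-D random-walk trajectory: path-integrate velocity inputs into positions, then form contrastive pairs so that the network’s codes for nearby positions are pulled together and codes for distant positions are pushed apart (using a loss like the one sketched earlier). This is only an illustrative sketch of the data pipeline, not the MIT team’s model or training code.

```python
import numpy as np

rng = np.random.default_rng(2)

def integrate(velocities, dt=0.1):
    """Turn a sequence of 2-D velocity inputs into a trajectory of positions."""
    return np.cumsum(velocities * dt, axis=0)

# A random trajectory defined only by velocity inputs; during training the model
# itself would never be given these absolute positions.
velocities = rng.standard_normal((500, 2))
positions = integrate(velocities)

def make_pair(positions):
    """Pick an anchor time step, the spatially nearest other step (positive),
    and the farthest step (negative), judged by path-integrated position."""
    i = int(rng.integers(len(positions)))
    dists = np.linalg.norm(positions - positions[i], axis=1)
    negative = int(np.argmax(dists))   # far-apart positions: codes should repel
    dists[i] = np.inf                  # exclude the anchor itself
    positive = int(np.argmin(dists))   # nearby positions: codes should attract
    return i, positive, negative

anchor, positive, negative = make_pair(positions)
# A contrastive objective would then pull the codes for `anchor` and `positive`
# together and push the codes for `anchor` and `negative` apart.
```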

Once the model was trained, the researchers found that the activation patterns of the nodes within the model formed several lattice patterns with different periods, very similar to those formed by grid cells in the brain.

“What excites me about this work is that it makes connections between mathematical work on the striking information-theoretic properties of the grid cell code and the computation of path integration,” Fiete says. “While the mathematical work was analytic — what properties does the grid cell code possess? — the approach of optimizing coding efficiency through self-supervised learning and obtaining grid-like tuning is synthetic: It shows what properties might be necessary and sufficient to explain why the brain has grid cells.”

The research was funded by the K. Lisa Yang ICoN Center, the National Institutes of Health, the Simons Foundation, the McKnight Foundation, the McGovern Institute, and the Helen Hay Whitney Foundation.

Study decodes surprising approach mice take in learning

Neuroscience discoveries ranging from the nature of memory to treatments for disease have depended on reading the minds of mice, so researchers need to truly understand what the rodents’ behavior is telling them during experiments. In a new study that examines learning from reward, MIT researchers deciphered some initially mystifying mouse behavior, yielding new ideas about how mice think and a mathematical tool to aid future research.

The task the mice were supposed to master is simple: Turn a wheel left or right to get a reward and then recognize when the reward direction switches. When neurotypical people play such “reversal learning” games, they quickly infer the optimal approach: stick with the direction that works until it doesn’t and then switch right away. Notably, people with schizophrenia struggle with the task. In the new study in PLOS Computational Biology, mice surprised scientists by showing that while they were capable of learning the “win-stay, lose-shift” strategy, they nonetheless refused to fully adopt it.

“It is not that mice cannot form an inference-based model of this environment—they can,” said corresponding author Mriganka Sur, Newton Professor in The Picower Institute for Learning and Memory and MIT’s Department of Brain and Cognitive Sciences (BCS). “The surprising thing is that they don’t persist with it. Even in a single block of the game where you know the reward is 100 percent on one side, every so often they will try the other side.”

While the mice’s habit of departing from the optimal strategy could be due to a failure to hold it in memory, said lead author and Sur Lab graduate student Nhat Le, another possibility is that mice don’t commit to the “win-stay, lose-shift” approach because they don’t trust that their circumstances will remain stable or predictable. Instead, they might deviate from the optimal regime to test whether the rules have changed. Natural settings, after all, are rarely stable or predictable.

“I’d like to think mice are smarter than we give them credit for,” Le said.

But regardless of which reason may cause the mice to mix strategies, added co-senior author Mehrdad Jazayeri, Associate Professor in BCS and the McGovern Institute for Brain Research, it is important for researchers to recognize that they do and to be able to tell when and how they are choosing one strategy or another.

“This study highlights the fact that, unlike the accepted wisdom, mice doing lab tasks do not necessarily adopt a stationary strategy and it offers a computationally rigorous approach to detect and quantify such non-stationarities,” he said. “This ability is important because when researchers record the neural activity, their interpretation of the underlying algorithms and mechanisms may be invalid when they do not take the animals’ shifting strategies into account.”

Tracking thinking

The research team, which also includes co-author Murat Yildirim, a former Sur lab postdoc who is now an assistant professor at the Cleveland Clinic Lerner Research Institute, initially expected that the mice might adopt one strategy or the other. They simulated the results they’d expect to see if the mice either adopted the optimal strategy of inferring a rule about the task or more randomly surveyed whether left or right turns were being rewarded. Mouse behavior on the task, even after days, varied widely, but it never resembled the results simulated by just one strategy.

To differing, individual extents, mouse performance on the task reflected variance along three parameters: how quickly they switched directions after the rule switched, how long it took them to transition to the new direction, and how loyal they remained to the new direction. Across 21 mice, the raw data represented a surprising diversity of outcomes on a task that neurotypical humans uniformly optimize. But the mice clearly weren’t helpless. Their average performance significantly improved over time, even though it plateaued below the optimal level.

In the task, the rewarded side switched every 15-25 turns. The team realized the mice were using more than one strategy in each such “block” of the game, rather than just inferring the simple rule and optimizing based on that inference. To disentangle when the mice were employing that strategy or another, the team harnessed an analytical framework called a Hidden Markov Model (HMM), which can computationally tease out when one unseen state is producing a result vs. another unseen state. Le likens it to what a judge on a cooking show might do: inferring which chef contestant made which version of a dish based on patterns in each plate of food before them.

Before the team could use an HMM to decipher their mouse performance results, however, they had to adapt it. A typical HMM might apply to individual mouse choices, but here the team modified it to explain choice transitions over the course of whole blocks. They dubbed their modified model the blockHMM. Computational simulations of task performance using the blockHMM showed that the algorithm is able to infer the true hidden states of an artificial agent. The authors then used this technique to show the mice were persistently blending multiple strategies, achieving varied levels of performance.

“We verified that each animal executes a mixture of behavior from multiple regimes instead of a behavior in a single domain,” Le and his co-authors wrote. “Indeed 17/21 mice used a combination of low, medium and high-performance behavior modes.”

Further analysis revealed that the strategies in play were indeed the “correct” rule-inference strategy and a more exploratory strategy consistent with randomly testing options to get turn-by-turn feedback.

Now that the researchers have decoded the peculiar approach mice take to reversal learning, they are planning to look more deeply into the brain to understand which brain regions and circuits are involved. By watching brain cell activity during the task, they hope to discern what underlies the decisions the mice make to switch strategies.

By examining reversal learning circuits in detail, Sur said, it’s possible the team will gain insights that could help explain why people with schizophrenia show diminished performance on reversal learning tasks. Sur added that some people with autism spectrum disorders also persist with newly unrewarded behaviors longer than neurotypical people, so his lab will also have that phenomenon in mind as they investigate.

Yildirim, too, is interested in examining potential clinical connections.

“This reversal learning paradigm fascinates me since I want to use it in my lab with various preclinical models of neurological disorders,” he said. “The next step for us is to determine the brain mechanisms underlying these differences in behavioral strategies and whether we can manipulate these strategies.”

Funding for the study came from The National Institutes of Health, the Army Research Office, a Paul and Lilah Newton Brain Science Research Award, the Massachusetts Life Sciences Initiative, The Picower Institute for Learning and Memory and The JPB Foundation.

Modeling the social mind

Typically, it would take two graduate students to do the research that Setayesh Radkani is doing.

Driven by an insatiable curiosity about the human mind, she is working on two PhD thesis projects in two different cognitive neuroscience labs at MIT. For one, she is studying punishment as a social tool to influence others. For the other, she is uncovering the neural processes underlying social learning — that is, learning from others. By piecing together these two research programs, Radkani is hoping to gain a better understanding of the mechanisms underpinning social influence in the mind and brain.

Radkani lived in Iran for most of her life, growing up alongside her younger brother in Tehran. The two spent a lot of time together and have long been each other’s best friends. Her father is a civil engineer, and her mother is a midwife. Her parents always encouraged her to explore new things and follow her own path, even if it wasn’t quite what they imagined for her. And her uncle helped cultivate her sense of curiosity, teaching her to “always ask why” as a way to understand how the world works.

Growing up, Radkani most loved learning about human psychology and using math to model the world around her. But she thought it was impossible to combine her two interests. Prioritizing math, she pursued a bachelor’s degree in electrical engineering at the Sharif University of Technology in Iran.

Then, late in her undergraduate studies, Radkani took a psychology course and discovered the field of cognitive neuroscience, in which scientists mathematically model the human mind and brain. She also spent a summer working in a computational neuroscience lab at the Swiss Federal Institute of Technology in Lausanne. Seeing a way to combine her interests, she decided to pivot and pursue the subject in graduate school.

An experience leading a project in her engineering ethics course during her final year of undergrad further helped her discover some of the questions that would eventually form the basis of her PhD. The project investigated why some students cheat and how to change this.

“Through this project I learned how complicated it is to understand the reasons that people engage in immoral behavior, and even more complicated than that is how to devise policies and react in these situations in order to change people’s attitudes,” Radkani says. “It was this experience that made me realize that I’m interested in studying the human social and moral mind.”

She began looking into social cognitive neuroscience research and stumbled upon a relevant TED talk by Rebecca Saxe, the John W. Jarve Professor in Brain and Cognitive Sciences at MIT, who would eventually become one of Radkani’s research advisors. Radkani knew immediately that she wanted to work with Saxe. But she needed to first get into the BCS PhD program at MIT, a challenging obstacle given her minimal background in the field.

After two application cycles and a year’s worth of graduate courses in cognitive neuroscience, Radkani was accepted into the program. But to come to MIT, she had to leave her family behind. Coming from Iran, Radkani has a single-entry visa, making it difficult for her to travel outside the U.S. She hasn’t been able to visit her family since starting her PhD and won’t be able to until at least after she graduates. Her visa also limits her research contributions, restricting her from attending conferences outside the U.S. “That is definitely a huge burden on my education and on my mental health,” she says.

Still, Radkani is grateful to be at MIT, indulging her curiosity about the human social mind. And she’s thankful for her supportive family, whom she calls over FaceTime every day.

Modeling how people think about punishment

In Saxe’s lab, Radkani is researching how people approach and react to punishment, through behavioral studies and neuroimaging. By synthesizing these findings, she’s developing a computational model of the mind that characterizes how people make decisions in situations involving punishment, such as when a parent disciplines a child, when someone punishes their romantic partner, or when the criminal justice system sentences a defendant. With this model, Radkani says she hopes to better understand “when and why punishment works in changing behavior and influencing beliefs about right and wrong, and why sometimes it fails.”

Punishment isn’t a new research topic in cognitive neuroscience, Radkani says, but previous studies have often focused only on people’s behavior in punitive situations, without considering the thought processes that underlie those behaviors. Characterizing these thought processes, though, is key to understanding whether punishment in a given situation can be effective in changing people’s attitudes.

People bring their prior beliefs into a punitive situation. Apart from moral beliefs about the appropriateness of different behaviors, “you have beliefs about the characteristics of the people involved, and you have theories about their intentions and motivations,” Radkani says. “All those come together to determine what you do or how you are influenced by punishment,” given the circumstances. Punishers decide a suitable punishment based on their interpretation of the situation, in light of their beliefs. Targets of punishment then decide whether they’ll change their attitude as a result of the punishment, depending on their own beliefs. Even outside observers make decisions, choosing whether to keep or change their moral beliefs based on what they see.

To capture these decision-making processes, Radkani is developing a computational model of the mind for punitive situations. The model mathematically represents people’s beliefs and how they interact with certain features of the situation to shape their decisions. The model then predicts a punisher’s decisions, and how punishment will influence the target and observers. Through this model, Radkani will provide a foundational understanding of how people think in various punitive situations.

Researching the neural mechanisms of social learning

In parallel, working in the lab of Professor Mehrdad Jazayeri, Radkani is studying social learning, uncovering its underlying neural processes. Through social learning, people learn from other people’s experiences and decisions, and incorporate this socially acquired knowledge into their own decisions or beliefs.

Humans are extraordinary in their social learning abilities; however, our primary form of learning, shared by all other animals, is learning from self-experience. To investigate how learning from others is similar to or different from learning from our own experiences, Radkani has designed a two-player video game that involves both types of learning. During the game, she and her collaborators in Jazayeri’s lab record neural activity in the brain. By analyzing these neural measurements, they plan to uncover the computations carried out by neural circuits during social learning, and to compare those to learning from self-experience.

Radkani first became curious about this comparison as a way to understand why people sometimes draw contrasting conclusions from very similar situations. “For example, if I get Covid from going to a restaurant, I’ll blame the restaurant and say it was not clean,” Radkani says. “But if I hear the same thing happen to my friend, I’ll say it’s because they were not careful.” Radkani wanted to understand why other people’s experiences affect our beliefs and judgments differently than our own similar experiences do, particularly because this mismatch can lead to “errors that color the way that we judge other people,” she says.

By combining her two research projects, Radkani hopes to better understand how social influence works, particularly in moral situations. From there, she has a slew of research questions that she’s eager to investigate, including: How do people choose who to trust? And which types of people tend to be the most influential? As Radkani’s research grows, so does her curiosity.

Tracking time in the brain

By studying how primates mentally measure time, scientists at MIT’s McGovern Institute have discovered that the brain runs an internal clock whose speed is set by prior experience. In new experiences, the brain closely tracks how elapsed time intervals differ from its preset expectation—indicating that for the brain, time is relative.

The findings, reported September 15, 2021, in the journal Neuron, help explain how the brain uses past experience to make predictions—a powerful strategy for navigating a complex and ever-changing world. The research was led by McGovern Investigator Mehrdad Jazayeri, who is working to understand how the brain forms internal models of the world.

Internal clock

Sensory information tells us a lot about our environment, but the brain needs more than data, Jazayeri says. Internal models are vital for understanding the relationships between things, making generalizations, and interpreting and acting on our perceptions. They help us focus on what’s most important and make predictions about our surroundings, as well as the consequences of our actions. “To be efficient in learning about the world and interacting with the world, we need those predictions,” Jazayeri says. When we enter a new grocery store, for example, we don’t have to check every aisle for the peanut butter, because we know it is likely to be near the jam. Likewise, an experienced racquetball player knows how the ball will move when her racquet hits it a certain way.

Jazayeri’s team was interested in how the brain might make predictions about time. Previously, his team showed how neurons in the frontal cortex—a part of the brain involved in planning—can tick off the passage of time like a metronome. By training monkeys to use an eye movement to indicate the duration of time that separated two flashes of light, they found that cells that track time during this task cooperate to form an adjustable internal clock. Those cells generate a pattern of activity that can be drawn out to measure long time intervals or compressed to track shorter ones. The changes in these signal dynamics reflect elapsed time so precisely that by monitoring the right neurons, Jazayeri’s team can determine exactly how fast a monkey’s internal clock is running.
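One way to picture an “adjustable clock” of this kind is a single activity profile whose time axis is stretched or compressed to fit the interval being measured. The sketch below is purely schematic, with an arbitrary ramping template and made-up delays; it is not recorded data or the lab’s analysis.

```python
# A schematic sketch of "temporal scaling": one canonical activity profile is
# stretched or compressed so that it spans delays of different lengths. The
# template and numbers are illustrative, not recorded data.
import numpy as np

# Canonical ramping firing-rate template on a normalized time axis (0 to 1).
frac = np.linspace(0.0, 1.0, 51)
template_rate = 5 + 35 * frac**2            # arbitrary ramp, spikes/s

def activity_for_interval(interval_ms, dt_ms=10.0):
    """Play out the template over an interval by rescaling its time axis."""
    t = np.arange(0.0, interval_ms + dt_ms, dt_ms)
    return t, np.interp(t / interval_ms, frac, template_rate)

for interval in (600, 800, 1000):            # hypothetical delays in ms
    t, rate = activity_for_interval(interval)
    halfway = rate[len(rate) // 2]
    print(f"{interval:4d} ms delay: rate halfway through = {halfway:.1f} sp/s "
          f"(same point in the profile, reached at {t[len(t)//2]:.0f} ms)")
```

Because the profile is identical when plotted as a fraction of the interval, reading out how far the pattern has progressed indicates how fast the clock is running.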

Predictive processing

Nicolas Meirhaeghe, a graduate student in Mehrdad Jazayeri’s lab, studies how we plan and perform movements in the face of uncertainty. He is pictured here as part of the McGovern Institute 20th anniversary “Rising Stars” photo series. Photo: Michael Spencer

For their most recent work, graduate student Nicolas Meirhaeghe designed a series of experiments in which the delay between the two flashes of light changed as the monkeys repeated the task. Sometimes the flashes were separated by just a fraction of a second; other times the delay was a bit longer. He found that the time-keeping activity pattern in the frontal cortex occurred over different time scales as the monkeys came to expect delays of different durations. As the duration of the delay fluctuated, the brain appeared to take all prior experience into account, setting the clock to measure the average of those times in anticipation of the next interval.

The behavior of the neurons told the researchers that as a monkey waited for a new set of light cues, it already had an expectation about how long the delay would be. To make such a prediction, Meirhaeghe says, “the brain has no choice but to use all the different values that you perceive from your experience, average those out, and use this as the expectation.”
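A minimal sketch of that idea, under the assumption that the expectation is simply the running average of all delays experienced so far (the delay values here are made up):

```python
# A minimal sketch: the expected delay is the running average of all delays
# experienced so far. The delay values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
delays_ms = rng.uniform(600, 1000, size=50)   # hypothetical delays seen so far

expectation = 0.0
for n, d in enumerate(delays_ms, start=1):
    expectation += (d - expectation) / n      # fold each new delay into the mean
print(f"expected delay for the next trial: {expectation:.0f} ms")
print(f"mean of experienced delays:        {delays_ms.mean():.0f} ms")
```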

By analyzing neuronal behavior during their experiments, Jazayeri and Meirhaeghe determined that the brain’s signals were not encoding the full time elapsed between light cues, but instead how that time differed from the predicted time. Calculating this prediction error enabled the monkeys to report back how much time had elapsed.

Neuroscientists have suspected that this strategy, known as predictive processing, is widely used by the brain—although until now there has been little evidence of it outside early sensory areas. “You have a lot of stimuli that are coming from the environment, but lots of stimuli are actually predictable,” Meirhaeghe says. “The idea is that your brain is learning through experience patterns in the environment, and is subtracting your expectation from the incoming signal. What the brain actually processes in the end is the result of this subtraction.”
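In this predictive-processing view, what matters is the difference between the elapsed interval and the expected one. A toy example with made-up numbers:

```python
# A toy example of prediction-error coding for time: represent only the
# difference between elapsed and expected intervals. Numbers are illustrative.
expected_ms = 800.0    # expectation built up from prior trials
elapsed_ms = 950.0     # actual delay on the current trial

prediction_error = elapsed_ms - expected_ms   # what gets represented in this sketch
reported_ms = expected_ms + prediction_error  # elapsed time recovered for the report

print(f"prediction error: {prediction_error:+.0f} ms "
      f"-> reported interval: {reported_ms:.0f} ms")
```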

Finally, the researchers investigated the brain’s ability to update its expectations about time. After presenting monkeys with delays within a particular time range, they switched without warning to times that fluctuated within a new range. The brain responded quickly, updating its internal clock. “If you look inside the brain, after about 100 trials the monkeys have already figured out that these statistics have changed,” says Jazayeri.

It took longer, however—as many as 1,000 trials—for the monkeys to change their behavior in response to the change. “It seems like this prediction, and updating the internal model about the statistics of the world, is way faster than our muscles are able to implement,” Jazayeri says. “Our motor system is kind of lagging behind what our cognitive abilities tell us.” This makes sense, he says, because not every change in the environment merits a change in behavior. “You don’t want to be distracted by every small thing that deviates from your prediction. You want to pay attention to things that have a certain level of consistency.”
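One simple way to picture this kind of fast re-estimation is an average that weights recent delays more heavily, so the expectation drifts toward the new range within tens to hundreds of trials. The learning rate and delay ranges below are illustrative assumptions, not values fitted to the study’s data.

```python
# A sketch of re-estimating the expected delay after the statistics change,
# using a leaky (recency-weighted) average. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
old_block = rng.uniform(600, 800, size=300)     # delays before the switch (ms)
new_block = rng.uniform(1000, 1200, size=300)   # delays after the switch (ms)
delays = np.concatenate([old_block, new_block])

expectation, alpha = delays[0], 0.03            # assumed learning rate
history = []
for d in delays:
    expectation += alpha * (d - expectation)    # drift toward recent delays
    history.append(expectation)

# How many trials after the switch until the estimate is within 50 ms of the
# new mean delay?
post_switch = np.array(history[len(old_block):])
trials_to_adapt = int(np.argmax(np.abs(post_switch - new_block.mean()) < 50)) + 1
print(f"estimate within 50 ms of the new mean after ~{trials_to_adapt} trials")
```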