How one brain circuit encodes memories of both places and events

Nearly 50 years ago, neuroscientists discovered cells within the brain’s hippocampus that store memories of specific locations. These cells also play an important role in storing memories of events, known as episodic memories. While the mechanism of how place cells encode spatial memory has been well-characterized, it has remained a puzzle how they encode episodic memories.

A new model developed by MIT researchers explains how those place cells can be recruited to form episodic memories, even when there’s no spatial component. According to this model, place cells, along with grid cells found in the entorhinal cortex, act as a scaffold that can be used to anchor memories as a linked series.

“This model is a first-draft model of the entorhinal-hippocampal episodic memory circuit. It’s a foundation to build on to understand the nature of episodic memory. That’s the thing I’m really excited about,” says Ila Fiete, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.

The model accurately replicates several features of biological memory systems, including their large storage capacity, the gradual degradation of older memories, and the ability of competitive memorizers to store enormous amounts of information in “memory palaces.”

MIT Research Scientist Sarthak Chandra and Sugandha Sharma PhD ’24 are the lead authors of the study, which appears today in Nature. Rishidev Chaudhuri, an assistant professor at the University of California at Davis, is also an author of the paper.

An index of memories

To encode spatial memory, place cells in the hippocampus work closely with grid cells — a special type of neuron that fires at many different locations, arranged geometrically in a regular pattern of repeating triangles. Together, a population of grid cells forms a lattice of triangles representing a physical space.

In addition to helping us recall places where we’ve been, these hippocampal-entorhinal circuits also help us navigate new locations. From human patients, it’s known that these circuits are also critical for forming episodic memories, which might have a spatial component but mainly consist of events, such as how you celebrated your last birthday or what you had for lunch yesterday.

“The same hippocampal and entorhinal circuits are used not just for spatial memory, but also for general episodic memory,” says Fiete, who is also the director of the K. Lisa Yang ICoN Center at MIT. “The question you can ask is what is the connection between spatial and episodic memory that makes them live in the same circuit?”

Two hypotheses have been proposed to account for this overlap in function. One is that the circuit is specialized to store spatial memories because those types of memories — remembering where food was located or where predators were seen — are important to survival. Under this hypothesis, this circuit encodes episodic memories as a byproduct of spatial memory.

An alternative hypothesis suggests that the circuit is specialized to store episodic memories, but also encodes spatial memory because location is one aspect of many episodic memories.

In this work, Fiete and her colleagues proposed a third option: that the peculiar tiling structure of grid cells and their interactions with the hippocampus are equally important for both types of memory — episodic and spatial. To develop their new model, they built on computational models that her lab has been developing over the past decade, which mimic how grid cells encode spatial information.

“We reached the point where I felt like we understood on some level the mechanisms of the grid cell circuit, so it felt like the time to try to understand the interactions between the grid cells and the larger circuit that includes the hippocampus,” Fiete says.

In the new model, the researchers hypothesized that grid cells interacting with hippocampal cells can act as a scaffold for storing either spatial or episodic memory. Each activation pattern within the grid defines a “well,” and these wells are spaced out at regular intervals. The wells don’t store the content of a specific memory, but each one acts as a pointer to a specific memory, which is stored in the synapses between the hippocampus and the sensory cortex.

When the memory is triggered later from fragmentary pieces, grid and hippocampal cell interactions drive the circuit state into the nearest well, and the state at the bottom of the well connects to the appropriate part of the sensory cortex to fill in the details of the memory. The sensory cortex is much larger than the hippocampus and can store vast amounts of memory.

“Conceptually, we can think about the hippocampus as a pointer network. It’s like an index that can be pattern-completed from a partial input, and that index then points toward sensory cortex, where those inputs were experienced in the first place,” Fiete says. “The scaffold doesn’t contain the content, it only contains this index of abstract scaffold states.”

Furthermore, events that occur in sequence can be linked together: Each well in the grid cell-hippocampal network efficiently stores the information that is needed to activate the next well, allowing memories to be recalled in the right order.
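The scaffold-and-pointer scheme can be illustrated with a toy sketch. This is not the authors' model, only a minimal analogy in code: the "wells" are random fixed patterns laid down before any memory is stored, plain dictionaries stand in for the hippocampal-cortical synapses that hold content and sequence links, and "settling" is a nearest-pattern lookup that completes a fragmentary cue.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 64  # dimensionality of the scaffold state

# Fixed scaffold: pre-structured "wells" that exist before any memory is stored
wells = rng.choice([-1, 1], size=(8, D))

# Content lives outside the scaffold, keyed by well index (a stand-in for the
# synapses between hippocampus and sensory cortex)
content = {i: f"event-{i}" for i in range(8)}
next_well = {i: (i + 1) % 8 for i in range(8)}  # sequence links between wells

def settle(cue):
    """Drive a noisy state into the nearest well (pattern completion)."""
    return int(np.argmax(wells @ cue))

cue = wells[3] + rng.normal(0, 0.5, D)  # fragmentary, noisy trigger
i = settle(cue)
print(content[i], "->", content[next_well[i]])
```

The point of the sketch is the division of labor: the scaffold only indexes and orders memories, while the content itself is stored elsewhere.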

Modeling memory cliffs and palaces

The researchers’ new model replicates several memory-related phenomena much more accurately than existing models that are based on Hopfield networks — a type of neural network that can store and recall patterns.

While Hopfield networks offer insight into how memories can be formed by strengthening connections between neurons, they don’t perfectly model how biological memory works. In Hopfield models, every memory is recalled in perfect detail until capacity is reached. At that point, no new memories can form, and worse, attempting to add more memories erases all prior ones. This “memory cliff” doesn’t accurately mimic what happens in the biological brain, which tends to gradually forget the details of older memories while new ones are continually added.
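The Hopfield behavior described here is easy to reproduce. Below is a minimal textbook Hopfield network (Hebbian outer-product weights, synchronous sign updates), offered only to illustrate the memory cliff, not as the MIT model: a few stored patterns are recalled cleanly from a corrupted cue, but overloading the network degrades recall of even the first pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100  # neurons

def store(patterns):
    """Hebbian learning: sum of outer products, no self-connections."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)
    return W / N

def recall(W, cue, steps=20):
    """Iterate synchronous sign updates toward a stored attractor."""
    s = cue.astype(float).copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s.astype(int)

# Five patterns: well under the ~0.14*N Hopfield capacity limit
patterns = [rng.choice([-1, 1], size=N) for _ in range(5)]
W = store(patterns)
cue = patterns[0].copy()
cue[:10] *= -1                                   # corrupt 10% of the bits
clean = (recall(W, cue) == patterns[0]).mean()   # fraction of bits recovered

# Forty patterns: past capacity, recall collapses (the "memory cliff")
overload = [rng.choice([-1, 1], size=N) for _ in range(40)]
W2 = store(overload)
crowded = (recall(W2, overload[0]) == overload[0]).mean()
print(f"below capacity: {clean:.0%} of bits recovered; above: {crowded:.0%}")
```

Note the all-or-nothing failure mode: the overloaded network does not merely forget its oldest memories, it corrupts retrieval across the board, which is the behavior the new model avoids.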

The new MIT model captures findings from decades of recordings of grid and hippocampal cells in rodents made as the animals explore and forage in various environments. It also helps to explain the underlying mechanisms of a memorization strategy known as a memory palace. One of the tasks in memory competitions is to memorize the shuffled sequence of cards in one or several card decks. Competitors usually do this by assigning each card to a particular spot in a memory palace — a memory of a childhood home or another environment they know well. When they need to recall the cards, they mentally stroll through the house, visualizing each card in its spot as they go along. Counterintuitively, adding the memory burden of associating cards with locations makes recall stronger and more reliable.

The MIT team’s computational model was able to perform such tasks very well, suggesting that memory palaces take advantage of the memory circuit’s own strategy of associating inputs with a scaffold in the hippocampus, but one level down: Long-acquired memories reconstructed in the larger sensory cortex can now be pressed into service as a scaffold for new memories. This allows for the storage and recall of many more items in a sequence than would otherwise be possible.

The researchers now plan to build on their model to explore how episodic memories could become converted to cortical “semantic” memory, or the memory of facts dissociated from the specific context in which they were acquired (for example, Paris is the capital of France), how episodes are defined, and how brain-like memory models could be integrated into modern machine learning.

The research was funded by the U.S. Office of Naval Research, the National Science Foundation under the Robust Intelligence program, the ARO-MURI award, the Simons Foundation, and the K. Lisa Yang ICoN Center.

Feng Zhang awarded 2024 National Medal of Technology

This post is adapted from an MIT News story.

***

Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT and an Investigator at the McGovern Institute, has won the National Medal of Technology and Innovation, the nation’s highest recognition for scientists and engineers. The prestigious award recognizes “American innovators whose vision, intellect, creativity, and determination have strengthened America’s economy and improved our quality of life.”

Zhang, who is also a professor of brain and cognitive sciences and biological engineering at MIT, a core member of the Broad Institute of MIT and Harvard, and an investigator with the Howard Hughes Medical Institute, was recognized for his work developing molecular tools, including the CRISPR genome-editing system, that have accelerated biomedical research and led to the first FDA-approved gene editing therapy.

This year, the White House awarded the National Medal of Science to 14 recipients and named nine individual awardees of the National Medal of Technology and Innovation, along with two organizations. Zhang is among four MIT faculty members who were awarded the nation’s highest honors for exemplary achievement and leadership in science and technology.

Designing molecular tools

Zhang, who earned his undergraduate degree from Harvard University in 2004, has contributed to the development of multiple molecular tools to accelerate the understanding of human disease. While a graduate student at Stanford University, from which he received his PhD in 2009, Zhang worked in the lab of Professor Karl Deisseroth. There, he worked on a protein called channelrhodopsin, which he and Deisseroth believed held potential for engineering mammalian cells to respond to light.

The resulting technique, known as optogenetics, is now widely used in neuroscience and other fields. By engineering neurons to express light-sensitive proteins such as channelrhodopsin, researchers can either stimulate or silence the cells’ electrical impulses by shining different wavelengths of light on them. This has allowed for detailed study of the roles of specific populations of neurons in the brain, and the mapping of neural circuits that control a variety of behaviors.

In 2011, about a month after joining the MIT faculty, Zhang attended a talk by Harvard Medical School Professor Michael Gilmore, who studies the pathogenic bacterium Enterococcus. The scientist mentioned that these bacteria protect themselves from viruses with DNA-cutting enzymes known as nucleases, which are part of a defense system known as CRISPR.

“I had no idea what CRISPR was, but I was interested in nucleases,” Zhang told MIT News in 2016. “I went to look up CRISPR, and that’s when I realized you might be able to engineer it for use for genome editing.”

In January 2013, Zhang and members of his lab reported that they had successfully used CRISPR to edit genes in mammalian cells. The CRISPR system includes a nuclease called Cas9, which can be directed to cut a specific genetic target by RNA molecules known as guide strands.

Since then, scientists in fields from medicine to plant biology have used CRISPR to study gene function and modify faulty genes that cause disease. More recently, Zhang’s lab has devised many enhancements to the original CRISPR system, such as making the targeting more precise and preventing unintended cuts in the wrong locations. In 2023, the FDA approved Casgevy, a CRISPR gene therapy based on Zhang’s discoveries, for the treatment of sickle cell disease and beta thalassemia.

The National Medal of Technology and Innovation was established in 1980 and is administered for the White House by the U.S. Department of Commerce’s Patent and Trademark Office. The award recognizes those who have made lasting contributions to America’s competitiveness and quality of life and helped strengthen the nation’s technological workforce.

How the brain prevents us from falling

This post is adapted from an MIT research news story.

***

As we navigate the world, we adapt our movement in response to changes in the environment. From rocky terrain to moving escalators, we seamlessly modify our movements to maximize energy efficiency and reduce our risk of falling. The computational principles underlying this phenomenon, however, are not well understood.

In a recent paper published in the journal Nature Communications, MIT researchers proposed a model that explains how humans continuously adapt yet remain stable during complex tasks like walking.

“Much of our prior theoretical understanding of adaptation has been limited to episodic tasks, such as reaching for an object in a novel environment,” says senior author Nidhi Seethapathi, the Frederick A. (1971) and Carole J. Middleton Career Development Assistant Professor of Brain and Cognitive Sciences at MIT. “This new theoretical model captures adaptation phenomena in continuous long-horizon tasks in multiple locomotor settings.”

Barrett Clark, a robotics software engineer at Bright Minds Inc, and Manoj Srinivasan, an associate professor in the Department of Mechanical and Aerospace Engineering at Ohio State University, are also authors on the paper.

Principles of locomotor adaptation

In episodic tasks, like reaching for an object, errors during one episode do not affect the next episode. In tasks like locomotion, errors can have a cascade of short-term and long-term consequences for stability unless they are controlled. This makes adapting locomotion to a new environment more complex.

To build the model, the researchers identified general principles of locomotor adaptation across a variety of task settings, and developed a unified modular and hierarchical model of locomotor adaptation, with each component having its own unique mathematical structure.

The resulting model successfully encapsulates how humans adapt their walking in novel settings such as on a split-belt treadmill with each foot moving at a different speed, wearing asymmetric leg weights, and wearing an exoskeleton. The authors report that the model successfully reproduced human locomotor adaptation phenomena across novel settings in 10 prior studies and correctly predicted the adaptation behavior observed in two new experiments conducted as part of the study.

The model has potential applications in sensorimotor learning, rehabilitation, and wearable robotics.

“Having a model that can predict how a person will adapt to a new environment has immense utility for engineering better rehabilitation paradigms and wearable robot control,” says Seethapathi, who is also an associate investigator at MIT’s McGovern Institute. “You can think of a wearable robot itself as a new environment for the person to move in, and our model can be used to predict how a person will adapt for different robot settings. Understanding such human-robot adaptation is currently an experimentally intensive process, and our model could help speed up the process by narrowing the search space.”

3 Questions: Claire Wang on training the brain for memory sports

On Nov. 10, some of the country’s top memorizers converged on MIT’s Kresge Auditorium to compete in a “Tournament of Memory Champions” in front of a live audience.

The competition was split into four events: long-term memory, words-to-remember, auditory memory, and double-deck of cards, in which competitors must memorize the exact order of two decks of cards. In between the events, MIT faculty who are experts in the science of memory provided short talks and demos about memory and how to improve it. Among the competitors was MIT’s own Claire Wang, a sophomore majoring in electrical engineering and computer science. Wang has competed in memory sports for years, a hobby that has taken her around the world to learn from some of the best mnemonists on the planet. At the tournament, she tied for first place in the words-to-remember competition.

The event commemorated the 25th anniversary of the USA Memory Championship Organization (USAMC). USAMC sponsored the event in partnership with MIT’s McGovern Institute for Brain Research, the Department of Brain and Cognitive Sciences, the MIT Quest for Intelligence, and the company Lumosity.

MIT News sat down with Wang to learn more about her experience with memory competitions — and see if she had any advice for those of us with less-than-amazing memory skills.

Q: How did you come to get involved in memory competitions?

A: When I was in middle school, I read the book “Moonwalking with Einstein,” which is about a journalist’s journey from average memory to being named memory champion in 2006. My parents were also obsessed with this TV show where people were memorizing decks of cards and performing other feats of memory. I had already known about the concept of “memory palaces,” so I was inspired to explore memory sports. Somehow, I convinced my parents to let me take a gap year after seventh grade, and I traveled the world going to competitions and learning from memory grandmasters. I got to know the community in that time and I got to build my memory system, which was really fun. I competed much less after that year, apart from some subsequent USA Memory Championship events, but it’s still fun to have this ability.

Q: What was the Tournament of Memory Champions like?

A: USAMC invited a lot of winners from previous years to compete, which was really cool. It was nice seeing a lot of people I haven’t seen in years. I didn’t compete in every event because I was too busy to do the long-term memory, which takes two weeks of memorization work. But it was a really cool experience. I helped a bit with the brainstorming beforehand because I know one of the professors running it. We thought about how to give the talks and structure the event.

Then I competed in the words event, which is when they give you 300 words over 15 minutes, and the competitors have to recall each one in order in a round robin competition. You get two strikes. A lot of other competitions just make you write the words down. The round robin makes it more fun for people to watch. I tied with someone else — I made a dumb mistake — so I was kind of sad in hindsight, but being tied for first is still great.

Since I hadn’t done this in a while (and I was coming back from a trip where I didn’t get much sleep), I was a bit nervous that my brain wouldn’t be able to remember anything, and I was pleasantly surprised I didn’t just blank on stage. Also, since I hadn’t done this in a while, a lot of my loci and memory palaces were forgotten, so I had to speed-review them before the competition. The words event doesn’t get easier over time — it’s just 300 random words (which could range from “disappointment” to “chair”) and you just have to remember the order.

Q: What is your approach to improving memory?

A: The whole idea is that we memorize images, feelings, and emotions much better than numbers or random words. The way it works in practice is we make an ordered set of locations in a “memory palace.” The palace could be anything. It could be a campus or a classroom or a part of a room, but you imagine yourself walking through this space, so there’s a specific order to it, and in every location I place certain information. This is information related to what I’m trying to remember. I have pictures I associate with words and I have specific images I correlate with numbers. Once you have a correlated image system, all you need to remember is a story, and then when you recall, you translate that back to the original information.

Doing memory sports really helps you with visualization, and being able to visualize things faster and better helps you remember things better. You start remembering with spaced repetition that you can talk yourself through. Allowing things to have an emotional connection is also important, because you remember emotions better. Doing memory competitions made me want to study neuroscience and computer science at MIT.

The specific memory sports techniques are not as useful in everyday life as you’d think, because a lot of the information we learn is more operative and requires intuitive understanding, but I do think they help in some ways. First, sometimes you have to initially remember things before you can develop a strong intuition later. Also, since I have to get really good at telling a lot of stories over time, I have gotten great at visualization and manipulating objects in my mind, which helps a lot.

Four from MIT named 2025 Rhodes Scholars

Yiming Chen ’24, Wilhem Hector, Anushka Nair, and David Oluigbo have been selected as 2025 Rhodes Scholars and will begin fully funded postgraduate studies at Oxford University in the U.K. next fall. In addition to MIT’s two U.S. Rhodes winners, Oluigbo and Nair, two affiliates were awarded international Rhodes Scholarships: Chen for Rhodes’ China constituency and Hector for the Global Rhodes Scholarship. Hector is the first Haitian citizen to be named a Rhodes Scholar.

The scholars were supported by Associate Dean Kim Benard and the Distinguished Fellowships team in Career Advising and Professional Development. They received additional mentorship and guidance from the Presidential Committee on Distinguished Fellowships.

“It is profoundly inspiring to work with our amazing students, who have accomplished so much at MIT and, at the same time, thought deeply about how they can have an impact in solving the world’s major challenges,” says Professor Nancy Kanwisher, who co-chairs the committee along with Professor Tom Levenson. “These students have worked hard to develop and articulate their vision and to learn to communicate it to others with passion, clarity, and confidence. We are thrilled but not surprised to see so many of them recognized this year as finalists and as winners.”

Yiming Chen ’24

Yiming Chen, from Beijing, China, and the Washington area, was named one of four Rhodes China Scholars on Sept. 28. At Oxford, she will pursue graduate studies in engineering science, working toward her ongoing goal of advancing AI safety and reliability in clinical workflows.

Chen graduated from MIT in 2024 with a BS in mathematics and computer science and an MEng in computer science. She worked on several projects involving machine learning for health care, and focused her master’s research on medical imaging in the Medical Vision Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Collaborating with IBM Research, Chen developed a neural framework for clinical-grade lumen segmentation in intravascular ultrasound and presented her findings at the MICCAI Machine Learning in Medical Imaging conference. Additionally, she worked at Cleanlab, an MIT-founded startup, creating an open-source library to ensure the integrity of image datasets used in vision tasks.

Chen was a teaching assistant in the MIT math and electrical engineering and computer science departments, and received a teaching excellence award. She taught high school students at the Hampshire College Summer Studies in Math and was selected to participate in MISTI Global Teaching Labs in Italy.

Having studied the guzheng, a traditional Chinese instrument, since age 4, Chen served as president of the MIT Chinese Music Ensemble, explored Eastern and Western music synergies with the MIT Chamber Music Society, and performed at the United Nations. On campus, she was also active with the Asymptones a cappella group, the MIT Ring Committee, Ribotones, the Figure Skating Club, and the Undergraduate Association Innovation Committee.

Wilhem Hector

Wilhem Hector, a senior from Port-au-Prince, Haiti, majoring in mechanical engineering, was awarded a Global Rhodes Scholarship on Nov. 1. The first Haitian national to be named a Rhodes Scholar, Hector will pursue a master’s in energy systems at Oxford, followed by a master’s in education focusing on digital and social change. His long-term goals are twofold: pioneering Haiti’s renewable energy infrastructure and expanding hands-on opportunities in the country’s national curriculum.

Hector developed his passion for energy through his research in the MIT Howland Lab, where he investigated the uncertainty of wind power production during active yaw control. He also helped launch the MIT Renewable Energy Clinic through his work on the sources of opposition to energy projects in the U.S. Beyond his research, Hector made notable contributions as an intern at Radia Inc. and DTU Wind Energy Systems, where he helped develop computational wind farm modeling and simulation techniques.

Outside of MIT, he leads the Hector Foundation, a nonprofit providing educational opportunities to young people in Haiti. He has raised over $80,000 in the past five years to finance its initiatives, including the construction of Project Manus, Haiti’s first open-use engineering makerspace. Hector’s service endeavors have been supported by the MIT PKG Center, which awarded him the Davis Peace Prize, the PKG Fellowship for Social Impact, and the PKG Award for Public Service.

Hector co-chairs both the Student Events Board and the Class of 2025 Senior Ball Committee and has served as the social chair for Chocolate City and the African Students Association.

Anushka Nair

Anushka Nair, from Portland, Oregon, will graduate next spring with BS and MEng degrees in computer science and engineering with concentrations in economics and AI. She plans to pursue a DPhil in social data science at the Oxford Internet Institute. Nair aims to develop ethical AI technologies that address pressing societal challenges, beginning with combating misinformation.

For her master’s thesis under Professor David Rand, Nair is developing LLM-powered fact-checking tools to detect nuanced misinformation beyond human or automated capabilities. She also researches human-AI co-reasoning at the MIT Center for Collective Intelligence with Professor Thomas Malone. Previously, she conducted research on autonomous vehicle navigation at Stanford’s AI and Robotics Lab, energy microgrid load balancing at MIT’s Institute for Data, Systems, and Society, and worked with Professor Esther Duflo in economics.

Nair interned in the Executive Office of the Secretary General at the United Nations, where she integrated technology solutions and assisted with launching the High-Level Advisory Body on AI. She also interned in Tesla’s energy sector, contributing to Autobidder, an energy trading tool, and led the launch of a platform for monitoring distributed energy resources and renewable power plants. Her work has earned her recognition as a Social and Ethical Responsibilities of Computing Scholar and a U.S. Presidential Scholar.

Nair has served as president of the MIT Society of Women Engineers and of MIT and Harvard Women in AI, spearheading outreach programs to mentor young women in STEM fields. She has also served as president of the MIT honor societies Eta Kappa Nu and Tau Beta Pi.

David Oluigbo

David Oluigbo, from Washington, is a senior majoring in artificial intelligence and decision making and minoring in brain and cognitive sciences. At Oxford, he will undertake an MSc in applied digital health followed by an MSc in modeling for global health. Afterward, Oluigbo plans to attend medical school with the goal of becoming a physician-scientist who researches and applies AI to address medical challenges in low-income countries.

Since his first year at MIT, Oluigbo has conducted neural and brain research with Ev Fedorenko at the McGovern Institute for Brain Research and with Susanna Mierau’s Synapse and Network Development Group at Brigham and Women’s Hospital. His work with Mierau led to several publications and a poster presentation at the Federation of European Societies annual meeting.

In a summer internship at the National Institutes of Health Clinical Center, Oluigbo designed and trained machine-learning models on CT scans for automatic detection of neuroendocrine tumors, leading to first authorship on an International Society for Optics and Photonics conference proceeding paper, which he presented at the 2024 annual meeting. Oluigbo also did a summer internship with the Anyscale Learning for All Laboratory at the MIT Computer Science and Artificial Intelligence Laboratory.

Oluigbo is an EMT and systems administrator officer with MIT-EMS. He is a consultant for Code for Good, a representative on the MIT Schwarzman College of Computing Undergraduate Advisory Group, and holds executive roles with the Undergraduate Association, the MIT Brain and Cognitive Society, and the MIT Running Club.

Neuroscientists create a comprehensive map of the cerebral cortex

By analyzing brain scans taken as people watched movie clips, MIT researchers have created the most comprehensive map yet of the functions of the brain’s cerebral cortex.

Using functional magnetic resonance imaging (fMRI) data, the research team identified 24 networks with different functions, which include processing language, social interactions, visual features, and other types of sensory input.

Many of these networks have been seen before but haven’t been precisely characterized under naturalistic conditions. While the new study mapped networks in subjects watching engaging movies, previous studies used a small number of specific tasks or examined correlations across the brain in subjects who were simply resting.

“There’s an emerging approach in neuroscience to look at brain networks under more naturalistic conditions. This is a new approach that reveals something different from conventional approaches in neuroimaging,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research. “It’s not going to give us all the answers, but it generates a lot of interesting ideas based on what we see going on in the movies that’s related to these network maps that emerge.”

The researchers hope that their new map will serve as a starting point for further study of what each of these networks is doing in the brain.

Desimone and John Duncan, a program leader in the MRC Cognition and Brain Sciences Unit at Cambridge University, are the senior authors of the study, which appears today in Neuron. Reza Rajimehr, a research scientist in the McGovern Institute and a former graduate student at Cambridge University, is the lead author of the paper.

Precise mapping

The cerebral cortex of the brain contains regions devoted to processing different types of sensory information, including visual and auditory input. Over the past few decades, scientists have identified many networks that are involved in this kind of processing, often using fMRI to measure brain activity as subjects perform a single task such as looking at faces.

In other studies, researchers have scanned people’s brains as they do nothing, or let their minds wander. From those studies, researchers have identified networks such as the default mode network, a network of areas that is active during internally focused activities such as daydreaming.

“Up to now, most studies of networks were based on doing functional MRI in the resting-state condition. Based on those studies, we know some main networks in the cortex. Each of them is responsible for a specific cognitive function, and they have been highly influential in the neuroimaging field,” Rajimehr says.

However, during the resting state, many parts of the cortex may not be active at all. To gain a more comprehensive picture of what all these regions are doing, the MIT team analyzed data recorded while subjects performed a more natural task: watching a movie.

“By using a rich stimulus like a movie, we can drive many regions of the cortex very efficiently. For example, sensory regions will be active to process different features of the movie, and high-level areas will be active to extract semantic information and contextual information,” Rajimehr says. “By activating the brain in this way, now we can distinguish different areas or different networks based on their activation patterns.”

The data for this study were generated as part of the Human Connectome Project. Using a 7-Tesla MRI scanner, which offers higher resolution than a typical MRI scanner, the researchers imaged brain activity in 176 people as they watched one hour of movie clips showing a variety of scenes.

The MIT team used a machine-learning algorithm to analyze the activity patterns of each brain region, allowing them to identify 24 networks with distinct activity profiles and functions.
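The study does not specify the algorithm in this passage, but the general idea of grouping brain regions into networks by the similarity of their activity during movie watching can be illustrated with a toy clustering sketch. Everything below is hypothetical: the synthetic "activity" data, the use of plain k-means, and the naive initialization are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 60 brain regions, each with a 200-timepoint
# activity trace. Regions are generated from 3 underlying "network"
# signals plus noise, so clustering should recover the 3 groups.
n_regions, n_time, n_networks = 60, 200, 3
network_signals = rng.standard_normal((n_networks, n_time))
labels_true = np.repeat(np.arange(n_networks), n_regions // n_networks)
activity = network_signals[labels_true] + 0.3 * rng.standard_normal((n_regions, n_time))

def kmeans(X, k, iters=20):
    """Plain k-means on rows of X (one row per region's time course)."""
    # Naive deterministic init: one seed row from each block of regions.
    centers = X[:: len(X) // k][:k].copy()
    for _ in range(iters):
        # Assign each region to its nearest center (squared Euclidean).
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        assign = dists.argmin(axis=1)
        # Move each center to the mean of its assigned regions.
        centers = np.stack([
            X[assign == j].mean(axis=0) if np.any(assign == j) else centers[j]
            for j in range(k)
        ])
    return assign

assign = kmeans(activity, k=3)
# Regions driven by the same underlying signal end up in the same cluster.
```

The point of the sketch is only that regions sharing a driving signal cluster together from their time courses alone; the real analysis operates on fMRI responses to a rich natural stimulus rather than synthetic traces.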

Some of these networks are located in sensory areas such as the visual cortex or auditory cortex, as expected for regions with specific sensory functions. Other areas respond to features such as actions, language, or social interactions. Many of these networks have been seen before, but this technique offers more precise definition of where the networks are located, the researchers say.

“Different regions are competing with each other for processing specific features, so when you map each function in isolation, you may get a slightly larger network because it is not getting constrained by other processes,” Rajimehr says. “But here, because all the areas are considered together, we are able to define more precise boundaries between different networks.”

The researchers also identified networks that hadn’t been seen before, including one in the prefrontal cortex, which appears to be highly responsive to visual scenes. This network was most active in response to pictures of scenes within the movie frames.

Executive control networks

Three of the networks found in this study are involved in “executive control,” and were most active during transitions between different clips. The researchers also observed that these control networks appear to have a “push-pull” relationship with networks that process specific features such as faces or actions. When networks specific to a particular feature were very active, the executive control networks were mostly quiet, and vice versa.

“Whenever the activations in domain-specific areas are high, it looks like there is no need for the engagement of these high-level networks,” Rajimehr says. “But in situations where perhaps there is some ambiguity and complexity in the stimulus, and there is a need for the involvement of the executive control networks, then we see that these networks become highly active.”

Using a movie-watching paradigm, the researchers are now studying some of the networks they identified in more detail, to identify subregions involved in particular tasks. For example, within the social processing network, they have found regions that are specific to processing social information about faces and bodies. In a new network that analyzes visual scenes, they have identified regions involved in processing memory of places.

“This kind of experiment is really about generating hypotheses for how the cerebral cortex is functionally organized. Networks that emerge during movie watching now need to be followed up with more specific experiments to test the hypotheses. It’s giving us a new view into the operation of the entire cortex during a more naturalistic task than just sitting at rest,” Desimone says.

The research was funded by the McGovern Institute, the Cognitive Science and Technology Council of Iran, the MRC Cognition and Brain Sciences Unit at the University of Cambridge, and a Cambridge Trust scholarship.

Brains, fashion, alien life, and more: Highlights from the Cambridge Science Festival

What is it like to give birth on Mars? Can bioengineer TikTok stars win at the video game “Super Smash Brothers” while also answering questions about science? How do sheep, mouse, and human brains compare? These questions and others were asked last month when more than 50,000 visitors from across Cambridge, Massachusetts, and Greater Boston participated in the MIT Museum’s annual Cambridge Science Festival, a week-long celebration dedicated to creativity, ingenuity, and innovation. Running Monday, Sept. 23 through Sunday, Sept. 29, the 2024 edition was the largest in its history, with a dizzyingly diverse program spanning more than 300 events presented in more than 75 different venues, all free and open to the public.

Presented in partnership with the City of Cambridge and more than 250 collaborators across Greater Boston, this year’s festival comprised a wide range of interactive programs for adults, children, and families, including workshops, demos, keynote lectures, walking tours, professional networking opportunities, and expert panels. Aimed at scientists and non-scientists alike, the festival also collaborated with several local schools to offer visits from an astronaut for middle- and high-school students.

With support from dozens of local organizations, the festival was the first iteration to happen under the new leadership of Michael John Gorman, who was appointed director of the MIT Museum in January and began his position in July.

“A science festival like this has an incredible ability to unite a diverse array of people and ideas, while also showcasing Cambridge as an internationally recognized leader in science, technology, engineering, and math,” says Gorman. “I’m thrilled to have joined an institution that values producing events that foster such a strong sense of community, and was so excited to see the enthusiastic response from the tens of thousands of people who showed up and made the festival such a success.”

The 2024 Cambridge Science Festival was broad in scope, with events ranging from hands-on 3D-printing demos to concerts from the MIT Laptop Ensemble to participatory activities at the MIT Museum’s Maker Hub. This year’s programming also highlighted three carefully curated theme tracks that each encompassed more than 25 associated events:

  1. “For the Win: Games, Puzzles, and the Science of Play” (Thursday) consisted of multiple evening events clustered around Kendall Square.
  2. “Frontiers: A New Era of Space Exploration” (Friday and Saturday) featured programs throughout Boston and was co-curated by The Space Consortium, organizers of Massachusetts Space Week.
  3. “Electric Skin: Wearable Tech and the Future of Fashion” (Saturday) offered both day and evening events at the intersection of science, fabric, and fashion, taking place at The Foundry and co-curated by Boston Fashion Week and Advanced Functional Fabrics of America.

One of the discussions tied to the games-themed “For the Win” track involved artist Jeremy Couillard speaking with MIT Lecturer Mikael Jakobsson about the larger importance of games as a construct for encouraging interpersonal interaction and creating meaningful social spaces. Since this past summer, the List Visual Arts Center has hosted Couillard’s first-ever institutional solo exhibition, which centers on “Escape from Lavender Island,” a dystopian third-person, open-world exploration game he released in 2023 on the Steam video-game platform.

For the “Frontiers” space theme, one of the headlining events, “Is Anyone Out There?”, tackled the latest cutting-edge research and theories related to the potential existence of extraterrestrial life. The panel of local astronomers and astrophysicists included Sara Seager, the Class of 1941 Professor of Planetary Science, professor of physics, and professor of aeronautics and astronautics at MIT; Kim Arcand, an expert in astronomical visualization at the Harvard-Smithsonian Center for Astrophysics; and Michael Hecht, a research scientist and associate director of research management at MIT’s Haystack Observatory. The researchers spoke about the tools they and their peers use to search for extraterrestrial life, and what discovering life beyond our planet might mean for humanity.

For the “Electric Skin” fashion track, events spanned a range of topics revolving around the role that technology will play in the future of the field, including sold-out workshops where participants learned how to laser-cut and engineer “structural garments.” A panel looking at generative technologies explored how designers are using AI to spur innovation in their companies. Onur Yüce Gün, director of computational design at New Balance, also spoke on a panel with Ziyuan “Zoey” Zhu from IDEO, MIT Media Lab research scientist and architect Behnaz Farahi, and Fiorenzo Omenetto, principal investigator and director of The Tufts Silk Lab and the Frank C. Doble Professor of Engineering at Tufts University and a professor in the Biomedical Engineering Department and in the Department of Physics at Tufts.

Beyond the three themed tracks, the festival comprised an eclectic mix of interactive events and panels. Cambridge Public Library hosted a “Science Story Slam” with high-school students from 10 different states competing for $5,000 in prize money. Entrants shared 5-minute-long stories about their adventures in STEM, with topics ranging from probability to “astro-agriculture.” Judges included several MIT faculty and staff, as well as New York Times national correspondent Kate Zernike.

Elsewhere, the MIT Museum’s Gorman moderated a discussion on AI and democracy that included Audrey Tang, the former minister of digital affairs of Taiwan. The panelists explored how AI tools could combat the polarization of political discourse and increase participation in democratic processes, particularly for marginalized voices. Also in the MIT Museum, the McGovern Institute for Brain Research organized a “Decoding the Brain” event with demos involving real animal brains, while the Broad Institute of MIT and Harvard ran a “Discovery After Dark” event to commemorate the institute’s 20th anniversary. Sunday’s Science Carnival featured more than 100 demos, events, and activities, including the ever-popular “Robot Petting Zoo.”

When it first launched in 2007, the Cambridge Science Festival was by many accounts the first large-scale event of its kind across the entire United States. Similar festivals have since popped up all over the country, including the World Science Festival in New York City, the USA Science and Engineering Festival in Washington, the North Carolina Science Festival in Chapel Hill, and the San Diego Festival of Science and Engineering.

More information about the festival is available online, including opportunities to participate in next year’s events.

Brain pathways that control dopamine release may influence motor control

Within the human brain, movement is coordinated by a region called the striatum, which sends instructions to motor neurons. Those instructions are conveyed by two pathways, one that initiates movement (“go”) and one that suppresses it (“no-go”).

In a new study, MIT researchers have discovered an additional two pathways that arise in the striatum and appear to modulate the effects of the go and no-go pathways. These newly discovered pathways connect to dopamine-producing neurons in the brain — one stimulates dopamine release and the other inhibits it.

By controlling the amount of dopamine in the brain via clusters of neurons known as striosomes, these pathways appear to modify the instructions given by the go and no-go pathways. They may be especially involved in influencing decisions that have a strong emotional component, the researchers say.

“Among all the regions of the striatum, the striosomes alone turned out to be able to project to the dopamine-containing neurons, which we think has something to do with motivation, mood, and controlling movement,” says Ann Graybiel, an MIT Institute Professor, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.

Iakovos Lazaridis, a research scientist at the McGovern Institute, is the lead author of the paper, which appears today in the journal Current Biology.

New pathways

Graybiel has spent much of her career studying the striatum, a structure located deep within the brain that is involved in learning and decision-making, as well as control of movement.

Within the striatum, neurons are arranged in a labyrinth-like structure that includes striosomes, which Graybiel discovered in the 1970s. The classical go and no-go pathways arise from neurons that surround the striosomes, which are known collectively as the matrix. The matrix cells that give rise to these pathways receive input from sensory processing regions such as the visual cortex and auditory cortex. Then, they send go or no-go commands to neurons in the motor cortex.

However, the function of the striosomes, which are not part of those pathways, remained unknown. For many years, researchers in Graybiel’s lab have been trying to solve that mystery.

Their previous work revealed that striosomes receive much of their input from parts of the brain that process emotion. Within striosomes, there are two major types of neurons, classified as D1 and D2. In a 2015 study, Graybiel found that one of these cell types, D1, sends input to the substantia nigra, which is the brain’s major dopamine-producing center.

It took much longer to trace the output of the other set, D2 neurons. In the new Current Biology study, the researchers discovered that those neurons also eventually project to the substantia nigra, but first they connect to a set of neurons in the globus pallidus, which inhibits dopamine output. This pathway, an indirect connection to the substantia nigra, reduces the brain’s dopamine output and inhibits movement.

The researchers also confirmed their earlier finding that the pathway arising from D1 striosomes connects directly to the substantia nigra, stimulating dopamine release and initiating movement.

“In the striosomes, we’ve found what is probably a mimic of the classical go/no-go pathways,” Graybiel says. “They’re like classic motor go/no-go pathways, but they don’t go to the motor output neurons of the basal ganglia. Instead, they go to the dopamine cells, which are so important to movement and motivation.”

Emotional decisions

The findings suggest that the classical model of how the striatum controls movement needs to be modified to include the role of these newly identified pathways. The researchers now hope to test their hypothesis that input related to motivation and emotion, which enters the striosomes from the cortex and the limbic system, influences dopamine levels in a way that can encourage or discourage action.

That dopamine release may be especially relevant for actions that induce anxiety or stress. In their 2015 study, Graybiel’s lab found that striosomes play a key role in making decisions that provoke high levels of anxiety, particularly those that are high risk but may also have a big payoff.

“Ann Graybiel and colleagues have earlier found that the striosome is concerned with inhibiting dopamine neurons. Now they show unexpectedly that another type of striosomal neuron exerts the opposite effect and can signal reward. The striosomes can thus both up- or down-regulate dopamine activity, a very important discovery. Clearly, the regulation of dopamine activity is critical in our everyday life with regard to both movements and mood, to which the striosomes contribute,” says Sten Grillner, a professor of neuroscience at the Karolinska Institute in Sweden, who was not involved in the research.

Another possibility the researchers plan to explore is whether striosomes and matrix cells are arranged in modules that affect motor control of specific parts of the body.

“The next step is trying to isolate some of these modules, and by simultaneously working with cells that belong to the same module, whether they are in the matrix or striosomes, try to pinpoint how the striosomes modulate the underlying function of each of these modules,” Lazaridis says.

They also hope to explore how the striosomal circuits, which project to the same region of the brain that is ravaged by Parkinson’s disease, may influence that disorder.

The research was funded by the National Institutes of Health, the Saks-Kavanaugh Foundation, the William N. and Bernice E. Bumpus Foundation, Jim and Joan Schattinger, the Hock E. Tan and K. Lisa Yang Center for Autism Research, Robert Buxton, the Simons Foundation, the CHDI Foundation, and an Ellen Schapiro and Gerald Axelbaum Investigator BBRF Young Investigator Grant.

Seven with MIT ties elected to National Academy of Medicine for 2024

The National Academy of Medicine recently announced the election of more than 90 members during its annual meeting, including MIT faculty members Matthew Vander Heiden and Fan Wang, along with five MIT alumni.

Election to the National Academy of Medicine (NAM) is considered one of the highest honors in the fields of health and medicine and recognizes individuals who have demonstrated outstanding professional achievement and commitment to service.

Matthew Vander Heiden is the director of the Koch Institute for Integrative Cancer Research at MIT, a Lester Wolfe Professor of Molecular Biology, and a member of the Broad Institute of MIT and Harvard. His research explores how cancer cells reprogram their metabolism to fuel tumor growth and has provided key insights into metabolic pathways that support cancer progression, with implications for developing new therapeutic strategies. The National Academy of Medicine recognized Vander Heiden for his contributions to “the development of approved therapies for cancer and anemia” and his role as a “thought leader in understanding metabolic phenotypes and their relations to disease pathogenesis.”

Vander Heiden earned his MD and PhD from the University of Chicago and completed his clinical training in internal medicine and medical oncology at the Brigham and Women’s Hospital and the Dana-Farber Cancer Institute. After postdoctoral research at Harvard Medical School, Vander Heiden joined the faculty of the MIT Department of Biology and the Koch Institute in 2010. He is also a practicing oncologist and instructor in medicine at Dana-Farber Cancer Institute and Harvard Medical School.

Fan Wang is a professor of brain and cognitive sciences, an investigator at the McGovern Institute, and director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT. Wang’s research focuses on the neural circuits governing the bidirectional interactions between the brain and body. She is specifically interested in the circuits that control the sensory and emotional aspects of pain and addiction, as well as the sensory and motor circuits that work together to execute behaviors such as eating, drinking, and moving. The National Academy of Medicine has recognized her body of work for “providing the foundational knowledge to develop new therapies to treat chronic pain and movement disorders.”

Before coming to MIT in 2021, Wang obtained her PhD from Columbia University and received her postdoctoral training at the University of California at San Francisco and Stanford University. She became a faculty member at Duke University in 2003 and was later appointed the Morris N. Broad Professor of Neurobiology. Wang is also a member of the American Academy of Arts and Sciences, and she continues to make important contributions to the neural mechanisms underlying general anesthesia, pain perception, and movement control.

MIT alumni who were elected to the NAM for 2024 include:

  • Leemore Dafny PhD ’01 (Economics);
  • David Huang ’85 MS ’89 (Electrical Engineering and Computer Science) PhD ’93 (Medical Engineering and Medical Physics);
  • Nola M. Hylton ’79 (Chemical Engineering);
  • Mark R. Prausnitz PhD ’94 (Chemical Engineering); and
  • Konstantina M. Stankovic ’92 (Biology and Physics) PhD ’98 (Speech and Hearing Bioscience and Technology)

Established originally as the Institute of Medicine in 1970 by the National Academy of Sciences, the National Academy of Medicine addresses critical issues in health, science, medicine, and related policy and inspires positive actions across sectors.

“This class of new members represents the most exceptional researchers and leaders in health and medicine, who have made significant breakthroughs, led the response to major public health challenges, and advanced health equity,” said National Academy of Medicine President Victor J. Dzau. “Their expertise will be necessary to supporting NAM’s work to address the pressing health and scientific challenges we face today.”

Model reveals why debunking election misinformation often doesn’t work

When an election result is disputed, people who are skeptical about the outcome may be swayed by figures of authority who come down on one side or the other. Those figures can be independent monitors, political figures, or news organizations. However, these “debunking” efforts don’t always have the desired effect, and in some cases, they can lead people to cling more tightly to their original position.

Neuroscientists and political scientists at MIT and the University of California at Berkeley have now created a computational model that analyzes the factors that help to determine whether debunking efforts will persuade people to change their beliefs about the legitimacy of an election. Their findings suggest that while debunking fails much of the time, it can be successful under the right conditions.

For instance, the model showed that successful debunking is more likely if people are less certain of their original beliefs and if they believe the authority is unbiased or strongly motivated by a desire for accuracy. It also helps when an authority comes out in support of a result that goes against a bias they are perceived to hold: for example, Fox News declaring that Joseph R. Biden had won in Arizona in the 2020 U.S. presidential election.

“When people see an act of debunking, they treat it as a human action and understand it the way they understand human actions — that is, as something somebody did for their own reasons,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study. “We’ve used a very simple, general model of how people understand other people’s actions, and found that that’s all you need to describe this complex phenomenon.”

The findings could have implications as the United States prepares for the presidential election taking place on Nov. 5, as they help to reveal the conditions that would be most likely to result in people accepting the election outcome.

MIT graduate student Setayesh Radkani is the lead author of the paper, which appears today in a special election-themed issue of the journal PNAS Nexus. Marika Landau-Wells PhD ’18, a former MIT postdoc who is now an assistant professor of political science at the University of California at Berkeley, is also an author of the study.

Modeling motivation

In their work on election debunking, the MIT team took a novel approach, building on Saxe’s extensive work studying “theory of mind” — how people think about the thoughts and motivations of other people.

As part of her PhD thesis, Radkani has been developing a computational model of the cognitive processes that occur when people see others being punished by an authority. Not everyone interprets punitive actions the same way, depending on their previous beliefs about the action and the authority. Some may see the authority as acting legitimately to punish an act that was wrong, while others may see an authority overreaching to issue an unjust punishment.

Last year, after participating in an MIT workshop on the topic of polarization in societies, Saxe and Radkani had the idea to apply the model to how people react to an authority attempting to sway their political beliefs. They enlisted Landau-Wells, who received her PhD in political science before working as a postdoc in Saxe’s lab, to join their effort, and Landau-Wells suggested applying the model to debunking of beliefs regarding the legitimacy of an election result.

The computational model created by Radkani is based on Bayesian inference, which allows the model to continually update its predictions of people’s beliefs as they receive new information. This approach treats debunking as an action that a person undertakes for his or her own reasons. People who observe the authority’s statement then make their own interpretation of why the person said what they did. Based on that interpretation, people may or may not change their own beliefs about the election result.

Additionally, the model does not assume that any beliefs are necessarily incorrect or that any group of people is acting irrationally.

“The only assumption that we made is that there are two groups in the society that differ in their perspectives about a topic: One of them thinks that the election was stolen and the other group doesn’t,” Radkani says. “Other than that, these groups are similar. They share their beliefs about the authority — what the different motives of the authority are and how motivated the authority is by each of those motives.”

The researchers modeled more than 200 different scenarios in which an authority attempts to debunk a belief held by one group regarding the validity of an election outcome.

Each time they ran the model, the researchers altered the certainty levels of each group’s original beliefs, and they also varied the groups’ perceptions of the motivations of the authority. In some cases, groups believed the authority was motivated by promoting accuracy, and in others they did not. The researchers also altered the groups’ perceptions of whether the authority was biased toward a particular viewpoint, and how strongly the groups believed in those perceptions.

Building consensus

In each scenario, the researchers used the model to predict how each group would respond to a series of five statements made by an authority trying to convince them that the election had been legitimate. The researchers found that in most of the scenarios they looked at, beliefs remained polarized and in some cases became even further polarized. This polarization could also extend to new topics unrelated to the original context of the election, the researchers found.

However, under some circumstances, the debunking was successful, and beliefs converged on an accepted outcome. This was more likely to happen when people were initially more uncertain about their original beliefs.

“When people are very, very certain, they become hard to move. So, in essence, a lot of this authority debunking doesn’t matter,” Landau-Wells says. “However, there are a lot of people who are in this uncertain band. They have doubts, but they don’t have firm beliefs. One of the lessons from this paper is that we’re in a space where the model says you can affect people’s beliefs and move them towards true things.”

Another factor that can lead to belief convergence is if people believe that the authority is unbiased and highly motivated by accuracy. Even more persuasive is when an authority makes a claim that goes against their perceived bias — for instance, Republican governors stating that elections in their states had been fair even though the Democratic candidate won.

As the 2024 presidential election approaches, grassroots efforts have been made to train nonpartisan election observers who can vouch for whether an election was legitimate. These types of organizations may be well-positioned to help sway people who might have doubts about the election’s legitimacy, the researchers say.

“They’re trying to train people to be independent, unbiased, and committed to the truth of the outcome more than anything else. Those are the types of entities that you want. We want them to succeed in being seen as independent. We want them to succeed in being seen as truthful, because in this space of uncertainty, those are the voices that can move people toward an accurate outcome,” Landau-Wells says.

The research was funded, in part, by the Patrick J. McGovern Foundation and the Guggenheim Foundation.