For healthy hearing, timing matters

When soundwaves reach the inner ear, neurons there pick up the vibrations and alert the brain. Encoded in their signals is a wealth of information that enables us to follow conversations, recognize familiar voices, appreciate music, and quickly locate a ringing phone or crying baby.

McGovern Institute Associate Investigator Josh McDermott. Photo: Justin Knight

Neurons send signals by emitting spikes, also known as action potentials—brief changes in voltage that propagate along nerve fibers. Remarkably, auditory neurons can fire hundreds of spikes per second, and time their spikes with exquisite precision to match the oscillations of incoming soundwaves.

With powerful new models of human hearing, scientists at MIT’s McGovern Institute have determined that this precise timing is vital for some of the most important ways we make sense of auditory information, including recognizing voices and localizing sounds.

The findings, reported December 4, 2024, in the journal Nature Communications, show how machine learning can help neuroscientists understand how the brain uses auditory information in the real world. McGovern Investigator Josh McDermott, who led the research, explains that his team’s models better equip researchers to study the consequences of different types of hearing impairment and devise more effective interventions.

Science of sound

The nervous system’s auditory signals are timed so precisely that researchers have long suspected timing is important to our perception of sound. Soundwaves oscillate at rates that determine their pitch: low-pitched sounds travel in slow waves, whereas high-pitched sound waves oscillate more frequently. The auditory nerve, which relays information from sound-detecting hair cells in the ear to the brain, generates electrical spikes that correspond to the frequency of these oscillations. “The action potentials in an auditory nerve get fired at very particular points in time relative to the peaks in the stimulus waveform,” explains McDermott, who is also an associate professor of brain and cognitive sciences at MIT.
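
In code, phase-locking can be sketched in a few lines of Python (a toy illustration with made-up parameters, not the researchers' model): a simulated nerve fiber fires on only some cycles of a tone, yet every spike lands at the same point in the cycle, so the spike times themselves carry the stimulus frequency.

    import numpy as np

    def phase_locked_spikes(freq_hz=220.0, duration_s=0.5, spike_prob=0.3, seed=0):
        """Simulate spike times locked to the peaks of a sine-wave stimulus."""
        rng = np.random.default_rng(seed)
        n_cycles = int(duration_s * freq_hz)
        # One candidate spike per cycle, at the positive peak (quarter period).
        peak_times = (np.arange(n_cycles) + 0.25) / freq_hz
        # The fiber fires on only a fraction of cycles...
        fired = rng.random(n_cycles) < spike_prob
        return peak_times[fired]

    spikes = phase_locked_spikes()
    # ...but every spike sits at the same phase, so the intervals between
    # spikes are integer multiples of the 1/220 s stimulus period.
    print(np.unique(np.round(np.diff(spikes) * 220.0)))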

This relationship, known as phase-locking, requires neurons to time their spikes with sub-millisecond precision. But scientists haven’t really known how informative these temporal patterns are to the brain. Beyond being scientifically intriguing, McDermott says, the question has important clinical implications: “If you want to design a prosthesis that provides electrical signals to the brain to reproduce the function of the ear, it’s arguably pretty important to know what kinds of information in the normal ear actually matter,” he says.

This has been difficult to study experimentally: Animal models can’t offer much insight into how the human brain extracts structure in language or music, and the auditory nerve is inaccessible for study in humans. So McDermott and graduate student Mark Saddler turned to artificial neural networks.

Artificial hearing

Neuroscientists have long used computational models to explore how sensory information might be decoded by the brain, but until recent advances in computing power and machine learning methods, these models were limited to simulating simple tasks. “One of the problems with these prior models is that they’re often way too good,” says Saddler, who is now at the Technical University of Denmark. For example, a computational model tasked with identifying the higher pitch in a pair of simple tones is likely to perform better than people who are asked to do the same thing. “This is not the kind of task that we do every day in hearing,” Saddler points out. “The brain is not optimized to solve this very artificial task.” This mismatch limited the insights that could be drawn from this prior generation of models.

To better understand the brain, Saddler and McDermott wanted to challenge a hearing model to do things that people use their hearing for in the real world, like recognizing words and voices. That meant developing an artificial neural network to simulate the parts of the brain that receive input from the ear. The network was given input from some 32,000 simulated sound-detecting sensory neurons and then optimized for various real-world tasks.

The researchers showed that their model replicated human hearing well—better than any previous model of auditory behavior, McDermott says. In one test, the artificial neural network was asked to recognize words and voices within dozens of types of background noise, from the hum of an airplane cabin to enthusiastic applause. Under every condition, the model performed very similarly to humans.

“The ability to link patterns of firing in the auditory nerve with behavior opens a lot of doors.” – Josh McDermott

When the team degraded the timing of the spikes in the simulated ear, however, their model could no longer match humans’ ability to recognize voices or identify the locations of sounds. For example, while McDermott’s team had previously shown that people use pitch to help them identify people’s voices, the model revealed that this ability is lost without precisely timed signals. “You need quite precise spike timing in order to both account for human behavior and to perform well on the task,” Saddler says. That suggests that the brain uses precisely timed auditory signals because they aid these practical aspects of hearing.
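
The manipulation described here can be pictured as temporal jitter. In this minimal sketch (assumed Gaussian jitter and made-up spike times, not the study's actual code), the spike count, and hence the firing rate, is untouched, while sub-millisecond precision is destroyed.

    import numpy as np

    def jitter_spikes(spike_times_s, jitter_ms, seed=1):
        """Add Gaussian noise to spike times: rate preserved, timing degraded."""
        rng = np.random.default_rng(seed)
        noise_s = rng.normal(0.0, jitter_ms / 1000.0, size=len(spike_times_s))
        return np.sort(spike_times_s + noise_s)

    period_s = 1.0 / 220.0                     # spikes locked to a 220 Hz tone
    precise = np.arange(100) * period_s
    for jitter_ms in (0.1, 1.0, 10.0):
        degraded = jitter_spikes(precise, jitter_ms)
        # Timing error (in stimulus periods) grows with the jitter, while
        # the number of spikes, and so the firing rate, stays the same.
        err = np.std((degraded - precise) / period_s)
        print(f"{jitter_ms:>4} ms jitter: {len(degraded)} spikes, "
              f"timing error = {err:.2f} periods")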

The team’s findings demonstrate how artificial neural networks can help neuroscientists understand how the information extracted by the ear influences our perception of the world, both when hearing is intact and when it is impaired. “The ability to link patterns of firing in the auditory nerve with behavior opens a lot of doors,” McDermott says.

“Now that we have these models that link neural responses in the ear to auditory behavior, we can ask, ‘If we simulate different types of hearing loss, what effect is that going to have on our auditory abilities?’” McDermott says. “That will help us better diagnose hearing loss, and we think there are also extensions of that to help us design better hearing aids or cochlear implants.” For example, he says, “The cochlear implant is limited in various ways—it can do some things and not others. What’s the best way to set up that cochlear implant to enable you to mediate behaviors? You can, in principle, use the models to tell you that.”

Personal interests can influence how children’s brains respond to language

A new study from the McGovern Institute shows how interests can modulate language processing in children’s brains and paves the way for personalized brain research.

The study, which appears in Imaging Neuroscience, was conducted in the lab of McGovern Institute Investigator John Gabrieli and led by senior author Anila D’Mello, a former McGovern postdoctoral fellow and current assistant professor at the University of Texas Southwestern Medical Center and the University of Texas at Dallas.

“Traditional studies give subjects identical stimuli to avoid confounding the results,” says Gabrieli, who is also the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT.

“However, our research tailored stimuli to each child’s interest, eliciting stronger—and more consistent—activity patterns in the brain’s language regions across individuals.” – John Gabrieli

Funded by the Hock E. Tan and K. Lisa Yang Center for Autism Research in MIT’s Yang Tan Collective, this work unveils a new paradigm that challenges current methods and shows how personalization can be a powerful strategy in neuroscience. The paper’s co-first authors are Halie Olson, a postdoctoral associate at the McGovern Institute, and Kristina Johnson, an assistant professor at Northeastern University and former doctoral student at the MIT Media Lab. “Our research integrates participants’ lived experiences into the study design,” says Johnson. “This approach not only enhances the validity of our findings but also captures the diversity of individual perspectives, often overlooked in traditional research.”

Taking interest into account

When it comes to language, our interests are like operators behind the switchboard. They guide what we talk about and who we talk to. Research suggests that interests are also potent motivators and can help improve language skills. For instance, children score higher on reading tests when the material covers topics that are interesting to them.

But neuroscience has shied away from using personal interests to study the brain, especially in the realm of language. This is mainly because interests, which vary between people, could throw a wrench into experimental control—a core principle that drives scientists to limit factors that can muddle the results.

Gabrieli, D’Mello, Olson, and Johnson ventured into this unexplored territory. The team wondered if tailoring language stimuli to children’s interests might lead to higher responses in language regions of the brain. “Our study is unique in its approach to control the kind of brain activity our experiments yield, rather than control the stimuli we give subjects,” says D’Mello. “This stands in stark contrast to most neuroimaging studies that control the stimuli but might introduce differences in each subject’s level of interest in the material.”

Researchers Halie Olson (left), Kristina Johnson (center), and Anila D’Mello (right). Photo: Caitlin Cunningham

In their recent study, the authors recruited a cohort of 20 children to investigate how personal interests affected the way the brain processes language. Caregivers described their child’s interests to the researchers, spanning baseball, train lines, Minecraft, and musicals. During the study, children listened to audio stories tuned to their unique interests. They were also presented with audio stories about nature (this was not an interest among the children) for comparison. To capture brain activity patterns, the team used functional magnetic resonance imaging (fMRI), which measures changes in blood flow caused by underlying neural activity.

New insights into the brain

“We found that, when children listened to stories about topics they were really interested in, they showed stronger neural responses in language areas than when they listened to generic stories that weren’t tailored to their interests,” says Olson. “Not only does this tell us how interests affect the brain, but it also shows that personalizing our experimental stimuli can have a profound impact on neuroimaging results.”

The researchers noticed a particularly striking result. “Even though the children listened to completely different stories, their brain activation patterns were more overlapping with their peers when they listened to idiosyncratic stories compared to when they listened to the same generic stories about nature,” says D’Mello. This, she notes, points to how interests can boost both the magnitude and consistency of signals in language regions across subjects without changing how these areas communicate with each other.


Individual activation maps from three participants showing increased engagement of language regions for personally interesting versus generic narratives. Image courtesy of the researchers.

Gabrieli noted another finding: “In addition to the stronger engagement of language regions for content of interest, there was also stronger activation in brain regions associated with reward and also with self-reflection.” Personal interests are individually relevant and can be rewarding, potentially driving higher activation in these regions during personalized stories.

These personalized paradigms might be particularly well-suited to studies of the brain in unique or neurodivergent populations. Indeed, the team is already applying these methods to study language in the brains of autistic children.

This study breaks new ground in neuroscience and serves as a prototype for future work that personalizes research to unearth further knowledge of the brain. In doing so, scientists can compile a more complete understanding of the type of information that is processed by specific brain circuits and more fully grasp complex functions such as language.

3 Questions: Claire Wang on training the brain for memory sports

On Nov. 10, some of the country’s top memorizers converged on MIT’s Kresge Auditorium to compete in a “Tournament of Memory Champions” in front of a live audience.

The competition was split into four events: long-term memory, words-to-remember, auditory memory, and double-deck of cards, in which competitors must memorize the exact order of two decks of cards. In between the events, MIT faculty who are experts in the science of memory provided short talks and demos about memory and how to improve it. Among the competitors was MIT’s own Claire Wang, a sophomore majoring in electrical engineering and computer science. Wang has competed in memory sports for years, a hobby that has taken her around the world to learn from some of the best mnemonists on the planet. At the tournament, she tied for first place in the words-to-remember competition.

The event commemorated the 25th anniversary of the USA Memory Championship Organization (USAMC). USAMC sponsored the event in partnership with MIT’s McGovern Institute for Brain Research, the Department of Brain and Cognitive Sciences, the MIT Quest for Intelligence, and the company Lumosity.

MIT News sat down with Wang to learn more about her experience with memory competitions — and see if she had any advice for those of us with less-than-amazing memory skills.

Q: How did you come to get involved in memory competitions?

A: When I was in middle school, I read the book “Moonwalking with Einstein,” which is about a journalist’s journey from average memory to being named memory champion in 2006. My parents were also obsessed with this TV show where people were memorizing decks of cards and performing other feats of memory. I had already known about the concept of “memory palaces,” so I was inspired to explore memory sports. Somehow, I convinced my parents to let me take a gap year after seventh grade, and I traveled the world going to competitions and learning from memory grandmasters. I got to know the community in that time and I got to build my memory system, which was really fun. I did far fewer of those competitions after that year, apart from some subsequent USA memory competitions, but it’s still fun to have this ability.

Q: What was the Tournament of Memory Champions like?

A: USAMC invited a lot of winners from previous years to compete, which was really cool. It was nice seeing a lot of people I haven’t seen in years. I didn’t compete in every event because I was too busy to do the long-term memory event, which requires two weeks of memorization work. But it was a really cool experience. I helped a bit with the brainstorming beforehand because I know one of the professors running it. We thought about how to give the talks and structure the event.

Then I competed in the words event, where they give you 300 words over 15 minutes and the competitors have to recall each one in order in a round-robin competition. You get two strikes. A lot of other competitions just make you write the words down. The round robin makes it more fun for people to watch. I tied with someone else — I made a dumb mistake — so I was kind of sad in hindsight, but being tied for first is still great.

Since I hadn’t done this in a while (and I was coming back from a trip where I didn’t get much sleep), I was a bit nervous that my brain wouldn’t be able to remember anything, and I was pleasantly surprised I didn’t just blank on stage. Also, since I hadn’t done this in a while, a lot of my loci and memory palaces were forgotten, so I had to speed-review them before the competition. The words event doesn’t get easier over time — it’s just 300 random words (which could range from “disappointment” to “chair”) and you just have to remember the order.

Q: What is your approach to improving memory?

A: The whole idea is that we memorize images, feelings, and emotions much better than numbers or random words. The way it works in practice is we make an ordered set of locations in a “memory palace.” The palace could be anything. It could be a campus or a classroom or a part of a room, but you imagine yourself walking through this space, so there’s a specific order to it, and in every location I place certain information. This is information related to what I’m trying to remember. I have pictures I associate with words and I have specific images I correlate with numbers. Once you have a correlated image system, all you need to remember is a story, and then when you recall, you translate that back to the original information.

Doing memory sports really helps you with visualization, and being able to visualize things faster and better helps you remember things better. You start remembering with spaced repetition that you can talk yourself through. Allowing things to have an emotional connection is also important, because you remember emotions better. Doing memory competitions made me want to study neuroscience and computer science at MIT.

The specific memory sports techniques are not as useful in everyday life as you’d think, because a lot of the information we learn is more operative and requires intuitive understanding, but I do think they help in some ways. First, sometimes you have to initially remember things before you can develop a strong intuition later. Also, since I have to get really good at telling a lot of stories over time, I have gotten great at visualization and manipulating objects in my mind, which helps a lot.
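
Wang's palace walk can be captured in a few lines of Python (a toy illustration with hypothetical loci and images, not her actual system): an ordered list of locations, a table of word-image associations, and a walk that reads them back in order.

    # A toy memory palace: hypothetical loci and images, not Wang's system.
    loci = ["front door", "hallway mirror", "kitchen table", "window sill"]
    images = {"disappointment": "a deflated balloon",
              "chair": "a throne made of pencils"}

    def place_words(words):
        """Put each word's image at the next location along the walk."""
        return [(locus, images.get(word, word)) for locus, word in zip(loci, words)]

    def recall(palace):
        """Walking the loci in order reproduces the original word order."""
        return [image for _locus, image in palace]

    palace = place_words(["disappointment", "chair"])
    print(recall(palace))  # ['a deflated balloon', 'a throne made of pencils']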

Season’s Greetings from the McGovern Institute

For this year’s holiday greeting, we asked the McGovern Institute community what comes to mind when they think of the winter holidays. More than 100 words were submitted for the project. The words were fed into ChatGPT to generate our holiday “prediction.” And a text-to-music generator (Udio) converted the words into a holiday song.

With special thanks to Jarrod Hicks and Jamal Williams from the McDermott lab for the inspiration…and to AI for pushing the boundaries of science and imagination.

Video credits:
Jacob Pryor (animation)
JR Narrows, Space Lute (sound design)

Revisiting reinforcement learning

MIT Institute Professor Ann Graybiel. Photo: Justin Knight

Dopamine is a powerful signal in the brain, influencing our moods, motivations, movements, and more. The neurotransmitter is crucial for reward-based learning, a function that may be disrupted in a number of psychiatric conditions, from mood disorders to addiction. Now, researchers led by Ann Graybiel, an investigator at MIT’s McGovern Institute, have found surprising patterns of dopamine signaling that suggest neuroscientists may need to refine their model of how reinforcement learning occurs in the brain. The team’s findings were published October 14, 2024, in the journal Nature Communications.

Dopamine plays a critical role in teaching people and other animals about the cues and behaviors that portend both positive and negative outcomes; the classic example of this type of learning is the dog that Ivan Pavlov trained to anticipate food at the sound of a bell. Graybiel explains that according to the standard model of reinforcement learning, when an animal is exposed to a cue paired with a reward, dopamine-producing cells initially fire in response to the reward. As animals learn the association between the cue and the reward, the timing of dopamine release shifts, so it becomes associated with the cue instead of the reward itself.
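
This standard account is usually formalized as temporal-difference (TD) learning, in which dopamine is modeled as a reward-prediction error. The minimal Python sketch below (illustrative parameters, not the team's model) shows that error migrating from the reward to the cue over training, the transition Graybiel's experiments put to the test.

    import numpy as np

    # TD(0) on a Pavlovian trial: cue at step 0, reward at the final step.
    T, alpha = 5, 0.2            # steps per trial and learning rate (gamma = 1)
    V = np.zeros(T + 1)          # V[t]: predicted future reward; V[T] = 0

    for trial in range(300):
        # Error at cue onset: the cue arrives unpredictably from baseline,
        # so the dopamine-like signal there equals the learned cue value V[0].
        cue_response = V[0]
        delta = np.zeros(T)      # per-step prediction error ~ dopamine
        for t in range(T):
            r = 1.0 if t == T - 1 else 0.0
            delta[t] = r + V[t + 1] - V[t]
            V[t] += alpha * delta[t]
        if trial in (0, 299):
            print(f"trial {trial:3d}: error at cue = {cue_response:.2f}, "
                  f"error at reward = {delta[-1]:.2f}")

    # Early trials: the error appears at the reward (0.00 at cue, 1.00 at
    # reward). Late trials: it has migrated to the cue (~1.00) and vanished
    # at the reward (~0.00), exactly as the standard model predicts.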

But with new tools enabling more detailed analyses of when and where dopamine is released in the brain, Graybiel’s team is finding that this model doesn’t completely hold up. The group started picking up clues that the field’s model of reinforcement learning was incomplete more than ten years ago, when Mark Howe, a graduate student in the lab, noticed that the dopamine signals associated with reward were released not in a sudden burst the moment a reward was obtained, but instead before that, building gradually as a rat got closer to its treat. Dopamine might actually be communicating to the rest of the brain the proximity of the reward, they reasoned. “That didn’t fit at all with the standard, canonical model,” Graybiel says.

Dopamine dynamics

As other neuroscientists considered how a model of reinforcement learning could take those findings into account, Graybiel and postdoctoral researcher Min Jung Kim decided it was time to take a closer look at dopamine dynamics.

“We thought, let’s go back to the most basic kind of experiment and start all over again,” Graybiel says.

That meant using sensitive new dopamine sensors to track the neurotransmitter’s release in the brains of mice as they learned to associate a blue light with a satisfying sip of water. The team focused its attention on the striatum, a region within the brain’s basal ganglia, where neurons use dopamine to influence neural circuits involved in a variety of processes, including reward-based learning.

The researchers found that the timing of dopamine release varied in different parts of the striatum. But nowhere did Graybiel’s team find a transition in dopamine release timing from the time of the reward to the time of the cue—the key transition predicted by the standard model of reinforcement learning.

In the team’s simplest experiments, where every time a mouse saw a light it was paired with a reward, the lateral part of the striatum reliably released dopamine when animals were given their water. This strong response to the reward never diminished, even as the mice learned to expect the reward when they saw a light. In the medial part of the striatum, in contrast, dopamine was never released at the time of the reward. Cells there always fired when a mouse saw the light, even early in the learning process. This was puzzling, Graybiel says, because at the beginning of learning, dopamine would have been predicted to respond to the reward itself.

The patterns of dopamine release became even more unexpected when Graybiel’s team introduced a second light into its experimental setup. The new light, in a different position than the first, did not signal a reward. Mice watched as either light was given as the cue, one at a time, with water accompanying only the original cue.

In these experiments, when the mice saw the reward-associated light, dopamine release went up in the centromedial striatum and, surprisingly, stayed up until the reward was delivered. In the lateral part of the region, dopamine release also showed a sustained period in which signaling plateaued.

Graybiel says she was surprised to see how much dopamine responses changed when the experimenters introduced the second light. The responses to the rewarded light were different when the other light could be shown in other trials, even though the mice saw only one light at a time. “There must be a cognitive aspect to this that comes into play,” she says. “The brain wants to hold onto the information that the cue has come on for a while.” Cells in the striatum seem to achieve this through the sustained dopamine release that continued during the brief delay between the light and the reward in the team’s experiments. Indeed, Graybiel says, while this kind of sustained dopamine release has not previously been linked to reinforcement learning, it is reminiscent of sustained signaling that has been tied to working memory in other parts of the brain.

Reinforcement learning, reconsidered

Ultimately, Graybiel says, “many of our results didn’t fit reinforcement learning models as traditionally—and by now canonically—considered.” That suggests neuroscientists’ understanding of this process will need to evolve as part of the field’s deepening understanding of the brain. “But this is just one step to help us all refine our understanding and to have reformulations of the models of how basal ganglia influence movement and thought and emotion. These reformulations will have to include surprises about the reinforcement learning system vis-à-vis these plateaus, but they could possibly give us insight into how a single experience can linger in this reinforcement-related part of our brains,” she says.

This study was funded by the National Institutes of Health, the William N. & Bernice E. Bumpus Foundation, the Saks Kavanaugh Foundation, the CHDI Foundation, Joan and Jim Schattinger, and Lisa Yang.

Four from MIT named 2025 Rhodes Scholars

Yiming Chen ’24, Wilhem Hector, Anushka Nair, and David Oluigbo have been selected as 2025 Rhodes Scholars and will begin fully funded postgraduate studies at Oxford University in the U.K. next fall. In addition to MIT’s two U.S. Rhodes winners, Oluigbo and Nair, two affiliates were awarded international Rhodes Scholarships: Chen for Rhodes’ China constituency and Hector for the Global Rhodes Scholarship. Hector is the first Haitian citizen to be named a Rhodes Scholar.

The scholars were supported by Associate Dean Kim Benard and the Distinguished Fellowships team in Career Advising and Professional Development. They received additional mentorship and guidance from the Presidential Committee on Distinguished Fellowships.

“It is profoundly inspiring to work with our amazing students, who have accomplished so much at MIT and, at the same time, thought deeply about how they can have an impact in solving the world’s major challenges,” says Professor Nancy Kanwisher, who co-chairs the committee along with Professor Tom Levenson. “These students have worked hard to develop and articulate their vision and to learn to communicate it to others with passion, clarity, and confidence. We are thrilled but not surprised to see so many of them recognized this year as finalists and as winners.”

Yiming Chen ’24

Yiming Chen, from Beijing, China, and the Washington area, was named one of four Rhodes China Scholars on Sept. 28. At Oxford, she will pursue graduate studies in engineering science, working toward her ongoing goal of advancing AI safety and reliability in clinical workflows.

Chen graduated from MIT in 2024 with a BS in mathematics and computer science and an MEng in computer science. She worked on several projects involving machine learning for health care, and focused her master’s research on medical imaging in the Medical Vision Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

Collaborating with IBM Research, Chen developed a neural framework for clinical-grade lumen segmentation in intravascular ultrasound and presented her findings at the MICCAI Machine Learning in Medical Imaging conference. Additionally, she worked at Cleanlab, an MIT-founded startup, creating an open-source library to ensure the integrity of image datasets used in vision tasks.

Chen was a teaching assistant in the MIT math and electrical engineering and computer science departments, and received a teaching excellence award. She taught high school students at the Hampshire College Summer Studies in Math and was selected to participate in MISTI Global Teaching Labs in Italy.

Having studied the guzheng, a traditional Chinese instrument, since age 4, Chen served as president of the MIT Chinese Music Ensemble, explored Eastern and Western music synergies with the MIT Chamber Music Society, and performed at the United Nations. On campus, she was also active with the Asymptones a cappella group, MIT Ring Committee, Ribotones, Figure Skating Club, and the Undergraduate Association Innovation Committee.

Wilhem Hector

Wilhem Hector, a senior from Port-au-Prince, Haiti, majoring in mechanical engineering, was awarded a Global Rhodes Scholarship on Nov. 1. The first Haitian national to be named a Rhodes Scholar, Hector will pursue a master’s in energy systems at Oxford, followed by a master’s in education, focusing on digital and social change. His long-term goals are twofold: pioneering Haiti’s renewable energy infrastructure and expanding hands-on opportunities in the country’s national curriculum.

Hector developed his passion for energy through his research in the MIT Howland Lab, where he investigated the uncertainty of wind power production during active yaw control. He also helped launch the MIT Renewable Energy Clinic through his work on the sources of opposition to energy projects in the U.S. Beyond his research, Hector made notable contributions as an intern at Radia Inc. and DTU Wind Energy Systems, where he helped develop computational wind farm modeling and simulation techniques.

Outside of MIT, he leads the Hector Foundation, a nonprofit providing educational opportunities to young people in Haiti. He has raised over $80,000 in the past five years to finance its initiatives, including the construction of Project Manus, Haiti’s first open-use engineering makerspace. Hector’s service endeavors have been supported by the MIT PKG Center, which awarded him the Davis Peace Prize, the PKG Fellowship for Social Impact, and the PKG Award for Public Service.

Hector co-chairs both the Student Events Board and the Class of 2025 Senior Ball Committee and has served as the social chair for Chocolate City and the African Students Association.

Anushka Nair

Anushka Nair, from Portland, Oregon, will graduate next spring with BS and MEng degrees in computer science and engineering with concentrations in economics and AI. She plans to pursue a DPhil in social data science at the Oxford Internet Institute. Nair aims to develop ethical AI technologies that address pressing societal challenges, beginning with combating misinformation.

For her master’s thesis under Professor David Rand, Nair is developing LLM-powered fact-checking tools to detect nuanced misinformation beyond human or automated capabilities. She also researches human-AI co-reasoning at the MIT Center for Collective Intelligence with Professor Thomas Malone. Previously, she conducted research on autonomous vehicle navigation at Stanford’s AI and Robotics Lab and on energy microgrid load balancing at MIT’s Institute for Data, Systems, and Society, and she worked with Professor Esther Duflo in economics.

Nair interned in the Executive Office of the Secretary General at the United Nations, where she integrated technology solutions and assisted with launching the High-Level Advisory Body on AI. She also interned in Tesla’s energy sector, contributing to Autobidder, an energy trading tool, and led the launch of a platform for monitoring distributed energy resources and renewable power plants. Her work has earned her recognition as a Social and Ethical Responsibilities of Computing Scholar and a U.S. Presidential Scholar.

Nair has served as president of the MIT Society of Women Engineers and of MIT and Harvard Women in AI, spearheading outreach programs to mentor young women in STEM fields. She also served as president of the MIT honor societies Eta Kappa Nu and Tau Beta Pi.

David Oluigbo

David Oluigbo, from Washington, is a senior majoring in artificial intelligence and decision making and minoring in brain and cognitive sciences. At Oxford, he will undertake an MSc in applied digital health followed by an MSc in modeling for global health. Afterward, Oluigbo plans to attend medical school with the goal of becoming a physician-scientist who researches and applies AI to address medical challenges in low-income countries.

Since his first year at MIT, Oluigbo has conducted brain research with Ev Fedorenko at the McGovern Institute for Brain Research and with Susanna Mierau’s Synapse and Network Development Group at Brigham and Women’s Hospital. His work with Mierau led to several publications and a poster presentation at the annual meeting of the Federation of European Neuroscience Societies.

In a summer internship at the National Institutes of Health Clinical Center, Oluigbo designed and trained machine-learning models on CT scans for automatic detection of neuroendocrine tumors, leading to first authorship on an International Society for Optics and Photonics conference proceeding paper, which he presented at the 2024 annual meeting. Oluigbo also did a summer internship with the Anyscale Learning for All Laboratory at the MIT Computer Science and Artificial Intelligence Laboratory.

Oluigbo is an EMT and systems administrator officer with MIT-EMS. He is a consultant for Code for Good, a representative on the MIT Schwarzman College of Computing Undergraduate Advisory Group, and holds executive roles with the Undergraduate Association, the MIT Brain and Cognitive Society, and the MIT Running Club.

Illuminating the architecture of the mind

This story also appears in the Winter 2025 issue of BrainScan.

___

McGovern investigator Nancy Kanwisher and her team have big questions about the nature of the human mind. Energized by Kanwisher’s enthusiasm for finding out how and why the brain works as it does, her team collaborates broadly and embraces various tools of neuroscience. But their core discoveries tend to emerge from pictures of the brain in action. For Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT, “there’s nothing like looking inside.”

Kanwisher and her colleagues have scanned the brains of hundreds of volunteers using functional magnetic resonance imaging (fMRI). With each scan, they collect a piece of insight into how the brain is organized.

Nancy Kanwisher (right), whose unfaltering support for students and trainees has earned her awards for outstanding teaching and mentorship, is now working with research scientist RT Pramod to find the brain’s “physics network.” Photo: Steph Stevens

Recognizing faces

By visualizing the parts of the brain that get involved in various mental activities — and, importantly, which do not — they’ve discovered that certain parts of the brain specialize in surprisingly specific tasks. Earlier this year Kanwisher was awarded the prestigious Kavli Prize in Neuroscience for the discovery of one of these hyper-specific regions: a small spot within the brain’s neocortex that recognizes faces.

Kanwisher found that this region, which she named the fusiform face area (FFA), is highly sensitive to images of faces and appears to be largely uninterested in other objects. Without the FFA, the brain struggles with facial recognition — an impairment seen in patients who have experienced damage to this part of the brain.

Beyond the FFA

Not everything in the brain is so specialized. Many areas participate in a range of cognitive processes, and even the most specialized modules, like the FFA, must work with other brain regions to process and use information. Plus, Kanwisher and her team have tracked brain activity during many functions without finding regions devoted exclusively to those tasks. (There doesn’t appear to be a part of the brain dedicated to recognizing snakes, for example.)

Still, work in the Kanwisher lab demonstrates that as a specialized functional module within the brain, the FFA is not unique. In collaboration with McGovern colleagues Josh McDermott and Evelina Fedorenko, the group has found areas devoted to perceiving music and using language. There’s even a region dedicated to thinking about other people’s thoughts, identified by Rebecca Saxe in work she started as a graduate student in Kanwisher’s lab.

Kanwisher’s team has found several hyperspecific regions of the brain, including those dedicated to using language (red-orange), perceiving music (yellow), thinking about other people’s thoughts (blue), recognizing bodies (green), and our intuitive sense of physics (teal). (This is an artistic adaptation of Kanwisher’s data.)

Having established these regions’ roles, Kanwisher and her collaborators are now looking at how and why they become so specialized. Meanwhile, the group has also turned its attention to a more complex function that seems to largely take place within a defined network: our intuitive sense of physics.

The brain’s game engine

Early in life, we begin to understand the nature of objects and materials, such as the fact that objects can support but not move through each other. Later, we intuitively understand how it feels to move on a slippery floor, what happens when moving objects collide, and where a tossed ball will fall. “You can’t do anything at all in the world without some understanding of the physics of the world you’re acting on,” Kanwisher says.

Kanwisher says MIT colleague Josh Tenenbaum first sparked her interest in intuitive physical reasoning. Tenenbaum and his students had been arguing that humans understand the physical world using a simulation system, much like the physics engines that video games use to generate realistic movement and interactions within virtual environments. Kanwisher decided to team up with Tenenbaum to test whether there really is a game engine in the head, and if so, what it computes and represents.
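
The analogy can be made concrete with a toy forward simulation (a hypothetical sketch, not the lab's model): to judge where a tossed ball will land, step simple projectile dynamics forward until the ball reaches the ground.

    def predict_landing(x, y, vx, vy, g=9.81, dt=0.001):
        """Forward-simulate a tossed ball until it hits the ground (y = 0)."""
        while y > 0:
            x += vx * dt          # drift horizontally
            vy -= g * dt          # gravity slows, then reverses, the rise
            y += vy * dt
        return x

    # A ball released 1.5 m up, moving 3 m/s forward and 2 m/s upward,
    # is predicted to land about 2.4 m downrange, with no equation solving.
    print(f"landing at x = {predict_landing(0.0, 1.5, 3.0, 2.0):.2f} m")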

By asking subjects in an MRI scanner to predict which way this block tower might fall, Kanwisher’s team is zeroing in on the location of the brain’s “physics network.” Image: RT Pramod, Nancy Kanwisher

To find out, Kanwisher and her team have asked volunteers to evaluate various scenarios while in an MRI scanner — some that require physical reasoning and some that do not. They found sizable parts of the brain that participate in physical reasoning tasks but stay quiet during other kinds of thinking.

Research scientist RT Pramod says he was initially skeptical the brain would dedicate special circuitry to the diverse tasks involved in our intuitive sense of physics — but he’s been convinced by the data he’s found. “I see consistent evidence that if you’re reasoning, if you’re thinking, or even if you’re looking at anything sort of ‘physics-y’ about the world, you will see activations in these regions and only in these regions — not anywhere else,” he says.

Pramod’s experiments also show that these regions are called on to make predictions about the physical world. When volunteers watch videos of objects whose trajectories portend a crash — but do not actually depict that crash — it is the physics network that signals what is about to happen. “Only these regions have this information, suggesting that maybe there is some truth to the physics engine hypothesis,” Pramod says.

Kanwisher says she doesn’t expect physical reasoning, which her group has tied to sizable swaths of the brain’s frontal and parietal cortex, to be executed by a module as distinct as the FFA. “It’s not going to be like one hyper-specific region and that’s all that happens there,” she says. “I think ultimately it’s much more interesting than that.”

To figure out what these regions can and cannot do, Kanwisher’s team has broadened the ways in which they ask volunteers to think about physics inside the MRI scanner. So far, Kanwisher says, the group’s tests have focused on rigid objects. But what about soft, squishy ones, or liquids?

Kanwisher’s team is exploring whether non-rigid materials, like the liquid in this image, engage the brain’s “physics network” in the same way as rigid objects. Image: Vivian Paulun

Vivian Paulun, a postdoc working jointly with Kanwisher and Tenenbaum, is investigating whether our innate expectations about these kinds of materials occur within the network that they have linked to physical reasoning about rigid objects. Another set of experiments will explore whether we use sounds, like that of a bouncing ball or a screeching car, to predict physical events with the same network that interprets visual cues.

Meanwhile, she is also excited about an opportunity to find out what happens when the brain’s physics network is damaged. With collaborators in England, the group plans to find out whether patients in which stroke has affected this part of the brain have specific deficits in physical reasoning.

Probing these questions could reveal fundamental truths about the human mind and intelligence. Pramod points out that it could also help advance artificial intelligence, which so far has been unable to match humans when it comes to physical reasoning. “Inferences that are sort of easy for us are still really difficult for even state-of-the-art computer vision,” he says. “If we want to get to a stage where we have really good machine learning algorithms that can interact with the world the way we do, I think we should first understand how the brain does it.”

Neuroscientists create a comprehensive map of the cerebral cortex

By analyzing brain scans taken as people watched movie clips, MIT researchers have created the most comprehensive map yet of the functions of the brain’s cerebral cortex.

Using functional magnetic resonance imaging (fMRI) data, the research team identified 24 networks with different functions, which include processing language, social interactions, visual features, and other types of sensory input.

Many of these networks have been seen before but haven’t been precisely characterized under naturalistic conditions. While the new study mapped networks in subjects watching engaging movies, previous studies have used a small number of specific tasks or examined correlations across the brain in subjects who were simply resting.

“There’s an emerging approach in neuroscience to look at brain networks under more naturalistic conditions. This is a new approach that reveals something different from conventional approaches in neuroimaging,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research. “It’s not going to give us all the answers, but it generates a lot of interesting ideas based on what we see going on in the movies that’s related to these network maps that emerge.”

The researchers hope that their new map will serve as a starting point for further study of what each of these networks is doing in the brain.

Desimone and John Duncan, a program leader in the MRC Cognition and Brain Sciences Unit at Cambridge University, are the senior authors of the study, which appears today in Neuron. Reza Rajimehr, a research scientist in the McGovern Institute and a former graduate student at Cambridge University, is the lead author of the paper.

Precise mapping

The cerebral cortex of the brain contains regions devoted to processing different types of sensory information, including visual and auditory input. Over the past few decades, scientists have identified many networks that are involved in this kind of processing, often using fMRI to measure brain activity as subjects perform a single task such as looking at faces.

In other studies, researchers have scanned people’s brains as they do nothing, or let their minds wander. From those studies, researchers have identified networks such as the default mode network, a network of areas that is active during internally focused activities such as daydreaming.

“Up to now, most studies of networks were based on doing functional MRI in the resting-state condition. Based on those studies, we know some main networks in the cortex. Each of them is responsible for a specific cognitive function, and they have been highly influential in the neuroimaging field,” Rajimehr says.

However, during the resting state, many parts of the cortex may not be active at all. To gain a more comprehensive picture of what all these regions are doing, the MIT team analyzed data recorded while subjects performed a more natural task: watching a movie.

“By using a rich stimulus like a movie, we can drive many regions of the cortex very efficiently. For example, sensory regions will be active to process different features of the movie, and high-level areas will be active to extract semantic information and contextual information,” Rajimehr says. “By activating the brain in this way, now we can distinguish different areas or different networks based on their activation patterns.”

The data for this study was generated as part of the Human Connectome Project. Using a 7-Tesla MRI scanner, which offers higher resolution than a typical MRI scanner, brain activity was imaged in 176 people as they watched one hour of movie clips showing a variety of scenes.

The MIT team used a machine-learning algorithm to analyze the activity patterns of each brain region, allowing them to identify 24 networks with different activity patterns and functions.
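
In spirit, that grouping step can be as simple as clustering regions by their movie-driven time courses, as in this hypothetical Python sketch (random numbers standing in for real scans; not the paper's actual algorithm):

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical data: activity time courses for 500 cortical regions
    # during a movie (rows = regions, columns = time points).
    rng = np.random.default_rng(0)
    timecourses = rng.standard_normal((500, 1200))

    # Regions whose activity rises and falls together land in the same
    # cluster; each cluster is a candidate functional network.
    labels = KMeans(n_clusters=24, n_init=10, random_state=0).fit_predict(timecourses)
    print("regions per network:", np.bincount(labels))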

Some of these networks are located in sensory areas such as the visual cortex or auditory cortex, as expected for regions with specific sensory functions. Other areas respond to features such as actions, language, or social interactions. Many of these networks have been seen before, but this technique offers more precise definition of where the networks are located, the researchers say.

“Different regions are competing with each other for processing specific features, so when you map each function in isolation, you may get a slightly larger network because it is not getting constrained by other processes,” Rajimehr says. “But here, because all the areas are considered together, we are able to define more precise boundaries between different networks.”

The researchers also identified networks that hadn’t been seen before, including one in the prefrontal cortex, which appears to be highly responsive to visual scenes. This network was most active in response to pictures of scenes within the movie frames.

Executive control networks

Three of the networks found in this study are involved in “executive control,” and were most active during transitions between different clips. The researchers also observed that these control networks appear to have a “push-pull” relationship with networks that process specific features such as faces or actions. When networks specific to a particular feature were very active, the executive control networks were mostly quiet, and vice versa.

“Whenever the activations in domain-specific areas are high, it looks like there is no need for the engagement of these high-level networks,” Rajimehr says. “But in situations where perhaps there is some ambiguity and complexity in the stimulus, and there is a need for the involvement of the executive control networks, then we see that these networks become highly active.”

Using a movie-watching paradigm, the researchers are now studying some of the networks they identified in more detail, to identify subregions involved in particular tasks. For example, within the social processing network, they have found regions that are specific to processing social information about faces and bodies. In a new network that analyzes visual scenes, they have identified regions involved in processing memory of places.

“This kind of experiment is really about generating hypotheses for how the cerebral cortex is functionally organized. Networks that emerge during movie watching now need to be followed up with more specific experiments to test the hypotheses. It’s giving us a new view into the operation of the entire cortex during a more naturalistic task than just sitting at rest,” Desimone says.

The research was funded by the McGovern Institute, the Cognitive Science and Technology Council of Iran, the MRC Cognition and Brain Sciences Unit at the University of Cambridge, and a Cambridge Trust scholarship.

A cell protector collaborates with a killer

From early development to old age, cell death is a part of life. Without enough of a critical type of cell death known as apoptosis, animals wind up with too many cells, which can set the stage for cancer or autoimmune disease. But careful control is essential, because when apoptosis eliminates the wrong cells, the effects can be just as dire, helping to drive many kinds of neurodegenerative disease.

McGovern Investigator Robert Horvitz poses for a photo in his laboratory. Photo: AP Images/Aynsley Floyd

By studying the microscopic roundworm Caenorhabditis elegans—which was honored with its fourth Nobel Prize last month—scientists at MIT’s McGovern Institute have begun to unravel a longstanding mystery about the factors that control apoptosis: how a protein capable of preventing programmed cell death can also promote it. Their study, led by McGovern Investigator Robert Horvitz and reported October 9, 2024, in the journal Science Advances, sheds light on the process of cell death in both health and disease.

“These findings, by graduate student Nolan Tucker and former graduate student, now MIT faculty colleague, Peter Reddien, have revealed that a protein interaction long thought to block apoptosis in C. elegans likely instead has the opposite effect,” says Horvitz, who shared the 2002 Nobel Prize for discovering and characterizing the genes controlling cell death in C. elegans.

Mechanisms of cell death

Horvitz, Tucker, Reddien and colleagues have provided foundational insights in the field of apoptosis by using C. elegans to analyze the mechanisms that drive apoptosis as well as the mechanisms that determine how cells ensure apoptosis happens when and where it should. Unlike humans and other mammals, which depend on dozens of proteins to control apoptosis, these worms use just a few. And when things go awry, it’s easy to tell: When there’s not enough apoptosis, researchers can see that there are too many cells inside the worms’ translucent bodies. And when there’s too much, the worms lack certain biological functions or, in more extreme cases, cannot reproduce or do not survive embryonic development.

The nematode worm Caenorhabditis elegans has provided answers to many fundamental questions in biology. Image: Robert Horvitz

Work in the Horvitz lab defined the roles of many of the genes and proteins that control apoptosis in worms. These regulators proved to have counterparts in human cells, and for that reason studies of worms have helped reveal how human cells govern cell death and pointed toward potential targets for treating disease.

A protein’s dual role

Three of C. elegans’ primary regulators of apoptosis actively promote cell death, whereas just one, CED-9, reins in the apoptosis-promoting proteins to keep cells alive. As early as the 1990s, however, Horvitz and colleagues recognized that CED-9 was not exclusively a protector of cells. Their experiments indicated that the protector protein also plays a role in promoting cell death. But while researchers thought they knew how CED-9 protected against apoptosis, its pro-apoptotic role was more puzzling.

CED-9’s dual role means that mutations in the gene that encodes it can impact apoptosis in multiple ways. Most ced-9 mutations interfere with the protein’s ability to protect against cell death and result in excess cell death. Conversely, mutations that abnormally activate ced-9 cause too little cell death, just like mutations that inactivate any of the three killer genes.

An atypical ced-9 mutation, identified by Reddien when he was a PhD student in Horvitz’s lab, hinted at how CED-9 promotes cell death. That mutation altered the part of the CED-9 protein that interacts with the protein CED-4, which is proapoptotic. Since the mutation specifically leads to a reduction in apoptosis, this suggested that CED-9 might need to interact with CED-4 to promote cell death.

The idea was particularly intriguing because researchers had long thought that CED-9’s interaction with CED-4 had exactly the opposite effect: In the canonical model, CED-9 anchors CED-4 to cells’ mitochondria, sequestering the CED-4 killer protein and preventing it from associating with and activating another key killer, the CED-3 protein—thereby preventing apoptosis.

To test the hypothesis that CED-9’s interactions with the killer CED-4 protein enhance apoptosis, the team needed more evidence. So graduate student Nolan Tucker used CRISPR gene editing tools to create more worms with mutations in CED-9, each one targeting a different spot in the CED-4-binding region. Then he examined the worms. “What I saw with this particular class of mutations was extra cells and viability,” he says—clear signs that the altered CED-9 was still protecting against cell death, but could no longer promote it. “Those observations strongly supported the hypothesis that the ability to bind CED-4 is needed for the pro-apoptotic function of CED-9,” Tucker explains. Their observations also suggested that, contrary to earlier thinking, CED-9 doesn’t need to bind with CED-4 to protect against apoptosis.

When he looked inside the cells of the mutant worms, Tucker found additional evidence that these mutations prevented CED-9’s ability to interact with CED-4. When both CED-9 and CED-4 are intact, CED-4 appears associated with cells’ mitochondria. But in the presence of these mutations, CED-4 was instead at the edge of the cell nucleus. CED-9’s ability to bind CED-4 to mitochondria appeared to be necessary to promote apoptosis, not to protect against it.

In wild-type worms CED-4 is localized to mitochondria. However, the introduction of CED-9-CED-4 binding mutations such as ced-4(n6703) or ced-9(n6704), causes CED-4 protein to localize to the outer edge of the nucleus. Image: Nolan Tucker, Robert Horvitz

Looking ahead

While the team’s findings begin to explain a long-unanswered question about one of the primary regulators of apoptosis, they raise new ones, as well. “I think that this main pathway of apoptosis has been seen by a lot of people as more or less settled science. Our findings should change that view,” Tucker says.

The researchers see important parallels between their findings from this study of worms and what’s known about cell death pathways in mammals. The mammalian counterpart to CED-9 is a protein called BCL-2, mutations in which can lead to cancer. BCL-2, like CED-9, can both promote and protect against apoptosis. As with CED-9, the pro-apoptotic function of BCL-2 has been mysterious. In mammals, too, mitochondria play a key role in activating apoptosis. The Horvitz lab’s discovery opens opportunities to better understand how apoptosis is regulated not only in worms but also in humans, and how dysregulation of apoptosis in humans can lead to such disorders as cancer, autoimmune disease, and neurodegeneration.

Adults’ brain activity appears unchanged after a year of medical use of cannabis

In a study of adults who use cannabis because they are seeking relief from pain, depression, anxiety, or insomnia, scientists at MIT and Harvard found no changes in brain activity after one year of self-directed use. The study, reported September 18, 2024, in JAMA Network Open, is among the first to investigate how the real-world ways people use cannabis to treat medical symptoms might impact the brain in lasting ways.

While some studies have linked chronic cannabis use to changes in the brain’s structure and function, outcomes vary depending, in part, on how and when people use the substance. People who begin using cannabis during adolescence, while the brain is still developing, may be particularly vulnerable to brain changes. The potency of the products they use and how often they use them matter, too.

Participants in the research, who obtained medical cannabis cards at the outset of the study, tended to choose lower potency products and use them less than daily. This may be why the researchers’ analysis—which focused on the brain activity associated with three kinds of cognitive processes—showed no changes after a year of use.

“For most older adults, occasional cannabis will not dramatically affect brain activation,” says Harvard neuroscientist Jodi Gilman, who led the study. “However, there are some individuals who may be vulnerable to negative effects of cannabis on cognitive function, particularly those using higher potency products more frequently.”

Gilman cautions that in another study of the same medical cannabis users, her team found that the drug failed to alleviate patients’ pain, depression, or anxiety. “So it didn’t help their symptoms—but it wasn’t associated with significant changes in brain activation,” she says. She also cautioned that some adults in the study did develop problems with cannabis use, including cannabis use disorder.

Medical cannabis programs are currently established in 38 U.S. states and Washington, D.C., increasing access to a substance that many people hope might help them relieve distressing medical symptoms. But little is known about how this type of cannabis use affects neural circuits in the brain. “Cannabis has been legalized through ballot initiatives and by legislatures. Dispensary cannabis has not been tested through large, randomized, double-blind clinical trials,” Gilman says. With McGovern Principal Research Scientist Satrajit Ghosh and MD-PhD student Debbie Burdinski, she set out to see what neuroimaging data would reveal about the impacts of this type of cannabis use.

Participants in their study were all adults seeking relief from depression, anxiety, pain, or insomnia who, prior to obtaining their medical cannabis cards, had never used cannabis at high frequencies. The researchers wanted their study to reflect the ways people really use cannabis, so participants were free to choose which types of products they used, as well as how much and how often. “We told people, ‘Get what you want, use it as you wish, and we’re going to look at how it may affect the brain,’” Gilman explains.

Participants reported using a variety of products, but generally, they tended to choose low-potency products. Their frequency of use also varied, from less than once a month to once or more each day. Fewer than 20 percent of participants were daily users.

At the start of the study and again one year later, the research team used functional MRI scans to watch what happened in the brain while participants used three key cognitive skills: working memory, inhibitory control, and reward processing. The activity revealed on the scans showed the researchers which parts of the brain were working to perform these tasks.

Alterations in activity patterns could indicate changes in brain function. But in the 54 participants who underwent both brain scans, Gilman, Ghosh, and Burdinski found that after one year of cannabis use, brain activity during these three cognitive tasks was unchanged. Burdinski notes that many facets of cognition were not followed in the study, so some changes to brain activity could have occurred without being evident in the team’s data.
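
A longitudinal null result of this kind comes down to a paired comparison: each participant's task activation at baseline versus one year later. The sketch below uses simulated numbers and a simple paired t-test (an illustration, not the study's analysis pipeline).

    import numpy as np
    from scipy import stats

    # Hypothetical activation estimates (e.g., a task contrast in one
    # region of interest) for 54 participants at baseline and one year.
    rng = np.random.default_rng(0)
    baseline = rng.normal(1.0, 0.5, size=54)
    one_year = baseline + rng.normal(0.0, 0.3, size=54)  # no systematic shift

    t_stat, p_value = stats.ttest_rel(one_year, baseline)
    print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")  # large p: no change detected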

The researchers acknowledge that their study cohort, whose members were mostly female, middle-aged, and well educated, was less diverse than the population of people who use cannabis for medical symptoms. In fact, Gilman says, groups that are most vulnerable to negative consequences of cannabis may not have been well represented in the study, and it’s possible that a study of a different subgroup would have found different results.

Ghosh points out that there is still a lot to learn about the impact of cannabis, and larger studies are needed to understand its effects on the brain, including how it impacts different populations. For some individuals, he stresses, its use can have severe, debilitating effects, including symptoms of psychosis, delusions, or cannabinoid hyperemesis syndrome.

“Science can help us understand how we should be thinking about the impact of various substances or various interventions on the brain, instead of just anecdotal considerations of how they work,” Ghosh says. “Maybe there are people for whom there are changes. Now we can start teasing apart those details.”