When doctors and scientists want to see inside a body, magnetic resonance imaging (MRI) is a powerful tool. MRI can noninvasively capture detailed images of the body’s muscles, organs, and bones. It can monitor blood flow to generate a map of brain activity. And with new sensors developed by bioengineers at MIT, MRI can track the kinds of molecules that make our brains and bodies work.
In the May 13, 2026, issue of the journal Nature Biomedical Engineering, a team led by Alan Jasanoff, the Eugene McDermott Professor in the Brain Sciences and Human Behavior at MIT, reports on their new sensors, which can brighten or dim MRI signals in response to specific molecular targets. The probes are designed to amplify the effect that each target molecule has on MRI signal, dramatically improving sensitivity over previous small-molecule sensors. Jasanoff, who is also an associate investigator at the McGovern Institute for Brain Research, says the approach his team used should enable the development of MRI sensors that detect neurotransmitters and other important molecules in the brain.
“We want to be able to measure distinct chemical signals like neurotransmitters, neuropeptides, and metabolites as they fluctuate across the whole brain,” Jasanoff says.
“These chemicals are important ingredients in neural computations, and we want to use the types of probes that we developed to detect these signals dynamically.”
Engineered nanoparticles
Jasanoff explains that researchers have struggled to use MRI to sensitively detect small molecules in the brain because the amount of any given neurochemical is low. Sensors can be designed to change the brightness of an MRI signal in the presence of specific molecules—but it takes a lot of contrast agent to achieve this. If every molecule of contrast agent needs its own target molecule to activate it, low concentrations of the target molecule limit the sensors’ visibility in an MRI scan. “The signal change that you see in the imaging will be very modest,” Jasanoff says. “It won’t let us detect physiological events.”
The Jasanoff team’s new sensors, whose development was led by postdoctoral researcher Sayani Das and graduate student Jacob Cyert Simon, overcome this problem. To generate a greater signal change in response to target molecules, the researchers designed probes in which a single target molecule impacts not one contrast agent, but many.
To achieve this, Das and Simon packaged an MRI contrast agent inside tiny sacs called liposomal nanoparticles. Each nanoparticle is packed with many molecules of gadolinium, a magnetic material that brightens the MRI signal that arises from hydrogen atoms in water. Inside their protective sacs, gadolinium has no effect on MRI signal, unless water molecules can easily get in and out.
Das and Simon built water channels into the walls of their gadolinium-filled nanoparticles, engineering them so that their opening depends on the presence or absence of a target molecule. When the channels open, more water enters and the gadolinium brightens the local MRI signal, lighting up that spot in a scan.
LisNR architecture consisting of an MRI contrast agent (gadoteridol) enclosed in a liposomal membrane (grey) perforated by water-permeable pores (orange). Image courtesy of the researchers.
The researchers call their target-responsive sensors liposomal nanoparticle reporters, or LisNRs (pronounced “listeners”). They designed LisNRs that let water in only in the presence of their target molecule. The water channels in these nanoparticles stay blocked until they encounter their target, which can knock aside a channel-blocking bit of protein. Once the channel blocker is displaced, water enters and MRI signal brightens. They also made LisNRs that dim the MRI signal in the presence of the molecule they are designed to detect. These have a channel that stays open until the target molecule comes along and blocks it, keeping water out. Jasanoff lab members Vinay Sharma, Samira Abozeid, and Gregory Thiabaud played key roles in understanding and optimizing these interactions, and collaborators in the laboratory of Masayuki Inoue at the University of Tokyo helped the group engineer channels with higher potency.
In experiments led by postdoctoral researcher Miranda Dawson, Jasanoff’s team used their LisNRs to detect a molecule called biotin in the brains and bodies of living rats, illustrating the probe’s amplifying effects. “We showed that we could detect micromolar-scale levels of biotin with about tenfold greater sensitivity than we would have if we’d used a more conventional, one-to-one type sensing approach,” Jasanoff says. He adds that the team’s modeling suggests that with further development, they may be able to achieve even greater sensitivity gains.
The group showed that the new sensors can be delivered systemically, reaching various organs and spreading throughout the brain. This makes them promising tools for brain-wide imaging, as well as imaging targets in the peripheral nervous system or other tissues.
A next step will be engineering LisNRs that respond to the specific neurochemicals that Jasanoff and his team hope to study. “There are something like 100 neurochemicals in the brain that we’d love to detect in principle,” he says. They’ll start with dopamine and glutamate—two important and relatively abundant molecules that mediate communications between neurons.
This research, including support for postdoctoral fellows and graduate students involved in the work, was funded in part by Lore Harp McGovern, the Yang Tan Collective at MIT, the K. Lisa Yang Brain-Body Center at MIT, the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, and the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT.
MIT scientists Sven Dorkenwald and Whitney Henry have been named 2026 Searle Scholars, an award given annually to 15 exceptional early-career researchers in the fields of biomedical sciences and chemistry. Chosen by a scientific advisory board, Searle Scholars are considered among the most creative young researchers pursuing high-risk/high-reward research. The Searle Scholars Program is funded through the Searle Funds at The Chicago Community Trust and administered by Kinship Foundation.
Dorkenwald is an assistant professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research. Henry is the Robert A. Swanson (1969) Career Development Professor of Life Sciences and an intramural faculty member at the Koch Institute for Integrative Cancer Research. They will each receive $450,000 in flexible funding to support their work over the next three years.
Sven Dorkenwald
Sven Dorkenwald is a computational neuroscientist investigating the organizational principles of neuronal circuits. The synaptic connectivity of neurons, their connectome, is fundamental to how networks of neurons function. Dorkenwald develops computational and collaborative tools to map, analyze, and interpret synapse-resolution connectomes. His work has led to large connectomic reconstructions of the fruit fly brain and parts of mammalian brains. He uses these connectomes to investigate the architecture of neuronal circuits and how their structure supports complex computations.
“As I establish my new lab, the Searle Scholars Award will help us launch ambitious projects and set our long-term scientific direction,” said Dorkenwald. “I am deeply grateful for the support from the Kinship Foundation and look forward to interacting with this amazing cohort of Searle Scholars.”
Dorkenwald joined the faculty of MIT in 2026 as an assistant professor in the Department of Brain and Cognitive Sciences and an investigator at the McGovern Institute. He earned a BS in physics and an MS in computer engineering from the University of Heidelberg, followed by a PhD in computer science and neuroscience at Princeton University in 2023 under the mentorship of Sebastian Seung and Mala Murthy. Dorkenwald completed his postdoctoral training as a Shanahan Research Fellow at the Allen Institute and the University of Washington, while serving as a Visiting Faculty Researcher at Google Research.
Whitney Henry
Whitney Henry investigates the potential of ferroptosis, an iron-dependent form of cell death, for developing novel therapies that target subpopulations of cancer cells that are highly metastatic, therapy-resistant, and therefore critical instigators of tumor relapse. Her research is focused on uncovering the molecular factors influencing ferroptosis susceptibility, investigating its effects on the tumor microenvironment, and developing innovative methods to manipulate ferroptosis resistance in living organisms, drawing from functional genomics, metabolomics, bioengineering, and a range of in vitro and in vivo models.
“I am incredibly grateful to the Kinship Foundation for supporting our research and giving us the freedom to ask bold, curiosity-driven scientific questions,” said Henry. “This support allows us to pursue ambitious ideas, take creative risks, and embark on new research directions.”
Henry joined the MIT faculty in 2024 as an assistant professor in the Department of Biology and a member of the Koch Institute, and is currently an HHMI Freeman Hrabowski Scholar. She received her bachelor’s degree in biology with a minor in chemistry from Grambling State University and her PhD from Harvard University. Following her doctoral studies, she worked in the lab of Robert Weinberg at the Whitehead Institute and was supported by fellowships from the Jane Coffin Childs Memorial Fund for Medical Research and the Ludwig Center at MIT.
Michale Fee, the Glen V. and Phyllis F. Dorflinger Professor of Neuroscience and head of the Department of Brain and Cognitive Sciences, and Fan Wang, a professor of brain and cognitive sciences, have been elected to join the National Academy of Sciences (NAS). Fee and Wang, who are also investigators at the McGovern Institute for Brain Research, were elected by current NAS members in recognition of their “distinguished and continuing achievements in original research.”
The NAS is a private, nonprofit institution that was established under a congressional charter signed by President Abraham Lincoln in 1863. It recognizes achievement in science by election to membership, and — with the National Academy of Engineering and the National Academy of Medicine — provides science, engineering, and health policy advice to the federal government and other organizations. This year, the NAS elected 120 members and 25 international members, including six MIT faculty, bringing the total number of active members to 2,705.
“Election to the National Academy of Sciences by one’s peers is a great honor for a scientist in the United States,” says McGovern Institute Director Robert Desimone. “Michale and Fan represent the very best of our research community and we are tremendously proud of their accomplishments and this well-deserved recognition.”
Michale Fee’s research explores how the brain learns and generates complex sequential behaviors. Using the zebra finch as a model system, Fee investigates the neural mechanisms underlying birdsong—a behavior that young birds learn from their fathers through trial and error, much as human infants learn to speak through babbling. His work has revealed that a brain region called the higher vocal center (HVC) functions like an orchestra conductor, precisely controlling the tempo and timing of song production. Other work from his lab has shown how this same circuit helps to store a memory of the father’s song, how baby birds babble in order to practice their song, and how this vocal practice is translated to song learning by listening to themselves sing.
These findings extend far beyond birdsong—the neural circuits controlling birdsong learning are closely related to human brain circuits disrupted in Parkinson’s and Huntington’s disease. Insights from Fee’s research could reveal new clues to the causes and potential treatments of these complex brain disorders.
Fee’s appointment in 2021 as head of the Department of Brain and Cognitive Sciences continues the department’s tradition of being led by scientists whose exemplary work makes MIT a world leader in brain science.
Fan Wang investigates the neural circuits that govern the dynamic interactions between brain and body, exploring how the brain generates sensory perceptions and controls movement. Wang, who is also the co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics, uses cutting-edge techniques including optogenetics, in vivo electrophysiology, and in vivo imaging, to make discoveries with profound clinical implications.
By developing innovative tools to study how brain circuits work, Wang discovered distinct populations of neurons activated by anesthesia that can suppress pain without blocking sensation, and can calm anxiety by regulating automatic body functions like heart rate. She also identified the brain circuits controlling rhythmic movements essential for exploration and communication. Together, these findings reveal how emotion, physiology, movement, and consciousness are deeply interconnected.
Wang combines rigorous basic neuroscience with a commitment to translating her discoveries into therapies that relieve human suffering. Her election to the NAS recognizes her contributions to understanding the brain-body connection and therapeutic potential of her groundbreaking research.
The formal induction ceremony for new NAS members, during which they sign the ledger whose first signatory is Abraham Lincoln, will be held at the Academy’s annual meeting in Washington, D.C., next spring.
Schizophrenia, a complex and variable psychiatric disorder, changes people’s perceptions of reality. People with schizophrenia may hear, see, or sense things that aren’t there, and they often hold firm to mistaken ideas about the world despite strong evidence to the contrary. As if these changes aren’t disruptive enough, they are usually accompanied by cognitive difficulties and disorganized thinking.
Scientists at the McGovern Institute’s Poitras Center for Psychiatric Disorders Research are looking for clues into the origins of the disorder and its symptoms so they can help guide the development of new treatments. Encouragingly, they are beginning to uncover the brain changes that reshape reality for people with schizophrenia.
Genetic clues
Researchers who want to study the root causes of a disease often turn to genetics for clues—and the genetics of schizophrenia are complicated. Hundreds of different genes seem to shape people’s risk of developing the disorder, most of which nudge risk only slightly. For most people, it seems to be the cumulative effect of these genes and how they intersect with other risk factors, like stress and prenatal complications, that determine who develops schizophrenia and who does not.
Gene variants that substantially impact the risk of schizophrenia are expected to reveal more about the underlying biology of the disorder than genes whose individual impact is minor. But these variants are rare, and it took a massive study to find them. In 2022, scientists at the Broad Institute’s Stanley Center for Psychiatric Research reported that after analyzing the DNA of more than 24,000 people with schizophrenia, they had identified mutations in 10 genes that dramatically increased the risk of the disorder.
“I think this is exciting, because for the first time, you can actually have an animal model based on human genetics findings,” says McGovern Institute and Stanley Center Investigator Guoping Feng. “You can put these mutations in animal models to try to understand how this mutation affects brain development, circuit formation, circuit function, and behavior.” Feng is also the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences at MIT.
Guoping Feng (right) and his postdoctoral researcher Tingting Zhou (left) examine a mouse brain carrying a genetic mutation associated with schizophrenia. Photo: Steph Stevens
In work supported by the Poitras Center, the Stelling Family Research Fund, and the Yang Tan Collective at MIT, Feng’s lab has engineered three strains of mice that carry ultra-rare schizophrenia-associated mutations. Their first significant findings come from mice with a mutation in a gene called Grin2a. People who inherit a dysfunctional Grin2a gene, which encodes part of the NMDA receptor that neurons use to detect and respond to chemical signals, are 20 times more likely to develop schizophrenia than people in whom Grin2a is intact.
Tingting Zhou, a postdoctoral researcher in Feng’s lab, says the team had to think carefully about how to assess mice for schizophrenia-like symptoms. You can’t ask mice about hallucinations or delusions. Instead, Zhou designed an experiment that tested how well mice use new information to update their beliefs about the world—a process that is thought to be impaired in people who experience delusions.
To illustrate how failure to update beliefs can skew someone’s ideas about reality, Zhou describes a situation in which a person watches a stranger reach for something in their pocket, fearing that person intends to harm them. Then, the stranger’s hand emerges with a lollipop. The new information should alleviate concern—but a person with schizophrenia might hold on to their original belief, convinced the lollipop-holding stranger is a threat.
In Zhou’s experiments testing animals’ belief-updating abilities, mice had to keep up with changing information to earn as many treats as possible. Those with the Grin2a mutation were slow to adapt when experimenters adjusted the relative values of their choices. “Once the animal learns something, it’s very hard for them to update the information,” Zhou explains.
Zhou and Feng linked this behavioral difference to abnormally low activity in a part of the brain called the mediodorsal thalamus. The mediodorsal thalamus acts like a switchboard in the brain, routing and coordinating information between different parts of the cortex to support thinking, decision-making, and flexible behavior. Studies with patients have implicated this region in schizophrenia as well, showing that it has fewer cells and is less active in people with the disorder than in those without.
The mediodorsal thalamus (pink) is less active in people with schizophrenia and mouse models of the disease. Image: Guoping Feng, Tingting Zhou
Feng’s lab and others are now looking for belief-updating deficits in other genetic models of schizophrenia. “The goal is to look at whether this is a converging mechanism…then you can start to look at what other [brain] regions are involved,” he says.
In mice with Grin2a mutations, the researchers were able to restore normal belief updating by activating neurons in the mediodorsal thalamus, offering hope that manipulating the same circuitry might benefit patients. “It will not be easy,” Feng says, “but at least you have something you can work on. Previously, it was just very hard to imagine how to develop a new therapeutic for schizophrenia.”
Internal noise
It’s not just the genes associated with schizophrenia that differ across affected individuals. The symptoms of the disorder vary, too. People experience some combination of delusions, hallucinations, disorganized speech, and cognitive problems—but none of these are experienced by everyone with the disorder. This heterogeneity complicates the diagnosis, treatment, and study of schizophrenia. For this reason, some researchers are focusing their efforts on understanding its individual symptoms.
Evelina Fedorenko, a McGovern Investigator and associate professor of brain and cognitive sciences, specializes in understanding how the brain processes speech and language. But recently, her group has teamed up with physician-researcher Ann Shinn at McLean Hospital to begin exploring why some people hear voices when no one is speaking.
About three out of four people with schizophrenia experience auditory hallucinations, which most commonly involve voices.
These hallucinations can be distressing, sometimes involving threatening language or commands to cause harm. Some people with mood disorders or post-traumatic stress disorder also hear them.
Tamar Regev was the 2022–2024 Poitras Center Postdoctoral Fellow in Evelina Fedorenko’s lab. Photo: Steph Stevens
To investigate, Tamar Regev, a research scientist in the Fedorenko lab, asked people who experience auditory hallucinations to listen to different kinds of sounds inside an MRI scanner, then compared their brain responses with those of people without auditory hallucinations. Her study included participants with schizophrenia and bipolar disorder, both with and without a history of auditory hallucinations, as well as healthy controls.
Inside the scanner, participants listened to three kinds of audio: spoken language, gibberish, and gibberish so scrambled that it barely resembled speech. Regev analyzed how these sounds impacted activity in areas the brain uses to process auditory input at different levels: a part of the auditory cortex that is sensitive to all sounds; a higher-level region within the auditory cortex that usually responds to anything that sounds like speech, even if its content is unclear; and the brain’s language-processing network, which is called on to understand the content of speech, as well as written or signed communications.
Regev found that in people with hallucinations, the part of the brain that usually responds only to language responded to meaningless speech as well. “In this pathway from auditory to speech to language processing, the stimuli that should be filtered out somewhere on the way are now passing to higher stations,” she explains. While auditory hallucinations don’t require external sounds, Fedorenko and Regev propose that the brain’s language areas might be similarly activated by “internal noise” in auditory circuits.
Scrambled language
In people who experience auditory hallucinations, the brain’s language regions respond to sounds that aren’t language at all, including scrambled, meaningless gibberish.
Early identification
McGovern scientists have also used brain imaging to investigate what happens in the brain before people develop clear symptoms of schizophrenia. The disorder is usually diagnosed in adolescence or young adulthood, when patients exhibit the first signs of psychosis—but its origins in the brain likely take root years before that.
“One of the things we’re super interested in is, can you identify people at risk early on, before they have a big problem,” says McGovern Investigator John Gabrieli, whose work is also supported by the Poitras Center and the Stelling Family Research Fund. That might give clinicians an opportunity to intervene and lessen or prevent the disorder’s most devastating effects, he says.
Gabrieli and his colleagues have studied the brains of children who, because they have a parent or sibling with schizophrenia, have an elevated risk of developing the disorder themselves. They found that a system called the default mode network (DMN), which is overactive in adults with schizophrenia, is already working overtime when children in this high-risk group are seven to 12 years old.
Gabrieli explains that the DMN is active when people are not actively engaged in an activity or thinking about the external world. “It turns on when you think about your family, your values, your hopes for the future, or important events of your life. It’s almost like a system of who you are,” he says. Hallucinations and delusions experienced by people with schizophrenia may be associated with overactivity in this network.
The default mode network (DMN) is a large-scale brain network that is active when a person is not focused on the outside world and the brain is at wakeful rest. The DMN is often over-engaged in adolescents with depression and anxiety, as well as teens at risk for these and other disorders like schizophrenia (left). DMN activation and connectivity can be “tuned” to a healthier state through the practice of mindfulness (right).
“They’re kind of living in their internal world of beliefs, as opposed to the reality that most of us occupy,” Gabrieli explains.
He and his colleagues think overactivity in the DMN might make people vulnerable to schizophrenia—and their data show this atypical activity can be detected many years before the core symptoms of schizophrenia appear. With further validation, children with hyperactivity of the DMN might be candidates for early intervention.
With new and better interventions, the ability to identify people who may be on a path toward schizophrenia will be even more impactful—underscoring the need for continued research on multiple fronts. A recent gift of $8 million to the Poitras Center from Patricia and James Poitras is helping accelerate this work in labs at the McGovern Institute and beyond.
What if a technology could reanimate parts of the body that have lost their connection to the brain — like a bladder that can no longer empty due to a spinal cord injury, or intestines that can’t push food forward due to Crohn’s disease? What if this technology could also send sensations such as hunger or touch back to the brain?
New MIT research offers a glimpse into this future. In a study published today in Nature Communications, the researchers introduce a novel myoneural actuator (MNA) that reprograms living muscles into fatigue-resistant, computer-controlled motors that can be implanted inside the body to restore movement in organs.
“We’ve built an interface that leverages natural pathways used by the nervous system so that we can seamlessly control organs in the body, while also enabling the transmission of sensory feedback to the brain,” says Hugh Herr, senior author of the study, a professor of Media Arts and Sciences at the MIT Media Lab, co-director of the K. Lisa Yang Center for Bionics, and an associate member of the McGovern Institute for Brain Research at MIT. The study was co-led by Herr’s postdoctoral associate Guillermo Herrera-Arcos and former postdoc Hyungeun Song.
By repurposing existing muscle in the body, the researchers have developed the first “living” implant that uses rewired sensory nerves to revive paralyzed organs — which may present a new genre of medicine where a person’s own tissue becomes the hardware.
Rewiring the brain-body interface
Many scientists have toiled to restore function in paralyzed organs, but it’s extremely challenging to design a technology that both communicates with the nervous system and doesn’t fatigue over time. Some have tried to insert miniaturized actuators — small machines that can power bionic limbs — into the body. However, Herrera-Arcos says “it’s hard to make actuators at the centimeter level and they aren’t very efficient.” Others have focused on creating muscle tissue in the lab, but building muscles cell by cell is time-intensive and far from ready for human use.
Herr’s team tried something different.
“We engineered existing muscles to become an actuator, or motor, that reinstates motion in organs,” says Song.
To do this, the researchers had to navigate the delicate dynamics within the nervous system. The actuator would have to interface with the nervous system to work properly, but it must also somehow evade the brain’s control. “You don’t want the brain to consciously control the muscle actuator because you want the actuator to automatically control an organ, like the heart,” explains Herrera-Arcos. Establishing a computer-controlled muscle to move organs could ensure automatic function and also bypass damaged brain pathways.
Incorporating motor neurons into the actuator may help generate movement, but these neurons are directly controlled by the brain. “Sensory neurons, however, are wired to receive, not to command,” explains Song. “We thought we could leverage this dynamic and reroute motor signals through sensory fibers, making a computer — rather than the brain — the muscle’s new command center.”
To achieve this, sensory nerves would need to fuse fluidly with muscle, and scientists had not yet determined if this was possible. Remarkably, when the team replaced motor nerves in rodent muscle with sensory ones, “the sensory nerves reinnervated the muscles and formed functional synapses. It’s a tremendous discovery,” says Herrera-Arcos.
Sensory neurons not only enabled the use of a digital controller but also helped curb muscle fatigue — increasing fatigue resistance in rodent muscle by 260 percent compared to native muscles. That’s because muscle fatigue depends largely on the diameter of the axons, or cable-like projections that innervate muscles. Motor neuron axons vary greatly in size, and when a motor nerve is electrically stimulated, the largest axons fire first — exhausting the muscle quickly. However, sensory axons are all nearly the same size, so the signal is broadcast more evenly across muscle fibers, avoiding fatigue, explains Herrera-Arcos.
Designing a biohybrid system
The researchers combined all of these elements into the fatigue-resistant biohybrid MNA. By wrapping their actuator around a paralyzed intestine in a rodent, they reinstated the organ’s squeezing motion. They also successfully controlled rodent calf muscles in an experiment designed to mimic residual muscle in human lower-limb amputations. Importantly, the MNA system transmitted sensory signals to the brain. “This suggests that our technology could seamlessly link organs to the brain. For example, we might be able to make a paralyzed stomach relay hunger,” explains Song.
Bringing their MNA to the clinic will require further testing in larger animal models, and eventually, humans. But if it passes the regulatory gauntlet, their system could pave a smoother and safer path toward reviving static organs. Implanting MNAs would require a surgery that is already commonplace in the clinic, the researchers say, and their system might be simpler and safer to implement than mechanical devices or organ transplants that introduce foreign material into the body.
The team is hopeful that their new technology could improve the lives of millions living with organ dysfunctions. “Today’s solutions are mostly synthetic: pacemakers and other mechanical assist devices. A living muscle actuator implanted alongside a weakened organ would be part of the body itself. That is a category of medicine different from anything seen in clinic,” explains Herrera-Arcos.
Song says that skin is of special interest. “Hypothetically, we could wrap MNAs around skin grafts to relay tactile feedback, such as strain or tension, which is currently missing for users of prostheses.” Their technology could even augment virtual reality systems. “The idea is that, if we couple the MNA system to skin and muscles, a person could feel what their virtual avatar is touching even though their real body isn’t moving,” says Song.
“Our research is on the brink of giving new life to various parts and extensions of the body,” adds Herrera-Arcos. “It’s exciting to think that our system could enhance human potential in ways that once only belonged to the realm of science fiction.”
This research was funded in part by the Yang Tan Collective at MIT, K. Lisa Yang Center for Bionics at MIT, Nakos Family Bionics Research Fund at MIT, and the Carl and Ruth Shapiro Foundation.
Millions of students nationwide use text-supplemented audiobooks, learning tools that are thought to help those who struggle with reading keep up in the classroom. A new study from scientists at MIT’s McGovern Institute finds that many students do benefit from the audiobooks, gaining new vocabulary through the stories they hear. But study participants learned significantly more when audiobooks were paired with explicit one-on-one instruction—and this was especially true for students who were poor readers. The group’s findings were reported on March 17 in the journal Developmental Science.
“It is an exciting moment in this ed tech space,” says McGovern investigator John Gabrieli, noting a rapid expansion of online resources meant to support students and educators. “The admirable goal in all this is: can we use technology to help kids progress, especially kids who are behind for one reason or another?” His team’s study—one of few randomized, controlled trials to evaluate educational technology—suggests a nuanced approach is needed as these tools are deployed in the classroom.
“What you can get out of a software package will be great for some people, but not so great for other people. Different people need different levels of support.” – John Gabrieli
Ola Ozernov-Palchik and Halie Olson, scientists in Gabrieli’s lab, launched the audiobook study in 2020, when most schools in the U.S. had closed to slow the spread of Covid-19. The pandemic meant the researchers would not be able to ask families to visit an MIT lab to participate in the study—but it also underscored the urgency of understanding which educational technologies are effective, and for whom.
“What we were really concerned about as the pandemic hit is that the types of gaps that we see widen through the summers—the summer slide that affects poor readers and disadvantaged children to a greater extent—would be amplified by the pandemic,” says Ozernov-Palchik. Many educational technologies purport to ameliorate these gaps. But, Ozernov-Palchik says, “fewer than ten percent of educational technology tools have undergone any type of research. And we know that when we use unproven methods in education, the students who are most vulnerable are the ones who are left further and further behind.”
So the team designed a study that could be done remotely, involving hundreds of third- and fourth-graders around the country. They focused on evaluating the impact of audiobooks on children’s vocabularies, because vocabulary knowledge is so important for educational success. Ozernov-Palchik explains that books are important for exposing children to new words, and when children miss out on that experience because they struggle to read, they can fall further behind in school.
Audiobooks allow students to access similar content in a different way. For their study, the researchers partnered with Learning Ally, an organization that produces audiobooks synchronized with highlighted text on a computer screen, so students can follow along as they listen.
“The idea is they’re going to learn vocabulary implicitly through accessing those linguistically rich materials,” Ozernov-Palchik says. But that idea was untested. In contrast, she says, “we know that really what works in education, especially for the most vulnerable students, is explicit instruction.”
Pandemic learning
Before beginning their study, Ozernov-Palchik and Olson trained a team of online tutors to provide that explicit instruction. The tutors—college students with no educational expertise—learned how to apply proven educational methods to support students’ learning and understanding of challenging new words they encountered in their audiobooks.
Students in the study were randomly assigned to one of three groups for an eight-week intervention. Some were asked to listen to Learning Ally audiobooks for about 90 minutes a week. Another group received one-on-one tutoring twice a week, in addition to listening to audiobooks. A third group, in which students participated in mindfulness practice without using audiobooks or receiving tutoring, served as a control.
A diverse group of students participated, spanning different reading abilities and socioeconomic backgrounds. The study’s remote design—with flexibly scheduled testing and tutoring sessions conducted over Zoom—helped make that possible. “I think the pandemic pushed researchers to rethink how we might use these technologies to make our research more accessible and better represent the people that we’re actually trying to learn about,” says Olson, a postdoctoral scientist who was a graduate student in Gabrieli’s lab.
Testing before and after the intervention showed that overall, students in the audiobooks-only group gained vocabulary. But on their own, the books did not benefit everyone. Children who were poor readers showed no improvement from audiobooks alone, but did make significant gains in vocabulary when the audiobooks were paired with one-on-one instruction. Even good readers learned more vocabulary when they received tutoring, although the differences for this group were less dramatic.
Individualized, one-on-one instruction can be time-consuming, and may not be routinely paired with audiobooks in the classroom. But the researchers say their study shows that effective instruction can be provided remotely, and that it doesn’t require highly trained professionals.
For students from households with lower socioeconomic status, the researchers found no evidence of significant gains, even when audiobooks were paired with explicit instruction—further emphasizing that different students have different needs. “I think this carefully-done study is a note of caution about who benefits from what,” Gabrieli says.
The researchers say their study highlights the value and feasibility of objectively evaluating educational technologies—and that effort will continue. At Boston University, where she is a research assistant professor, Ozernov-Palchik has launched a new initiative to evaluate artificial intelligence-based educational tools’ impacts on student learning.
One of the symptoms of schizophrenia is difficulty incorporating new information about the world. This can lead patients to struggle with making decisions and, eventually, to lose touch with reality.
MIT neuroscientists have now identified a gene mutation that appears to give rise to this type of difficulty. In a study of mice, the researchers found that the mutated gene impairs the function of a brain circuit that is responsible for updating beliefs based on new input.
This mutation, in a gene called grin2a, was originally identified in a large-scale screen of patients with schizophrenia. The new study suggests that drugs targeting this brain circuit could help with some of the cognitive impairments seen in schizophrenia patients.
“If this circuit doesn’t work well, you cannot quickly integrate information,” says Guoping Feng, the James W. and Patricia T. Poitras Professor in Brain and Cognitive Sciences at MIT, a member of the Broad Institute of Harvard and MIT, and the associate director of the McGovern Institute for Brain Research at MIT. “We are quite confident this circuit is one of the mechanisms that contributes to the cognitive impairment that is a major part of the pathology of schizophrenia.”
Feng and Michael Halassa, an associate professor of psychiatry and neuroscience at Tufts University, are the senior authors of the new study, which appears today in Nature Neuroscience. Tingting Zhou, a research scientist at the McGovern Institute, and Yi-Yun Ho, a former MIT postdoc, are the lead authors of the paper.
McGovern Institute Investigator Guoping Feng (right) and research scientist Tingting Zhou (left) in the lab. Photo: Steph Stevens
Adapting to new information
Schizophrenia is known to have a strong genetic component. For the general population, the risk of developing the disease is about 1 percent, but that goes up to 10 percent for those who have a parent or sibling with the disease, and 50 percent for people who have an identical twin with the disease.
Researchers at the Stanley Center for Psychiatric Research at the Broad Institute have identified more than 100 gene variants linked to schizophrenia, using genome-wide association studies. However, many of those variants are located in non-coding regions of the genome, making it difficult to figure out how they might influence development of the disease.
More recently, researchers at the Stanley Center used a different strategy, known as whole-exome sequencing, to reveal gene mutations linked to schizophrenia. This technique sequences only the protein-coding regions of the genome, so it can reveal mutations that are located in known genes.
Using this approach on about 25,000 sequences from people with schizophrenia and 100,000 sequences from control subjects, the researchers identified 10 genes in which mutations significantly increase the risk of developing schizophrenia.
In the new Nature Neuroscience study, Feng and his students created a mouse model with a mutation in one of those genes, grin2a. This gene encodes a protein that forms part of the NMDA receptor — a receptor that is activated by the neurotransmitter glutamate and is often found on the surface of neurons.
Zhou then investigated whether these mice displayed any of the characteristic behaviors seen in schizophrenia patients. These patients show many complex symptoms, including psychoses such as hallucinations and delusions (loss of contact with reality). Those are difficult to study in mice, but it is possible to study related symptoms such as difficulty in interpreting new sensory input.
Over the past two decades, schizophrenia researchers have hypothesized that psychosis may stem from an impaired ability to update beliefs based on new information.
“Our brain can form a prior belief of reality, and when sensory input comes into the brain, a neurotypical brain can use this new input to update the prior belief. This allows us to generate a new belief that’s close to what the reality is,” Zhou says. “What happens in schizophrenia patients is that they weigh too heavily on the prior belief. They don’t use as much current input to update what they believed before, so the new belief is detached from reality.”
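Zhou’s description maps onto a precision-weighted Bayesian update, in which the prior belief and new evidence are averaged according to how much weight each receives. A minimal sketch of that idea (the weights and values below are illustrative, not taken from the study’s model):

```python
# Toy precision-weighted Bayesian update, illustrating the hypothesis
# described above. All numbers are illustrative, not from the study.

def update_belief(prior_mean, prior_precision, observation, obs_precision):
    """Combine a prior belief with new sensory evidence.
    Higher precision = more weight. Returns the updated (posterior) mean."""
    total = prior_precision + obs_precision
    return (prior_precision * prior_mean + obs_precision * observation) / total

prior, evidence = 0.0, 10.0  # belief says 0; the senses say 10

# Balanced weighting: the new belief moves substantially toward reality.
typical = update_belief(prior, prior_precision=1.0,
                        observation=evidence, obs_precision=1.0)

# Over-weighted prior (the hypothesized schizophrenia-like regime):
# the same evidence barely shifts the belief.
rigid = update_belief(prior, prior_precision=9.0,
                      observation=evidence, obs_precision=1.0)

print(typical)  # 5.0 : halfway to the evidence
print(rigid)    # 1.0 : belief stays near the prior, detached from input
```

In this toy regime, inflating the prior’s weight reproduces the pattern Zhou describes: the belief barely updates no matter what the senses report.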
To study this, Zhou designed an experiment that required mice to choose between two levers to press to earn a food reward. One lever was low-reward — mice had to push it six times to get one drop of milk. A high-reward lever dispensed three drops per push.
At the beginning of the study, all of the mice learned to prefer the high-reward lever. However, as the experiment went on, the number of presses required to dispense the higher reward gradually went up, while there were no changes to the low-reward lever.
As the effort required went up, healthy mice started to switch back and forth between the two levers. Once they had to press the high-reward lever around 18 times for three drops of milk, making the effort per drop about the same for each lever, they eventually switched permanently to the low-reward lever. However, mice with a mutation in grin2a showed a different behavior pattern. They spent more time switching back and forth between the two levers, and they made the switch to the low-reward side much later.
“We find that neurotypical animals make adaptive decisions in this changing environment,” Zhou says. “They can switch from the high-reward side to the low-reward side around the equal value point, while for the animals with the mutation, the switch happens much later. Their adaptive decision-making is much slower compared to the wild-type animals.”
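The “equal value point” Zhou mentions falls directly out of the task’s numbers: the low-reward lever always costs six presses per drop, and the high-reward lever matches that cost once it demands 18 presses for its three drops. A quick sketch of the arithmetic (illustrative only, not the study’s analysis code):

```python
# The lever task's cost structure, using the numbers reported in the
# article (illustrative arithmetic, not the study's analysis code).

LOW_PRESSES, LOW_DROPS = 6, 1   # low-reward lever: fixed cost
HIGH_DROPS = 3                  # high-reward lever: rising press count

def cost_per_drop(presses, drops):
    """Effort per unit of reward: presses divided by drops earned."""
    return presses / drops

low_cost = cost_per_drop(LOW_PRESSES, LOW_DROPS)  # always 6 presses/drop

# As the high lever's press requirement climbs, its cost per drop rises;
# the two levers reach equal value at 18 presses (18 / 3 = 6 presses/drop).
for high_presses in (3, 9, 18, 24):
    high_cost = cost_per_drop(high_presses, HIGH_DROPS)
    if high_cost < low_cost:
        verdict = "high lever better"
    elif high_cost == low_cost:
        verdict = "equal value point"
    else:
        verdict = "low lever better"
    print(f"{high_presses:2d} presses: {high_cost:.1f} presses/drop -> {verdict}")
```

A neurotypical animal behaving adaptively should switch levers near the 18-press crossover; the grin2a mutants switched well past it.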
An impaired circuit
Using functional ultrasound imaging and electrical recordings, the researchers found that the brain region affected most by the grin2a mutation was the mediodorsal thalamus. This part of the brain connects with the prefrontal cortex to form a thalamocortical circuit that is responsible for regulating cognitive functions such as executive control and decision-making.
The researchers found that neuronal activity in the mediodorsal thalamus appears to keep track of the changes in value of the two reward options. Additionally, the mice showed different patterns of neural activity depending on which state they were in — either an exploratory state or committed to one side.
The researchers also showed that they could use optogenetics to reverse the behavioral symptoms of the mice with mutated grin2a. They engineered the neurons of the mediodorsal thalamus so that they could be activated by light, and when these neurons were activated, the mice began behaving similarly to mice without the grin2a mutation.
While only a very small percentage of schizophrenia patients have mutations in the grin2a gene, it’s possible that this circuit dysfunction is a converging mechanism of cognitive impairment for a subset of schizophrenia patients with different causes.
Targeting this circuit could offer a way to overcome some of the cognitive impairments seen in schizophrenia patients, the researchers say. To do that, they are now working on identifying targets within the circuit that could be potentially druggable.
The research was funded by the National Institute of Mental Health, the Poitras Center for Psychiatric Disorders Research at MIT, the Yang Tan Collective at MIT, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT, the Stelling Family Research Fund at MIT, the Stanley Center for Psychiatric Research, and the Brain and Behavior Research Foundation.
When patients undergo general anesthesia, doctors can choose among several drugs. Although each of these drugs acts on neurons in different ways, they all lead to the same result: a disruption of the brain’s balance between stability and excitability, according to a new MIT study.
This disruption causes neural activity to become increasingly unstable, until the brain loses consciousness, the researchers found. The discovery of this common mechanism could make it easier to develop new technologies for monitoring patients while they are undergoing anesthesia.
“What’s exciting about that is the possibility of a universal anesthesia-delivery system that can measure this one signal and tell how unconscious you are, regardless of which drugs they’re using in the operating room,” says Earl Miller, the Picower Professor of Neuroscience and a member of MIT’s Picower Institute for Learning and Memory.
Miller, Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience Emery Brown, and their colleagues are now working on an automated control system for delivery of anesthesia drugs, which would measure the brain’s stability using EEG and then automatically adjust the drug dose. This could help doctors ensure that patients stay unconscious throughout surgery without becoming too deeply unconscious, which can have negative side effects following the procedure.
Miller and Ila Fiete, a professor of brain and cognitive sciences, the director of the K. Lisa Yang Integrative Computational Neuroscience Center (ICoN), and a member of MIT’s McGovern Institute for Brain Research, are the senior authors of the new study, which appears today in Cell Reports. MIT graduate student Adam Eisen is the paper’s lead author.
Destabilizing the brain
Exactly how anesthesia drugs cause the brain to lose consciousness has been a longstanding question in neuroscience. In 2024, a study from Miller’s and Fiete’s labs suggested an answer for propofol: the drug disrupts the balance between stability and excitability in the brain.
When someone is awake, their brain is able to maintain this delicate balance, responding to sensory information or other input and then returning to a stable baseline.
“The nervous system has to operate on a knife’s edge in this narrow range of excitability,” Miller says. “It has to be excitable enough so different parts can influence one another, but if it gets too excited it goes off into chaotic activity.”
In that 2024 study, the researchers found that propofol knocks the brain out of this state, known as “dynamic stability.” As doses of the drug increased, the brain took longer and longer to return to its baseline state after responding to new input. This effect became increasingly pronounced until consciousness was lost.
For that study, the researchers devised a computational model that analyzes neural activity recorded from the brain. This technique allowed them to determine how the brain responds to perturbations such as an auditory tone or other sensory input, and how long it takes to return to its baseline stability.
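The team’s computational model is not spelled out here, but its core quantity, the time activity takes to settle back to baseline after a perturbation, can be illustrated with a simple exponential fit. The function and signals below are a toy sketch under that assumption, not the researchers’ actual method:

```python
import numpy as np

def recovery_time_constant(activity, baseline):
    """Estimate how long post-perturbation activity takes to decay back
    to baseline, by fitting an exponential to the deviation.
    A larger tau means slower recovery, i.e., less stable dynamics."""
    deviation = np.abs(np.asarray(activity) - baseline)
    deviation = np.clip(deviation, 1e-9, None)  # avoid log(0)
    t = np.arange(len(deviation))
    # Fit log(deviation) = log(A) - t / tau with a least-squares line.
    slope, _ = np.polyfit(t, np.log(deviation), 1)
    return -1.0 / slope if slope < 0 else np.inf

# Simulated responses to a tone: a bump that decays back to baseline.
t = np.arange(50)
fast = 1.0 + 2.0 * np.exp(-t / 3.0)   # stable brain: returns quickly
slow = 1.0 + 2.0 * np.exp(-t / 15.0)  # destabilized: deviation lingers

print(recovery_time_constant(fast, 1.0))  # close to 3
print(recovery_time_constant(slow, 1.0))  # close to 15
```

In this simplified picture, deepening anesthesia would show up as a steadily growing tau: the brain takes longer and longer to return to baseline after each input.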
In their new study, the researchers used the same technique to measure how the brain responds not only to propofol but also to two other anesthesia drugs — ketamine and dexmedetomidine. Animals were given one of the three drugs while their brain activity was analyzed, including their response to auditory tones.
This study showed that the same destabilization induced by propofol also appears during administration of the other two drugs. This “universal signature” appears even though the three drugs have different molecular mechanisms: propofol binds to GABA receptors, inhibiting neurons that have those receptors; dexmedetomidine blocks the release of norepinephrine; and ketamine blocks NMDA receptors, suppressing neurons with those receptors.
Each of these pathways, the researchers hypothesize, affects the brain’s balance of stability and excitability in a different way, yet each leads to an overall destabilization of that balance.
“All three of these drugs appear to do the exact same thing,” Miller says. “In fact, you could look at the destabilization measure we use and you can’t tell which drug is being applied.”
The researchers now plan to further investigate how each of these drugs may give rise to the same patterns of brain destabilization.
“The molecular mechanisms of ketamine and dexmedetomidine are a bit more involved than propofol mechanisms,” Eisen says. “A future direction is to do a meaningful model of what the biophysical effects of those are and see how that could lead to destabilization.”
Monitoring anesthesia
Now that the researchers have shown that three different anesthesia drugs produce similar destabilization patterns in the brain, they believe that measuring those patterns could offer a valuable way to monitor patients during anesthesia. While anesthesia is overall a very safe procedure, it does carry some risks, especially for very young children and for people over 65.
For adults suffering from dementia, anesthesia can make the condition worse, and it can also exacerbate neuropsychiatric disorders such as depression. These risks are higher if patients go into a deeper state of unconsciousness known as burst suppression.
To help reduce those risks, Miller and Brown, who is also an anesthesiologist at Massachusetts General Hospital, are developing a prototype device that can measure patients’ EEG readings while under anesthesia and adjust their dose accordingly. Currently, doctors monitor patients’ heart rate, blood pressure, and other vital signs during surgery, but these don’t give as accurate a reading of how deeply the patient is unconscious.
“If you can limit people’s exposure to anesthesia, if you give just enough and no more, you can reduce risks across the board,” Miller says.
Working with researchers at Brown University, the MIT team is now planning to run a small clinical trial of their monitoring device with patients undergoing surgery.
The research was funded by the U.S. Office of Naval Research, the National Institute of Mental Health, the Simons Center for the Social Brain, the Freedom Together Foundation, the Picower Institute, the National Science Foundation Computer and Information Science and Engineering Directorate, the Simons Collaboration on the Global Brain, the McGovern Institute, and the National Institutes of Health.
MIT neuroscientists have figured out how the brain is able to focus on a single voice among a cacophony of many, shedding light on a longstanding puzzle in neuroscience known as the cocktail party problem.
This attentional focus becomes necessary when you’re in any crowded environment, such as a cocktail party, with many conversations going on at once. Somehow, your brain is able to follow the voice of the person you’re talking to, despite all the other voices that you’re hearing in the background.
Using a computational model of the auditory system, the MIT team found that amplifying the activity of the neural processing units that respond to features of a target voice, such as its pitch, allows that voice to be boosted to the forefront of attention.
“That simple motif is enough to cause much of the phenotype of human auditory attention to emerge, and the model ends up reproducing a very wide range of human attentional behaviors for sound,” says Josh McDermott, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study.
The findings are consistent with previous studies showing that when people or animals focus on a specific auditory input, neurons in the auditory cortex that respond to features of the target stimulus amplify their activity. This is the first study to show that this extra boost is enough to explain how the brain solves the cocktail party problem.
Ian Griffith, a graduate student in the Harvard Program in Speech and Hearing Biosciences and Technology, who is advised by McDermott, is the lead author of the paper. MIT graduate student R. Preston Hess is also an author of the paper, which appears today in Nature Human Behaviour.
Modeling attention
Neuroscientists have been studying the phenomenon of selective attention for decades. Many studies in people and animals have shown that when focusing on a particular stimulus like the sound of someone’s voice, neurons that are tuned to features of that voice — for example, high pitch — amplify their activity.
When this amplification occurs, neurons’ firing rates are scaled upward, as though multiplied by a number greater than one. It has been proposed that these “multiplicative gains” allow the brain to focus its attention on certain stimuli. Neurons that aren’t tuned to the target feature exhibit a corresponding reduction in activity.
“The responses of neurons tuned to features that are in the target of attention get scaled up,” Griffith says. “Those effects have been known for a very long time, but what’s been unclear is whether that effect is sufficient to explain what happens when you’re trying to pay attention to a voice or selectively attend to one object.”
This question has remained unanswered because computational models of perception haven’t been able to perform attentional tasks such as picking one voice out of many. Such models can readily perform auditory tasks when there is an unambiguous target sound to identify, but they haven’t been able to perform those tasks when other stimuli are competing for their attention.
“None of our models has had the ability that humans have, to be cued to a particular object or a particular sound and then to base their response on that object or that sound. That’s been a real limitation,” McDermott says.
In this study, the MIT team wanted to see if they could train models to perform those types of tasks by enabling the model to produce neuronal activity boosts like those seen in the human brain.
To do that, they began with a neural network that they and other researchers have used to model audition, and then modified the model to allow each of its stages to implement multiplicative gains. Under this architecture, the activation of processing units within the model can be boosted up or down depending on the specific features they represent, such as pitch.
To train the model, on each trial the researchers first fed it a “cue”: an audio clip of the voice that they wanted the model to pay attention to. The unit activations produced by the cue then determined the multiplicative gains that were applied when the model heard a subsequent stimulus.
“Imagine the cue is an excerpt of a voice that has a low pitch. Then, the units in the model that represent low pitch would get multiplied by a large gain, whereas the units that represent high pitch would get attenuated,” Griffith says.
Then, the model was given clips featuring a mix of voices, including the target voice, and asked to identify the second word said by the target voice. The model’s activations in response to this mixture were multiplied by the gains that resulted from the cue stimulus. This was expected to amplify the target voice within the model, but it was not clear whether the effect would be enough to yield human-like attentional behavior.
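A minimal numerical sketch of this cue-then-gain procedure (the pitch-tuned units, activation values, and gain rule below are illustrative stand-ins, not the team’s trained network):

```python
import numpy as np

# Toy feature space: eight units, each responding to a different pitch band.
def unit_activations(pitch_profile):
    """Stand-in for a model layer: activation of each pitch-tuned unit."""
    return np.asarray(pitch_profile, dtype=float)

def gains_from_cue(cue_activations, strength=2.0):
    """Multiplicative gains derived from the cue: units the cue drives are
    scaled up, the rest scaled down (normalized so the mean gain is 1)."""
    g = 1.0 + strength * (cue_activations / cue_activations.max())
    return g / g.mean()

# Cue: an excerpt of a low-pitched target voice (drives low-pitch units).
cue = unit_activations([3.0, 2.5, 1.0, 0.3, 0.1, 0.1, 0.1, 0.1])
gains = gains_from_cue(cue)

# Mixture: the target voice plus a high-pitched distractor voice.
target     = unit_activations([2.8, 2.4, 0.9, 0.2, 0.1, 0.1, 0.1, 0.1])
distractor = unit_activations([0.1, 0.1, 0.1, 0.2, 0.9, 2.4, 2.8, 1.5])
mixture = target + distractor

# Apply the cue-derived gains to the mixture's activations.
attended = gains * mixture

def corr(a, b):
    """Pearson correlation between two activation patterns."""
    return np.corrcoef(a, b)[0, 1]

# The gain-modulated pattern resembles the target more than the raw mix.
print(corr(mixture, target))   # modest
print(corr(attended, target))  # substantially higher
```

Even in this eight-unit caricature, scaling up cue-driven units and scaling down the rest pulls the representation toward the target voice, which is the effect the study tested at full network scale.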
The researchers found that under a variety of conditions, the model performed very similarly to humans, and it tended to make errors similar to those that humans make. For example, like humans, it sometimes made mistakes when trying to focus on one of two male voices or one of two female voices, which are more likely to have similar pitches.
“We did experiments measuring how well people can select voices across a pretty wide range of conditions, and the model reproduces the pattern of behavior pretty well,” Griffith says.
Effects of location
Previous research has shown that in addition to pitch, spatial location is a key factor that helps people focus on a particular voice or sound. The MIT team found that the model also learned to use spatial location for attentional selection, performing better when the target voice was at a different location from distractor voices.
The researchers then used the model to discover new properties of human spatial attention. Using their computational model, the researchers were able to test all possible combinations of target locations and distractor locations, an undertaking that would be hugely time-consuming with human subjects.
“You can use the model as a way to screen large numbers of conditions to look for interesting patterns, and then once you find something interesting, you can go and do the experiment in humans,” McDermott says.
These experiments revealed that the model was much better at correctly selecting the target voice when the target and distractor were at different locations in the horizontal plane. When the sounds were instead separated in the vertical plane, this task became much more difficult. When the researchers ran a similar experiment with human subjects, they observed the same result.
“That was just one example where we were able to use the model as an engine for discovery, which I think is an exciting application for this kind of model,” McDermott says.
Another application the researchers are pursuing is using this kind of model to simulate listening through a cochlear implant. These studies, they hope, could lead to improvements in cochlear implants that could help people with such implants focus their attention more successfully in noisy environments.
The research was funded by the National Institutes of Health.
Today, Stanford University neuroscientist Liqun Luo was announced as the recipient of the 2026 Edward M. Scolnick Prize in Neuroscience by the McGovern Institute for Brain Research at MIT. Luo is the Ann and Bill Swindells Professor in the School of Humanities and Sciences, Professor of Biology, and Professor of Neurobiology by courtesy at Stanford University, and a Howard Hughes Medical Institute Investigator. The McGovern Institute presents the Scolnick Prize annually to recognize outstanding achievements in neuroscience.
“Liqun Luo’s development of first-in-kind genetic tools and detailed, innovative experimentation has succeeded in defining rules that govern how transient cell-cell contacts ultimately establish functional neural circuits in the developing brain,” says McGovern Institute Director Robert Desimone, who is also chair of the selection committee. “Luo’s methodologies for visualizing specific subsets of neurons based on their developmental trajectory or their activity are widely used in the field and have driven the identification of neurons responsible for a range of behaviors, including sleep and social interactions.”
Liqun Luo was born in Shanghai, China, and earned his bachelor’s degree in molecular biology from the University of Science and Technology of China in 1986. He moved to the US for graduate studies at Brandeis University with Kalpana White, where he characterized the homolog of the Alzheimer’s amyloid precursor protein in the fruit fly Drosophila. After receiving a PhD in 1992, he moved to the University of California, San Francisco, for postdoctoral training with Lily Jan and Yuh-Nung Jan, where he published a number of papers on how small GTPase proteins regulate cellular morphology. Luo’s scientific lineage traces back to his hero Seymour Benzer, who is widely credited with founding the field of neurogenetics.
In 1996, Luo joined the faculty at Stanford University and established his own research group to focus on the molecular mechanisms of neuronal morphogenesis in the brain. Luo’s laboratory developed groundbreaking techniques—including Mosaic Analysis with a Repressible Cell Marker (MARCM) in fruit flies and Mosaic Analysis with Double Markers (MADM) in mice—that allowed the labeling and genetic manipulation of individual neurons within otherwise normal brains. These innovations gave researchers the ability to image genetically defined and altered neurons as they grow, connect, and change over time. Luo and his colleagues used these tools to reveal how neurons sculpt their branching structures, prune away unnecessary connections, and find the precise partners they need to form functional circuits. His work illuminated the molecular choreography that ensures each neuron wires into the correct network—an essential step in building circuits for sensation, movement, memory, and emotion. Another impactful innovation from Luo’s group, known as TRAP (Targeted Recombination in Active Populations), allows for the genetic tagging of neurons that are active during specific experiences. This technique has helped reveal how neural populations encode thirst, motivation, and long-term memories.
Most recently, Luo and his group have wholly defined the molecular codes that neurons use to recognize their correct partners in the olfactory system of fruit flies. His research demonstrated that a combinatorial pattern of cell-surface proteins precisely guides neurons to connect to one another and form a functional network. His team then succeeded in genetically altering the molecular cues that govern synaptic connections to rewire a neural circuit and produce a predicted change in the fly’s mating behavior.
Colleagues emphasize that Luo’s influence extends far beyond his own discoveries. Many of the molecular principles he has uncovered in simple model organisms have since proven to be conserved across species, underscoring their fundamental importance. His genetic tracing methods have been adopted by laboratories worldwide and applied not only in neuroscience but also in fields such as cancer biology, where tracing cell lineage is critical. He has also trained a generation of neuroscientists who have gone on to lead major research programs of their own, amplifying his impact across the field.
Luo has received numerous honors, including election to the National Academy of Sciences, the NAS Award in the Neurosciences, the Pradel Research Award, and the Society for Neuroscience’s Award for Education in Neuroscience. He has been a Howard Hughes Medical Institute Investigator since 2005. He is also the author of Principles of Neurobiology, a widely used textbook that has been translated into Chinese, Japanese, and Italian.
The Scolnick Prize recognizes discoveries that advance the understanding of the brain and its disorders. Luo’s work exemplifies this mission, providing tools and conceptual frameworks for understanding how neural circuits form and are refined to become functional, and how mutations disrupt these processes. As neuroscience enters an era defined by increasingly precise control over brain circuits, Liqun Luo’s contributions stand as both enabling and visionary.
The McGovern Institute will award the Scolnick Prize to Luo on June 16, 2026. At 4:00 pm he will deliver a lecture titled “Wiring Specificity of Neural Circuits” to be followed by a reception at the McGovern Institute, 43 Vassar Street (building 46, room 3002) in Cambridge. The event is free and open to the public.