MIT cognitive scientists reveal why some sentences stand out from others

“You still had to prove yourself.”

“Every cloud has a blue lining!”

Which of those sentences are you most likely to remember a few minutes from now? If you guessed the second, you’re probably correct.

According to a new study from MIT cognitive scientists, sentences that stick in your mind longer are those that have distinctive meanings, making them stand out from sentences you’ve previously seen. They found that meaning, more than any other trait, is what makes a sentence memorable.

Greta Tuckute, a former graduate student in the Fedorenko lab. Photo: Caitlin Cunningham

“One might have thought that when you remember sentences, maybe it’s all about the visual features of the sentence, but we found that that was not the case. A big contribution of this paper is pinning down that it is the meaning-related space that makes sentences memorable,” says Greta Tuckute PhD ’25, who is now a research fellow at Harvard University’s Kempner Institute.

The findings support the hypothesis that sentences with distinctive meanings — like “Does olive oil work for tanning?” — are stored in brain space that is not cluttered with sentences that mean almost the same thing. Sentences with similar meanings end up densely packed together and are therefore more difficult to recognize confidently later on, the researchers believe.

“When you encode sentences that have a similar meaning, there’s feature overlap in that space. Therefore, a particular sentence you’ve encoded is not linked to a unique set of features, but rather to a whole bunch of features that may overlap with other sentences,” says Evelina Fedorenko, an MIT associate professor of brain and cognitive sciences (BCS), a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Tuckute and Thomas Clark, an MIT graduate student, are the lead authors of the paper, which appears in the Journal of Memory and Language. MIT graduate student Bryan Medina is also an author.

Distinctive sentences

What makes certain things more memorable than others is a longstanding question in cognitive science and neuroscience. In a 2011 study, Aude Oliva, now a senior research scientist at MIT and MIT director of the MIT-IBM Watson AI Lab, showed that not all items are created equal: Some types of images are much easier to remember than others, and people are remarkably consistent in what images they remember best.

In that study, Oliva and her colleagues found that, in general, images with people in them are the most memorable, followed by images of human-scale spaces and close-ups of objects. Least memorable are natural landscapes.

As a follow-up to that study, Fedorenko and Oliva, along with Ted Gibson, another faculty member in BCS, teamed up to determine if words also vary in their memorability. In a study published earlier this year, co-led by Tuckute and Kyle Mahowald, a former PhD student in BCS, the researchers found that the most memorable words are those that have the most distinctive meanings.

Words were categorized as more distinctive if they have a single meaning and few or no synonyms — words like “pineapple” or “avalanche,” which were found to be very memorable. On the other hand, words that can have multiple meanings, such as “light,” or words that have many synonyms, like “happy,” were more difficult for people to recognize accurately.

In the new study, the researchers expanded their scope to analyze the memorability of sentences. Just like words, some sentences have very distinctive meanings, while others communicate similar information in slightly different ways.

To do the study, the researchers assembled a collection of 2,500 sentences drawn from publicly available databases that compile text from novels, news articles, movie dialogues, and other sources. Each sentence that they chose contained exactly six words.

The researchers then presented a random selection of about 1,000 of these sentences to each study participant, including repeats of some sentences. Each of the 500 participants in the study was asked to press a button when they saw a sentence that they remembered seeing earlier.

The most memorable sentences — the ones where participants accurately and quickly indicated that they had seen them before — included strings such as “Homer Simpson is hungry, very hungry,” and “These mosquitoes are — well, guinea pigs.”

Those memorable sentences overlapped significantly with the sentences judged to have the most distinctive meanings, as estimated through the high-dimensional vector space of a large language model (LLM) known as Sentence BERT. That model generates sentence-level meaning representations, which can be used for tasks like judging the semantic similarity between two sentences. It provided the researchers with a distinctness score for each sentence, based on its semantic similarity to the other sentences.
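To make the idea concrete, here is a minimal sketch of how a distinctness score of this kind could be computed with the sentence-transformers library. The model name and the metric (one minus the mean cosine similarity to the other sentences) are illustrative assumptions, not the study's exact method.

```python
# Minimal sketch, assuming the sentence-transformers library and an
# off-the-shelf embedding model; the distinctness metric here is an
# illustrative stand-in for whatever the study actually used.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "You still had to prove yourself.",
    "Every cloud has a blue lining!",
    "Does olive oil work for tanning?",
    "Homer Simpson is hungry, very hungry.",
]

# Each sentence becomes a point in a high-dimensional meaning space.
embeddings = model.encode(sentences, convert_to_tensor=True)
sim = util.cos_sim(embeddings, embeddings)  # pairwise semantic similarity

for i, sentence in enumerate(sentences):
    # Lower average similarity to the rest of the set implies a more
    # distinctive meaning, which the study links to higher memorability.
    others = [sim[i][j].item() for j in range(len(sentences)) if j != i]
    print(f"distinctness {1 - sum(others) / len(others):.3f}  {sentence}")
```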

The researchers also evaluated the sentences using a model that predicts memorability based on the average memorability of the individual words in the sentence. This model performed fairly well at predicting overall sentence memorability, but not as well as Sentence BERT. This suggests that the meaning of a sentence as a whole — above and beyond the contributions from individual words — determines how memorable it will be, the researchers say.

Noisy memories

While cognitive scientists have long hypothesized that the brain’s memory banks have a limited capacity, the findings of the new study support an alternative hypothesis that would help to explain how the brain can continue forming new memories without losing old ones.

This alternative, known as the noisy representation hypothesis, says that when the brain encodes a new memory, be it an image, a word, or a sentence, it is represented in a noisy way — that is, this representation is not identical to the stimulus, and some information is lost. For example, for an image, you may not encode the exact viewing angle at which an object is shown, and for a sentence, you may not remember the exact construction used.

Under this theory, a new sentence would be encoded in a similar part of the memory space as sentences that carry a similar meaning, whether they were encountered recently or sometime across a lifetime of language experience. This jumbling of similar meanings together increases the amount of noise and can make it much harder, later on, to remember the exact sentence you have seen before.

“The representation is gradually going to accumulate some noise. As a result, when you see an image or a sentence for a second time, your accuracy at judging whether you’ve seen it before will be affected, and it’ll be less than 100 percent in most cases,” Clark says.

However, if a sentence has a unique meaning that is encoded in a less densely crowded space, it will be easier to pick out later on.

“Your memory may still be noisy, but your ability to make judgments based on the representations is less affected by that noise because the representation is so distinctive to begin with,” Clark says.

The researchers now plan to study whether other features of sentences, such as more vivid and descriptive language, might also contribute to making them more memorable, and how the language system may interact with the hippocampal memory structures during the encoding and retrieval of memories.

The research was funded, in part, by the National Institutes of Health, the McGovern Institute, the Department of Brain and Cognitive Sciences, the Simons Center for the Social Brain, and the MIT Quest Initiative for Intelligence.

Musicians’ enhanced attention

In a world full of competing sounds, we often have to filter out a lot of noise to hear what’s most important. This critical skill may come more easily for people with musical training, according to scientists at MIT’s McGovern Institute who used brain imaging to follow what happens when people try to focus their attention on certain sounds.

When Cassia Low Manting, a postdoctoral researcher working in the labs of McGovern Institute Investigators John Gabrieli and Dimitrios Pantazis, asked people to focus on a particular melody while another melody played at the same time, individuals with musical backgrounds were, unsurprisingly, better able to follow the target tune. An analysis of study participants’ brain activity suggests this advantage arises because musical training sharpens neural mechanisms that amplify the sounds they want to listen to while turning down distractions. “This points to the idea that we can train this selective attention ability,” Manting says.

The research team, including senior author Daniel Lundqvist at the Karolinska Institute in Sweden, reported their findings September 17, 2025, in the journal Science Advances. Manting, who is now at the Karolinska Institute, notes that the research is part of an ongoing collaboration between the two institutions.

Overcoming challenges

Participants in the study had vastly different backgrounds when it came to music. Some were professional musicians with deep training and experience, while others struggled to differentiate between the two tunes they were played, despite each one’s distinct pitch. This disparity allowed the researchers to explore how the brain’s capacity for attention might change with experience. “Musicians are very fun to study because their brains have been morphed in ways based on their training,” Manting says. “It’s a nice model to study these training effects.”

Still, the researchers had significant challenges to overcome. It has been hard to study how the brain manages auditory attention, because when researchers use neuroimaging to monitor brain activity, they see the brain’s response to all sounds: those that the listener cares most about, as well as those the listener is trying to ignore. It is usually difficult to figure out which brain signals were triggered by which sounds.

Manting and her colleagues overcame this challenge with a method called frequency tagging. Rather than playing the melodies in their experiments at a constant volume, the volume of each melody oscillated, rising and falling with a particular frequency. Each melody had its own frequency, creating detectable patterns in the brain signals that responded to it. “When you play these two sounds simultaneously to the subject and you record the brain signal, you can say, this 39-Hertz activity corresponds to the lower pitch sound and the 43-Hertz activity corresponds specifically to the higher pitch sound,” Manting explains. “It is very clean and very clear.”
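The logic of frequency tagging can be sketched in a few lines of Python: modulate each stream's volume at its own rate, then look for power at those rates in the recorded signal. The synthetic tones and the rectify-and-FFT readout below are illustrative stand-ins for the study's actual MEG analysis.

```python
# Minimal sketch of frequency tagging on a synthetic signal; the tones,
# modulation scheme, and readout are illustrative assumptions.
import numpy as np

fs = 1000                      # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)  # two seconds of signal

# Two "melodies" stand in as pure tones at different pitches.
low = np.sin(2 * np.pi * 220 * t)
high = np.sin(2 * np.pi * 330 * t)

# Tag each stream by oscillating its volume at its own rate
# (39 Hz and 43 Hz, the frequencies quoted in the study).
mix = ((1 + np.sin(2 * np.pi * 39 * t)) * low
       + (1 + np.sin(2 * np.pi * 43 * t)) * high)

# Crude stand-in for a recorded response: rectify to recover the volume
# envelope, then look for power at each tag frequency.
spectrum = np.abs(np.fft.rfft(np.abs(mix)))
freqs = np.fft.rfftfreq(len(mix), 1 / fs)
for tag in (39, 43):
    print(f"power near {tag} Hz: {spectrum[np.argmin(np.abs(freqs - tag))]:.0f}")
```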

When they paired frequency tagging with magnetoencephalography, a noninvasive method of monitoring brain activity, the team was able to track how their study participants’ brains responded to each of two melodies during their experiments. While the two tunes played, subjects were instructed to follow either the higher pitched or the lower pitched melody. When the music stopped, they were asked about the final notes of the target tune: did they rise or did they fall? The researchers could make this task harder by making the two tunes closer together in pitch, as well as by altering the timing of the notes.

Manting used a survey that asked about musical experience to score each participant’s musicality, and this measure had an obvious effect on task performance: The more musical a person was, the more successful they were at following the tune they had been asked to track.

To look for differences in brain activity that might explain this, the research team developed a new machine-learning approach to analyze their data. They used it to tease apart what was happening in the brain as participants focused on the target tune—even, in some cases, when the notes of the distracting tune played at the exact same time.

Top-down vs bottom-up attention

What they found was a clear separation of brain activity associated with two kinds of attention, known as top-down and bottom-up attention. Manting explains that top-down attention is goal-oriented, involving a conscious focus—the kind of attention listeners called on as they followed the target tune. Bottom-up attention, on the other hand, is triggered by the nature of the sound itself. A fire alarm would be expected to trigger this kind of attention, both with its volume and its suddenness. The distracting tune in the team’s experiments triggered activity associated with bottom-up attention—but more so in some people than in others.

“The more musical someone is, the better they are at focusing their top-down selective attention, and the less the effect of bottom-up attention is,” Manting explains.

Manting expects that musicians use their heightened capacity for top-down attention in other situations, as well. For example, they might be better than others at following a conversation in a room filled with background chatter. “I would put my bet on it that there is a high chance that they will be great at zooming into sounds,” she says.

She wonders, however, if one kind of distraction might actually be harder for a musician to filter out: the sound of their own instrument. Manting herself plays both the piano and the Chinese harp, and she says hearing those instruments is “like someone calling my name.” It’s one of many questions about how musical training affects cognition that she plans to explore in her future work.

3 Questions: On humanizing scientists

Alan Lightman has spent much of his authorial career writing about scientific discovery, the boundaries of knowledge, and remarkable findings from the world of research. His latest book “The Shape of Wonder,” co-authored with the lauded English astrophysicist Martin Rees and published this month by Penguin Random House, offers both profiles of scientists and an examination of scientific methods, humanizing researchers and making an affirmative case for the value of their work. Lightman is a professor of the practice of the humanities in MIT’s Comparative Media Studies/Writing Program; Rees is a fellow of Trinity College at Cambridge University and the UK’s Astronomer Royal. Lightman talked with MIT News about the new volume.

Q: What is your new book about?

A: The book tries to show who scientists are and how they think. Martin and I wrote it to address several problems. One is mistrust in scientists and their institutions, which is a worldwide problem. We saw this problem illustrated during the pandemic. That mistrust I think is associated with a belief by some people that scientists and their institutions are part of the elite establishment, a belief that is one feature of the populist movement worldwide. In recent years there’s been considerable misinformation about science. And, many people don’t know who scientists are.

Another thing, which is very important, is a lack of understanding about evidence-based critical thinking. When scientists get new data and information, their theories and recommendations change. But this process, part of the scientific method, is not well-understood outside of science. Those are issues we address in the book. We have profiles of a number of scientists and show them as real people, most of whom work for the benefit of society or out of intellectual curiosity, rather than being driven by political or financial interests. We try to humanize scientists while showing how they think.

Q: You profile some well-known figures in the book, as well as some lesser-known scientists. Who are some of the people you feature in it?

A: One person is a young neuroscientist, Lace Riggs, who works at the McGovern Institute for Brain Research at MIT. She grew up in difficult circumstances in southern California, decided to go into science, got a PhD in neuroscience, and works as a postdoc researching the effect of different compounds on the brain and how that might lead to drugs to combat certain mental illnesses. Another very interesting person is Magdalena Lenda, an ecologist in Poland. When she was growing up, her father sold fish for a living, and took her out in the countryside and would identify plants, which got her interested in ecology. She works on stopping invasive species. The intention is to talk about people’s lives and interests, and show them as full people.

While humanizing scientists in the book, we show how critical thinking works in science. By the way, critical thinking is not owned by scientists. Accountants, doctors, and many others use critical thinking. I’ve talked to my car mechanic about what kinds of problems come into the shop. People don’t know what causes the check engine light to go on — the catalytic converter, corroded spark plugs, etc. — so mechanics often start from the simplest and cheapest possibilities and go to the next potential problem, down the list. That’s a perfect example of critical thinking. In science, it is checking your ideas and hypotheses against data, then updating them if needed.

Q: Are there common threads linking together the many scientists you feature in the book?

A: There are common threads, but also no single scientific stereotype. There’s a wide range of personalities in the sciences. But one common thread is that all the scientists I know are passionate about what they’re doing. They’re working for the benefit of society, and out of sheer intellectual curiosity. That links all the people in the book, as well as other scientists I’ve known. I wish more people in America would realize this: Scientists are working for their overall benefit. Science is a great success story. Thanks to scientific advances, since 1900 the expected lifespan in the U.S. has increased from a little more than 45 years to almost 80 years, in just a century, largely due to our ability to combat diseases. What’s more vital than your lifespan?

This book is just a drop in the bucket in terms of what needs to be done. But we all do what we can.

International neuroscience collaboration unveils comprehensive cellular-resolution map of brain activity

The first comprehensive map of mouse brain activity has been unveiled by a large international collaboration of neuroscientists. Researchers from the International Brain Laboratory (IBL), including McGovern Investigator Ila Fiete, published their findings today in two papers in Nature, revealing insights into how decision-making unfolds across the entire brain in mice at single-cell resolution. This brain-wide activity map challenges the traditional hierarchical view of information processing in the brain and shows that decision-making is distributed across many regions in a highly coordinated way.

“This is the first time anyone has produced a full, brain-wide map of the activity of single neurons during decision-making,” explains IBL co-founder Alexandre Pouget. “The scale is unprecedented as we recorded from over half a million neurons across mice in 12 labs, covering 279 brain areas, which together represent 95% of the mouse brain volume. The decision-making activity, and particularly reward, lit up the brain like a Christmas tree,” adds Pouget, who is also a group leader at the University of Geneva.

Brain-wide map showing 75,000 analyzed neurons lighting up during different stages of decision-making. At the beginning of the trial, the activity is quiet. Then it builds up in the visual areas at the back of the brain, followed by a rise in activity spreading across the brain as evidence accumulates towards a decision. Next, motor areas light up as there is movement onset and finally there is a spike in activity everywhere in the brain as the animal is rewarded.

Modeling decision-making

The brain map was made possible by a major international collaboration of neuroscientists from multiple universities, including MIT. Researchers across 12 labs used state-of-the-art silicon electrodes, called Neuropixels probes, to record simultaneously from large numbers of neurons while mice carried out a decision-making task.

McGovern Associate Investigator Ila Fiete. Photo: Caitlin Cunningham

“Participating in the International Brain Laboratory has added new ways for our group to contribute to science,” says Fiete, who is also a professor of brain and cognitive sciences and director of the K. Lisa Yang ICoN Center at MIT. “Our lab has helped standardize methods to analyze and generate robust conclusions from data. As computational neuroscientists interested in building models of how the brain works, access to brainwide recordings is incredible: the traditional approach of recording from one or a few brain areas limited our ability to build and test theories, resulting in fragmented models. Now we have the delightful but formidable task to make sense of how all parts of the brain coordinate to perform a behavior. Surprisingly, having a full view of the brain leads to simplifications in the models of decision making.”

The labs collected data from mice performing a decision-making task with sensory, motor, and cognitive components. In the task, a mouse sits in front of a screen and a light appears on the left or right side. If the mouse then responds by moving a small wheel in the correct direction, it receives a reward.

In some trials, the light is so faint that the animal must guess which way to turn the wheel, for which it can use prior knowledge: the light tends to appear more frequently on one side for a number of trials, before the high-frequency side switches. Well-trained mice learn to use this information to help them make correct guesses. These challenging trials therefore allowed the researchers to study how prior expectations influence perception and decision-making.
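For concreteness, here is a minimal sketch of a block structure like the one described, in which the stimulus side is biased toward one side for a stretch of trials before the bias flips. The 80/20 bias and the block lengths are illustrative assumptions, not the exact task parameters.

```python
# Minimal sketch of the task's block structure; numbers are illustrative.
import random

random.seed(1)

def generate_trials(n_trials=400, bias=0.8, block_len=(20, 60)):
    trials, favored = [], "left"
    remaining = random.randint(*block_len)
    for _ in range(n_trials):
        if remaining == 0:                      # block over: flip the bias
            favored = "right" if favored == "left" else "left"
            remaining = random.randint(*block_len)
        p_left = bias if favored == "left" else 1 - bias
        side = "left" if random.random() < p_left else "right"
        trials.append((side, favored))
        remaining -= 1
    return trials

# On faint-stimulus trials, a well-trained mouse can fall back on the
# block's favored side as a prior and still guess correctly most of the time.
print(generate_trials()[:5])
```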

Brain-wide results

The first paper, “A brain-wide map of neural activity during complex behaviour,” showed that decision-making signals are surprisingly distributed across the brain, not localized to specific regions. This adds brain-wide evidence to a growing number of studies that challenge the traditional hierarchical model of brain function, and it emphasizes that there is constant communication across brain areas during decision-making, movement onset, and even reward. This means that neuroscientists will need to take a more holistic, brain-wide approach when studying complex behaviors in the future.

Flat maps of the mouse brain showing which areas have significant changes in activity during each of three task intervals. Credit: Michael Schartner & International Brain Laboratory

“The unprecedented breadth of our recordings pulls back the curtain on how the entire brain performs the whole arc of sensory processing, cognitive decision-making, and movement generation,” says Fiete. “Structuring a collaboration that collects a large standardized dataset which single labs could not assemble is a revolutionary new direction for systems neuroscience, initiating the field into the hyper-collaborative mode that has contributed to leaps forward in particle physics and human genetics. Beyond our own conclusions, the dataset and associated technologies, which were released much earlier as part of the IBL mission, have already become a massively used resource for the entire neuroscience community.”

The second paper, “Brain-wide representations of prior information,” showed that prior expectations, our beliefs about what is likely to happen based on our recent experience, are encoded throughout the brain. Surprisingly, these expectations are found not only in cognitive areas, but also in brain areas that process sensory information and control actions. For example, expectations are even encoded in early sensory areas such as the thalamus, the brain’s first relay for visual input from the eye. This supports the view that the brain acts as a prediction machine, with expectations encoded across multiple brain structures playing a central role in guiding behavioral responses. These findings could have implications for understanding conditions such as schizophrenia and autism, which are thought to be caused by differences in the way expectations are updated in the brain.

“Much remains to be unpacked: if it is possible to find a signal in a brain area, does it mean that this area is generating the signal, or simply reflecting a signal generated somewhere else? How strongly is our perception of the world shaped by our expectations? Now we can generate some quantitative answers and begin the next phase of experiments to learn about the origins of the expectation signals by intervening to modulate their activity,” says Fiete.

Looking ahead, the team at IBL plans to expand beyond its initial focus on decision-making to explore a broader range of neuroscience questions. With renewed funding in hand, IBL aims to expand its research scope and continue to support large-scale, standardized experiments.

New model of collaborative neuroscience

Officially launched in 2017, IBL introduced a new model of collaboration in neuroscience that uses a standardized set of tools and data processing pipelines shared across multiple labs, enabling the collection of massive datasets while ensuring data alignment and reproducibility. This approach to democratize and accelerate science draws inspiration from large-scale collaborations in physics and biology, such as CERN and the Human Genome Project.

All data from these studies, along with detailed specifications of the tools and protocols used for data collection, are openly accessible to the global scientific community for further analysis and research. Summaries of these resources can be viewed and downloaded on the IBL website under the sections: Data, Tools, Protocols.

This research was supported by grants from Wellcome (209558 and 216324), the Simons Foundation, the National Institutes of Health (NIH U19NS12371601), the National Science Foundation (NSF 1707398), the Gatsby Charitable Foundation (GAT3708), and by the Max Planck Society and the Humboldt Foundation.


Searching for self

This story also appears in the Fall 2025 issue of BrainScan

___

The question of how we know ourselves might seem the subject of philosophers, but it is just as much a matter of biology. As modern neuroscientists obtain an increasingly sophisticated understanding of how the brain generates emotions, responds to the external world, and learns from experience, some researchers are returning to a central question: How do we know our experiences, emotions, and physical sensations belong to us?

Curiosity about how the brain generates our sense of self has been a driving force for the research of McGovern Investigator Fan Wang. Following that curiosity has drawn Wang into diverse studies, exploring the origins of pain and the mechanisms we use to control our movements.

“We cannot pinpoint a set of active neurons and say that’s the sense of self. That still remains a mystery,” says Wang, who is also a professor of brain and cognitive sciences and co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT. But she and other neuroscientists are drilling down into different functions of the brain that together might generate our awareness of ourselves.

McGovern Investigator Fan Wang (right) with research scientist Vincent Prevosto, who studies brain regions implicated in whisker movement. Photo: Steph Stevens

Wang, who teaches the undergraduate course “Neurobiology of Self,” explains that there are lots of ways to think about our sense of self, which are probably deeply integrated in the brain. Some are mostly about our physical bodies: How do we experience touch? How do we understand where we are in space, or recognize the boundary between ourselves and the rest of the world? Some consider more internal sensations, like how we experience pain or hunger. Emotion is also key to our sense of self: How do we know that anger or joy are our own, and why do these states change the way our bodies feel?

Wang can trace her initial interest in the brain’s sense of self to work she did as a graduate student in Richard Axel’s lab at Columbia University. The lab had identified receptors expressed by sensory neurons in the nose that detect odorous substances. Wang and others discovered the pathways that information about these smells takes to the brain, and how the brain distinguishes one smell from another.

Who is the “knower” of this information? “The answer,” Wang says, “is ‘I’ or ‘me.’ But understanding where I get the sense of self and how that is constructed, is what drives me to do neuroscience.”

Mechanisms of movement

In her lab at the McGovern Institute, Wang is studying how the brain controls the body’s movements, which she sees as closely tied to the awareness of our physical selves. “The reason I think I am in my body is because I can control my movement. I generate the movement. I cannot control your movement,” says Wang. “Volitional movement gives us a sense of agency, and this sense of agency resembles the sense of self.” For the mice that the group studies, one crucial type of movement comes from the whiskers, which the animals depend on as they explore their environments. Wang’s group has traced the neural circuitry that controls the whiskers’ rhythmic back-and-forth, which is initiated in the brainstem, where many of the body’s most vital functions are controlled. Wang describes the simple circuit as an oscillator, or a self-generated loop.

A maximum projection image showing tracked whiskers on the mouse muzzle. The right (control) side shows the back-and-forth rhythmic sweeping of the whiskers, while on the experimental side, where the whisking oscillator neurons are silenced, the whiskers move very little. Image: Wang Lab

Once it’s started, “the movement can go on unless some other signals stop it,” she says. The movement the circuit generates is simple but voluntary, and can be fine-tuned based on the sensory feedback the whiskers relay back to the brain. They’ve also been investigating how mice move the larynx to generate the squeaks and calls they use to communicate. These intentional movements must be coordinated with the ongoing cycles of respiration since we produce normal sounds only during expiration. Wang’s team has found neurons in the brainstem that generate vocalization-specific movements, and also discovered how respiration-controlling neural circuits can override them, ensuring that breathing is prioritized.

Wang says understanding the circuitry that controls these simple movements sets the stage for figuring out how the brain modifies activity in those circuits to create more complex, intentional movements. “That brings me closer to understanding where this volition is generated — and closer to this sense of self,” she says.

Emotional pain

Still, she knows that volitional movements — even those generated in response to perceptions of the environment — do not, on their own, define a sense of self. As a counterexample, she looks to self-driving cars: “There’s sensory information coming into the central computer, which then generates a motor output — where to drive, where to turn, where to stop. But none of us think a Waymo taxi has a sense of self.”

Wang says when she pondered the ways in which AI-powered cars lack a sense of self, she began thinking about emotions and pain. “If the self-driving Waymo crashes, it will not feel pain,” she says. “But if we hurt ourselves, we will feel pain. And we will hate that, and then we’ll learn.” So her lab is also exploring how the nervous system generates pain perception, including the emotional response that it evokes.

Ensembles of neurons in the amygdala activated by general anesthesia. Image: Fan Wang

In both humans and mice, pain causes emotional suffering that can be recognized and measured through changes in body functions like heart rate and blood pressure. With funding from the K. Lisa Yang Brain-Body Center at MIT, Wang’s lab is carefully tracking these involuntary, or autonomic, functions to gain a more complete understanding of pain’s emotional impact. This approach has helped clarify the role of pain-suppressing neurons in the brain’s amygdala — an important emotion-processing center — that Wang’s team discovered in 2020. When researchers selectively activate those cells in mice, the animals’ behavior makes it clear that the neurons are suppressing pain. Now, the group has learned that activating these neurons suppresses the autonomic response to pain.

Wang says there’s hope that modulating pain’s emotional response might be a way to treat chronic pain in patients. She explains that some patients with damage to another one of the brain’s emotional centers, the cingulate cortex, feel painful stimuli, but experience them as merely intense sensations. That suggests that it might be possible to modulate the emotional response to pain to eliminate patients’ suffering, without blocking the protective information that pain can provide.

The team has also been focusing on another set of anesthesia-activated neurons, which they have found suppress anxiety. When anxiety-suppressing neurons are activated in mice, the animals’ heart rates slow and they become more willing to explore bright, open spaces. Another anxiety-associated measure — heart rate variability — increases. Wang explains that this change is particularly significant: “If you have persistent low heart rate variability, especially in veterans, that is a very good predictor for anxiety developing into depression in the future,” she says.

The team’s findings, which suggest that changes in autonomic functions may themselves relieve anxiety, point toward potential new targets for anti-anxiety therapies. And by highlighting the connection between emotion and bodily responses, they offer more clues about our sense of self. “These neurons are now changing some high-level concept about anxiety,” Wang points out.

That link between emotion and body seems to Wang to be key to the sense of self. The big questions remain unanswered, but that simply stokes her curiosity. “I can be aware of my bodily responses: I am aware of ‘I am anxious’ or ‘I am in pain.’ I can see the pathways from which stimuli go into these nervous systems and come back down to the body and control the response. But I still don’t know who is the person — the knower,” she says. “I haven’t found it, so I’m going to keep looking.”

New gift expands mental illness studies at Poitras Center for Psychiatric Disorders Research

One in every eight people—970 million globally—lives with mental illness, according to the World Health Organization, with depression and anxiety being the most common mental health conditions worldwide. Existing therapies for complex psychiatric disorders like depression, anxiety, and schizophrenia have limitations, and federal funding to address these shortcomings is growing increasingly uncertain.

James and Patricia Poitras at an event co-hosted by the McGovern Institute and Autism Speaks. Photo: Justin Knight

Patricia and James Poitras ’63 have committed $8 million to the Poitras Center for Psychiatric Disorders Research to launch pioneering research initiatives aimed at uncovering the brain basis of major mental illness and accelerating the development of novel treatments.

“Federal funding rarely supports the kind of bold, early-stage research that has the potential to transform our understanding of psychiatric illness. Pat and I want to help fill that gap—giving researchers the freedom to follow their most promising leads, even when the path forward isn’t guaranteed,” says James Poitras, who is chair of the McGovern Institute Board.

Their latest gift builds upon their legacy of philanthropic support for psychiatric disorders research at MIT, which now exceeds $46 million.

“With deep gratitude for Jim and Pat’s visionary support, we are eager to launch a bold set of studies aimed at unraveling the neural and cognitive underpinnings of major mental illnesses,” says Robert Desimone, director of the McGovern Institute, home to the Poitras Center. “Together, these projects represent a powerful step toward transforming how we understand and treat mental illness.”

A legacy of support

Soon after joining the McGovern Institute Leadership Board in 2006, the Poitrases made a $20 million commitment to establish the Poitras Center for Psychiatric Disorders Research at MIT. The center’s goal, to improve human health by addressing the root causes of complex psychiatric disorders, is deeply personal to them both.

“We had decided many years ago that our philanthropic efforts would be directed towards psychiatric research. We could not have imagined then that this perfect synergy between research at MIT’s McGovern Institute and our own philanthropic goals would develop,” recalls Patricia.

The center supports research at the McGovern Institute and collaborative projects with institutions such as the Broad Institute, McLean Hospital, Mass General Brigham and other clinical research centers. Since its establishment in 2007, the center has enabled advances in psychiatric research including the development of a machine learning “risk calculator” for bipolar disorder, the use of brain imaging to predict treatment outcomes for anxiety, and studies demonstrating that mindfulness can improve mental health in adolescents.

Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT, delivers a lecture at the Poitras Center’s 10th anniversary celebration in 2017. Photo: Justin Knight

For the past decade, the Poitrases have also fueled breakthroughs in McGovern Investigator Feng Zhang’s lab, backing the invention of powerful CRISPR systems and other molecular tools that are transforming biology and medicine. Their support has enabled the Zhang team to engineer new delivery vehicles for gene therapy, including vehicles capable of carrying genetic payloads that were once out of reach. The lab has also advanced innovative RNA-guided gene engineering tools such as NovaIscB, published in Nature Biotechnology in May 2025. These revolutionary genome editing and delivery technologies hold promise for the next generation of therapies needed for serious psychiatric illness.

In addition to fueling research in the center, the Poitras family has gifted two endowed professorships—the James and Patricia Poitras Professor of Neuroscience at MIT, currently held by Feng Zhang, and the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences at MIT, held by Guoping Feng—and an annual postdoctoral fellowship at the McGovern Institute.

New initiatives at the Poitras Center

The Poitras family’s latest commitment to the Poitras Center will launch an ambitious set of new projects that bring together neuroscientists, clinicians, and computational experts to probe the underpinnings of complex psychiatric disorders including schizophrenia, anxiety, and depression. These efforts reflect the center’s core mission: to speed scientific discovery and therapeutic innovation in the field of psychiatric brain disorders research.

McGovern cognitive neuroscientists Evelina Fedorenko PhD ‘07 and Nancy Kanwisher ’80, PhD ’86, the Walter A. Rosenblith Professor of Cognitive Neuroscience—in collaboration with psychiatrist Ann Shinn of McLean Hospital—will explore how altered inner speech and reasoning contribute to the symptoms of schizophrenia. They will collect functional MRI data from individuals diagnosed with schizophrenia and matched controls as they perform reasoning tasks. The goal is to identify the brain activity patterns that underlie impaired reasoning in schizophrenia, a core cognitive disruption in the disorder.

Patricia Poitras (center) with McGovern Investigators Nancy Kanwisher ’80, PhD ’86 (left) and Martha Constantine-Paton (right) at the Poitras Center’s 10th anniversary celebration in 2017. Photo: Justin Knight

A complementary line of investigation will focus on the role of inner speech—the “voice in our head” that shapes thought and self-awareness. The team will conduct a large-scale online behavioral study of neurotypical individuals to analyze how inner speech characteristics correlate with schizophrenia-spectrum traits. This will be followed by neuroimaging work comparing brain architecture among individuals with strong or weak inner voices and people with schizophrenia, with the aim of discovering neural markers linked to self-talk and disrupted cognition.

A different project led by McGovern neuroscientist Mark Harnett and 2024–2026 Poitras Center Postdoctoral Fellow Cynthia Rais focuses on how ketamine—an increasingly used antidepressant—alters brain circuits to produce rapid and sustained improvements in mood. Despite its clinical success, ketamine’s mechanisms of action remain poorly understood. The Harnett lab is using sophisticated tools to track how ketamine affects synaptic communication and large-scale brain network dynamics, particularly in models of treatment-resistant depression. By mapping these changes at both the cellular and systems levels, the team hopes to reveal how ketamine lifts mood so quickly—and inform the development of safer, longer-lasting antidepressants.

Guoping Feng is leveraging a new animal model of depression to uncover the brain circuits that drive major depressive disorder. The new animal model provides a powerful system for studying the intricacies of mood regulation. Feng’s team is using state-of-the-art molecular tools to identify the specific genes and cell types involved in this circuit, with the goal of developing targeted treatments that can fine-tune these emotional pathways.

“This is one of the most promising models we have for understanding depression at a mechanistic level,” says Feng, who is also associate director of the McGovern Institute. “It gives us a clear target for future therapies.”

Another novel approach to treating mood disorders comes from the lab of James DiCarlo, the Peter de Florez Professor of Neuroscience at MIT, who is exploring the brain’s visual-emotional interface as a therapeutic tool for anxiety. The amygdala, a key emotional center in the brain, is heavily influenced by visual input. DiCarlo’s lab is using advanced computational models to design visual scenes that may subtly shift emotional processing in the brain—essentially using sight to regulate mood. Unlike traditional therapies, this strategy could offer a noninvasive, drug-free option for individuals suffering from anxiety.

Together, these projects exemplify the kind of interdisciplinary, high-impact research that the Poitras Center was established to support.

“Mental illness affects not just individuals, but entire families who often struggle in silence and uncertainty,” adds Patricia. “Our hope is that Poitras Center scientists will continue to make important advancements and spark novel treatments for complex mental health disorders and most of all, give families living with these conditions a renewed sense of hope for the future.”

Learning from punishment

From toddlers’ timeouts to criminals’ prison sentences, punishment reinforces social norms, making it known that an offender has done something unacceptable. At least, that is usually the intent—but the strategy can backfire. When a punishment is perceived as too harsh, observers can be left with the impression that an authority figure is motivated by something other than justice.

It can be hard to predict what people will take away from a particular punishment, because everyone makes their own inferences not just about the acceptability of the act that led to the punishment, but also the legitimacy of the authority who imposed it. A new computational model developed by scientists at MIT’s McGovern Institute makes sense of these complicated cognitive processes, recreating the ways people learn from punishment and revealing how their reasoning is shaped by their prior beliefs.

Their work, reported August 4 in the journal PNAS, explains how a single punishment can send different messages to different people and even strengthen the opposing viewpoints of groups who hold different opinions about authorities or social norms.

Modeling punishment

“The key intuition in this model is the fact that you have to be evaluating simultaneously both the norm to be learned and the authority who’s punishing,” says McGovern Investigator and John W. Jarve Professor of Brain and Cognitive Sciences Rebecca Saxe, who led the research. “One really important consequence of that is even where nobody disagrees about the facts—everybody knows what action happened, who punished it, and what they did to punish it—different observers of the same situation could come to different conclusions.”

For example, she says, a child who is sent to timeout after biting a sibling might interpret the event differently than the parent. One might see the punishment as proportional and important, teaching the child not to bite. But if the biting, to the toddler, seemed a reasonable tactic in the midst of a squabble, the punishment might be seen as unfair, and the lesson will be lost.

People draw on their own knowledge and opinions when they evaluate these situations—but to study how the brain interprets punishment, Saxe and graduate student Setayesh Radkani wanted to take those personal ideas out of the equation. They needed a clear understanding of the beliefs that people held when they observed a punishment, so they could learn how different kinds of information altered their perceptions. So Radkani set up scenarios in imaginary villages where authorities punished individuals for actions that had no obvious analog in the real world.

Graduate student Setayesh Radkani uses tools from psychology, cognitive neuroscience and machine learning to understand the social and moral mind. Photo: Caitlin Cunningham

Participants observed these scenarios in a series of experiments, with different information offered in each one. In some cases, for example, participants were told that the person being punished was either an ally or competitor of the authority, whereas in other cases, the authority’s possible bias was left ambiguous.

“That gives us a really controlled setup to vary prior beliefs,” Radkani explains. “We could ask what people learn from observing punitive decisions with different severities, in response to acts that vary in their level of wrongness, by authorities that vary in their level of different motives.”

For each scenario, participants were asked to evaluate four factors: how much the authority figure cared about justice; the selfishness of the authority; the authority’s bias for or against the individual being punished; and the wrongness of the punished act. The research team asked these questions when participants were first introduced to the hypothetical society, then tracked how their responses changed after they observed the punishment. Across the scenarios, participants’ initial beliefs about the authority and the wrongness of the act shaped the extent to which those beliefs shifted after they observed the punishment.

Radkani was able to replicate these nuanced interpretations using a cognitive model framed around an idea that Saxe’s team has long used to think about how people interpret the actions of others. That is, to make inferences about others’ intentions and beliefs, we assume that people choose actions that they expect will help them achieve their goals.

To apply that concept to the punishment scenarios, Radkani developed a model that evaluates the meaning of a punishment (an action aimed at achieving a goal of the authority) by considering the harm associated with that punishment; its costs or benefits to the authority; and its proportionality to the violation. By assessing these factors, along with prior beliefs about the authority and the punished act, the model was able to predict people’s responses to the hypothetical punishment scenarios, supporting the idea that people use a similar mental model. “You need to have them consider those things, or you can’t make sense of how people understand punishment when they observe it,” Saxe says.
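A toy version of that inference can be written as a joint Bayesian update over the act's wrongness and the authority's justice motive. The grids, the likelihood, and the noise term below are illustrative assumptions, not the paper's actual model.

```python
# Toy Bayesian sketch of the inference described above: after seeing a
# punishment of a given severity, an observer jointly updates beliefs
# about the act's wrongness and the authority's motives. Illustrative only.
import numpy as np

wrongness = np.linspace(0, 1, 21)   # how wrong the punished act was
justice = np.linspace(0, 1, 21)     # authority's concern for justice
prior = np.ones((21, 21)) / 21**2   # uniform prior (the study varied this)

def likelihood(severity, w, j):
    # A justice-motivated authority punishes in proportion to wrongness;
    # an unjust authority punishes harshly regardless of the act.
    expected = j * w + (1 - j) * 0.8
    return np.exp(-((severity - expected) ** 2) / 0.02)

severity = 0.9  # a harsh punishment is observed
post = prior * np.array([[likelihood(severity, w, j) for j in justice]
                         for w in wrongness])
post /= post.sum()

print("E[wrongness]      =", (post.sum(axis=1) * wrongness).sum())
print("E[justice motive] =", (post.sum(axis=0) * justice).sum())
```

In this toy setup, the same harsh punishment admits two explanations: a very wrong act punished proportionally, or an unjust authority punishing indiscriminately. Which one the posterior favors depends on the prior, echoing the study's point that observers with different starting beliefs can rationally diverge.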

Even though the team designed their experiments to preclude preconceived ideas about the people and actions in their imaginary villages, not everyone drew the same conclusions from the punishments they observed. Saxe’s group found that participants’ general attitudes toward authority influenced their interpretation of events. Those with more authoritarian attitudes—assessed through a standard survey—tended to judge punished acts as more wrong and authorities as more motivated by justice than other observers.

“If we differ from other people, there’s a knee-jerk tendency to say, ‘either they have different evidence from us, or they’re crazy,’” Saxe says. Instead, she says, “It’s part of the way humans think about each other’s actions.”

“When a group of people who start out with different prior beliefs get shared evidence, they will not end up necessarily with shared beliefs. That’s true even if everybody is behaving rationally,” says Saxe.

This way of thinking also means that the same action can simultaneously strengthen opposing viewpoints. The Saxe lab’s modeling and experiments showed that when those viewpoints shape individuals’ interpretations of future punishments, the groups’ opinions will continue to diverge. For instance, a punishment that seems too harsh to a group who suspects an authority is biased can make that group even more skeptical of the authority’s future actions. Meanwhile, people who see the same punishment as fair and the authority as just will be more likely to conclude that the authority figure’s future actions are also just. “You will get a vicious cycle of polarization, staying and actually spreading to new things,” says Radkani.

The researchers say their findings point toward strategies for communicating social norms through punishment. “It is exactly sensible in our model to do everything you can to make your action look like it’s coming out of a place of care for the long-term outcome of this individual, and that it’s proportional to the norm violation they did,” Saxe says. “That is your best shot at getting a punishment interpreted pedagogically, rather than as evidence that you’re a bully.”

Nevertheless, she says that won’t always be enough. “If the beliefs are strong the other way, it’s very hard to punish and still sustain a belief that you were motivated by justice.”

This study was funded, in part, by the Patrick J. McGovern Foundation.

How the brain distinguishes oozing fluids from solid objects

Imagine a ball bouncing down a flight of stairs. Now think about a cascade of water flowing down those same stairs. The ball and the water behave very differently, and it turns out that your brain has different regions for processing visual information about each type of physical matter.

In a new study, MIT neuroscientists have identified parts of the brain’s visual cortex that respond preferentially when you look at “things” — that is, rigid or deformable objects like a bouncing ball. Other brain regions are more activated when looking at “stuff” — liquids or granular substances such as sand.

This distinction, which has never been seen in the brain before, may help the brain plan how to interact with different kinds of physical materials, the researchers say.

“When you’re looking at some fluid or gooey stuff, you engage with it in a different way than you do with a rigid object. With a rigid object, you might pick it up or grasp it, whereas with fluid or gooey stuff, you probably are going to have to use a tool to deal with it,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience; a member of the McGovern Institute for Brain Research and MIT’s Center for Brains, Minds, and Machines; and the senior author of the study.

MIT postdoc Vivian Paulun, who is joining the faculty of the University of Wisconsin at Madison this fall, is the lead author of the paper, which appears today in the journal Current Biology. RT Pramod, an MIT postdoc, and Josh Tenenbaum, an MIT professor of brain and cognitive sciences, are also authors of the study.

Stuff vs. things

Decades of brain imaging studies, including early work by Kanwisher, have revealed regions in the brain’s ventral visual pathway that are involved in recognizing the shapes of 3D objects, including an area called the lateral occipital complex (LOC). A region in the brain’s dorsal visual pathway, known as the frontoparietal physics network (FPN), analyzes the physical properties of materials, such as mass or stability.

Although scientists have learned a great deal about how these pathways respond to different features of objects, the vast majority of these studies have been done with solid objects, or “things.”

“Nobody has asked how we perceive what we call ‘stuff’ — that is, liquids or sand, honey, water, all sorts of gooey things. And so we decided to study that,” Paulun says.

These gooey materials behave very differently from solids. They flow rather than bounce, and interacting with them usually requires containers and tools such as spoons. The researchers wondered if these physical features might require the brain to devote specialized regions to interpreting them.

To explore how the brain processes these materials, Paulun used a software program designed for visual effects artists to create more than 100 video clips showing different types of things or stuff interacting with the physical environment. In these videos, the materials could be seen sloshing or tumbling inside a transparent box, being dropped onto another object, or bouncing or flowing down a set of stairs.

The researchers used functional magnetic resonance imaging (fMRI) to scan the visual cortex of people as they watched the videos. They found that both the LOC and the FPN respond to “things” and “stuff,” but that each pathway has distinctive subregions that respond more strongly to one or the other.

“Both the ventral and the dorsal visual pathway seem to have this subdivision, with one part responding more strongly to ‘things,’ and the other responding more strongly to ‘stuff,’” Paulun says. “We haven’t seen this before because nobody has asked that before.”

Roland Fleming, a professor of experimental psychology at Justus Liebig University Giessen, described the findings as a “major breakthrough in the scientific understanding of how our brains represent the physical properties of our surrounding world.”

“We’ve known the distinction exists for a long time psychologically, but this is the first time that it’s been really mapped onto separate cortical structures in the brain. Now we can investigate the different computations that the distinct brain regions use to process and represent objects and materials,” says Fleming, who was not involved in the study.

Physical interactions

The findings suggest that the brain may have different ways of representing these two categories of material, similar to the artificial physics engines that are used to create video game graphics. These engines usually represent a 3D object as a mesh, while fluids are represented as sets of particles that can be rearranged.

“The interesting hypothesis that we can draw from this is that maybe the brain, similar to artificial game engines, has separate computations for representing and simulating ‘stuff’ and ‘things.’ And that would be something to test in the future,” Paulun says.
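The analogy can be made concrete with two toy data structures, a fixed mesh for a rigid "thing" and a particle set for "stuff"; this is purely illustrative, not a claim about any particular engine or about the brain's implementation.

```python
# Toy illustration of the two representations: a rigid object as a fixed
# mesh whose shape never changes, and a fluid as freely rearranging
# particles. Purely illustrative.
from dataclasses import dataclass, field

@dataclass
class RigidThing:
    vertices: list[tuple]   # points fixed relative to the object
    faces: list[tuple]      # triangles as index triples into vertices
    # Simulation updates only the object's overall position and rotation.

@dataclass
class FluidStuff:
    positions: list[tuple] = field(default_factory=list)
    velocities: list[tuple] = field(default_factory=list)
    # Every particle moves independently, so the material can flow,
    # split, and merge in ways a fixed mesh cannot.

plate = RigidThing(vertices=[(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)],
                   faces=[(0, 1, 2), (0, 2, 3)])
water = FluidStuff(positions=[(0.1, 0.2, 0.0), (0.3, 0.1, 0.0)],
                   velocities=[(0.0, 0.0, 0.0), (0.0, 0.0, 0.0)])
```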

McGovern Institute postdoc Vivian Paulun, who is joining the faculty of the University of Wisconsin at Madison in the fall of 2025, is the lead author of the “things vs. stuff” paper, which appears today in the journal Current Biology. Photo: Steph Stevens

The researchers also hypothesize that these regions may have developed to help the brain understand important distinctions that allow it to plan how to interact with the physical world. To further explore this possibility, the researchers plan to study whether the areas involved in processing rigid objects are also active when a brain circuit involved in planning to grasp objects is active.

They also hope to look at whether any of the areas within the FPN correlate with the processing of more specific features of materials, such as the viscosity of liquids or the bounciness of objects. And in the LOC, they plan to study how the brain represents changes in the shape of fluids and deformable substances.

The research was funded by the German Research Foundation, the U.S. National Institutes of Health, and a U.S. National Science Foundation grant to the Center for Brains, Minds, and Machines.

 

Adolescents’ willingness to explore is shaped by socioeconomic status

Exploration is essential to learning—and a new study from scientists at MIT’s McGovern Institute suggests that students may be less willing to explore if they come from a low socioeconomic environment. The study, which focused on adolescents and was published July 9, 2025, in the journal Nature Communications, shows how differences in learning strategies might contribute to socioeconomic disparities in academic achievement.

Students with low socioeconomic status (SES)—a measure that takes into account parents’ income levels and educational attainment—tend to lag behind their higher-SES peers academically. Limited resources at home can restrict access to educational tools and experiences, likely contributing to these disparities. But the new study, led by McGovern Institute Investigator John Gabrieli, shows that students from low-SES backgrounds may approach learning differently, too.

“We often think about external factors when we think about socioeconomic differences in learning, but kids’ mindsets and internal factors can also play a role,” says Alexandra Decker, a postdoctoral fellow in Gabrieli’s lab who ran the study. Understanding such differences can help educators develop strategies to reduce disparities and help all students succeed.

The value of exploration

Exploration is a vital part of development, particularly during adolescence. By trying new things and testing limits, children begin to find their way in the world, discovering the subjects and experiences that motivate them. That’s important for obtaining new knowledge, both in and out of school. “There’s a lot of research suggesting that exploration is a really important mechanism that children use for learning,” Decker says. “Exploring their environment really broadly and making mistakes helps them get the feedback that they need for learning.”

Because the outcomes of exploration are unknown, this way of interacting with the world involves risk. “If you try something new, the outcome is uncertain, and it could lead to a bad outcome before things get better. You might lose out, at least in the short term,” Decker says.

At school, students can explore in a variety of ways, such as by asking questions in class or taking courses in unfamiliar subjects. Both are opportunities to learn something new, though they may seem less safe than sitting quietly and sticking to more comfortable coursework. Decker points out that this kind of exploration might feel particularly risky when students feel they lack the resources to compensate if things don’t go well.

“If you’re in an environment that’s really enriching, you have resources to compensate for challenges that might be accrued through exploring. If you take a new course and you struggle, you can use your resources to get a tutor and overcome these challenges. Your environment can support exploration and its costs,” she says. “But if you’re in an environment where you don’t have resources to compensate for bad outcomes, you might not take that course that could lead to unknown outcomes.”

Risk-benefit analysis

To investigate the relationship between SES and exploration, Gabrieli’s team had students play a computer game in which they earned points for pumping up balloons as much as possible without popping them. The most successful strategy was to explore the limits early on by pumping the first balloons until they popped, thereby learning when to stop with future balloons. A less exploratory approach could keep all the balloons intact, but earn fewer points over the course of the game.
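The incentive structure of this kind of task can be made concrete with a toy simulation, shown below in Python. It is a sketch under invented assumptions: the pop thresholds, point values, and the two strategies are illustrative, not the parameters of the study’s actual game.

```python
import random

def simulate(strategy: str, n_balloons: int = 30, seed: int = 0) -> int:
    """Toy balloon task: each pump earns a point, but a balloon that
    pops forfeits its points. All numbers here are invented."""
    rng = random.Random(seed)
    total = 0
    learned_limit = None
    for _ in range(n_balloons):
        threshold = rng.randint(50, 70)    # unknown pop point for this balloon
        if strategy == "cautious":
            pumps = 20                     # never risks a pop, banks little
        else:  # "exploratory": sacrifice early balloons to learn the limit
            pumps = 200 if learned_limit is None else int(learned_limit * 0.8)
        if pumps >= threshold:
            learned_limit = threshold      # popped: earn nothing, but learn
        else:
            total += pumps                 # stopped in time: bank the points
    return total

# The exploratory player loses some early balloons but out-earns the
# cautious one over the full game:
print(simulate("cautious"), simulate("exploratory"))
```

The asymmetry mirrors the study’s logic: early losses are the price of information, and a player who is strongly averse to those losses settles for a safer but lower-scoring strategy.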

The students who participated in the study were between the ages of 12 and 14 and came from families with a wide range of SES. Those from lower-SES backgrounds were less likely to explore in the balloon task, and they earned fewer points in the game as a result. What’s more, the researchers found a relationship between students’ exploration in the game and their real-world academic performance. Those who explored the least had lower grades than students who explored more. For students at lower-SES levels, reduced exploration also correlated with lower scores on standardized tests of academic skills.

The researchers took a closer look at the data to investigate why some students explored more than others in their game. Their analysis indicated that students who were reluctant to explore were more strongly motivated by avoiding losses than students who had pushed the limits as they pumped their balloons.

The finding suggests that potential losses might be particularly distressing to lower-SES students, says Gabrieli, who is also the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT. Decker adds that students from less affluent backgrounds may have found losses to be more consequential than they are for students whose families have more resources, so it makes sense that those students might take greater pains to avoid them.

This is not the first time Gabrieli’s group has found evidence of differences in the ways students from different socioeconomic backgrounds make decisions. In a brain imaging study published last year, they found that the brains of adolescents from low-SES backgrounds respond less to rewards than the brains of their higher-SES peers. “How you think about the world—in terms of what’s rewarding, risks worth taking or not taking—seems strongly influenced by the environment that you’re growing up in,” he says.

Decker notes that regardless of SES, students in the study were generally more willing to explore when they had experienced more recent successes in the task. This finding, along with what the team learned about how loss aversion curtails exploration, suggests strategies that educators might use to encourage more exploration in the classroom. “Low-stakes opportunities for kids to engage in exploratory risk-taking with positive feedback could go a long way to helping kids feel more comfortable exploring,” Decker says.

 

A bionic knee integrated into tissue can restore natural movement

MIT researchers have developed a new bionic knee that can help people with above-the-knee amputations walk faster, climb stairs, and avoid obstacles more easily than they could with a traditional prosthesis.

Unlike prostheses in which the residual limb sits within a socket, the new system is directly integrated with the user’s muscle and bone tissue. This enables greater stability and gives the user much more control over the movement of the prosthesis.

Participants in a small clinical study also reported that the limb felt more like a part of their own body, compared with the reports of people who had more traditional above-the-knee amputations.

“A prosthesis that’s tissue-integrated — anchored to the bone and directly controlled by the nervous system — is not merely a lifeless, separate device, but rather a system that is carefully integrated into human physiology, offering a greater level of prosthetic embodiment. It’s not simply a tool that the human employs, but rather an integral part of self,” says Hugh Herr, a professor of media arts and sciences, co-director of the K. Lisa Yang Center for Bionics at MIT, an associate member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.

Tony Shu PhD ’24 is the lead author of the paper, which appears today in Science.

A subject with the osseointegrated mechanoneural prosthesis overcomes an obstacle placed in their walking path by volitionally flexing and extending their phantom knee joint.

Better control

Over the past several years, Herr’s lab has been working on new prostheses that can extract neural information from muscles left behind after an amputation and use that information to help guide a prosthetic limb.

During a traditional amputation, pairs of muscles that take turns stretching and contracting are usually severed, disrupting the normal agonist-antagonist relationship of the muscles. This disruption makes it very difficult for the nervous system to sense the position of a muscle and how fast it’s contracting.

Using the new surgical approach developed by Herr and his colleagues, known as the agonist-antagonist myoneural interface (AMI), muscle pairs are reconnected during surgery so that they still dynamically communicate with each other within the residual limb. This sensory feedback helps the wearer of the prosthesis decide how to move the limb, and also generates electrical signals that can be used to control the prosthetic limb.

In a 2024 study, the researchers showed that people with amputations below the knee who received the AMI surgery were able to walk faster and navigate around obstacles much more naturally than people with traditional below-the-knee amputations.

In the new study, the researchers extended the approach to better serve people with amputations above the knee. They wanted to create a system that could not only read out signals from the muscles using AMI but also be integrated into the bone, offering more stability and better sensory feedback.

To achieve that, the researchers developed a procedure to insert a titanium rod into the residual femur bone at the amputation site. This implant allows for better mechanical control and load bearing than a traditional prosthesis. Additionally, the implant contains 16 wires that collect information from electrodes located on the AMI muscles inside the body, which enables more accurate transduction of the signals coming from the muscles.

This bone-integrated system, known as e-OPRA, transmits AMI signals to a new robotic controller developed specifically for this study. The controller uses this information to calculate the torque necessary to move the prosthesis the way that the user wants it to move.
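As a rough illustration of the control idea, the activations of an agonist-antagonist pair can be mapped to a knee torque command, with the difference between them setting direction and magnitude. This is a minimal sketch under invented assumptions (the gain, torque limit, and normalization are made up); the study’s actual controller and signal processing are not described here and are certainly more sophisticated.

```python
def knee_torque_command(agonist: float, antagonist: float,
                        gain: float = 40.0, max_torque: float = 60.0) -> float:
    """Map a normalized (0..1) muscle-signal pair to a torque command.

    Illustrative only: the net activation sets the direction and size of
    the commanded torque, clipped to the actuator's limit.
    """
    drive = agonist - antagonist           # net flexion/extension intent
    torque = gain * drive
    return max(-max_torque, min(max_torque, torque))

# Strong intent in one direction with little co-contraction:
print(knee_torque_command(0.7, 0.1))       # -> 24.0
```

Presumably the real controller performs a far richer version of this mapping continuously, in real time, on the signals arriving through the implant’s 16 wires.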

The new bionic knee is directly integrated with the user’s muscle and bone tissue (bottom row right), enabling greater stability and giving the user much more control over the movement of the prosthesis. Image courtesy of the researchers

“All parts work together to better get information into and out of the body and better interface mechanically with the device,” Shu says. “We’re directly loading the skeleton, which is the part of the body that’s supposed to be loaded, as opposed to using sockets, which is uncomfortable and can lead to frequent skin infections.”

In this study, two subjects received the combined AMI and e-OPRA system, known as an osseointegrated mechanoneural prosthesis (OMP). These users were compared with eight who had the AMI surgery but not the e-OPRA implant, and seven users who had neither AMI nor e-OPRA. All subjects were tested using an experimental powered knee prosthesis developed by the lab.

The researchers measured the participants’ ability to perform several types of tasks, including bending the knee to a specified angle, climbing stairs, and stepping over obstacles. In most of these tasks, users with the OMP system performed better than the subjects who had the AMI surgery but not the e-OPRA implant, and much better than users of traditional prostheses.

“This paper represents the fulfillment of a vision that the scientific community has had for a long time — the implementation and demonstration of a fully physiologically integrated, volitionally controlled robotic leg,” says Michael Goldfarb, a professor of mechanical engineering and director of the Center for Intelligent Mechatronics at Vanderbilt University, who was not involved in the research. “This is really difficult work, and the authors deserve tremendous credit for their efforts in realizing such a challenging goal.”

A sense of embodiment

In addition to testing gait and other movements, the researchers also asked questions designed to evaluate participants’ sense of embodiment — that is, to what extent their prosthetic limb felt like a part of their own body.

Questions included whether the patients felt as if they had two legs, if they felt as if the prosthesis was part of their body, and if they felt in control of the prosthesis. Each question was designed to evaluate the participants’ feelings of agency, ownership of the device, and body representation.

The researchers found that as the study went on, the two participants with the OMP showed much greater increases in their feelings of agency and ownership than the other subjects.

“Another reason this paper is significant is that it looks into these embodiment questions and it shows large improvements in that sensation of embodiment,” Herr says. “No matter how sophisticated you make the AI systems of a robotic prosthesis, it’s still going to feel like a tool to the user, like an external device. But with this tissue-integrated approach, when you ask the human user what is their body, the more it’s integrated, the more they’re going to say the prosthesis is actually part of self.”

The AMI procedure is now done routinely on patients with below-the-knee amputations at Brigham and Women’s Hospital, and Herr expects it will soon become the standard for above-the-knee amputations as well. The combined OMP system will need larger clinical trials to receive FDA approval for commercial use, which Herr expects may take about five years.

The research was funded by the Yang Tan Collective and DARPA.