A bionic knee integrated into tissue can restore natural movement

MIT researchers have developed a new bionic knee that can help people with above-the-knee amputations walk faster, climb stairs, and avoid obstacles more easily than they could with a traditional prosthesis.

Unlike prostheses in which the residual limb sits within a socket, the new system is directly integrated with the user’s muscle and bone tissue. This enables greater stability and gives the user much more control over the movement of the prosthesis.

Participants in a small clinical study also reported that the limb felt more like a part of their own body, compared with reports from people who had more traditional above-the-knee amputations.

A subject with the osseointegrated mechanoneural prosthesis overcomes an obstacle placed in their walking path by volitionally flexing and extending their phantom knee joint.

“A prosthesis that’s tissue-integrated — anchored to the bone and directly controlled by the nervous system — is not merely a lifeless, separate device, but rather a system that is carefully integrated into human physiology, offering a greater level of prosthetic embodiment. It’s not simply a tool that the human employs, but rather an integral part of self,” says Hugh Herr, a professor of media arts and sciences, co-director of the K. Lisa Yang Center for Bionics at MIT, an associate member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.

Tony Shu PhD ’24 is the lead author of the paper, which appears today in Science.

Better control

Over the past several years, Herr’s lab has been working on new prostheses that can extract neural information from muscles left behind after an amputation and use that information to help guide a prosthetic limb.

During a traditional amputation, pairs of muscles that take turns stretching and contracting are usually severed, disrupting the normal agonist-antagonist relationship of the muscles. This disruption makes it very difficult for the nervous system to sense the position of a muscle and how fast it’s contracting.

Using the new surgical approach developed by Herr and his colleagues, known as the agonist-antagonist myoneural interface (AMI), muscle pairs are reconnected during surgery so that they still dynamically communicate with each other within the residual limb. This sensory feedback helps the wearer of the prosthesis to decide how to move the limb, and also generates electrical signals that can be used to control the prosthetic limb.

In a 2024 study, the researchers showed that people with amputations below the knee who received the AMI surgery were able to walk faster and navigate around obstacles much more naturally than people with traditional below-the-knee amputations.

In the new study, the researchers extended the approach to better serve people with amputations above the knee. They wanted to create a system that could not only read out signals from the muscles using AMI but also be integrated into the bone, offering more stability and better sensory feedback.

To achieve that, the researchers developed a procedure to insert a titanium rod into the residual femur bone at the amputation site. This implant allows for better mechanical control and load bearing than a traditional prosthesis. Additionally, the implant contains 16 wires that collect information from electrodes located on the AMI muscles inside the body, which enables more accurate transduction of the signals coming from the muscles.

This bone-integrated system, known as e-OPRA, transmits AMI signals to a new robotic controller developed specifically for this study. The controller uses this information to calculate the torque necessary to move the prosthesis the way that the user wants it to move.
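In control terms, that means turning muscle activity into a commanded joint torque. Below is a minimal sketch of one way such a mapping can work, using a proportional myoelectric scheme; the class, signal processing, and gain values are hypothetical illustrations, not the study's actual control law.

```python
# Minimal sketch of a proportional myoelectric torque controller.
# All names, gains, and filtering choices here are hypothetical:
# illustrative of the general idea, not the controller in the study.

import numpy as np

class MyoelectricKneeController:
    """Maps agonist/antagonist AMI muscle activity to a knee torque command."""

    def __init__(self, gain_nm=40.0, damping_nms=1.5):
        self.gain_nm = gain_nm          # N*m per unit of net activation
        self.damping_nms = damping_nms  # N*m*s/rad, stabilizing damping term

    @staticmethod
    def envelope(raw_emg, window=50):
        """Rectify and smooth a raw EMG trace into an activation envelope."""
        rectified = np.abs(raw_emg)
        kernel = np.ones(window) / window
        return np.convolve(rectified, kernel, mode="same")

    def torque(self, extensor_act, flexor_act, knee_velocity):
        """Command torque from the difference of antagonist activations."""
        net_activation = extensor_act - flexor_act
        return self.gain_nm * net_activation - self.damping_nms * knee_velocity
```

In a real device, a loop like this would run continuously on the embedded controller, with hard limits on torque and joint velocity for safety.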

The new system is directly integrated with the user’s muscle and bone tissue, enabling greater stability and giving the user much more control over the movement of the prosthesis. Image courtesy of the researchers

“All parts work together to better get information into and out of the body and better interface mechanically with the device,” Shu says. “We’re directly loading the skeleton, which is the part of the body that’s supposed to be loaded, as opposed to using sockets, which is uncomfortable and can lead to frequent skin infections.”

In this study, two subjects received the combined AMI and e-OPRA system, known as an osseointegrated mechanoneural prosthesis (OMP). They were compared with eight subjects who had the AMI surgery but not the e-OPRA implant, and seven who had neither AMI nor e-OPRA. All subjects used the same experimental powered knee prosthesis developed by the lab.

The researchers measured the participants’ ability to perform several types of tasks, including bending the knee to a specified angle, climbing stairs, and stepping over obstacles. In most of these tasks, users with the OMP system performed better than the subjects who had the AMI surgery but not the e-OPRA implant, and much better than users of traditional prostheses.

“This paper represents the fulfillment of a vision that the scientific community has had for a long time — the implementation and demonstration of a fully physiologically integrated, volitionally controlled robotic leg,” says Michael Goldfarb, a professor of mechanical engineering and director of the Center for Intelligent Mechatronics at Vanderbilt University, who was not involved in the research. “This is really difficult work, and the authors deserve tremendous credit for their efforts in realizing such a challenging goal.”

A sense of embodiment

In addition to testing gait and other movements, the researchers also asked questions designed to evaluate participants’ sense of embodiment — that is, to what extent their prosthetic limb felt like a part of their own body.

Questions included whether the patients felt as if they had two legs, if they felt as if the prosthesis was part of their body, and if they felt in control of the prosthesis. Each question was designed to evaluate the participants’ feelings of agency, ownership of the device, and body representation.

The researchers found that as the study went on, the two participants with the OMP showed much greater increases in their feelings of agency and ownership than the other subjects.

“Another reason this paper is significant is that it looks into these embodiment questions and it shows large improvements in that sensation of embodiment,” Herr says. “No matter how sophisticated you make the AI systems of a robotic prosthesis, it’s still going to feel like a tool to the user, like an external device. But with this tissue-integrated approach, when you ask the human user what is their body, the more it’s integrated, the more they’re going to say the prosthesis is actually part of self.”

The AMI procedure is now done routinely on patients with below-the-knee amputations at Brigham and Women’s Hospital, and Herr expects it will soon become the standard for above-the-knee amputations as well. The combined OMP system will need larger clinical trials to receive FDA approval for commercial use, which Herr expects may take about five years.

The research was funded by the Yang Tan Collective and DARPA.

MIT’s McGovern Institute and Department of Brain and Cognitive Sciences welcome new faculty member Sven Dorkenwald

The McGovern Institute and the Department of Brain and Cognitive Sciences are pleased to announce the appointment of Sven Dorkenwald as an assistant professor starting in January 2026. A trailblazer in the field of computational neuroscience, Dorkenwald is recognized for his leadership in connectomics—an emerging discipline focused on reconstructing and analyzing neural circuitry at unprecedented scale and detail. 

“We are thrilled to welcome Sven to MIT,” says McGovern Institute Director Robert Desimone. “He brings visionary science and a collaborative spirit to a rapidly advancing area of brain and cognitive sciences, and his appointment strengthens MIT’s position at the forefront of brain research.”

Dorkenwald’s research is driven by a bold vision: to develop and apply cutting-edge computational methods that reveal how brain circuits are organized and how they give rise to complex computations. His innovative work has led to transformative advances in the reconstruction of connectomes (detailed neural maps) from nanometer-scale electron microscopy images. He has championed open team science and data sharing and played a central role in producing the first connectome of an entire fruit fly brain—a groundbreaking achievement that is reshaping our understanding of sensory processing and brain circuit function. 

“Sven is a rising leader in computational neuroscience who has already made significant contributions toward advancing our understanding of the brain,” says Michale Fee, the Glen V. and Phyllis F. Dorflinger Professor of Neuroscience and head of the Department of Brain and Cognitive Sciences. “He brings a combination of technical expertise, a collaborative mindset, and a strong commitment to open science that will be invaluable to our department. I’m pleased to welcome him to our community and look forward to the impact he will have.”

Dorkenwald earned his BS in physics in 2014 and MS in computer engineering in 2017 from the University of Heidelberg, Germany. He began his research in connectomics as an undergraduate in the group of Winfried Denk at the Max Planck Institute for Medical Research and Max Planck Institute of Neurobiology.  Dorkenwald went on to complete his PhD at Princeton University in 2023, where he studied both computer science and neuroscience under the mentorship of Sebastian Seung and Mala Murthy. 

All 139,255 neurons in the brain of an adult fruit fly reconstructed by the FlyWire Consortium, with each neuron uniquely color-coded. Render by Tyler Sloan. Image: Sven Dorkenwald

As a PhD student at Princeton, Dorkenwald spearheaded the FlyWire Consortium, a group of more than 200 scientists, gamers, and proofreaders who combined their skills to create the fruit fly connectome. More than 20 million scientific images of the adult fruit fly brain were fed to an AI model that traced each neuron and synapse in exquisite detail. Members of the consortium then checked the results produced by the AI model and pieced them together into a complete, three-dimensional map. With nearly 140,000 neurons, it is the most complex brain completely mapped to date. The findings were published in a special issue of Nature in 2024.

Dorkenwald’s work also played a key role in the MICrONS consortium’s effort to reconstruct a cubic-millimeter connectome of the mouse visual cortex. Within the MICrONS effort, he co-led the development of CAVE, the software infrastructure that enables scientists to collaboratively edit and analyze large connectomics datasets, including FlyWire’s. The findings of the MICrONS consortium were published in a special issue of Nature in 2025.

Dorkenwald is currently a Shanahan Fellow at the Allen Institute and the University of Washington. He also serves as a visiting faculty researcher at Google Research, where he has been developing machine learning approaches for the annotation of cell reconstructions as part of the Neuromancer team led by Viren Jain.  

As an investigator at the McGovern Institute and an assistant professor in the Department of Brain and Cognitive Sciences at MIT, Dorkenwald plans to develop computational approaches that overcome the challenges of scaling connectomics to whole mammalian brains, with the goal of advancing our mechanistic understanding of neuronal circuits and analyzing how they compare across regions and species.

Feng Zhang elected to EMBO membership

The European Molecular Biology Organization (EMBO), a professional non-profit organization dedicated to promoting international research in life sciences, announced its new members today. Among the 69 new members recognized for their outstanding achievements is Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT and an investigator at the McGovern Institute.

Zhang, who is also a core member of the Broad Institute, a professor of brain and cognitive sciences and biological engineering at MIT, and a Howard Hughes Medical Institute investigator, is a molecular biologist focused on improving human health. He played an integral role in pioneering the use of CRISPR-Cas9 for genome editing in human cells, including working with Stuart Orkin to develop Casgevy, the first CRISPR-based therapeutic approved for clinical use. His team is currently discovering new ways to modify cellular function and activity—including the restoration of diseased, stressed, or aged cells to a more healthful state.

Zhang has been elected to EMBO as an associate member, where he joins a community of more than 2,100 international life scientists who have demonstrated research excellence in their fields.

“A major strength of EMBO lies in the excellence and dedication of its members,” says EMBO Director Fiona Watt. “Science thrives on global collaboration, and the annual election of the new EMBO members and associate members brings fresh energy and inspiration to our community. We are honoured to welcome this remarkable group of scientists to the EMBO Membership. Their ideas and contributions will enrich the organization and help advance the life sciences internationally.”

The 60 new EMBO members in 2025 are based in 18 member states of the EMBC, the intergovernmental organization that funds the main EMBO programs and activities. The nine new EMBO associate members, including Zhang, are based in six countries outside Europe. In total, 29 (42%) of the new members are women and 40 (58%) are men.

The new members will be formally welcomed at the next EMBO Members’ Meeting in Heidelberg, Germany, on 22-24 October 2025.

Researchers present bold ideas for AI at MIT Generative AI Impact Consortium kickoff event

Launched in February of this year, the MIT Generative AI Impact Consortium (MGAIC), a presidential initiative led by MIT’s Office of Innovation and Strategy and administered by the MIT Stephen A. Schwarzman College of Computing, issued a call for proposals, inviting researchers from across MIT to submit ideas for innovative projects studying high-impact uses of generative AI models.

The call received 180 submissions from nearly 250 faculty members, spanning all five of MIT’s schools and the college. The overwhelming response across the Institute exemplifies the growing interest in AI and follows in the wake of MIT’s Generative AI Week and call for impact papers. Fifty-five proposals were selected for MGAIC’s inaugural seed grants, with several more selected to be funded by the consortium’s founding company members.

Over 30 funding recipients presented their proposals to the greater MIT community at a kickoff event on May 13. Anantha P. Chandrakasan, MIT’s chief innovation and strategy officer and dean of the School of Engineering, who heads the consortium, welcomed the attendees and thanked the consortium’s founding industry members.

“The amazing response to our call for proposals is an incredible testament to the energy and creativity that MGAIC has sparked at MIT. We are especially grateful to our founding members, whose support and vision helped bring this endeavor to life,” adds Chandrakasan. “One of the things that has been most remarkable about MGAIC is that this is a truly cross-Institute initiative. Deans from all five schools and the college collaborated in shaping and implementing it.”

Vivek F. Farias, the Patrick J. McGovern (1959) Professor at the MIT Sloan School of Management and co-faculty director of the consortium with Tim Kraska, associate professor of electrical engineering and computer science in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), emceed the afternoon of five-minute lightning presentations.

Presentation highlights include:

“AI-Driven Tutors and Open Datasets for Early Literacy Education,” presented by Ola Ozernov-Palchik, a research scientist at the McGovern Institute for Brain Research, proposed a refinement of AI tutors for pK-7 students that could help decrease literacy disparities.

“Developing jam_bots: Real-Time Collaborative Agents for Live Human-AI Musical Improvisation,” presented by Anna Huang, assistant professor of music and assistant professor of electrical engineering and computer science, and Joe Paradiso, the Alexander W. Dreyfoos (1954) Professor in Media Arts and Sciences at the MIT Media Lab, aims to enhance human-AI musical collaboration in real-time for live concert improvisation.

“GENIUS: GENerative Intelligence for Urban Sustainability,” presented by Norhan Bayomi, a postdoc at the MIT Environmental Solutions Initiative and a research assistant in the Urban Metabolism Group, aims to address the lack of a standardized approach for evaluating and benchmarking cities’ climate policies.

Georgia Perakis, the John C Head III Dean (Interim) of the MIT Sloan School of Management and professor of operations management, operations research, and statistics, who serves as co-chair of the GenAI Dean’s oversight group with Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, ended the event with closing remarks that emphasized “the readiness and eagerness of our community to lead in this space.”

“This is only the beginning,” she continued. “We are at the front edge of a historic moment — one where MIT has the opportunity, and the responsibility, to shape the future of generative AI with purpose, with excellence, and with care.”

How the brain solves complicated problems

The human brain is very good at solving complicated problems. One reason for that is that humans can break problems apart into manageable subtasks that are easy to solve one at a time.

This allows us to complete a daily task like going out for coffee by breaking it into steps: getting out of our office building, navigating to the coffee shop, and once there, obtaining the coffee. This strategy helps us to handle obstacles easily. For example, if the elevator is broken, we can revise how we get out of the building without changing the other steps.

While there is a great deal of behavioral evidence demonstrating humans’ skill at these complicated tasks, it has been difficult to devise experimental scenarios that allow precise characterization of the computational strategies we use to solve problems.

In a new study, MIT researchers have successfully modeled how people deploy different decision-making strategies to solve a complicated task — in this case, predicting how a ball will travel through a maze when the ball is hidden from view. The human brain cannot perform this task perfectly because it is impossible to track all of the possible trajectories in parallel, but the researchers found that people can perform reasonably well by flexibly adopting two strategies known as hierarchical reasoning and counterfactual reasoning.

The researchers were also able to determine the circumstances under which people choose each of those strategies.

“What humans are capable of doing is to break down the maze into subsections, and then solve each step using relatively simple algorithms. Effectively, when we don’t have the means to solve a complex problem, we manage by using simpler heuristics that get the job done,” says Mehrdad Jazayeri, a professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, an investigator at the Howard Hughes Medical Institute, and the senior author of the study.

Mahdi Ramadan PhD ’24 and graduate student Cheng Tang are the lead authors of the paper, which appears today in Nature Human Behaviour. Nicholas Watters PhD ’25 is also a co-author.

Rational strategies

When humans perform simple tasks that have a clear correct answer, such as categorizing objects, they perform extremely well. When tasks become more complex, such as planning a trip to your favorite cafe, there may no longer be one clearly superior answer. And, at each step, there are many things that could go wrong. In these cases, humans are very good at working out a solution that will get the task done, even though it may not be the optimal solution.

Those solutions often involve problem-solving shortcuts, or heuristics. Two prominent heuristics humans commonly rely on are hierarchical and counterfactual reasoning. Hierarchical reasoning is the process of breaking down a problem into layers, starting from the general and proceeding toward specifics. Counterfactual reasoning involves imagining what would have happened if you had made a different choice. While these strategies are well-known, scientists don’t know much about how the brain decides which one to use in a given situation.

“This is really a big question in cognitive science: How do we problem-solve in a suboptimal way, by coming up with clever heuristics that we chain together in a way that ends up getting us closer and closer until we solve the problem?” Jazayeri says.

To investigate this question, Jazayeri and his colleagues devised a task that is just complex enough to require these strategies, yet simple enough that the outcomes and the calculations that go into them can be measured.

The task requires participants to predict the path of a ball as it moves along one of four possible trajectories through a maze. Once the ball enters the maze, people cannot see which path it travels. At two junctions in the maze, they hear an auditory cue when the ball reaches that point. Predicting the ball’s path is a task that is impossible for humans to solve with perfect accuracy.

“It requires four parallel simulations in your mind, and no human can do that. It’s analogous to having four conversations at a time,” Jazayeri says. “The task allows us to tap into this set of algorithms that the humans use, because you just can’t solve it optimally.”

The researchers recruited about 150 human volunteers to participate in the study. Before each subject began the ball-tracking task, the researchers evaluated how accurately they could estimate timespans of several hundred milliseconds, about the length of time it takes the ball to travel along one arm of the maze.

For each participant, the researchers created computational models that could predict the patterns of errors that would be seen for that participant (based on their timing skill) if they were running parallel simulations, using hierarchical reasoning alone, counterfactual reasoning alone, or combinations of the two reasoning strategies.

The researchers compared the subjects’ performance with the models’ predictions and found that for every subject, their performance was most closely associated with a model that used hierarchical reasoning but sometimes switched to counterfactual reasoning.

That suggests that instead of tracking all the possible paths that the ball could take, people broke up the task. First, they picked the direction (left or right) in which they thought the ball turned at the first junction, and continued to track the ball as it headed for the next turn. If the timing of the next sound they heard wasn’t compatible with the path they had chosen, they would go back and revise their first prediction — but only some of the time.
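That switching behavior can be captured in a toy simulation. The sketch below assumes a simplified maze in which the two arms after the first junction take different times to traverse, so the cue time is informative; the traversal times, noise levels, and revision rule are illustrative assumptions, not the study’s fitted model.

```python
# Toy simulation of hierarchical reasoning with occasional counterfactual
# revision in a hidden-ball maze task. All parameters are illustrative.

import numpy as np

rng = np.random.default_rng(0)

ARM_TIME = {"L": 0.40, "R": 0.55}  # hypothetical arm traversal times (s)

def trial(timing_noise=0.05, memory_noise=0.05, revise_threshold=0.08):
    true_arm = rng.choice(["L", "R"])
    cue_time = ARM_TIME[true_arm]  # tone sounds when the ball hits the junction

    # Hierarchical step: commit to whichever arm better explains the
    # noisily perceived cue time, and stop tracking the alternative.
    perceived = cue_time + rng.normal(0.0, timing_noise)
    guess = min(ARM_TIME, key=lambda arm: abs(ARM_TIME[arm] - perceived))

    # Counterfactual step: re-examine the remembered cue time and switch
    # sides only if memory says the other arm fits substantially better.
    remembered = perceived + rng.normal(0.0, memory_noise)
    other = "L" if guess == "R" else "R"
    if (abs(ARM_TIME[guess] - remembered)
            - abs(ARM_TIME[other] - remembered)) > revise_threshold:
        guess = other  # go back and revise the first prediction

    return guess == true_arm

accuracy = np.mean([trial() for _ in range(10_000)])
print(f"accuracy with occasional revision: {accuracy:.2f}")
```

In this toy version, increasing memory_noise makes revision less trustworthy, which connects to the memory-dependence described next.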

Switching back to the other side, which represents a shift to counterfactual reasoning, requires people to review their memory of the tones that they heard. However, it turns out that these memories are not always reliable, and the researchers found that people decided whether to go back or not based on how good they believed their memory to be.

“People rely on counterfactuals to the degree that it’s helpful,” Jazayeri says. “People who take a big performance loss when they do counterfactuals avoid doing them. But if you are someone who’s really good at retrieving information from the recent past, you may go back to the other side.”

Human limitations

To further validate their results, the researchers created a machine-learning neural network and trained it to complete the task. A machine-learning model trained on this task will track the ball’s path accurately and make the correct prediction every time, unless the researchers impose limitations on its performance.

When the researchers added cognitive limitations similar to those faced by humans, they found that the model altered its strategies. When they eliminated the model’s ability to follow all possible trajectories, it began to employ hierarchical and counterfactual strategies like humans do. When they reduced the model’s memory recall ability, it switched to counterfactual reasoning only when it judged that its recall would be good enough to get the right answer — just as humans do.

“What we found is that networks mimic human behavior when we impose on them those computational constraints that we found in human behavior,” Jazayeri says. “This is really saying that humans are acting rationally under the constraints that they have to function under.”

By slightly varying the amount of memory impairment programmed into the models, the researchers also saw hints that the switching of strategies appears to happen gradually, rather than at a distinct cut-off point. They are now performing further studies to try to determine what is happening in the brain as these shifts in strategy occur.

The research was funded by a Lisa K. Yang ICoN Fellowship, a Friends of the McGovern Institute Student Fellowship, a National Science Foundation Graduate Research Fellowship, the Simons Foundation, the Howard Hughes Medical Institute, and the McGovern Institute.

How the brain distinguishes between ambiguous hypotheses

When navigating a place that we’re only somewhat familiar with, we often rely on unique landmarks to help make our way. However, if we’re looking for an office in a brick building, and there are many brick buildings along our route, we might use a rule like looking for the second building on a street, rather than relying on distinguishing the building itself.

McGovern Investigator Mark Harnett. Photo: Adam Glanzman

Until that ambiguity is resolved, we must hold in mind that there are multiple possibilities (or hypotheses) for where we are in relation to our destination. In a study of mice, MIT neuroscientists have now discovered that these hypotheses are explicitly represented in the brain by distinct neural activity patterns.

This is the first time that neural activity patterns that encode simultaneous hypotheses have been seen in the brain. The researchers found that these representations, which were observed in the brain’s retrosplenial cortex (RSC), not only encode hypotheses but also could be used by the animals to choose the correct way to go.

“As far as we know, no one has shown in a complex reasoning task that there’s an area in association cortex that holds two hypotheses in mind and then uses one of those hypotheses, once it gets more information, to actually complete the task,” says Mark Harnett, an associate professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Jakob Voigts PhD ’17, a former postdoc in Harnett’s lab and now a group leader at the Howard Hughes Medical Institute Janelia Research Campus, is the lead author of the paper, which appears today in Nature Neuroscience.

Ambiguous landmarks

The RSC receives input from the visual cortex, the hippocampal formation, and the anterior thalamus, which it integrates to help guide navigation.

In a 2020 paper, Harnett’s lab found that the RSC uses both visual and spatial information to encode landmarks used for navigation. In that study, the researchers showed that neurons in the RSC of mice integrate visual information about the surrounding environment with spatial feedback of the mice’s own position along a track, allowing them to learn where to find a reward based on landmarks that they saw.

In their new study, the researchers wanted to delve further into how the RSC uses spatial information and situational context to guide navigational decision-making. To do that, the researchers devised a much more complicated navigational task than is typically used in mouse studies. They set up a large, round arena with 16 small openings, or ports, along the side walls. One of these openings would give the mice a reward when they stuck their nose through it. In the first set of experiments, the researchers trained the mice to go to different reward ports indicated by dots of light on the floor that were only visible when the mice got close to them.

Jakob Voigts PhD ’17, at work in Mark Harnett’s lab. Photo: Justin Knight

Once the mice learned to perform this relatively simple task, the researchers added a second dot. The two dots were always the same distance from each other and from the center of the arena. But now the mice had to go to the port by the counterclockwise dot to get the reward. Because the dots were identical and only became visible at close distances, the mice could never see both dots at once and could not immediately determine which dot was which.

To solve this task, the mice therefore had to remember where they expected a dot to show up, integrating their own body position, the direction they were heading, and the path they took, to figure out which landmark was which. By measuring RSC activity as the mice approached the ambiguous landmarks, the researchers could determine whether the RSC encodes hypotheses about spatial location. The task was carefully designed to require the mice to use the visual landmarks to obtain rewards, rather than relying on other strategies such as odor cues or dead reckoning.

“What is important about the behavior in this case is that mice need to remember something and then use that to interpret future input,” says Voigts, who worked on this study while a postdoc in Harnett’s lab.

“It’s not just remembering something, but remembering it in such a way that you can act on it.” – Jakob Voigts

The researchers found that as the mice accumulated information about which dot might be which, populations of RSC neurons displayed distinct activity patterns reflecting the incomplete information. Each of these patterns appears to correspond to a hypothesis about where the mouse thought it was with respect to the reward.

When the mice got close enough to figure out which dot indicated the reward port, these patterns collapsed into the one that represented the correct hypothesis. The findings suggest that these patterns not only passively store hypotheses, but can also be used to compute how to get to the correct location, the researchers say.

“We show that RSC has the required information for using this short-term memory to distinguish the ambiguous landmarks. And we show that this type of hypothesis is encoded and processed in a way that allows the RSC to use it to solve the computation,” Voigts says.
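The core idea of holding two hypotheses and collapsing them once disambiguating evidence arrives can be written as a two-hypothesis Bayesian update. The numbers below are purely schematic, chosen only to illustrate the weak-then-strong evidence pattern the mice experienced as they approached a dot.

```python
# Schematic two-hypothesis belief update: "the dot ahead is the reward
# dot" vs. "it is the other, identical dot." All numbers are made up.

import numpy as np

def update(belief, likelihoods):
    """One Bayesian update over the two hypotheses."""
    posterior = belief * likelihoods
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])                    # ambiguous: both hypotheses held
belief = update(belief, np.array([0.55, 0.45]))  # weak evidence from afar
belief = update(belief, np.array([0.95, 0.05]))  # strong evidence up close
print(belief)  # ~[0.96, 0.04]: the belief collapses onto one hypothesis
```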

Interconnected neurons

When analyzing their initial results, Harnett and Voigts consulted with MIT Professor Ila Fiete, who had run a study about 10 years ago using an artificial neural network to perform a similar navigation task.

That study, previously published on bioRxiv, showed that the neural network displayed activity patterns that were conceptually similar to those seen in the animal studies run by Harnett’s lab. The neurons of the artificial neural network ended up forming highly interconnected low-dimensional networks, like the neurons of the RSC.

“That interconnectivity seems, in ways that we still don’t understand, to be key to how these dynamics emerge and how they’re controlled. And it’s a key feature of how the RSC holds these two hypotheses in mind at the same time,” Harnett says.

In his lab at Janelia, Voigts now plans to investigate how other brain areas involved in navigation, such as the prefrontal cortex, are engaged as mice explore and forage in a more naturalistic way, without being trained on a specific task.

“We’re looking into whether there are general principles by which tasks are learned,” Voigts says. “We have a lot of knowledge in neuroscience about how brains operate once the animal has learned a task, but in comparison we know extremely little about how mice learn tasks or what they choose to learn when given freedom to behave naturally.”

The research was funded, in part, by the National Institutes of Health, a Simons Center for the Social Brain at MIT postdoctoral fellowship, the National Institute of General Medical Sciences, and the Center for Brains, Minds, and Machines at MIT, funded by the National Science Foundation.

Rational engineering generates a compact new tool for gene therapy

Scientists at the McGovern Institute and the Broad Institute of MIT and Harvard have reengineered a compact RNA-guided enzyme they found in bacteria into an efficient, programmable editor of human DNA. The protein they created, called NovaIscB, can be adapted to make precise changes to the genetic code, modulate the activity of specific genes, or carry out other editing tasks. Because its small size simplifies delivery to cells, NovaIscB’s developers say it is a promising candidate for developing gene therapies to treat or prevent disease.

The study was led by McGovern Institute investigator Feng Zhang, who is also the James and Patricia Poitras Professor of Neuroscience at MIT, a Howard Hughes Medical Institute investigator, and a core member of the Broad Institute. Zhang and his team reported their work today in the journal Nature Biotechnology.

Compact tools

NovaIscB is derived from a bacterial DNA cutter that belongs to a family of proteins called IscBs, which Zhang’s lab discovered in 2021. IscBs are a type of OMEGA system, the evolutionary ancestors to Cas9, which is part of the bacterial CRISPR system that Zhang and others have developed into powerful genome-editing tools. Like Cas9, IscB enzymes cut DNA at sites specified by an RNA guide. By reprogramming that guide, researchers can redirect the enzymes to target sequences of their choosing.

IscBs had caught the team’s attention not only because they share key features of CRISPR’s DNA-cutting Cas9, but also because they are a third of its size. That would be an advantage for potential gene therapies: Compact tools are easier to deliver to cells, and with a small enzyme, researchers would have more flexibility to tinker, potentially adding new functionalities without creating tools that were too bulky for clinical use.

From their initial studies of IscBs, researchers in Zhang’s lab knew that some members of the family could cut DNA targets in human cells. None of the bacterial proteins worked well enough to be deployed therapeutically, however: The team would have to modify an IscB to ensure it could edit targets in human cells efficiently without disturbing the rest of the genome.

To begin that engineering process, Soumya Kannan, a graduate student in Zhang’s lab who is now a junior fellow at the Harvard Society of Fellows, and postdoctoral fellow Shiyou Zhu first searched for an IscB that would make a good starting point. They tested nearly 400 different IscB enzymes that can be found in bacteria. Ten were capable of editing DNA in human cells.

Even the most active of those would need to be enhanced to make it a useful genome editing tool. The challenge would be increasing the enzyme’s activity, but only at the sequences specified by its RNA guide. If the enzyme became more active, but indiscriminately so, it would cut DNA in unintended places. “The key is to balance the improvement of both activity and specificity at the same time,” explains Zhu.

Zhu notes that bacterial IscBs are directed to their target sequences by relatively short RNA guides, which makes it difficult to restrict the enzyme’s activity to a specific part of the genome. If an IscB could be engineered to accommodate a longer guide, it would be less likely to act on sequences beyond its intended target.

To optimize IscB for human genome editing, the team leveraged information that graduate student Han Altae-Tran, who is now a postdoctoral fellow at the University of Washington, had learned about the diversity of bacterial IscBs and how they evolved. For instance, the researchers noted that IscBs that worked in human cells included a segment they called REC, which was absent in other IscBs. They suspected the enzyme might need that segment to interact with the DNA in human cells. When they took a closer look at the region, structural modeling suggested that by slightly expanding part of the protein, REC might also enable IscBs to recognize longer RNA guides.

Based on these observations, the team experimented with swapping in parts of REC domains from different IscBs and Cas9s, evaluating how each change impacted the protein’s function. Guided by their understanding of how IscBs and Cas9s interact with both DNA and their RNA guides, the researchers made additional changes, aiming to optimize both efficiency and specificity.

In the end, they generated a protein they called NovaIscB, which was over 100 times more active in human cells than the IscB they had started with, while demonstrating good specificity for its targets.

Kannan and Zhu constructed and screened hundreds of new IscBs before arriving at NovaIscB—and every change they made to the original protein was strategic. Their efforts were guided by their team’s knowledge of IscBs’ natural evolution as well as predictions of how each alteration would impact the protein’s structure, made using an artificial intelligence tool called AlphaFold2. Compared to traditional methods of introducing random changes into a protein and screening for their effects, this rational engineering approach greatly accelerated the team’s ability to identify a protein with the features they were looking for.

The team demonstrated that NovaIscB is a good scaffold for a variety of genome editing tools. “It biochemically functions very similarly to Cas9, and that makes it easy to port over tools that were already optimized with the Cas9 scaffold,” Kannan says. With different modifications, the researchers used NovaIscB to replace specific letters of the DNA code in human cells and to change the activity of targeted genes.

Importantly, the NovaIscB-based tools are compact enough to be easily packaged inside a single adeno-associated virus (AAV)—the vector most commonly used to safely deliver gene therapy to patients. Because they are bulkier, tools developed using Cas9 can require a more complicated delivery strategy.

Demonstrating NovaIscB’s potential for therapeutic use, Zhang’s team created a tool called OMEGAoff that adds chemical markers to DNA to dial down the activity of specific genes. They programmed OMEGAoff to repress a gene involved in cholesterol regulation, then used AAV to deliver the system to the livers of mice, leading to lasting reductions in cholesterol levels in the animals’ blood.

The team expects that NovaIscB can be used to target genome editing tools to most human genes, and looks forward to seeing how other labs deploy the new technology. They also hope others will adopt their evolution-guided approach to rational protein engineering. “Nature has such diversity and its systems have different advantages and disadvantages,” Zhu says. “By learning about that natural diversity, we can make the systems we are trying to engineer better and better.”

This study was funded in part by the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT, Broad Institute Programmable Therapeutics Gift Donors, Pershing Square Foundation, William Ackman, Neri Oxman, the Phillips family, and J. and P. Poitras.

Daily mindfulness practice reduces anxiety for autistic adults

Just 10 to 15 minutes of mindfulness practice a day led to reduced stress and anxiety for autistic adults who participated in a study led by scientists at MIT’s McGovern Institute. Participants in the study used a free smartphone app to guide their practice, giving them the flexibility to practice when and where they chose.

Mindfulness is a state in which the mind is focused only on the present moment. It is a way of thinking that can be cultivated with practice, often through meditation or breathing exercises—and evidence is accumulating that practicing mindfulness has positive effects on mental health. The new study, reported April 8, 2025, in the journal Mindfulness, adds to that evidence, demonstrating clear benefits for autistic adults.

“Everything you want from this on behalf of somebody you care about happened: reduced reports of anxiety, reduced reports of stress, reduced reports of negative emotions, and increased reports of positive emotions,” says McGovern Investigator John Gabrieli, who led the research with Liron Rozenkrantz, an investigator at the Azrieli Faculty of Medicine at Bar-Ilan University in Israel and a research affiliate in Gabrieli’s lab. “Every measure that we had of well-being moved significantly in a positive direction,” adds Gabrieli, who is also the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT.

One of the reported benefits of practicing mindfulness is that it can reduce the symptoms of anxiety disorders. This prompted Gabrieli and his colleagues to wonder whether it might benefit adults with autism, who tend to report above average levels of anxiety and stress, which can interfere with daily living and quality of life. As many as 65 percent of autistic adults may also have an anxiety disorder.

Gabrieli adds that the opportunity for autistic adults to practice mindfulness with an app, rather than needing to meet with a teacher or class, seemed particularly promising. “The capacity to do it at your own pace in your own home, or any environment you like, might be good for anybody,” he says. “But maybe especially for people for whom social interactions can sometimes be challenging.”

The research team, including first author Cindy Li, the Autism Recruitment and Outreach Coordinator in Gabrieli’s lab, recruited 89 autistic adults to participate in their study. Those individuals were split into two groups: One would try the mindfulness practice for six weeks, while the others would wait and try the intervention later.

Participants were asked to practice daily using an app called Healthy Minds, which guides participants through seated or active meditations, each lasting 10 to 15 minutes. Participants reported that they found the app easy to use and had little trouble making time for the daily practice.

After six weeks, participants reported significant reductions in anxiety and perceived stress. These changes were not experienced by the wait-list group, which served as a control. However, after their own six weeks of practice, people in the wait-list group reported similar benefits. “We replicated the result almost perfectly. Every positive finding we found with the first sample we found with the second sample,” Gabrieli says.

The researchers followed up with study participants after another six weeks. Almost everyone had discontinued their mindfulness practice—but remarkably, their gains in well-being had persisted. Based on this finding, the team is eager to further explore the long-term effects of mindfulness practice in future studies. “There’s a hypothesis that a benefit of gaining mindfulness skills or habits is they stick with you over time—that they become incorporated in your daily life,” Gabrieli says. “If people are using the approach to being in the present and not dwelling on the past or worrying about the future, that’s what you want most of all. It’s a habit of thought that’s powerful and helpful.”

Even as they plan future studies, the researchers say they are already convinced that mindfulness practice can have clear benefits for autistic adults. “It’s possible mindfulness would be helpful at all kinds of ages,” Gabrieli says. But he points out the need is particularly great for autistic adults, who usually have fewer resources and support than autistic children have access to through their schools. Gabrieli is eager for more people with autism to try the Healthy Minds app. “Having scientifically proven resources for adults who are no longer in school systems might be a valuable thing,” he says.

This research was funded in part by The Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT and the Yang Tan Collective.

A visual pathway in the brain may do more than recognize objects

When visual information enters the brain, it travels through two pathways that process different aspects of the input. For decades, scientists have hypothesized that one of these pathways, the ventral visual stream, is responsible for recognizing objects, and that it might have been optimized by evolution to do just that.

Consistent with this, in the past decade, MIT scientists have found that when computational models of the anatomy of the ventral stream are optimized to solve the task of object recognition, they are remarkably good predictors of the neural activities in the ventral stream.

However, in a new study, MIT researchers have shown that when they train these types of models on spatial tasks instead, the resulting models are also quite good predictors of the ventral stream’s neural activities. This suggests that the ventral stream may not be exclusively optimized for object recognition.

“This leaves wide open the question about what the ventral stream is being optimized for. I think the dominant perspective a lot of people in our field believe is that the ventral stream is optimized for object recognition, but this study provides a new perspective that the ventral stream could be optimized for spatial tasks as well,” says MIT graduate student Yudi Xie.

Xie is the lead author of the study, which will be presented at the International Conference on Learning Representations. Other authors of the paper include Weichen Huang, a visiting student through MIT’s Research Science Institute program; Esther Alter, a software engineer at the MIT Quest for Intelligence; Jeremy Schwartz, a sponsored research technical staff member; Joshua Tenenbaum, a professor of brain and cognitive sciences; and James DiCarlo, the Peter de Florez Professor of Brain and Cognitive Sciences, director of the Quest for Intelligence, and a member of the McGovern Institute for Brain Research at MIT.

Beyond object recognition

When we look at an object, our visual system can not only identify the object, but also determine other features such as its location, its distance from us, and its orientation in space. Since the early 1980s, neuroscientists have hypothesized that the primate visual system is divided into two pathways: the ventral stream, which performs object-recognition tasks, and the dorsal stream, which processes features related to spatial location.

Over the past decade, researchers have worked to model the ventral stream using a type of deep-learning model known as a convolutional neural network (CNN). Researchers can train these models to perform object-recognition tasks by feeding them datasets containing thousands of images along with category labels describing the images.

The state-of-the-art versions of these CNNs have high success rates at categorizing images. Additionally, researchers have found that the internal activations of the models are very similar to the activities of neurons that process visual information in the ventral stream. Furthermore, the more similar these models are to the ventral stream, the better they perform at object-recognition tasks. This has led many researchers to hypothesize that the dominant function of the ventral stream is recognizing objects.

However, experimental studies, especially a study from the DiCarlo lab in 2016, have found that the ventral stream appears to encode spatial features as well. These features include the object’s size, its orientation (how much it is rotated), and its location within the field of view. Based on these studies, the MIT team aimed to investigate whether the ventral stream might serve additional functions beyond object recognition.

“Our central question in this project was, is it possible that we can think about the ventral stream as being optimized for doing these spatial tasks instead of just categorization tasks?” Xie says.

To test this hypothesis, the researchers set out to train a CNN to identify one or more spatial features of an object, including rotation, location, and distance. To train the models, they created a new dataset of synthetic images. These images show objects such as tea kettles or calculators superimposed on different backgrounds, in locations and orientations that are labeled to help the model learn them.

The researchers found that CNNs that were trained on just one of these spatial tasks showed a high level of “neuro-alignment” with the ventral stream — very similar to the levels seen in CNN models trained on object recognition.

The researchers measured neuro-alignment using a technique that DiCarlo’s lab has developed, which involves asking the models, once trained, to predict the neural activity that a particular image would generate in the brain. They found that the better the models performed on the spatial task they had been trained on, the more neuro-alignment they showed.
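In practice, a score like this is often computed by fitting a regularized linear readout from a model layer’s activations to recorded neural responses and evaluating the fit on held-out images. The sketch below uses synthetic data and standard scikit-learn tools; the study’s exact metric and cross-validation scheme may differ.

```python
# Illustrative neuro-alignment score: linear readout from model
# activations to neural responses, evaluated on held-out images.
# Synthetic data; the study's exact procedure may differ.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def neural_predictivity(model_acts, neural_resps, alpha=1.0, seed=0):
    """model_acts: (n_images, n_features); neural_resps: (n_images, n_neurons)."""
    X_tr, X_te, Y_tr, Y_te = train_test_split(
        model_acts, neural_resps, test_size=0.25, random_state=seed)
    preds = Ridge(alpha=alpha).fit(X_tr, Y_tr).predict(X_te)
    # Median across neurons of each neuron's held-out Pearson correlation.
    per_neuron_r = [np.corrcoef(preds[:, i], Y_te[:, i])[0, 1]
                    for i in range(Y_te.shape[1])]
    return float(np.median(per_neuron_r))

# Synthetic demo: responses that are a noisy linear function of activations.
acts = np.random.randn(500, 256)
resps = acts @ np.random.randn(256, 40) + 0.5 * np.random.randn(500, 40)
print(neural_predictivity(acts, resps))  # high score for this synthetic case
```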

“I think we cannot assume that the ventral stream is just doing object categorization, because many of these other functions, such as spatial tasks, also can lead to this strong correlation between models’ neuro-alignment and their performance,” Xie says. “Our conclusion is that you can optimize either through categorization or doing these spatial tasks, and they both give you a ventral-stream-like model, based on our current metrics to evaluate neuro-alignment.”

Comparing models

The researchers then investigated why these two approaches — training for object recognition and training for spatial features — led to similar degrees of neuro-alignment. To do that, they performed an analysis known as centered kernel alignment (CKA), which allows them to measure the degree of similarity between representations in different CNNs. This analysis showed that in the early to middle layers of the models, the representations that the models learn are nearly indistinguishable.

“In these early layers, essentially you cannot tell these models apart by just looking at their representations,” Xie says. “It seems like they learn some very similar or unified representation in the early to middle layers, and in the later stages they diverge to support different tasks.”
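The linear variant of CKA is compact enough to state directly. The sketch below follows the standard linear-CKA formulation for comparing two sets of activations computed on the same images; the shapes and data are illustrative.

```python
# Linear centered kernel alignment (CKA) between two representations
# of the same examples. Returns a similarity in [0, 1].

import numpy as np

def linear_cka(X, Y):
    """X: (n_examples, d1) and Y: (n_examples, d2) activation matrices."""
    X = X - X.mean(axis=0)  # center each feature dimension
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, ord="fro")
                   * np.linalg.norm(Y.T @ Y, ord="fro"))

A = np.random.randn(1000, 64)
print(linear_cka(A, A))                          # identical layers: 1.0
print(linear_cka(A, np.random.randn(1000, 64)))  # unrelated layers: near 0
```

A score near 1 in the early and middle layers is what underlies the observation that the differently trained models are nearly indistinguishable there.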

The researchers hypothesize that even when models are trained to analyze just one feature, they also take into account “non-target” features — those that they are not trained on. When objects have greater variability in non-target features, the models tend to learn representations more similar to those learned by models trained on other tasks. This suggests that the models are using all of the information available to them, which may result in different models coming up with similar representations, the researchers say.

“More non-target variability actually helps the model learn a better representation, instead of learning a representation that’s ignorant of them,” Xie says. “It’s possible that the models, although they’re trained on one target, are simultaneously learning other things due to the variability of these non-target features.”

In future work, the researchers hope to develop new ways to compare different models, in hopes of learning more about how each one develops internal representations of objects based on differences in training tasks and training data.

“There could be still slight differences between these models, even though our current way of measuring how similar these models are to the brain tells us they’re on a very similar level. That suggests maybe there’s still some work to be done to improve upon how we can compare the model to the brain, so that we can better understand what exactly the ventral stream is optimized for,” Xie says.

The research was funded by the Semiconductor Research Corporation and the U.S. Defense Advanced Research Projects Agency.

Twenty-five years after its founding, the McGovern Institute is shaping brain science and improving human lives at a global scale

In 2000, Patrick J. McGovern ’59 and Lore Harp McGovern made an extraordinary gift to establish the McGovern Institute for Brain Research at MIT, driven by their deep curiosity about the human mind and their belief in the power of science to change lives. Their $350 million pledge began with a simple yet audacious vision: to understand the human brain in all its complexity and to leverage that understanding for the betterment of humanity.

Twenty-five years later, the McGovern Institute stands as a testament to the power of interdisciplinary collaboration, continuing to shape our understanding of the brain and improve the quality of life for people worldwide.

In the Beginning

“This is by any measure a truly historic moment for MIT,” said MIT’s 15th President Charles M. Vest during his opening remarks at an event in 2000 to celebrate the McGovern gift agreement. “The creation of the McGovern Institute will launch one of the most profound and important scientific ventures of this century in what surely will be a cornerstone of MIT scientific contributions from the decades ahead.”

Vest tapped Phillip A. Sharp, MIT Institute Professor Emeritus of Biology and Nobel laureate, to lead the institute and appointed six MIT professors — Emilio Bizzi, Martha Constantine-Paton, Ann Graybiel PhD ’71, H. Robert Horvitz ’68, Nancy Kanwisher ’80, PhD ’86, and Tomaso Poggio — to represent its founding faculty. Construction began in 2003 on Building 46, a 376,000-square-foot research complex at the northeastern edge of campus. MIT’s new “gateway from the north” would eventually house the McGovern Institute, the Picower Institute for Learning and Memory, and MIT’s Department of Brain and Cognitive Sciences.

Patrick J. McGovern ’59 and Lore Harp McGovern gather with faculty members and MIT administration at the groundbreaking of MIT Building 46 in 2003. Photo: Donna Coveney

Robert Desimone, the Doris and Don Berkey Professor of Neuroscience at MIT,  succeeded Sharp as director of the McGovern Institute in 2005, and assembled a distinguished roster of 22 faculty members, including a Nobel laureate, a Breakthrough Prize winner, two National Medal of Science/Technology awardees, and 15 members of the American Academy of Arts and Sciences.

A Quarter Century of Innovation

On April 11, 2025, the McGovern Institute celebrated its 25th anniversary with a half-day symposium featuring presentations by MIT Institute Professor Robert Langer, alumni speakers from various McGovern labs, and Desimone, who is in his twentieth year as director of the institute.

Desimone highlighted the institute’s recent discoveries, including the development of the CRISPR genome-editing system, which has culminated in the world’s first CRISPR gene therapy approved for humans — a remarkable achievement that is ushering in a new era of transformative medicine. In other milestones, McGovern researchers developed the first prosthetic limb fully controlled by the body’s nervous system; a flexible probe that taps into gut-brain communication; an expansion microscopy technique that paves the way for biology labs around the world to perform nanoscale imaging; and advanced computational models that demonstrate how we see, hear, use language, and even think about what others are thinking. Equally transformative has been the McGovern Institute’s work in neuroimaging, uncovering the architecture of human thought and establishing markers that signal the early emergence of mental illness, before symptoms even appear.

Synergy and Open Science

“I am often asked what makes us different from other neuroscience institutes and programs around the world,” says Desimone. “My answer is simple. At the McGovern Institute, the whole is greater than the sum of its parts.”

Many discoveries at the McGovern Institute have depended on collaborations across multiple labs, ranging from biological engineering to human brain imaging and artificial intelligence. In modern brain research, significant advances often require the joint expertise of people working in neurophysiology, behavior, computational analysis, neuroanatomy, and molecular biology. More than a dozen different MIT departments are represented by McGovern faculty and graduate students, and this synergy has led to insights and innovations that are far greater than what any single discipline could achieve alone.

Also baked into the McGovern ethos is a spirit of open science, in which newly developed technologies are shared with colleagues around the world. Through hospital partnerships, for example, McGovern researchers are testing their tools and therapeutic interventions in clinical settings, accelerating the translation of their discoveries into real-world solutions.

The McGovern Legacy  

Hundreds of scientific papers have emerged from McGovern labs over the past 25 years, but most faculty would argue that it’s the people, the young researchers, that truly define the McGovern Institute. Award-winning faculty often attract the brightest young minds, but many McGovern faculty also serve as mentors, creating a diverse and vibrant scientific community that is setting the global standard for brain research and its applications. Nancy Kanwisher ’80, PhD ’86, for example, has guided more than 70 doctoral students and postdocs who have gone on to become leading scientists around the world. Three of her former students, Evelina Fedorenko PhD ’07, Josh McDermott PhD ’06, and Rebecca Saxe PhD ’03, the John W. Jarve (1978) Professor of Brain and Cognitive Sciences, are now her colleagues at the McGovern Institute. Other McGovern alumni shared stories of mentorship, science, and real-world impact at the 25th anniversary symposium.

Nancy Kanwisher (center) with former students-turned-colleagues Evelina Fedorenko (left), Josh McDermott, and Rebecca Saxe (right). Photo: Steph Stevens

Looking to the future, the McGovern community is more committed than ever to unraveling the mysteries of the brain and making a meaningful difference in the lives of individuals at a global scale.

“By promoting team science, open communication, and cross-discipline partnerships,” says institute co-founder Lore Harp McGovern, “our culture demonstrates how individual expertise can be amplified through collective effort. I am honored to be the co-founder of this incredible institution – onward to the next 25 years!”