Researchers uncover new CRISPR-like system in animals that can edit the human genome

A team of researchers led by Feng Zhang at the McGovern Institute and the Broad Institute of MIT and Harvard has uncovered the first programmable RNA-guided system in eukaryotes — organisms that include fungi, plants, and animals.

In a study in Nature, the team describes how the system is based on a protein called Fanzor. They showed that Fanzor proteins use RNA as a guide to target DNA precisely, and that Fanzors can be reprogrammed to edit the genome of human cells. The compact Fanzor systems have the potential to be more easily delivered to cells and tissues as therapeutics than CRISPR/Cas systems, and further refinements to improve their targeting efficiency could make them a valuable new technology for human genome editing.

CRISPR/Cas was first discovered in prokaryotes (bacteria and other single-cell organisms that lack nuclei), and scientists, including those in Zhang’s lab, have long wondered whether similar systems exist in eukaryotes. The new study demonstrates that RNA-guided DNA-cutting mechanisms are present across all kingdoms of life.

“This new system is another way to make precise changes in human cells, complementing the genome editing tools we already have.” — Feng Zhang

“CRISPR-based systems are widely used and powerful because they can be easily reprogrammed to target different sites in the genome,” said Zhang, senior author on the study and a core institute member at the Broad, an investigator at MIT’s McGovern Institute, the James and Patricia Poitras Professor of Neuroscience at MIT, and a Howard Hughes Medical Institute investigator. “This new system is another way to make precise changes in human cells, complementing the genome editing tools we already have.”

Searching the domains of life

A major aim of the Zhang lab is to develop genetic medicines using systems that can modulate human cells by targeting specific genes and processes. “A number of years ago, we started to ask, ‘What is there beyond CRISPR, and are there other RNA-programmable systems out there in nature?’” said Zhang.

McGovern Investigator Feng Zhang in his lab.

Two years ago, Zhang lab members discovered a class of RNA-programmable systems in prokaryotes called OMEGAs, which are often linked with transposable elements, or “jumping genes”, in bacterial genomes and likely gave rise to CRISPR/Cas systems. That work also highlighted similarities between prokaryotic OMEGA systems and Fanzor proteins in eukaryotes, suggesting that the Fanzor enzymes might also use an RNA-guided mechanism to target and cut DNA.

In the new study, the researchers continued their study of RNA-guided systems by isolating Fanzors from fungi, algae, and amoeba species, in addition to a clam known as the Northern Quahog. Co-first author Makoto Saito of the Zhang lab led the biochemical characterization of the Fanzor proteins, showing that they are DNA-cutting endonuclease enzymes that use nearby non-coding RNAs known as ωRNAs to target particular sites in the genome. It is the first time this mechanism has been found in eukaryotes, such as animals.

Unlike CRISPR proteins, Fanzor enzymes are encoded in the eukaryotic genome within transposable elements, and the team’s phylogenetic analysis suggests that the Fanzor genes migrated from bacteria to eukaryotes through so-called horizontal gene transfer.

“These OMEGA systems are more ancestral to CRISPR and they are among the most abundant proteins on the planet, so it makes sense that they have been able to hop back and forth between prokaryotes and eukaryotes,” said Saito.

To explore Fanzor’s potential as a genome editing tool, the researchers demonstrated that it can generate insertions and deletions at targeted genome sites within human cells. The researchers found the Fanzor system to be initially less efficient at snipping DNA than CRISPR/Cas systems, but through systematic engineering they introduced a combination of mutations into the protein that increased its activity 10-fold. Additionally, unlike some CRISPR systems and the OMEGA protein TnpB, the team found that a fungal-derived Fanzor protein did not exhibit “collateral activity,” in which an RNA-guided enzyme cleaves its DNA target and also degrades nearby DNA or RNA. The results suggest that Fanzors could potentially be developed as efficient genome editors.

Co-first author Peiyu Xu led an effort to analyze the molecular structure of the Fanzor/ωRNA complex and illustrate how it latches onto DNA to cut it. Fanzor shares structural similarities with its prokaryotic counterpart, the CRISPR-Cas12 protein, but the interaction between the ωRNA and the catalytic domains of Fanzor is more extensive, suggesting that the ωRNA may play a role in the catalytic reactions. “We are excited about these structural insights for helping us further engineer and optimize Fanzor for improved efficiency and precision as a genome editor,” said Xu.

Like CRISPR-based systems, the Fanzor system can be easily reprogrammed to target specific genome sites, and Zhang said it could one day be developed into a powerful new genome editing technology for research and therapeutic applications. The abundance of RNA-guided endonucleases like Fanzors further expands the number of OMEGA systems known across kingdoms of life and suggests that there are more yet to be found.

“Nature is amazing. There’s so much diversity,” said Zhang. “There are probably more RNA-programmable systems out there, and we’re continuing to explore and will hopefully discover more.”

The paper’s other authors include Guilhem Faure, Samantha Maguire, Soumya Kannan, Han Altae-Tran, Sam Vo, AnAn Desimone, and Rhiannon Macrae.

Support for this work was provided by the Howard Hughes Medical Institute; Poitras Center for Psychiatric Disorders Research at MIT; K. Lisa Yang and Hock E. Tan Molecular Therapeutics Center at MIT; Broad Institute Programmable Therapeutics Gift Donors; The Pershing Square Foundation, William Ackman, and Neri Oxman; James and Patricia Poitras; BT Charitable Foundation; Asness Family Foundation; Kenneth C. Griffin; the Phillips family; David Cheng; Robert Metcalfe; and Hugo Shong.

 

Magnetic robots walk, crawl, and swim

MIT scientists have developed tiny, soft-bodied robots that can be controlled with a weak magnet. The robots, formed from rubbery magnetic spirals, can be programmed to walk, crawl, and swim—all in response to a simple, easy-to-apply magnetic field.

“This is the first time this has been done, to be able to control three-dimensional locomotion of robots with a one-dimensional magnetic field,” says McGovern associate investigator Polina Anikeeva, whose team reported on the magnetic robots June 3, 2023, in the journal Advanced Materials. “And because they are predominantly composed of polymer and polymers are soft, you don’t need a very large magnetic field to activate them. It’s actually a really tiny magnetic field that drives these robots,” says Anikeeva, who is also the Matoula S. Salapatas Professor in Materials Science and Engineering and a professor of brain and cognitive sciences at MIT, as well as the associate director of MIT’s Research Laboratory of Electronics and director of MIT’s K. Lisa Yang Brain-Body Center.

McGovern Institute Associate Investigator Polina Anikeeva in her lab. Photo: Steph Stevens

The new robots are well suited to transport cargo through confined spaces and their rubber bodies are gentle on fragile environments, opening the possibility that the technology could be developed for biomedical applications. Anikeeva and her team have made their robots millimeters long, but she says the same approach could be used to produce much smaller robots.

Engineering magnetic robots

Anikeeva says that until now, magnetic robots have moved in response to moving magnetic fields. She explains that for these models, “if you want your robot to walk, your magnet walks with it. If you want it to rotate, you rotate your magnet.” That limits the settings in which such robots might be deployed. “If you are trying to operate in a really constrained environment, a moving magnet may not be the safest solution. You want to be able to have a stationary instrument that just applies magnetic field to the whole sample,” she explains.

Youngbin Lee, a former graduate student in Anikeeva’s lab, engineered a solution to this problem. The robots he developed in Anikeeva’s lab are not uniformly magnetized. Instead, they are strategically magnetized in different zones and directions so a single magnetic field can enable a movement-driving profile of magnetic forces.
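
The principle can be sketched numerically. The toy Python snippet below is a minimal illustration, not the team’s actual code: the zone layout, magnetization directions, and field strength are all invented. It shows why zone-wise magnetization lets a single stationary, uniform field exert a different torque on each part of the robot.

```python
import numpy as np

# Hypothetical per-zone magnetization directions (unit vectors) for a
# three-zone robot whose head, body, and tail are magnetized differently.
magnetizations = np.array([
    [1.0, 0.0, 0.0],   # head zone: magnetized along +x
    [0.0, 0.0, 1.0],   # body zone: magnetized along +z
    [-1.0, 0.0, 0.0],  # tail zone: magnetized along -x
])

def zone_torques(B):
    """Torque per unit magnetic moment on each zone: tau = m x B.

    One uniform field produces a different torque on each zone because
    the zones are magnetized in different directions -- this is what
    lets a single stationary field drive a whole gait.
    """
    return np.cross(magnetizations, B)

# A weak uniform field applied perpendicular to the robot's plane:
B = np.array([0.0, 0.0, 0.01])  # tesla (illustrative magnitude only)
print(zone_torques(B))  # head and tail bend oppositely; the body feels no torque
```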

Before they are magnetized, however, the flexible, lightweight bodies of the robots must be fabricated. Lee starts this process with two kinds of rubber, each with a different stiffness. These are sandwiched together, then heated and stretched into a long, thin fiber. Because of the two materials’ different properties, one of the rubbers retains its elasticity through this stretching process, but the other deforms and cannot return to its original size. So when the strain is released, one layer of the fiber contracts, tugging on the other side and pulling the whole thing into a tight coil. Anikeeva says the helical fiber is modeled after the twisty tendrils of a cucumber plant, which spiral when one layer of cells loses water and contracts faster than a second layer.

A third material—one whose particles have the potential to become magnetic—is incorporated in a channel that runs through the rubbery fiber. So once the spiral has been made, a magnetization pattern that enables a particular type of movement can be introduced.

“Youngbin thought very carefully about how to magnetize our robots to make them able to move just as he programmed them to move,” Anikeeva says. “He made calculations to determine how to establish such a profile of forces on it when we apply a magnetic field that it will actually start walking or crawling.”

To form a caterpillar-like crawling robot, for example, the helical fiber is shaped into gentle undulations, and then the body, head, and tail are magnetized so that a magnetic field applied perpendicular to the robot’s plane of motion will cause the body to compress. When the field is reduced to zero, the compression is released, and the crawling robot stretches. Together, these movements propel the robot forward. Another robot in which two foot-like helical fibers are connected with a joint is magnetized in a pattern that enables a movement more like walking.

Biomedical potential

This precise magnetization process generates a program for each robot and ensures that once the robots are made, they are simple to control. A weak magnetic field activates each robot’s program and drives its particular type of movement. A single magnetic field can even send multiple robots moving in opposite directions, if they have been programmed to do so. The team found that one minor manipulation of the magnetic field has a useful effect: With the flip of a switch to reverse the field, a cargo-carrying robot can be made to gently shake and release its payload.

Anikeeva says she can imagine these soft-bodied robots—whose straightforward production will be easy to scale up—delivering materials through narrow pipes or even inside the human body. For example, they might carry a drug through narrow blood vessels, releasing it exactly where it is needed. She says the magnetically actuated devices have biomedical potential beyond robots as well, and might one day be incorporated into artificial muscles or materials that support tissue regeneration.

Refining mental health diagnoses

Maedbh King came to MIT to make a difference in mental health. As a postdoctoral fellow in the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center, she is building computer models aimed at helping clinicians improve diagnosis and treatment, especially for young people with neurodevelopmental and psychiatric disorders.

Tapping two large patient-data sources, King is working to analyze critical biological and behavioral information to better categorize patients’ mental health conditions, including autism spectrum disorder, attention-deficit hyperactivity disorder (ADHD), anxiety, and suicidal thoughts — and to provide more predictive approaches to addressing them. Her strategy reflects the center’s commitment to a holistic understanding of human brain function using theoretical and computational neuroscience.

“Today, treatment decisions for psychiatric disorders are derived entirely from symptoms, which leaves clinicians and patients trying one treatment and, if it doesn’t work, trying another,” says King. “I hope to help change that.”

King grew up in Dublin, Ireland, and studied psychology in college; gained neuroimaging and programming skills while earning a master’s degree from Western University in Canada; and received her doctorate from the University of California, Berkeley, where she built maps and models of the human brain. In fall 2022, King joined the lab of Satrajit Ghosh, a McGovern Institute principal research scientist whose team uses neuroimaging, speech communication, and machine learning to improve assessments and treatments for mental health and neurological disorders.

Big-data insights

King is pursuing several projects using the Healthy Brain Network, a landmark mental health study of children and adolescents in New York City. She and lab colleagues are extracting data from cognitive and other assessments — such as language patterns, favorite school subjects, and family mental illness history — from roughly 4,000 participants to provide a more nuanced understanding of their neurodevelopmental disorders, such as autism or ADHD.

“Computational models are powerful. They can identify patterns that can’t be obtained with the human eye through electronic records,” says King.

With this database, one can develop “very rich clinical profiles of these young people,” including their challenges and adaptive strengths, King explains. “We’re interested in placing these participants within a spectrum of symptoms, rather than just providing a binary label of, ‘has this disorder’ or ‘doesn’t have it.’ It’s an effort to subtype based on these phenotypic assessments.”
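
That subtyping idea can be made concrete with a short sketch. The Python below is purely illustrative, not the lab’s pipeline: the assessment matrix is random and the choice of four clusters is an invented placeholder. It shows the general shape of grouping participants by phenotypic assessments rather than assigning a binary label.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical phenotypic-assessment matrix: one row per participant,
# one column per assessment score (all values invented here).
rng = np.random.default_rng(0)
assessments = rng.normal(size=(4000, 12))  # e.g., language, attention, ...

# Standardize so no single assessment dominates the distance metric.
features = StandardScaler().fit_transform(assessments)

# Cluster participants into phenotypic subtypes instead of assigning a
# single binary "has this disorder / doesn't have it" label.
subtypes = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(subtypes))  # number of participants in each subtype
```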

In other research, King is developing tools to detect risk factors for suicide among adolescents. Working with psychiatrists at Children’s Hospital of Philadelphia, she is using detailed questionnaires from some 20,000 youths who visited the hospital’s emergency department over several years; about one-tenth had tried to take their own lives. The questionnaires collect information about demographics, lifestyle, relationships, and other aspects of patients’ lives.

“One of the big questions the physicians want to answer is, Are there any risk predictors we can identify that can ultimately prevent, or at least mitigate, future suicide attempts?” King says. “Computational models are powerful. They can identify patterns that can’t be obtained with the human eye through electronic records.”
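
A hedged sketch of what such a risk-prediction model might look like in outline is shown below. The data are random placeholders, and the feature set, model choice, and class balance handling are assumptions for illustration, not details from the Children’s Hospital of Philadelphia study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical questionnaire data: ~20,000 visits, coded answers about
# demographics, lifestyle, and relationships (all invented here).
rng = np.random.default_rng(1)
X = rng.integers(0, 5, size=(20000, 30)).astype(float)
y = (rng.random(20000) < 0.1).astype(int)  # ~1 in 10 with a prior attempt

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# class_weight="balanced" matters because positive cases are rare.
model = LogisticRegression(max_iter=1000, class_weight="balanced")
model.fit(X_train, y_train)

# Coefficients point at candidate risk predictors; AUC gauges whether
# the model separates higher- from lower-risk visits at all.
print(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```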

King is passionate about producing findings to help practitioners, whether they’re clinicians, teachers, parents, or policy makers, and the populations they’re studying. “This applied work,” she says, “should be communicated in a way that can be useful.”

When computer vision works more like a brain, it sees more like people do

From cameras to self-driving cars, many of today’s technologies depend on artificial intelligence (AI) to extract meaning from visual information.  Today’s AI technology has artificial neural networks at its core, and most of the time we can trust these AI computer vision systems to see things the way we do — but sometimes they falter. According to MIT and IBM Research scientists, one way to improve computer vision is to instruct the artificial neural networks that they rely on to deliberately mimic the way the brain’s biological neural network processes visual images.

Researchers led by James DiCarlo, the director of MIT’s Quest for Intelligence and member of the MIT-IBM Watson AI Lab, have made a computer vision model more robust by training it to work like a part of the brain that humans and other primates rely on for object recognition. This May, at the International Conference on Learning Representations (ICLR), the team reported that when they trained an artificial neural network using neural activity patterns in the brain’s inferior temporal (IT) cortex, the artificial neural network was more robustly able to identify objects in images than a model that lacked that neural training. And the model’s interpretations of images more closely matched what humans saw, even when images included minor distortions that made the task more difficult.

Comparing neural circuits

McGovern Investigator and Director of MIT Quest for Intelligence, James DiCarlo. Photo: Justin Knight

Many of the artificial neural networks used for computer vision already resemble the multi-layered brain circuits that process visual information in humans and other primates. Like the brain, they use neuron-like units that work together to process information. As they are trained for a particular task, these layered components collectively and progressively process the visual information to complete the task — determining, for example, that an image depicts a bear or a car or a tree.

DiCarlo and others previously found that when such deep-learning computer vision systems establish efficient ways to solve visual problems, they end up with artificial circuits that work similarly to the neural circuits that process visual information in our own brains. That is, they turn out to be surprisingly good scientific models of the neural mechanisms underlying primate and human vision.

That resemblance is helping neuroscientists deepen their understanding of the brain. By demonstrating ways visual information can be processed to make sense of images, computational models suggest hypotheses about how the brain might accomplish the same task. As developers continue to refine computer vision models, neuroscientists have found new ideas to explore in their own work.

“As vision systems get better at performing in the real world, some of them turn out to be more human-like in their internal processing. That’s useful from an understanding biology point of view,” says DiCarlo, who is also a professor of brain and cognitive sciences and an investigator at the McGovern Institute.

Engineering more brain-like AI

While their potential is promising, computer vision systems are not yet perfect models of human vision. DiCarlo suspected one way to improve computer vision may be to incorporate specific brain-like features into these models.

To test this idea, he and his collaborators built a computer vision model using neural data previously collected from vision-processing neurons in the monkey IT cortex — a key part of the primate ventral visual pathway involved in the recognition of objects — while the animals viewed various images. More specifically, Joel Dapello, a Harvard graduate student and former MIT-IBM Watson AI Lab intern, and Kohitij Kar, assistant professor and Canada Research Chair in Visual Neuroscience at York University and visiting scientist at MIT, in collaboration with David Cox, IBM Research’s VP for AI Models and IBM director of the MIT-IBM Watson AI Lab, and other researchers at IBM Research and MIT, asked an artificial neural network to emulate the behavior of these primate vision-processing neurons while the network learned to identify objects in a standard computer vision task.

“In effect, we said to the network, ‘please solve this standard computer vision task, but please also make the function of one of your inside simulated “neural” layers be as similar as possible to the function of the corresponding biological neural layer,’” DiCarlo explains. “We asked it to do both of those things as best it could.” This forced the artificial neural circuits to find a different way to process visual information than the standard, computer vision approach, he says.
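
A minimal PyTorch sketch of that dual objective appears below. The architecture, loss weighting, and all tensors are invented placeholders rather than the published model; the point is just the shape of the combined loss: do the task, and match the biological layer.

```python
import torch
import torch.nn as nn

# Toy stand-in for a vision model whose penultimate layer is asked to
# play the role of IT cortex. The architecture is illustrative only.
class TinyVisionNet(nn.Module):
    def __init__(self, n_classes=10, n_it_units=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.it_layer = nn.Linear(16 * 4 * 4, n_it_units)  # the "IT" layer
        self.classifier = nn.Linear(n_it_units, n_classes)

    def forward(self, x):
        it = torch.relu(self.it_layer(self.features(x)))
        return self.classifier(it), it

model = TinyVisionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
task_loss_fn = nn.CrossEntropyLoss()
neural_loss_fn = nn.MSELoss()
alpha = 0.5  # invented weighting between the two objectives

# Hypothetical batch: images, class labels, and recorded IT responses
# to the same images (all random placeholders here).
images = torch.randn(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))
recorded_it = torch.randn(8, 64)

optimizer.zero_grad()
logits, model_it = model(images)
# "Solve the task, but also make this layer match the biological layer."
loss = task_loss_fn(logits, labels) + alpha * neural_loss_fn(model_it, recorded_it)
loss.backward()
optimizer.step()
```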

After training the artificial model with biological data, DiCarlo’s team compared its activity to a similarly sized neural network model trained without neural data, using the standard approach for computer vision. They found that the new, biologically informed model IT layer was, as instructed, a better match for IT neural data. That is, for every image tested, the population of artificial IT neurons in the model responded more similarly to the corresponding population of biological IT neurons.

“Everybody gets something out of the exciting virtuous cycle between natural/biological intelligence and artificial intelligence,” DiCarlo says.

The researchers also found that the model IT was also a better match to IT neural data collected from another monkey, even though the model had never seen data from that animal, and even when that comparison was evaluated on that monkey’s IT responses to new images. This indicated that the team’s new, “neurally-aligned” computer model may be an improved model of the neurobiological function of the primate IT cortex — an interesting finding, given that it was previously unknown whether the amount of neural data that can be currently collected from the primate visual system is capable of directly guiding model development.

With their new computer model in hand, the team asked whether the “IT neural alignment” procedure also leads to any changes in the overall behavioral performance of the model. Indeed, they found that the neurally-aligned model was more human-like in its behavior — it tended to succeed in correctly categorizing objects in images for which humans also succeed, and it tended to fail when humans also fail.

Adversarial attacks

The team also found that the neurally-aligned model was more resistant to “adversarial attacks” that developers use to test computer vision and AI systems.  In computer vision, adversarial attacks introduce small distortions into images that are meant to mislead an artificial neural network.

“Say that you have an image that the model identifies as a cat. Because you have the knowledge of the internal workings of the model, you can then design very small changes in the image so that the model suddenly thinks it’s no longer a cat,” DiCarlo explains.

These minor distortions don’t typically fool humans, but computer vision models struggle with these alterations. A person who looks at the subtly distorted cat still reliably and robustly reports that it’s a cat. But standard computer vision models are more likely to mistake the cat for a dog, or even a tree.
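
One standard recipe for crafting such distortions is the fast gradient sign method (FGSM). The sketch below shows the general technique in PyTorch; the epsilon value and model interface are placeholders, and this is not the specific attack suite used in the study.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=0.01):
    """Fast gradient sign method: a classic adversarial attack.

    Nudges every pixel a tiny step (epsilon) in whichever direction most
    increases the model's loss. The change is imperceptible to a person,
    but it is often enough to flip a standard model's answer, turning a
    confident "cat" into a "dog" or even a "tree".
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the gradient.
    return (images + epsilon * images.grad.sign()).detach()
```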

“There must be some internal differences in the way our brains process images that lead to our vision being more resistant to those kinds of attacks,” DiCarlo says. And indeed, the team found that when they made their model more neurally-aligned, it became more robust, correctly identifying more images in the face of adversarial attacks.  The model could still be fooled by stronger “attacks,” but so can people, DiCarlo says. His team is now exploring the limits of adversarial robustness in humans.

A few years ago, DiCarlo’s team found they could also improve a model’s resistance to adversarial attacks by designing the first layer of the artificial network to emulate the early visual processing layer in the brain. One key next step is to combine such approaches — making new models that are simultaneously neurally-aligned at multiple visual processing layers.

The new work is further evidence that an exchange of ideas between neuroscience and computer science can drive progress in both fields. “Everybody gets something out of the exciting virtuous cycle between natural/biological intelligence and artificial intelligence,” DiCarlo says. “In this case, computer vision and AI researchers get new ways to achieve robustness and neuroscientists and cognitive scientists get more accurate mechanistic models of human vision.”

This work was supported by the MIT-IBM Watson AI Lab, Semiconductor Research Corporation, DARPA, the Massachusetts Institute of Technology Shoemaker Fellowship, Office of Naval Research, the Simons Foundation, and Canada Research Chair Program.

PhD student Wei-Chen Wang is moved to help people heal

This story originally appeared in the Spring 2023 issue of Spectrum.

___

When he turned his ankle five years ago as an undergraduate playing pickup basketball at the University of Illinois, Wei-Chen (Eric) Wang SM ’22 knew his life would change in certain ways. For one thing, Wang, then a computer science major, wouldn’t be playing basketball anytime soon. He also assumed, correctly, that he might require physical therapy (PT).

What he did not foresee was that this minor injury would influence his career trajectory. While lying on the PT bench, Wang began to wonder: “Can I replicate what the therapist is doing using a robot?” It was an idle thought at the time. Today, however, his research involves robots and movement, closely related to what had seemed a passing fancy.

Wang continued his focus on computer science as an MIT graduate student, receiving his master’s in 2022 before deciding to pursue work of a more applied nature. He met Nidhi Seethapathi, who had joined MIT’s faculty a few months earlier as an assistant professor in electrical engineering and computer science and in brain and cognitive sciences, and he was intrigued by the notion of creating robots that could illuminate the key principles of movement—knowledge that might someday help people regain the ability to move comfortably after suffering from injury, stroke, or disease.

As the first PhD student in Seethapathi’s group and a MathWorks Fellow, Wang is charged with building machine learning-based models that can accurately predict and reproduce human movements. He will then use computer-simulated environments to visualize and evaluate the performance of these models.

To begin, he needs to gather data about specific human movements. One potential data collection method involves the placement of sensors or markers on different parts of the body to pinpoint their precise positions at any given moment. He can then try to calculate those positions in the future, as dictated by the equations of motion in physics.

The other method relies on computer vision-powered software that can automatically convert video footage to motion data. Wang prefers the latter approach, which he considers more natural. “We just look at what humans are doing and try to learn from that directly,” he explains. That’s also where machine learning comes in. “We use machine-learning tools to extract data from the video, and those data become the input to our model,” he adds. The model, in this case, is just another term for the robot brain.
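
In rough outline, that pipeline turns tracked keypoints into supervised training pairs. The sketch below is a simplified assumption of how such data might be arranged, with an invented file name, array shape, and prediction target (next-frame pose); it is not the lab’s actual code.

```python
import numpy as np

# Hypothetical output of a video pose tracker: for each frame, the 2-D
# positions of K tracked body keypoints (shape: frames x K x 2).
keypoints = np.load("tracked_poses.npy")  # placeholder file name

# Turn the trajectory into supervised pairs: the pose at time t is the
# model input, and the pose at time t+1 is the prediction target.
inputs = keypoints[:-1].reshape(len(keypoints) - 1, -1)
targets = keypoints[1:].reshape(len(keypoints) - 1, -1)

# A model (the "robot brain") trained on such pairs learns to roll a
# pose forward in time, which can then be run in a simulated body.
print(inputs.shape, targets.shape)
```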

The near-term goal is not to make robots more natural, Wang notes. “We’re using [simulated] robots to understand how humans are moving and eventually to explain any kind of movement—or at least that’s the hope. That said, based on the general principles we’re able to abstract, we might someday build robots that can move more naturally.”

Wang is also collaborating on a project headed by postdoctoral fellow Antoine De Comité that focuses on robotic retrieval of objects—the movements required to remove books from a library shelf, for example, or to grab a drink from a refrigerator. While robots routinely excel at tasks such as grasping an object on a tabletop, performing naturalistic movements in three dimensions remains challenging.

Wang describes a video shown by a Stanford University scientist in which a robot destroyed a refrigerator while attempting to extract a beer. He and De Comité hope for better results with robots that have undergone reinforcement learning—an approach using deep learning in which desired motions are rewarded or reinforced whereas unwanted motions are discouraged.
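
The reward-and-penalty structure behind that approach can be written down compactly. The function below is a toy illustration with invented terms and weights, not the actual design of the project.

```python
def retrieval_reward(dist_to_object, grasped, collision_force, jerk):
    """Toy reward for an object-retrieval policy.

    Desired motions are rewarded (approaching and grasping the object);
    unwanted ones are penalized (slamming into the shelf, jerky motion),
    so a learned policy retrieves the drink without wrecking the fridge.
    """
    reward = -dist_to_object           # encourage approaching the object
    reward += 10.0 if grasped else 0.0 # large bonus for a successful grasp
    reward -= 5.0 * collision_force    # discourage crashing into shelves
    reward -= 0.1 * jerk               # discourage abrupt, unnatural motion
    return reward
```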

If they succeed in designing a robot that can safely retrieve a beer, Wang says, then more important and delicate tasks could be within reach. Someday, a robot at PT might guide a patient through knee exercises or apply ultrasound to an arthritic elbow.

Francesca Riccio-Ackerman works to improve access to prosthetics

This story originally appeared in the Spring 2023 issue of Spectrum.

___

In Sierra Leone, war and illness have left up to 40,000 people requiring orthotics and prosthetics services, but there is a profound lack of access to specialized care, says Francesca Riccio-Ackerman, a biomedical engineer and PhD student studying health equity and health systems. There is just one fully certified prosthetist available for the thousands of patients in the African nation who are living with amputation, she notes. The ideal number is one for every 250, according to the World Health Organization and the International Society of Orthotics and Prosthetics.

The data point is significant for Riccio-Ackerman, who conducts research in the MIT Media Lab’s Biomechatronics Group and in the K. Lisa Yang Center for Bionics, both of which aim to improve translation of assistive technologies to people with disabilities. “We’re really focused on improving and augmenting human mobility,” she says. For Riccio-Ackerman, part of the quest to improve human mobility means ensuring that the people who need access to prosthetic care can get it—for the duration of their lives.

“We’re really focused on improving and augmenting human mobility,” says Riccio-Ackerman.

In September 2021, the Yang Center provided funding for Riccio-Ackerman to travel to Sierra Leone, where she witnessed the lingering physical effects of a brutal decade-long civil war that ended in 2002. Prosthetic and orthotic care in the country, where a vast number of patients are also disabled by untreated polio or diabetes, has become more elusive, she says, as global media attention on the war’s aftermath has subsided. “People with amputation need low-level, consistent care for years. There really needs to be a long-term investment in improving this.”

Through the Yang Center and supported by a fellowship from the new MIT Morningside Academy for Design, Riccio-Ackerman is designing and building a sustainable care and delivery model in Sierra Leone that aims to multiply the production of prosthetic limbs and strengthen the country’s prosthetic sector. “[We’re working] to improve access to orthotic and prosthetic services,” she says.

She is also helping to establish a supply chain for prosthetic limb and orthotic brace parts and equipping clinics with machines and infrastructure to serve more patients. In January 2023, her team launched a four-year collaboration with the Sierra Leone Ministry of Health and Sanitation. One of the goals of the joint effort is to enable Sierra Leoneans to obtain professional prosthetics training, so they can care for their own community without leaving home.

From engineering to economics

Riccio-Ackerman was drawn to issues around human mobility after witnessing her aunt suffer from rheumatoid arthritis. “My aunt was young, but she looked like she was 80 or 90. She was sick, in pain, in a wheelchair— a young spirit in an old body,” she says.

As a biomedical engineering undergraduate student at Florida International University, Riccio-Ackerman worked on clinical trials for neural-enabled myoelectric arms controlled by nerves in the body. She says that the technology was thrilling yet heartbreaking. She would often have to explain to patients who participated in testing that they couldn’t take the devices home and that they may never be covered by insurance.

Riccio-Ackerman began asking questions: “What factors determine who gets an amputation? Why are we making devices that are so expensive and inaccessible?” This sense of injustice inspired her to pivot away from device design and toward a master’s degree in health economics and policy at the SDA Bocconi School of Management in Milan.

She began work as a research specialist with Hugh Herr SM ’93, professor of arts and sciences at the MIT Media Lab and codirector of the Yang Center, helping to study communities that were medically neglected in prosthetic care. “I knew that the devices weren’t getting to the people who need them, and I didn’t know if the best way to solve it was through engineering,” Riccio-Ackerman explains.

While Riccio-Ackerman’s PhD should be finished within three years, she’s only at the beginning of her health care equity work. “We’re forging ahead in Sierra Leone and thinking about translating our strategy and methodologies to other communities around the globe that could benefit,” she says. “We hope to be able to do this in many, many countries in the future.”

Bionics researchers develop technologies to ease pain and transcend human limitations

This story originally appeared in the Spring 2023 issue of Spectrum.

___

In early December 2022, a middle-aged woman from California arrived at Boston’s Brigham and Women’s Hospital for the amputation of her right leg below the knee following an accident. This was no ordinary procedure. At the end of her remaining leg, surgeons attached a titanium fixture through which they threaded eight thin, electrically conductive wires. These flexible leads, implanted on her leg muscles, would, in the coming months, connect to a robotic, battery-powered prosthetic ankle and foot.

The goal of this unprecedented surgery, led by MIT researchers from the K. Lisa Yang Center for Bionics, was to restore near-natural function to the patient, enabling her to sense and control the position and motion of her ankle and foot—even with her eyes closed.

In the K. Lisa Yang Center for Bionics, codirector Hugh Herr SM ’93 and graduate student Christopher Shallal are working to return mobility to people disabled by disease or physical trauma. Photo: Tony Luong

“The brain knows exactly how to control the limb, and it doesn’t matter whether it is flesh and bone or made of titanium, silicon, and carbon composite,” says Hugh Herr SM ’93, professor of media arts and sciences, head of the MIT Media Lab’s Biomechatronics Group, codirector of the Yang Center, and an associate member of MIT’s McGovern Institute for Brain Research.

For Herr, in attendance during that long day, the surgery represented a critical milestone in a decades-long mission to develop technologies returning mobility to people disabled by disease or physical trauma. His research combines a dizzying range of disciplines—electrical, mechanical, tissue, and biomedical engineering, as well as neuroscience and robotics—and has yielded pathbreaking results. Herr’s more than 100 patents include a computer-controlled knee and powered ankle-foot prosthesis and have enabled thousands of people around the world to live more on their own terms, including Herr.

Surmounting catastrophe

For much of Herr’s life, “go” meant “up.”

“Starting when I was eight, I developed an extraordinary passion, an absolute obsession, for climbing; it’s all I thought about in life,” says Herr. He aspired “to be the best climber in the world,” a goal he nearly achieved in his teenage years, enthralled by the “purity” of ascending mountains ropeless and solo in record times, by “a vertical dance, a balance between physicality and mind control.”

McGovern Institute Associate Investigator Hugh Herr. Photo: Jimmy Day / MIT Media Lab

At 17, Herr became disoriented while climbing New Hampshire’s Mt. Washington during a blizzard. Days in the cold permanently damaged his legs, which had to be amputated below his knees. His rescue cost another man’s life, and Herr was despondent, disappointed in himself, and fearful for his future.

Then, following months of rehabilitation, he felt compelled to test himself. His first weekend home, when he couldn’t walk without canes and crutches, he headed back to the mountains. “I hobbled to the base of this vertical cliff and started ascending,” he recalls. “It brought me joy to realize that I was still me, the same person.”

But he also recognized that as a person with amputated limbs, he faced severe disadvantages. “Society doesn’t look kindly on people with unusual bodies; we are viewed as crippled and weak, and that did not sit well with me.” Unable to tolerate both the new physical and social constraints on his life, Herr determined to view his disability not as a loss but as an opportunity. “I think the rage was the catapult that led me to do something that was without precedent,” he says.

Lifelike limb

On hand in the surgical theater in December was a member of Herr’s Biomechatronics Group for whom the bionic limb procedure also held special resonance. Christopher Shallal, a second-year graduate student in the Harvard-MIT Health Sciences and Technology program who received bilateral lower limb amputations at birth, worked alongside surgeon Matthew Carty testing the electric leads before implantation in the patient. Shallal found this, his first direct involvement with a reconstruction surgery, deeply fulfilling.

“Ever since I was a kid, I’ve wanted to do medicine plus engineering,” says Shallal. “I’m really excited to work on this bionic limb reconstruction, which will probably be one of the most advanced systems yet in terms of neural interfacing and control, with a far greater range of motion possible.”

Herr and Shallal are working on a next-generation, biomimetic limb with implanted sensors that can relay signals between the external prosthesis and muscles in the remaining limb. Photo: Tony Luong

Like other Herr lab designs, the new prosthesis features onboard, battery-powered propulsion, microprocessors, and tunable actuators. But this next-generation, biomimetic limb represents a major leap forward, replacing electrodes sited on a patient’s skin, subject to sweat and other environmental threats, with implanted sensors that can relay signals between the external prosthesis and muscles in the remaining limb.

This system takes advantage of a breakthrough technique invented several years ago by the Herr lab called CMI (for cutaneous mechanoneural interface), which constructs muscle-skin-nerve bundles at the amputation site. Muscle actuators controlled by computers on board the external prosthesis apply forces on skin cells implanted within the amputated residuum when a person with amputation touches an object with their prosthesis.

With CMI and electric leads connecting the prosthesis to these muscle actuators within the residual limb, the researchers hypothesize that a person with an amputation will be able to “feel” their prosthetic leg step onto the ground. This sensory capability is the holy grail for persons with major limb loss. After recovery from her surgery, the woman from California will be wearing Herr’s latest state-of-the-art prosthetic system in the lab.

‘Tinkering’ with the body

Not all artificial limbs emulate those that humans are born with. “You can make them however you want, swapping them in and out depending on what you want to do, and they can take you anywhere,” Herr says. Committed to extreme climbing even after his accident, Herr came up with special limbs that became a commercial hit early in his career. His designs made it possible for someone with amputated legs to run and dance.

But he also knew the day-to-day discomfort of navigating on flatter earth with most prostheses. He won his first patent during his senior year of college for a fluid-controlled socket attachment designed to reduce the pain of walking. Growing up in a Mennonite family skilled in handcrafting things they needed, and in a larger community that was disdainful of technology, Herr says he had “difficulty trusting machines.” Yet by the time he began his master’s program at MIT, intent on liberating persons with limb amputation to live more fully in the world, he had embraced the tools of science and engineering as the means to this end.

“I want to be in the business of designing not more and more powerful tools but designing new bodies,” says Hugh Herr.

For Shallal, Herr was an early icon, and his inventions and climbing exploits served as inspiration. “I’d known about Hugh since middle school; he was famous among those with amputations,” he says. “As a kid, I liked tinkering with things, and I kind of saw my body as a canvas, a place where I could explore different boundaries and expand possibilities for myself and others with amputations.” In school, Shallal sometimes encountered resistance to his prostheses. “People would say I couldn’t do certain things, like running and playing different sports, and I found these barriers frustrating,” he says. “I did things in my own way and didn’t want people to pity me.”

In fact, Shallal felt he could do some things better than his peers. In high school, he used a 3-D printer to make a mobile phone charger case he could plug into his prosthesis. “As a kid, I would wear long pants to hide my legs, but as the technology got cooler, I started wearing shorts,” he says. “I got comfortable and liked kind of showing off my legs.”

Global impact

December’s surgery was the first phase in the bionic limb project. Shallal will be following up with the patient over many months, ensuring that the connections between her limb and implanted sensors function and provide appropriate sensorimotor data for the built-in processor. Research on this and other patients to determine the impact of these limbs on gait and ease of managing slopes, for instance, will form the basis for Shallal’s dissertation.

“After graduation, I’d be really interested in translating technology out of the lab, maybe doing a startup related to neural interfacing technology,” he says. “I watched Inspector Gadget on television when I was a kid. Making the tool you need at the time you need it to fix problems would be my dream.”

Herr will be overseeing Shallal’s work, as well as a suite of research efforts propelled by other graduate students, postdocs, and research scientists that together promise to strengthen the technology behind this generation of biomimetic prostheses.

One example: devising an innovative method for measuring muscle length and velocity with tiny implanted magnets. In work published in November 2022, researchers including Herr; project lead Cameron Taylor SM ’16, PhD ’20, a research associate in the Biomechatronics Group; and Brown University partners demonstrated that this new tool, magnetomicrometry, yields the kind of high-resolution data necessary for even more precise bionic limb control. The Herr lab awaits FDA approval on human implantation of the magnetic beads.

These intertwined initiatives are central to the ambitious mission of the K. Lisa Yang Center for Bionics, established with a $24 million gift from Yang in 2021 to tackle transformative bionic interventions to address an extensive range of human limitations.

Herr is committed to making the broadest possible impact with his technologies. “Shoes and braces hurt, so my group is developing the science of comfort—designing mechanical parts that attach to the body and transfer loads without causing pain.” These inventions may prove useful not just to people living with amputation but to patients suffering from arthritis or other diseases affecting muscles, joints, and bones, whether in lower limbs or arms and hands.

The Yang Center aims to make prosthetic and orthotic devices more accessible globally, so Herr’s group is ramping up services in Sierra Leone, where civil war left tens of thousands missing limbs after devastating machete attacks. “We’re educating clinicians, helping with supply chain infrastructure, introducing novel assistive technology, and developing mobile delivery platforms,” he says.

In the end, says Herr, “I want to be in the business of designing not more and more powerful tools but designing new bodies.” Herr uses himself as an example: “I walk on two very powerful robots, but they’re not linked to my skeleton, or to my brain, so when I walk it feels like I’m on powerful machines that are not me. What I want is such a marriage between human physiology and electromechanics that a person feels at one with the synthetic, designed content of their body.”

Modeling the marvelous journey from A to B

This story originally appeared in the Spring 2023 issue of Spectrum.

___

Nidhi Seethapathi was first drawn to using powerful yet simple models to understand elaborate patterns when she learned about Newton’s laws of motion as a high school student in India. She was fascinated by the idea that wonderfully complex behaviors can arise from a set of objects that follow a few elementary rules.

Now an assistant professor at MIT, Seethapathi seeks to capture the intricacies of movement in the real world, using computational modeling as well as input from theory and experimentation. “[Theoretical physicist and Nobel laureate] Richard Feynman ’39 once said, ‘What I cannot create, I do not understand,’” Seethapathi says. “In that same spirit, the way I try to understand movement is by building models that move the way we do.”

Models of locomotion in the real world

Seethapathi—who holds a shared faculty position between the Department of Brain and Cognitive Sciences and the Department of Electrical Engineering and Computer Science’s Faculty of Artificial Intelligence + Decision-Making, which is housed in the Schwarzman College of Computing and the School of Engineering—recalls a moment during her undergraduate years studying mechanical engineering in Mumbai when a professor asked students to pick an aspect of movement to examine in detail. While most of her peers chose to analyze machines, Seethapathi selected the human hand. She was astounded by its versatility, she says, and by the number of variables, referred to by scientists as “degrees of freedom,” that are needed to characterize routine manual tasks. The assignment made her realize that she wanted to explore the diverse ways in which the entire human body can move.

Seethapathi, who is also an investigator at the McGovern Institute for Brain Research, pursued graduate research at The Ohio State University Movement Lab, where her goal was to identify the key elements of human locomotion. At that time, most people in the field were analyzing simple movements, she says, “but I was interested in broadening the scope of my models to include real-world behavior. Given that movement is so ubiquitous, I wondered: What can this model say about everyday life?”

After earning her PhD from Ohio State in 2018, Seethapathi continued this line of research as a postdoctoral fellow at the University of Pennsylvania. New computer vision tools to track human movement from video footage had just entered the scene, and during her time at UPenn, Seethapathi sought to expand her skillset to include computer vision and applications to movement rehabilitation.

At MIT, Seethapathi continues to extend the range of her studies of human movement, looking at how locomotion can evolve as people grow and age, and how it can adapt to anatomical changes and even adjust to shifts in weather, which can alter ground conditions. Her investigations now encompass other species as part of an effort to determine how creatures with different morphologies and habitats regulate their movements.

The models Seethapathi and her team create make predictions about human movements that can later be verified or refuted by empirical tests. While relatively simple experiments can be carried out on treadmills, her group is developing measurement systems incorporating wearable sensors and video-based sensing to measure movement data that have traditionally been hard to obtain outside the laboratory.

Although Seethapathi says she is primarily driven to uncover the fundamental principles that govern movement behavior, she believes her work also has practical applications.

“When people are treated for a movement disorder, the goal is to impact their movements in the real world,” she says. “We can use our predictive models to see how a particular intervention will affect a person’s trajectory. The hope is that our models can help put the individual on the right track to recovery as early as possible.”

Eight from MIT elected to American Academy of Arts and Sciences for 2023

Eight MIT faculty members are among more than 250 leaders from academia, the arts, industry, public policy, and research elected to the American Academy of Arts and Sciences, the academy announced April 19.

One of the nation’s most prestigious honorary societies, the academy is also a leading center for independent policy research. Members contribute to academy publications, as well as studies of science and technology policy, energy and global security, social policy and American institutions, the humanities and culture, and education.

Those elected from MIT in 2023 are:

  • Arnaud Costinot, professor of economics;
  • James J. DiCarlo, Peter de Florez Professor of Brain and Cognitive Sciences, director of the MIT Quest for Intelligence, and McGovern Institute Investigator;
  • Piotr Indyk, the Thomas D. and Virginia W. Cabot Professor of Electrical Engineering and Computer Science;
  • Senthil Todadri, professor of physics;
  • Evelyn N. Wang, Ford Professor of Engineering (on leave) and director of the Department of Energy’s Advanced Research Projects Agency-Energy;
  • Boleslaw Wyslouch, professor of physics and director of the Laboratory for Nuclear Science and Bates Research and Engineering Center;
  • Yukiko Yamashita, professor of biology and core member of the Whitehead Institute; and
  • Wei Zhang, professor of mathematics.

“With the election of these members, the academy is honoring excellence, innovation, and leadership and recognizing a broad array of stellar accomplishments. We hope every new member celebrates this achievement and joins our work advancing the common good,” says David W. Oxtoby, president of the academy.

Since its founding in 1780, the academy has elected leading thinkers from each generation, including George Washington and Benjamin Franklin in the 18th century, Maria Mitchell and Daniel Webster in the 19th century, and Toni Morrison and Albert Einstein in the 20th century. The current membership includes more than 250 Nobel and Pulitzer Prize winners.

Real-time feedback helps adolescents with depression quiet the mind

Real-time feedback about brain activity can help adolescents with depression or anxiety quiet their minds, according to a new study from MIT scientists. The researchers, led by McGovern research affiliate Susan Whitfield-Gabrieli, have used functional magnetic resonance imaging (fMRI) to show patients what’s happening in their brain as they practice mindfulness inside the scanner and to encourage them to focus on the present. They report in the journal Molecular Psychiatry that doing so settles down neural networks that are associated with symptoms of depression.

McGovern research affiliate Susan Whitfield-Gabrieli in the Martinos Imaging Center.

“We know this mindfulness meditation is really good for kids and teens, and we think this real-time fMRI neurofeedback is really a way to engage them and provide a visual representation of how they’re doing,” says Whitfield-Gabrieli. “And once we train people how to do mindfulness meditation, they can do it on their own at any time, wherever they are.”

The approach could be a valuable tool to alleviate or prevent depression in young people, which has been on the rise in recent years and escalated alarmingly during the Covid-19 pandemic. “This has gone from bad to catastrophic, in my perspective,” Whitfield-Gabrieli says. “We have to think out of the box and come up with some really innovative ways to help.”

Default mode network

Mindfulness meditation, in which practitioners focus their awareness on the present moment, can modulate activity within the brain’s default mode network, which is so named because it is most active when a person is not focused on any particular task. Two hubs within the default mode network, the medial prefrontal cortex and the posterior cingulate cortex, are of particular interest to Whitfield-Gabrieli and her colleagues, due to a potential role in the symptoms of depression and anxiety.

“These two core hubs are very engaged when we’re thinking about the past or the future and we’re not really engaged in the present moment,” she explains. “If we’re in a healthy state of mind, we may be reminiscing about the past or planning for the future. But if we’re depressed, that reminiscing may turn into rumination or obsessively rehashing the past. If we’re particularly anxious, we may be obsessively worrying about the future.”

Whitfield-Gabrieli explains that these key hubs are often hyperconnected in people with anxiety and depression. The more tightly correlated the two regions’ activity is, the worse a person’s symptoms are likely to be. Mindfulness, she says, can help interrupt that hyperconnectivity.
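
Hyperconnectivity here has a concrete, computable meaning: the correlation between the two hubs’ activity over time. Below is a minimal sketch using random placeholder time series in place of real fMRI signals.

```python
import numpy as np

# Placeholder BOLD time series for the two default-mode hubs, one value
# per fMRI time point (real data would come from the scanner).
rng = np.random.default_rng(0)
mpfc = rng.normal(size=300)              # medial prefrontal cortex
pcc = 0.7 * mpfc + rng.normal(size=300)  # posterior cingulate cortex

# Functional connectivity = Pearson correlation of the two signals.
# Tighter coupling (higher values) tracks worse symptoms in the study.
connectivity = np.corrcoef(mpfc, pcc)[0, 1]
print(f"mPFC-PCC connectivity: {connectivity:.2f}")
```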

“Mindfulness really helps to focus on the now, which just precludes all of this mind wandering and repetitive negative thinking,” she explains. In fact, she and her colleagues have found that mindfulness practice can reduce stress and improve attention in children. But she acknowledges that it can be difficult to engage young people and help them focus on the practice.

Tuning the mind

To help people visualize the benefits of their mindfulness practice, the researchers developed a game that can be played while an MRI scanner tracks a person’s brain activity. On a screen inside the scanner, the participant sees a ball and two circles. The circle at the top of the screen represents a desirable state in which the activity of the brain’s default mode network has been reduced, and the activity of a network the brain uses to focus on attention-demanding tasks—the frontal parietal network—has increased. An initial fMRI scan identifies these networks in each individual’s brain, creating a customized mental map on which the game is based.

“They’re training their brain to tune their mind. And they love it.” – Susan Whitfield-Gabrieli

As the person practices mindfulness meditation, which they learn prior to entering the scanner, the default mode network in the brain quiets while the frontal parietal network activates. When the scanner detects this change, the ball moves and eventually enters its target. With an initial success, the target shrinks, encouraging even more focus. When the participant’s mind wanders from their task, the default mode network activation increases (relative to the frontal parietal network) and the ball moves down towards the second circle, which represents an undesirable state. “Basically, they’re just moving this ball with their brain,” Whitfield-Gabrieli says. “They’re training their brain to tune their mind. And they love it.”
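
The feedback mapping itself reduces to something quite simple. The sketch below distills the gist; the function name, inputs, and scaling are invented for illustration, not the study’s actual implementation.

```python
def ball_position(dmn_activity, fpn_activity, gain=1.0):
    """Map relative network activation to a vertical ball position.

    When the frontal parietal network dominates (focused, present-moment
    attention), the signal is positive and the ball climbs toward the
    target circle; when the default mode network dominates (mind
    wandering), it falls toward the "undesirable" circle.
    """
    return gain * (fpn_activity - dmn_activity)

# Example: a focused moment vs. a mind-wandering moment.
print(ball_position(dmn_activity=0.2, fpn_activity=0.8))  # ball rises
print(ball_position(dmn_activity=0.9, fpn_activity=0.3))  # ball falls
```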

Nine individuals between the ages of 17 and 19 with a history of major depression or anxiety disorders tried this new approach to mindfulness training, and for each of them, Whitfield-Gabrieli’s team saw a reduction in connectivity within the default mode network. Now they are working to determine whether an electroencephalogram, in which brain activity is measured with noninvasive electrodes, can be used to provide similar neurofeedback during mindfulness training—an approach that could be more accessible for broad clinical use.

Whitfield-Gabrieli notes that hyperconnectivity in the default mode network is also associated with psychosis, and she and her team have found that mindfulness meditation with real-time fMRI feedback can help reduce symptoms in adults with schizophrenia. Future studies are planned to investigate how the method impacts teens’ ability to establish a mindfulness practice and its potential effects on depression symptoms.