Nature: An unexpected source of innovative tools to study the brain

This story originally appeared in the Fall 2023 issue of BrainScan.

___

Scientist holds 3D printed phage over a natural background.
Genetic engineer Joseph Kreitz looks to the microscopic world for inspiration in Feng Zhang’s lab at the McGovern Institute. Photo: Steph Stevens

In their quest to deepen their understanding of the brain, McGovern scientists take inspiration wherever it comes — and sometimes it comes from surprising sources. To develop new tools for research and innovative strategies for treating disease, they’ve drawn on proteins that organisms have been making for billions of years as well as sophisticated materials engineered for modern technology.

For McGovern investigator Feng Zhang, the natural world provides a rich source of molecules with remarkable and potentially useful functions.

Zhang is one of the pioneers of CRISPR, a programmable system for gene editing that is built from the components of a bacterial adaptive immune system. Scientists worldwide use CRISPR to modify genetic sequences in their labs, and many CRISPR-based therapies, which aim to treat disease through gene editing, are now in development. Meanwhile, Zhang and his team have continued to explore CRISPR-like systems beyond the bacteria in which they were originally discovered.

Turning to nature

This year, the search for evolutionarily related systems led Zhang’s team to a set of enzymes made by more complex organisms, including single-celled algae and hard-shell clams. Like the enzymes that power CRISPR, these newly discovered enzymes, called Fanzors, can be directed to cut DNA at specific sites by programming an RNA molecule as a guide.

Rhiannon Macrae, a scientific advisor in Zhang’s lab, says the discovery was surprising because Fanzors don’t seem to play the same role in immunity that CRISPR systems do. In fact, she says it’s not clear what Fanzors do at all. But as programmable gene editors, Fanzors might have an important advantage over current CRISPR tools — particularly for clinical applications. “Fanzor proteins are much smaller than the workhorse CRISPR tool, Cas9,” Macrae says. “This really matters when you actually want to be able to use one of these tools in a patient, because the bigger the tool, the harder it is to package and deliver to patients’ cells.”

Cryo-EM map of a Fanzor protein (gray, yellow, light blue, and pink) in complex with ωRNA (purple) and its target DNA (red). Non-target DNA strand in blue. Image: Zhang lab

Zhang’s team has thought a lot about how to get therapies to patients’ cells, and size is only one consideration. They’ve also been looking for ways to direct drugs, gene-editing tools, or other therapies to specific cells and tissues in the body. One of the lab’s leading strategies comes from another unexpected natural source: a microscopic syringe produced by certain insect-infecting bacteria.

In their search for an efficient system for targeted drug delivery, Zhang and graduate student Joseph Kreitz first considered the injection systems of bacteria-infecting viruses: needle-like structures that pierce the outer membrane of their host to deliver their own genetic material. But these viral injection systems can’t easily be freed from the rest of the virus.

Then Zhang learned that some bacteria have injection systems of their own, which they release inside their hosts after packing them with toxins. His team reengineered the bacterial syringe, devising a delivery system that works on human cells. Their current system can be programmed to inject proteins — including those used for gene editing — directly into specified cell types. With further development, Zhang hopes it will work with other types of therapies as well.

Magnetic imaging

In McGovern Associate Investigator Alan Jasanoff’s lab, researchers are designing sensors that can track the activity of specific neurons or molecules in the brain, using magnetic resonance imaging (MRI) or related forms of non-invasive imaging. These tools are essential for understanding how the brain’s cells and circuits work together to process information. “We want to give MRI a suite of metaphorical colors: sensitivities that enable us to dissect the different kinds of mechanistically significant contributors to neural activity,” he explains.

Jasanoff can tick off a list of molecules with notable roles in biology and industry that his lab has repurposed to glean more information from brain imaging. These include manganese — a metal once used to tint ancient glass; nitric oxide synthase — the enzyme that causes blushing; and iron oxide nanoparticles — tiny magnets that enable compact data storage inside computers. But Jasanoff says none of these should be considered out of place in the imaging world. “Most are pretty logical choices,” he says. “They all do different things and we use them in pretty different ways, but they are either magnetic or interact with magnetic molecules to serve our purposes for brain imaging.”

Close-up picture of manganese metal
Manganese, a metal that interacts weakly with magnetic fields, is a key component in new MRI sensors being developed in Alan Jasanoff’s lab at the McGovern Institute.

The enzyme nitric oxide synthase, for example, plays an important role in most functional MRI scans. The enzyme produces nitric oxide, which causes blood vessels to expand. This can bring a blush to the cheeks, but in the brain, it increases blood flow to bring more oxygen to busy neurons. MRI can detect this change because it is sensitive to the magnetic properties of blood.

By using blood flow as a proxy for neural activity, functional MRI scans light up active regions of the brain, but they can’t pinpoint the activity of specific cells. So Jasanoff and his team devised a more informative MRI sensor by reengineering nitric oxide synthase. Their modified enzyme, which they call NOSTIC, can be introduced into a select group of cells, where it will produce nitric oxide in response to neural activity — triggering increased blood flow and strengthening the local MRI signal. Researchers can deliver it to specific kinds of brain cells, or they can deliver it exclusively to neurons that communicate directly with one another. Then they can watch for an elevated MRI signal when those cells fire. This lets them see how information flows through the brain and tie specific cells to particular tasks.

Miranda Dawson, a graduate student in Jasanoff’s lab, is using NOSTIC to study the brain circuits that fuel addiction. She’s interested in the involvement of a brain region called the insula, which may mediate the physical sensations that people with addiction experience during drug cravings or withdrawal. With NOSTIC, Dawson can follow how the insula communicates with other parts of the brain as a rat experiences these stages of addiction. “We give our sensor to the insula, and then it projects to anatomically connected brain regions,” she explains. “So we’re able to delineate what circuits are being activated at different points in the addiction cycle.”

Scientist with folded arms next to a picture of a brain
Miranda Dawson uses her lab’s novel MRI sensor, NOSTIC, to illuminate the brain circuits involved in fentanyl craving and withdrawal. Photo: Steph Stevens; MRI scan: Nan Li, Souparno Ghosh, Jasanoff lab

Mining biodiversity

McGovern investigators know that good ideas and useful tools can come from anywhere. Sometimes, the key to harnessing those tools is simply recognizing their potential. But there are also opportunities for a more deliberate approach to finding them.

McGovern Investigator Ed Boyden is leading a program that aims to accelerate the discovery of valuable natural products. Called the Biodiversity Network (BioNet), the project is collecting biospecimens from around the world and systematically analyzing them, looking for molecular tools that could be applied to major challenges in science and medicine, from brain research to organ preservation. “The idea behind BioNet,” Boyden explains, “is rather than wait for chance to give us these discoveries, can we go look for them on purpose?”

Making invisible therapy targets visible

The lab of Edward Boyden, the Y. Eva Tan Professor in Neurotechnology, has developed a powerful technology called Expansion Revealing (ExR) that makes visible molecular structures previously hidden from even the most powerful microscopes. Using ordinary lab microscopes, it “reveals” nanoscale alterations in synapses, neural wiring, and other molecular assemblies. Here is how: inside a cell, proteins and other molecules are often tightly packed together, and these dense clusters are difficult to image because the fluorescent labels used to make them visible can’t wedge themselves between the molecules. ExR “de-crowds” the molecules by chemically expanding the cell, making them accessible to fluorescent tags.

Jinyoung Kang is a J. Douglas Tan Postdoctoral Fellow in the Boyden and Feng labs. Photo: Steph Stevens

“This technology can be used to answer a lot of biological questions about dysfunction in synaptic proteins, which are involved in neurodegenerative diseases,” says Jinyoung Kang, a J. Douglas Tan Postdoctoral Fellow in the labs of Boyden and Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences. “Until now, there has been no tool to visualize synapses very well at nanoscale.”

Over the past year, the Boyden team has been using ExR to explore the underlying mechanisms of brain disorders, including autism spectrum disorder (ASD) and Alzheimer’s disease. Since the method can be applied iteratively, Boyden imagines it may one day succeed in creating a 100-fold magnification of molecular structures.

“Using earlier technology, researchers may be missing entire categories of molecular phenomena, both functional and dysfunctional,” says Boyden. “It’s critical to bring these nanostructures into view so that we can identify potential targets for new therapeutics that can restore functional molecular arrangements.”

The team is applying ExR to brain slices from mutant animal models to expose the complex 3D nanoarchitecture and configuration of synapses. Among their questions: How do synapses differ in the presence of mutations that cause autism and other neurological conditions?

Using the new technology, Kang and her collaborator Menglong Zeng characterized the molecular architecture of excitatory synapses on parvalbumin interneurons, cells that drastically influence the downstream effects of neuronal signaling and ultimately change cognitive behaviors. They discovered that condensed AMPAR clustering in parvalbumin interneurons is essential for normal brain function. The next step is to explore the clusters’ role in the function of parvalbumin interneurons, which are vulnerable to stressors and have been implicated in brain disorders including autism and Alzheimer’s disease.

The researchers are now investigating whether ExR can reveal abnormal protein nanostructures in SHANK3 knockout mice and marmosets. Mutations in the SHANK3 gene lead to one of the most severe types of ASD, Phelan-McDermid syndrome, which accounts for about 2 percent of all ASD patients with intellectual disability.

Researchers uncover new CRISPR-like system in animals that can edit the human genome

A team of researchers led by Feng Zhang at the McGovern Institute and the Broad Institute of MIT and Harvard has uncovered the first programmable RNA-guided system in eukaryotes — organisms that include fungi, plants, and animals.

In a study in Nature, the team describes how the system is based on a protein called Fanzor. They showed that Fanzor proteins use RNA as a guide to target DNA precisely, and that Fanzors can be reprogrammed to edit the genome of human cells. The compact Fanzor systems have the potential to be more easily delivered to cells and tissues as therapeutics than CRISPR/Cas systems, and further refinements to improve their targeting efficiency could make them a valuable new technology for human genome editing.

CRISPR/Cas was first discovered in prokaryotes (bacteria and other single-cell organisms that lack nuclei), and scientists, including members of Zhang’s lab, have long wondered whether similar systems exist in eukaryotes. The new study demonstrates that RNA-guided DNA-cutting mechanisms are present across all kingdoms of life.

“This new system is another way to make precise changes in human cells, complementing the genome editing tools we already have.” — Feng Zhang

“CRISPR-based systems are widely used and powerful because they can be easily reprogrammed to target different sites in the genome,” said Zhang, senior author on the study and a core institute member at the Broad, an investigator at MIT’s McGovern Institute, the James and Patricia Poitras Professor of Neuroscience at MIT, and a Howard Hughes Medical Institute investigator. “This new system is another way to make precise changes in human cells, complementing the genome editing tools we already have.”

Searching the domains of life

A major aim of the Zhang lab is to develop genetic medicines using systems that can modulate human cells by targeting specific genes and processes. “A number of years ago, we started to ask, ‘What is there beyond CRISPR, and are there other RNA-programmable systems out there in nature?’” said Zhang.

Feng Zhang with folded arms in lab
McGovern Investigator Feng Zhang in his lab.

Two years ago, Zhang lab members discovered a class of RNA-programmable systems in prokaryotes called OMEGAs, which are often linked with transposable elements, or “jumping genes”, in bacterial genomes and likely gave rise to CRISPR/Cas systems. That work also highlighted similarities between prokaryotic OMEGA systems and Fanzor proteins in eukaryotes, suggesting that the Fanzor enzymes might also use an RNA-guided mechanism to target and cut DNA.

In the new study, the researchers continued their study of RNA-guided systems by isolating Fanzors from fungi, algae, and amoeba species, in addition to a clam known as the Northern Quahog. Co-first author Makoto Saito of the Zhang lab led the biochemical characterization of the Fanzor proteins, showing that they are DNA-cutting endonuclease enzymes that use nearby non-coding RNAs known as ωRNAs to target particular sites in the genome. It is the first time this mechanism has been found in eukaryotes, such as animals.

Unlike CRISPR proteins, Fanzor enzymes are encoded in the eukaryotic genome within transposable elements, and the team’s phylogenetic analysis suggests that Fanzor genes migrated from bacteria to eukaryotes through so-called horizontal gene transfer.

“These OMEGA systems are more ancestral to CRISPR and they are among the most abundant proteins on the planet, so it makes sense that they have been able to hop back and forth between prokaryotes and eukaryotes,” said Saito.

To explore Fanzor’s potential as a genome-editing tool, the researchers demonstrated that it can generate insertions and deletions at targeted genome sites within human cells. The researchers found the Fanzor system to be initially less efficient at snipping DNA than CRISPR/Cas systems, but through systematic engineering they introduced a combination of mutations into the protein that increased its activity 10-fold. Additionally, unlike some CRISPR systems and the OMEGA protein TnpB, a fungal-derived Fanzor protein did not exhibit “collateral activity,” in which an RNA-guided enzyme cleaves its DNA target and then also degrades nearby DNA or RNA. The results suggest that Fanzors could potentially be developed as efficient genome editors.

Co-first author Peiyu Xu led an effort to analyze the molecular structure of the Fanzor/ωRNA complex and illustrate how it latches onto DNA to cut it. Fanzor shares structural similarities with its prokaryotic counterpart CRISPR-Cas12 protein, but the interaction between the ωRNA and the catalytic domains of Fanzor is more extensive, suggesting that the ωRNA might play a role in the catalytic reactions. “We are excited about these structural insights for helping us further engineer and optimize Fanzor for improved efficiency and precision as a genome editor,” said Xu.

Like CRISPR-based systems, the Fanzor system can be easily reprogrammed to target specific genome sites, and Zhang said it could one day be developed into a powerful new genome editing technology for research and therapeutic applications. The abundance of RNA-guided endonucleases like Fanzors further expands the number of OMEGA systems known across kingdoms of life and suggests that there are more yet to be found.

“Nature is amazing. There’s so much diversity,” said Zhang. “There are probably more RNA-programmable systems out there, and we’re continuing to explore and will hopefully discover more.”

The paper’s other authors include Guilhem Faure, Samantha Maguire, Soumya Kannan, Han Altae-Tran, Sam Vo, AnAn Desimone, and Rhiannon Macrae.

Support for this work was provided by the Howard Hughes Medical Institute; Poitras Center for Psychiatric Disorders Research at MIT; K. Lisa Yang and Hock E. Tan Molecular Therapeutics Center at MIT; Broad Institute Programmable Therapeutics Gift Donors; The Pershing Square Foundation, William Ackman, and Neri Oxman; James and Patricia Poitras; BT Charitable Foundation; Asness Family Foundation; Kenneth C. Griffin; the Phillips family; David Cheng; Robert Metcalfe; and Hugo Shong.


Magnetic robots walk, crawl, and swim

MIT scientists have developed tiny, soft-bodied robots that can be controlled with a weak magnet. The robots, formed from rubbery magnetic spirals, can be programmed to walk, crawl, or swim—all in response to a simple, easy-to-apply magnetic field.

“This is the first time this has been done, to be able to control three-dimensional locomotion of robots with a one-dimensional magnetic field,” says McGovern associate investigator Polina Anikeeva, whose team reported on the magnetic robots June 3, 2023, in the journal Advanced Materials. “And because they are predominantly composed of polymer and polymers are soft, you don’t need a very large magnetic field to activate them. It’s actually a really tiny magnetic field that drives these robots,” says Anikeeva, who is also the Matoula S. Salapatas Professor in Materials Science and Engineering and a professor of brain and cognitive sciences at MIT, as well as the associate director of MIT’s Research Laboratory of Electronics and director of MIT’s K. Lisa Yang Brain-Body Center.

Portrait of MIT scientist Polina Anikeeva
McGovern Institute Associate Investigator Polina Anikeeva in her lab. Photo: Steph Stevens

The new robots are well suited to transport cargo through confined spaces and their rubber bodies are gentle on fragile environments, opening the possibility that the technology could be developed for biomedical applications. Anikeeva and her team have made their robots millimeters long, but she says the same approach could be used to produce much smaller robots.

Engineering magnetic robots

Anikeeva says that until now, magnetic robots have moved in response to moving magnetic fields. She explains that for these models, “if you want your robot to walk, your magnet walks with it. If you want it to rotate, you rotate your magnet.” That limits the settings in which such robots might be deployed. “If you are trying to operate in a really constrained environment, a moving magnet may not be the safest solution. You want to be able to have a stationary instrument that just applies magnetic field to the whole sample,” she explains.

Youngbin Lee, a former graduate student in Anikeeva’s lab, engineered a solution to this problem. The robots he developed are not uniformly magnetized. Instead, they are strategically magnetized in different zones and directions, so that a single magnetic field produces the profile of magnetic forces that drives movement.

Before they are magnetized, however, the flexible, lightweight bodies of the robots must be fabricated. Lee starts this process with two kinds of rubber, each with a different stiffness. These are sandwiched together, then heated and stretched into a long, thin fiber. Because of the two materials’ different properties, one of the rubbers retains its elasticity through this stretching process, but the other deforms and cannot return to its original size. So when the strain is released, one layer of the fiber contracts, tugging on the other side and pulling the whole thing into a tight coil. Anikeeva says the helical fiber is modeled after the twisty tendrils of a cucumber plant, which spiral when one layer of cells loses water and contracts faster than a second layer.

A third material—one whose particles have the potential to become magnetic—is incorporated in a channel that runs through the rubbery fiber. So once the spiral has been made, a magnetization pattern that enables a particular type of movement can be introduced.

“Youngbin thought very carefully about how to magnetize our robots to make them able to move just as he programmed them to move,” Anikeeva says. “He made calculations to determine how to establish such a profile of forces on it when we apply a magnetic field that it will actually start walking or crawling.”

To form a caterpillar-like crawling robot, for example, the helical fiber is shaped into gentle undulations, and then the body, head, and tail are magnetized so that a magnetic field applied perpendicular to the robot’s plane of motion will cause the body to compress. When the field is reduced to zero, the compression is released, and the crawling robot stretches. Together, these movements propel the robot forward. Another robot in which two foot-like helical fibers are connected with a joint is magnetized in a pattern that enables a movement more like walking.

Biomedical potential

This precise magnetization process generates a program for each robot and ensures that once the robots are made, they are simple to control. A weak magnetic field activates each robot’s program and drives its particular type of movement. A single magnetic field can even send multiple robots moving in opposite directions, if they have been programmed to do so. The team found that one minor manipulation of the magnetic field has a useful effect: With the flip of a switch to reverse the field, a cargo-carrying robot can be made to gently shake and release its payload.

Anikeeva says she can imagine these soft-bodied robots—whose straightforward production will be easy to scale up—delivering materials through narrow pipes or even inside the human body. For example, they might carry a drug through narrow blood vessels, releasing it exactly where it is needed. She says the magnetically actuated devices have biomedical potential beyond robots as well, and might one day be incorporated into artificial muscles or materials that support tissue regeneration.

Refining mental health diagnoses

Maedbh King came to MIT to make a difference in mental health. As a postdoctoral fellow in the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center, she is building computer models aimed at helping clinicians improve diagnosis and treatment, especially for young people with neurodevelopmental and psychiatric disorders.

Tapping two large patient-data sources, King is working to analyze critical biological and behavioral information to better categorize patients’ mental health conditions, including autism spectrum disorder, attention-deficit hyperactivity disorder (ADHD), anxiety, and suicidal thoughts — and to provide more predictive approaches to addressing them. Her strategy reflects the center’s commitment to a holistic understanding of human brain function using theoretical and computational neuroscience.

“Today, treatment decisions for psychiatric disorders are derived entirely from symptoms, which leaves clinicians and patients trying one treatment and, if it doesn’t work, trying another,” says King. “I hope to help change that.”

King grew up in Dublin, Ireland, and studied psychology in college; gained neuroimaging and programming skills while earning a master’s degree from Western University in Canada; and received her doctorate from the University of California, Berkeley, where she built maps and models of the human brain. In fall 2022, King joined the lab of Satrajit Ghosh, a McGovern Institute principal research scientist whose team uses neuroimaging, speech communication, and machine learning to improve assessments and treatments for mental health and neurological disorders.

Big-data insights

King is pursuing several projects using the Healthy Brain Network, a landmark mental health study of children and adolescents in New York City. She and lab colleagues are extracting data from cognitive and other assessments — such as language patterns, favorite school subjects, and family mental illness history — from roughly 4,000 participants to build a more nuanced understanding of their neurodevelopmental disorders, such as autism or ADHD.

“Computational models are powerful. They can identify patterns that can’t be obtained with the human eye through electronic records,” says King.

With this database, one can develop “very rich clinical profiles of these young people,” including their challenges and adaptive strengths, King explains. “We’re interested in placing these participants within a spectrum of symptoms, rather than just providing a binary label of, ‘has this disorder’ or ‘doesn’t have it.’ It’s an effort to subtype based on these phenotypic assessments.”

In other research, King is developing tools to detect risk factors for suicide among adolescents. Working with psychiatrists at Children’s Hospital of Philadelphia, she is using detailed questionnaires from some 20,000 youths who visited the hospital’s emergency department over several years; about one-tenth had tried to take their own lives. The questionnaires collect information about demographics, lifestyle, relationships, and other aspects of patients’ lives.

“One of the big questions the physicians want to answer is, Are there any risk predictors we can identify that can ultimately prevent, or at least mitigate, future suicide attempts?” King says. “Computational models are powerful. They can identify patterns that can’t be obtained with the human eye through electronic records.”

King is passionate about producing findings to help practitioners, whether they’re clinicians, teachers, parents, or policy makers, and the populations they’re studying. “This applied work,” she says, “should be communicated in a way that can be useful.”

When computer vision works more like a brain, it sees more like people do

From cameras to self-driving cars, many of today’s technologies depend on artificial intelligence (AI) to extract meaning from visual information.  Today’s AI technology has artificial neural networks at its core, and most of the time we can trust these AI computer vision systems to see things the way we do — but sometimes they falter. According to MIT and IBM Research scientists, one way to improve computer vision is to instruct the artificial neural networks that they rely on to deliberately mimic the way the brain’s biological neural network processes visual images.

Researchers led by James DiCarlo, the director of MIT’s Quest for Intelligence and member of the MIT-IBM Watson AI Lab, have made a computer vision model more robust by training it to work like a part of the brain that humans and other primates rely on for object recognition. This May, at the International Conference on Learning Representations (ICLR), the team reported that when they trained an artificial neural network using neural activity patterns in the brain’s inferior temporal (IT) cortex, the artificial neural network was more robustly able to identify objects in images than a model that lacked that neural training. And the model’s interpretations of images more closely matched what humans saw, even when images included minor distortions that made the task more difficult.

Comparing neural circuits

Portrait of Professor DiCarlo
McGovern Investigator and Director of MIT Quest for Intelligence, James DiCarlo. Photo: Justin Knight

Many of the artificial neural networks used for computer vision already resemble the multi-layered brain circuits that process visual information in humans and other primates. Like the brain, they use neuron-like units that work together to process information. As they are trained for a particular task, these layered components collectively and progressively process the visual information to complete the task — determining for example, that an image depicts a bear or a car or a tree.

DiCarlo and others previously found that when such deep-learning computer vision systems establish efficient ways to solve visual problems, they end up with artificial circuits that work similarly to the neural circuits that process visual information in our own brains. That is, they turn out to be surprisingly good scientific models of the neural mechanisms underlying primate and human vision.

That resemblance is helping neuroscientists deepen their understanding of the brain. By demonstrating ways visual information can be processed to make sense of images, computational models suggest hypotheses about how the brain might accomplish the same task. As developers continue to refine computer vision models, neuroscientists have found new ideas to explore in their own work.

“As vision systems get better at performing in the real world, some of them turn out to be more human-like in their internal processing. That’s useful from an understanding biology point of view,” says DiCarlo, who is also a professor of brain and cognitive sciences and an investigator at the McGovern Institute.

Engineering more brain-like AI

While their potential is promising, computer vision systems are not yet perfect models of human vision. DiCarlo suspected one way to improve computer vision may be to incorporate specific brain-like features into these models.

To test this idea, he and his collaborators built a computer vision model using neural data previously collected from vision-processing neurons in the monkey IT cortex — a key part of the primate ventral visual pathway involved in the recognition of objects — while the animals viewed various images. More specifically, Joel Dapello, a Harvard graduate student and former MIT-IBM Watson AI Lab intern, and Kohitij Kar, an assistant professor and Canada Research Chair in Visual Neuroscience at York University and a visiting scientist at MIT, in collaboration with David Cox, IBM Research’s VP for AI Models and IBM director of the MIT-IBM Watson AI Lab, and other researchers at IBM Research and MIT, asked an artificial neural network to emulate the behavior of these primate vision-processing neurons while the network learned to identify objects in a standard computer vision task.

“In effect, we said to the network, ‘please solve this standard computer vision task, but please also make the function of one of your inside simulated “neural” layers be as similar as possible to the function of the corresponding biological neural layer,’” DiCarlo explains. “We asked it to do both of those things as best it could.” This forced the artificial neural circuits to find a different way to process visual information than the standard, computer vision approach, he says.
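
In machine-learning terms, this amounts to optimizing two objectives at once: a standard task loss plus a penalty for straying from the recorded neural responses. The sketch below illustrates that idea in PyTorch; the tensor names, the MSE penalty, and the assumption that the model returns both logits and a simulated IT activity are illustrative choices, not the specific loss or architecture used in the ICLR paper.

```python
import torch
import torch.nn as nn

# Minimal sketch of dual-objective training. Assumes a model whose forward
# pass returns class logits plus the activity of a simulated "IT" layer, and
# a tensor `neural_targets` of recorded monkey IT responses to the same
# images. The paper's actual loss and architecture may differ.
def dual_objective_loss(logits, labels, model_it_activity, neural_targets,
                        alignment_weight=1.0):
    # Objective 1: solve the standard computer vision task.
    task_loss = nn.functional.cross_entropy(logits, labels)
    # Objective 2: make the simulated IT layer respond as similarly as
    # possible to the biological IT population (hypothetical MSE penalty).
    alignment_loss = nn.functional.mse_loss(model_it_activity, neural_targets)
    return task_loss + alignment_weight * alignment_loss

# Hypothetical usage, assuming model(images) returns (logits, it_activity):
# logits, it_activity = model(images)
# loss = dual_objective_loss(logits, labels, it_activity, neural_targets)
# loss.backward()
```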

After training the artificial model with biological data, DiCarlo’s team compared its activity to a similarly sized neural network model trained without neural data, using the standard approach for computer vision. They found that the new, biologically informed model IT layer was, as instructed, a better match for IT neural data. That is, for every image tested, the population of artificial IT neurons in the model responded more similarly to the corresponding population of biological IT neurons.

“Everybody gets something out of the exciting virtuous cycle between natural/biological intelligence and artificial intelligence,” DiCarlo says.

The researchers found that the model IT was also a better match to IT neural data collected from another monkey, even though the model had never seen data from that animal, and even when the comparison was evaluated on that monkey’s IT responses to new images. This indicated that the team’s new, “neurally aligned” computer model may be an improved model of the neurobiological function of the primate IT cortex — an interesting finding, given that it was previously unknown whether the amount of neural data that can currently be collected from the primate visual system is capable of directly guiding model development.

With their new computer model in hand, the team asked whether the “IT neural alignment” procedure also leads to any changes in the overall behavioral performance of the model. Indeed, they found that the neurally-aligned model was more human-like in its behavior — it tended to succeed in correctly categorizing objects in images for which humans also succeed, and it tended to fail when humans also fail.

Adversarial attacks

The team also found that the neurally-aligned model was more resistant to “adversarial attacks” that developers use to test computer vision and AI systems.  In computer vision, adversarial attacks introduce small distortions into images that are meant to mislead an artificial neural network.

“Say that you have an image that the model identifies as a cat. Because you have the knowledge of the internal workings of the model, you can then design very small changes in the image so that the model suddenly thinks it’s no longer a cat,” DiCarlo explains.

These minor distortions don’t typically fool humans, but computer vision models struggle with these alterations. A person who looks at the subtly distorted cat still reliably and robustly reports that it’s a cat. But standard computer vision models are more likely to mistake the cat for a dog, or even a tree.
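
One standard recipe for constructing such a distortion is the fast gradient sign method (FGSM), sketched below. This is a textbook attack offered only to make the idea concrete; the study itself may have used different or stronger attacks.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    # Use the model's own gradients to find, for every pixel, the direction
    # that most increases the classification loss, then step by epsilon.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    # Keep pixel values inside the valid [0, 1] range.
    return adversarial.clamp(0, 1).detach()
```

With a small epsilon, the change is imperceptible to a person, yet it can flip a standard model’s prediction — the failure mode the neurally aligned model proved more resistant to.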

“There must be some internal differences in the way our brains process images that lead to our vision being more resistant to those kinds of attacks,” DiCarlo says. And indeed, the team found that when they made their model more neurally-aligned, it became more robust, correctly identifying more images in the face of adversarial attacks.  The model could still be fooled by stronger “attacks,” but so can people, DiCarlo says. His team is now exploring the limits of adversarial robustness in humans.

A few years ago, DiCarlo’s team found they could also improve a model’s resistance to adversarial attacks by designing the first layer of the artificial network to emulate the early visual processing layer in the brain. One key next step is to combine such approaches — making new models that are simultaneously neurally-aligned at multiple visual processing layers.

The new work is further evidence that an exchange of ideas between neuroscience and computer science can drive progress in both fields. “Everybody gets something out of the exciting virtuous cycle between natural/biological intelligence and artificial intelligence,” DiCarlo says. “In this case, computer vision and AI researchers get new ways to achieve robustness and neuroscientists and cognitive scientists get more accurate mechanistic models of human vision.”

This work was supported by the MIT-IBM Watson AI Lab, Semiconductor Research Corporation, DARPA, the Massachusetts Institute of Technology Shoemaker Fellowship, Office of Naval Research, the Simons Foundation, and Canada Research Chair Program.

PhD student Wei-Chen Wang is moved to help people heal

This story originally appeared in the Spring 2023 issue of Spectrum.

___

When he turned his ankle five years ago as an undergraduate playing pickup basketball at the University of Illinois, Wei-Chen (Eric) Wang SM ’22 knew his life would change in certain ways. For one thing, Wang, then a computer science major, wouldn’t be playing basketball anytime soon. He also assumed, correctly, that he might require physical therapy (PT).

What he did not foresee was that this minor injury would influence his career trajectory. While lying on the PT bench, Wang began to wonder: “Can I replicate what the therapist is doing using a robot?” It was an idle thought at the time. Today, however, his research involves robots and movement, closely related to what had seemed a passing fancy.

Wang continued his focus on computer science as an MIT graduate student, receiving his master’s in 2022 before deciding to pursue work of a more applied nature. He met Nidhi Seethapathi, who had joined MIT’s faculty a few months earlier as an assistant professor in electrical engineering and computer science and in brain and cognitive sciences, and was intrigued by the notion of creating robots that could illuminate the key principles of movement—knowledge that might someday help people regain the ability to move comfortably after suffering from injury, stroke, or disease.

As the first PhD student in Seethapathi’s group and a MathWorks Fellow, Wang is charged with building machine learning-based models that can accurately predict and reproduce human movements. He will then use computer-simulated environments to visualize and evaluate the performance of these models.

To begin, he needs to gather data about specific human movements. One potential data collection method involves the placement of sensors or markers on different parts of the body to pinpoint their precise positions at any given moment. He can then try to calculate those positions in the future, as dictated by the equations of motion in physics.

The other method relies on computer vision-powered software that can automatically convert video footage to motion data. Wang prefers the latter approach, which he considers more natural. “We just look at what humans are doing and try to learn from that directly,” he explains. That’s also where machine learning comes in. “We use machine-learning tools to extract data from the video, and those data become the input to our model,” he adds. The model, in this case, is just another term for the robot brain.
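
As a simplified illustration of that pipeline, suppose a computer vision tool has already converted video into an array of tracked body landmarks; positions and finite-difference velocities then become training examples for a next-step prediction model. The array shapes, frame rate, and prediction target below are hypothetical, not the lab’s actual code.

```python
import numpy as np

# Hypothetical input: `keypoints` has shape (T, K, 2) — K body landmarks
# tracked in 2D across T video frames by a pose-estimation tool.
def make_training_pairs(keypoints, fps=30.0):
    dt = 1.0 / fps
    positions = keypoints.reshape(len(keypoints), -1)  # (T, 2K) flattened poses
    velocities = np.diff(positions, axis=0) / dt       # (T-1, 2K) finite differences
    # Input at frame t: the current pose and its incoming velocity.
    inputs = np.hstack([positions[1:-1], velocities[:-1]])
    # Target: the pose one frame later, which the model learns to predict.
    targets = positions[2:]
    return inputs, targets
```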

The near-term goal is not to make robots more natural, Wang notes. “We’re using [simulated] robots to understand how humans are moving and eventually to explain any kind of movement—or at least that’s the hope. That said, based on the general principles we’re able to abstract, we might someday build robots that can move more naturally.”

Wang is also collaborating on a project headed by postdoctoral fellow Antoine De Comité that focuses on robotic retrieval of objects—the movements required to remove books from a library shelf, for example, or to grab a drink from a refrigerator. While robots routinely excel at tasks such as grasping an object on a tabletop, performing naturalistic movements in three dimensions remains challenging.

Wang describes a video shown by a Stanford University scientist in which a robot destroyed a refrigerator while attempting to extract a beer. He and De Comité hope for better results with robots that have undergone reinforcement learning—an approach, often paired with deep learning, in which desired motions are rewarded or reinforced while unwanted motions are discouraged.
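
At the heart of that approach is a scalar reward the learning algorithm tries to maximize. A toy version of what a reward function for the retrieval task might look like is sketched below; the terms and weights are invented for illustration and are not drawn from the project itself.

```python
def retrieval_reward(gripper_to_object_dist, contact_force, object_dropped):
    # Reward progress toward the object...
    reward = -gripper_to_object_dist
    # ...penalize slamming into the environment (say, the refrigerator)...
    reward -= 10.0 * contact_force
    # ...and penalize dropping the payload. All weights here are arbitrary.
    if object_dropped:
        reward -= 5.0
    return reward
```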

If they succeed in designing a robot that can safely retrieve a beer, Wang says, then more important and delicate tasks could be within reach. Someday, a robot at PT might guide a patient through knee exercises or apply ultrasound to an arthritic elbow.

Francesca Riccio-Ackerman works to improve access to prosthetics

This story originally appeared in the Spring 2023 issue of Spectrum.

___

In Sierra Leone, war and illness have left up to 40,000 people requiring orthotics and prosthetics services, but there is a profound lack of access to specialized care, says Francesca Riccio-Ackerman, a biomedical engineer and PhD student studying health equity and health systems. There is just one fully certified prosthetist available for the thousands of patients in the African nation who are living with amputation, she notes. The ideal number is one for every 250, according to the World Health Organization and the International Society of Orthotics and Prosthetics.

The data point is significant for Riccio-Ackerman, who conducts research in the MIT Media Lab’s Biomechatronics Group and in the K. Lisa Yang Center for Bionics, both of which aim to improve translation of assistive technologies to people with disabilities. “We’re really focused on improving and augmenting human mobility,” she says. For Riccio-Ackerman, part of the quest to improve human mobility means ensuring that the people who need access to prosthetic care can get it—for the duration of their lives.

“We’re really focused on improving and augmenting human mobility,” says Riccio-Ackerman.

In September 2021, the Yang Center provided funding for Riccio-Ackerman to travel to Sierra Leone, where she witnessed the lingering physical effects of a brutal decade-long civil war that ended in 2002. Prosthetic and orthotic care in the country, where a vast number of patients are also disabled by untreated polio or diabetes, has become more elusive, she says, as global media attention on the war’s aftermath has subsided. “People with amputation need low-level, consistent care for years. There really needs to be a long-term investment in improving this.”

Through the Yang Center and supported by a fellowship from the new MIT Morningside Academy for Design, Riccio-Ackerman is designing and building a sustainable care and delivery model in Sierra Leone that aims to multiply the production of prosthetic limbs and strengthen the country’s prosthetic sector. “[We’re working] to improve access to orthotic and prosthetic services,” she says.

She is also helping to establish a supply chain for prosthetic limb and orthotic brace parts and equipping clinics with machines and infrastructure to serve more patients. In January 2023, her team launched a four-year collaboration with the Sierra Leone Ministry of Health and Sanitation. One of the goals of the joint effort is to enable Sierra Leoneans to obtain professional prosthetics training, so they can care for their own community without leaving home.

From engineering to economics

Riccio-Ackerman was drawn to issues around human mobility after witnessing her aunt suffer from rheumatoid arthritis. “My aunt was young, but she looked like she was 80 or 90. She was sick, in pain, in a wheelchair—a young spirit in an old body,” she says.

As a biomedical engineering undergraduate student at Florida International University, Riccio-Ackerman worked on clinical trials for neural-enabled myoelectric arms controlled by nerves in the body. She says that the technology was thrilling yet heartbreaking. She would often have to explain to patients who participated in testing that they couldn’t take the devices home and that they may never be covered by insurance.

Riccio-Ackerman began asking questions: “What factors determine who gets an amputation? Why are we making devices that are so expensive and inaccessible?” This sense of injustice inspired her to pivot away from device design and toward a master’s degree in health economics and policy at the SDA Bocconi School of Management in Milan.

She began work as a research specialist with Hugh Herr SM ’93, professor of arts and sciences at the MIT Media Lab and codirector of the Yang Center, helping to study communities that were medically neglected in prosthetic care. “I knew that the devices weren’t getting to the people who need them, and I didn’t know if the best way to solve it was through engineering,” Riccio-Ackerman explains.

While Riccio-Ackerman’s PhD should be finished within three years, she’s only at the beginning of her health care equity work. “We’re forging ahead in Sierra Leone and thinking about translating our strategy and methodologies to other communities around the globe that could benefit,” she says. “We hope to be able to do this in many, many countries in the future.”

Bionics researchers develop technologies to ease pain and transcend human limitations

This story originally appeared in the Spring 2023 issue of Spectrum.

___

In early December 2022, a middle-aged woman from California arrived at Boston’s Brigham and Women’s Hospital for the amputation of her right leg below the knee following an accident. This was no ordinary procedure. At the end of her remaining leg, surgeons attached a titanium fixture through which they threaded eight thin, electrically conductive wires. These flexible leads, implanted on her leg muscles, would, in the coming months, connect to a robotic, battery-powered prosthetic ankle and foot.

The goal of this unprecedented surgery, driven by MIT researchers from the K. Lisa Yang Center for Bionics at MIT, was the restoration of near-natural function to the patient, enabling her to sense and control the position and motion of her ankle and foot—even with her eyes closed.

In the K. Lisa Yang Center for Bionics, codirector Hugh Herr SM ’93 and graduate student Christopher Shallal are working to return mobility to people disabled by disease or physical trauma. Photo: Tony Luong

“The brain knows exactly how to control the limb, and it doesn’t matter whether it is flesh and bone or made of titanium, silicon, and carbon composite,” says Hugh Herr SM ’93, professor of media arts and sciences, head of the MIT Media Lab’s Biomechatronics Group, codirector of the Yang Center, and an associate member of MIT’s McGovern Institute for Brain Research.

For Herr, in attendance during that long day, the surgery represented a critical milestone in a decades-long mission to develop technologies returning mobility to people disabled by disease or physical trauma. His research combines a dizzying range of disciplines—electrical, mechanical, tissue, and biomedical engineering, as well as neuroscience and robotics—and has yielded pathbreaking results. Herr’s more than 100 patents include a computer-controlled knee and powered ankle-foot prosthesis and have enabled thousands of people around the world to live more on their own terms, including Herr.

Surmounting catastrophe

For much of Herr’s life, “go” meant “up.”

“Starting when I was eight, I developed an extraordinary passion, an absolute obsession, for climbing; it’s all I thought about in life,” says Herr. He aspired “to be the best climber in the world,” a goal he nearly achieved in his teenage years, enthralled by the “purity” of ascending mountains ropeless and solo in record times, by “a vertical dance, a balance between physicality and mind control.”

McGovern Institute Associate Investigator Hugh Herr. Photo: Jimmy Day / MIT Media Lab

At 17, Herr became disoriented while climbing New Hampshire’s Mt. Washington during a blizzard. Days in the cold permanently damaged his legs, which had to be amputated below his knees. His rescue cost another man’s life, and Herr was despondent, disappointed in himself, and fearful for his future.

Then, following months of rehabilitation, he felt compelled to test himself. His first weekend home, when he couldn’t walk without canes and crutches, he headed back to the mountains. “I hobbled to the base of this vertical cliff and started ascending,” he recalls. “It brought me joy to realize that I was still me, the same person.”

But he also recognized that as a person with amputated limbs, he faced severe disadvantages. “Society doesn’t look kindly on people with unusual bodies; we are viewed as crippled and weak, and that did not sit well with me.” Unable to tolerate both the new physical and social constraints on his life, Herr determined to view his disability not as a loss but as an opportunity. “I think the rage was the catapult that led me to do something that was without precedent,” he says.

Lifelike limb

On hand in the surgical theater in December was a member of Herr’s Biomechatronics Group for whom the bionic limb procedure also held special resonance. Christopher Shallal, a second-year graduate student in the Harvard-MIT Health Sciences and Technology program who received bilateral lower limb amputations at birth, worked alongside surgeon Matthew Carty testing the electric leads before implantation in the patient. Shallal found this, his first direct involvement with a reconstruction surgery, deeply fulfilling.

“Ever since I was a kid, I’ve wanted to do medicine plus engineering,” says Shallal. “I’m really excited to work on this bionic limb reconstruction, which will probably be one of the most advanced systems yet in terms of neural interfacing and control, with a far greater range of motion possible.”

Herr and Shallal are working on a next-generation, biomimetic limb with implanted sensors that can relay signals between the external prosthesis and muscles in the remaining limb. Photo: Tony Luong

Like other Herr lab designs, the new prosthesis features onboard, battery-powered propulsion, microprocessors, and tunable actuators. But this next-generation, biomimetic limb represents a major leap forward, replacing electrodes sited on a patient’s skin, subject to sweat and other environmental threats, with implanted sensors that can relay signals between the external prosthesis and muscles in the remaining limb.

This system takes advantage of a breakthrough technique invented several years ago by the Herr lab called CMI (for cutaneous mechanoneural interface), which constructs muscle-skin-nerve bundles at the amputation site. Muscle actuators controlled by computers on board the external prosthesis apply forces on skin cells implanted within the amputated residuum when a person with amputation touches an object with their prosthesis.

With CMI and electric leads connecting the prosthesis to these muscle actuators within the residual limb, the researchers hypothesize that a person with an amputation will be able to “feel” their prosthetic leg step onto the ground. This sensory capability is the holy grail for persons with major limb loss. After recovery from her surgery, the woman from California will be wearing Herr’s latest state-of-the-art prosthetic system in the lab.

‘Tinkering’ with the body

Not all artificial limbs emulate those that humans are born with. “You can make them however you want, swapping them in and out depending on what you want to do, and they can take you anywhere,” Herr says. Committed to extreme climbing even after his accident, Herr came up with special limbs that became a commercial hit early in his career. His designs made it possible for someone with amputated legs to run and dance.

But he also knew the day-to-day discomfort of navigating on flatter earth with most prostheses. He won his first patent during his senior year of college for a fluid-controlled socket attachment designed to reduce the pain of walking. Growing up in a Mennonite family skilled in handcrafting things they needed, and in a larger community that was disdainful of technology, Herr says he had “difficulty trusting machines.” Yet by the time he began his master’s program at MIT, intent on liberating persons with limb amputation to live more fully in the world, he had embraced the tools of science and engineering as the means to this end.

“I want to be in the business of designing not more and more powerful tools but designing new bodies,” says Hugh Herr.

For Shallal, Herr was an early icon, and his inventions and climbing exploits served as inspiration. “I’d known about Hugh since middle school; he was famous among those with amputations,” he says. “As a kid, I liked tinkering with things, and I kind of saw my body as a canvas, a place where I could explore different boundaries and expand possibilities for myself and others with amputations.” In school, Shallal sometimes encountered resistance to his prostheses. “People would say I couldn’t do certain things, like running and playing different sports, and I found these barriers frustrating,” he says. “I did things in my own way and didn’t want people to pity me.”

In fact, Shallal felt he could do some things better than his peers. In high school, he used a 3-D printer to make a mobile phone charger case he could plug into his prosthesis. “As a kid, I would wear long pants to hide my legs, but as the technology got cooler, I started wearing shorts,” he says. “I got comfortable and liked kind of showing off my legs.”

Global impact

December’s surgery was the first phase in the bionic limb project. Shallal will be following up with the patient over many months, ensuring that the connections between her limb and implanted sensors function and provide appropriate sensorimotor data for the built-in processor. Research on this and other patients to determine the impact of these limbs on gait and ease of managing slopes, for instance, will form the basis for Shallal’s dissertation.

“After graduation, I’d be really interested in translating technology out of the lab, maybe doing a startup related to neural interfacing technology,” he says. “I watched Inspector Gadget on television when I was a kid. Making the tool you need at the time you need it to fix problems would be my dream.”

Herr will be overseeing Shallal’s work, as well as a suite of research efforts propelled by other graduate students, postdocs, and research scientists that together promise to strengthen the technology behind this generation of biomimetic prostheses.

One example: devising an innovative method for measuring muscle length and velocity with tiny implanted magnets. In work published in November 2022, researchers including Herr; project lead Cameron Taylor SM ’16, PhD ’20, a research associate in the Biomechatronics Group; and Brown University partners demonstrated that this new tool, magnetomicrometry, yields the kind of high-resolution data necessary for even more precise bionic limb control. The Herr lab awaits FDA approval on human implantation of the magnetic beads.

These intertwined initiatives are central to the ambitious mission of the K. Lisa Yang Center for Bionics, established with a $24 million gift from Yang in 2021 to tackle transformative bionic interventions to address an extensive range of human limitations.

Herr is committed to making the broadest possible impact with his technologies. “Shoes and braces hurt, so my group is developing the science of comfort—designing mechanical parts that attach to the body and transfer loads without causing pain.” These inventions may prove useful not just to people living with amputation but to patients suffering from arthritis or other diseases affecting muscles, joints, and bones, whether in lower limbs or arms and hands.

The Yang Center aims to make prosthetic and orthotic devices more accessible globally, so Herr’s group is ramping up services in Sierra Leone, where civil war left tens of thousands missing limbs after devastating machete attacks. “We’re educating clinicians, helping with supply chain infrastructure, introducing novel assistive technology, and developing mobile delivery platforms,” he says.

In the end, says Herr, “I want to be in the business of designing not more and more powerful tools but designing new bodies.” Herr uses himself as an example: “I walk on two very powerful robots, but they’re not linked to my skeleton, or to my brain, so when I walk it feels like I’m on powerful machines that are not me. What I want is such a marriage between human physiology and electromechanics that a person feels at one with the synthetic, designed content of their body.”

Modeling the marvelous journey from A to B

This story originally appeared in the Spring 2023 issue of Spectrum.

___

Nidhi Seethapathi was first drawn to using powerful yet simple models to understand elaborate patterns when she learned about Newton’s laws of motion as a high school student in India. She was fascinated by the idea that wonderfully complex behaviors can arise from a set of objects that follow a few elementary rules.

Now an assistant professor at MIT, Seethapathi seeks to capture the intricacies of movement in the real world, using computational modeling as well as input from theory and experimentation. “[Theoretical physicist and Nobel laureate] Richard Feynman ’39 once said, ‘What I cannot create, I do not understand,’” Seethapathi says. “In that same spirit, the way I try to understand movement is by building models that move the way we do.”

Models of locomotion in the real world

Seethapathi—who holds a shared faculty position between the Department of Brain and Cognitive Sciences and the Department of Electrical Engineering and Computer Science’s Faculty of Artificial Intelligence + Decision-Making, which is housed in the Schwarzman College of Computing and the School of Engineering—recalls a moment during her undergraduate years studying mechanical engineering in Mumbai when a professor asked students to pick an aspect of movement to examine in detail. While most of her peers chose to analyze machines, Seethapathi selected the human hand. She was astounded by its versatility, she says, and by the number of variables, referred to by scientists as “degrees of freedom,” that are needed to characterize routine manual tasks. The assignment made her realize that she wanted to explore the diverse ways in which the entire human body can move.

Also an investigator at the McGovern Institute for Brain Research, Seethapathi pursued graduate research at The Ohio State University Movement Lab, where her goal was to identify the key elements of human locomotion. At that time, most people in the field were analyzing simple movements, she says, “but I was interested in broadening the scope of my models to include real-world behavior. Given that movement is so ubiquitous, I wondered: What can this model say about everyday life?”

After earning her PhD from Ohio State in 2018, Seethapathi continued this line of research as a postdoctoral fellow at the University of Pennsylvania. New computer vision tools to track human movement from video footage had just entered the scene, and during her time at UPenn, Seethapathi sought to expand her skillset to include computer vision and applications to movement rehabilitation.

At MIT, Seethapathi continues to extend the range of her studies of human movement, looking at how locomotion can evolve as people grow and age, and how it can adapt to anatomical changes and even adjust to shifts in weather, which can alter ground conditions. Her investigations now encompass other species as part of an effort to determine how creatures with different morphologies and habitats regulate their movements.

The models Seethapathi and her team create make predictions about human movements that can later be verified or refuted by empirical tests. While relatively simple experiments can be carried out on treadmills, her group is developing measurement systems incorporating wearable sensors and video-based sensing to measure movement data that have traditionally been hard to obtain outside the laboratory.

Although Seethapathi says she is primarily driven to uncover the fundamental principles that govern movement behavior, she believes her work also has practical applications.

“When people are treated for a movement disorder, the goal is to impact their movements in the real world,” she says. “We can use our predictive models to see how a particular intervention will affect a person’s trajectory. The hope is that our models can help put the individual on the right track to recovery as early as possible.”