New CRISPR-based tool inserts large DNA sequences at desired sites in cells

Building on the CRISPR gene-editing system, MIT researchers have designed a new tool that can snip out faulty genes and replace them with new ones, in a safer and more efficient way.

Using this system, the researchers showed that they could deliver genes as long as 36,000 DNA base pairs to several types of human cells, as well as to liver cells in mice. The new technique, known as PASTE, could hold promise for treating diseases that are caused by defective genes with a large number of mutations, such as cystic fibrosis.

“It’s a new genetic way of potentially targeting these really hard to treat diseases,” says Omar Abudayyeh, a McGovern Fellow at MIT’s McGovern Institute for Brain Research. “We wanted to work toward what gene therapy was supposed to do at its original inception, which is to replace genes, not just correct individual mutations.”

The new tool combines the precise targeting of CRISPR-Cas9, a set of molecules originally derived from bacterial defense systems, with enzymes called integrases, which viruses use to insert their own genetic material into a bacterial genome.

“Just like CRISPR, these integrases come from the ongoing battle between bacteria and the viruses that infect them,” says Jonathan Gootenberg, also a McGovern Fellow. “It speaks to how we can keep finding an abundance of interesting and useful new tools from these natural systems.”

Gootenberg and Abudayyeh are the senior authors of the new study, which appears today in Nature Biotechnology. The lead authors of the study are MIT technical associates Matthew Yarnall and Rohan Krajeski, former MIT graduate student Eleonora Ioannidi, and MIT graduate student Cian Schmitt-Ulms.

DNA insertion

The CRISPR-Cas9 gene editing system consists of a DNA-cutting enzyme called Cas9 and a short RNA strand that guides the enzyme to a specific area of the genome, directing Cas9 where to make its cut. When Cas9 and the guide RNA targeting a disease gene are delivered into cells, a specific cut is made in the genome, and the cells’ DNA repair processes glue the cut back together, often deleting a small portion of the genome.

If a DNA template is also delivered, the cells can incorporate a corrected copy into their genomes during the repair process. However, this process requires cells to make double-stranded breaks in their DNA, which can cause chromosomal deletions or rearrangements that are harmful to cells. Another limitation is that it only works in cells that are dividing, as nondividing cells don’t have active DNA repair processes.

The MIT team wanted to develop a tool that could cut out a defective gene and replace it with a new one without inducing any double-stranded DNA breaks. To achieve this goal, they turned to a family of enzymes called integrases, which viruses called bacteriophages use to insert themselves into bacterial genomes.

For this study, the researchers focused on serine integrases, which can insert huge chunks of DNA, as large as 50,000 base pairs. These enzymes target specific genome sequences known as attachment sites, which function as “landing pads.” When they find the correct landing pad in the host genome, they bind to it and integrate their DNA payload.

In past work, scientists have found it challenging to develop these enzymes for human therapy because the landing pads are very specific, and it’s difficult to reprogram integrases to target other sites. The MIT team realized that combining these enzymes with a CRISPR-Cas9 system that inserts the correct landing site would enable easy reprogramming of the powerful insertion system.

The new tool, PASTE (Programmable Addition via Site-specific Targeting Elements), includes a Cas9 enzyme that cuts at a specific genomic site, guided by a strand of RNA that binds to that site. This allows the researchers to target any site in the genome for insertion of the landing site, which is 46 DNA base pairs long. The insertion can be done without introducing any double-stranded breaks: a fused reverse transcriptase adds one DNA strand first, then its complementary strand.

Once the landing site is incorporated, the integrase can come along and insert its much larger DNA payload into the genome at that site.

“We think that this is a large step toward achieving the dream of programmable insertion of DNA,” Gootenberg says. “It’s a technique that can be easily tailored both to the site that we want to integrate as well as the cargo.”

Gene replacement

In this study, the researchers showed that they could use PASTE to insert genes into several types of human cells, including liver cells, T cells, and lymphoblasts (immature white blood cells). They tested the delivery system with 13 different payload genes, including some that could be therapeutically useful, and were able to insert them into nine different locations in the genome.

In these cells, the researchers were able to insert genes with a success rate ranging from 5 to 60 percent. This approach also yielded very few unwanted “indels” (insertions or deletions) at the sites of gene integration.

“We see very few indels, and because we’re not making double-stranded breaks, you don’t have to worry about chromosomal rearrangements or large-scale chromosome arm deletions,” Abudayyeh says.

The researchers also demonstrated that they could insert genes in “humanized” livers in mice. Livers in these mice consist of about 70 percent human hepatocytes, and PASTE successfully integrated new genes into about 2.5 percent of these cells.

The DNA sequences that the researchers inserted in this study were up to 36,000 base pairs long, but they believe even longer sequences could also be used. A human gene can range from a few hundred to more than 2 million base pairs, although for therapeutic purposes only the coding sequence of the protein needs to be used, drastically reducing the size of the DNA segment that needs to be inserted into the genome.

“The ability to site-specifically make large genomic integrations is of huge value to both basic science and biotechnology studies. This toolset will, I anticipate, be very enabling for the research community,” says Prashant Mali, a professor of bioengineering at the University of California at San Diego, who was not involved in the study.

The researchers are now further exploring the possibility of using this tool as a way to replace the defective cystic fibrosis gene. This technique could also be useful for treating blood diseases caused by faulty genes, such as hemophilia and G6PD deficiency, or Huntington’s disease, a neurological disorder caused by a defective gene that contains too many repeats of a short DNA sequence.

The researchers have also made their genetic constructs available online for other scientists to use.

“One of the fantastic things about engineering these molecular technologies is that people can build on them, develop and apply them in ways that maybe we didn’t think of or hadn’t considered,” Gootenberg says. “It’s really great to be part of that emerging community.”

The research was funded by a Swiss National Science Foundation Postdoc Mobility Fellowship, the U.S. National Institutes of Health, the McGovern Institute Neurotechnology Program, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, the G. Harold and Leila Y. Mathers Charitable Foundation, the MIT John W. Jarve Seed Fund for Science Innovation, Impetus Grants, a Cystic Fibrosis Foundation Pioneer Grant, Google Ventures, Fast Grants, the Harvey Family Foundation, and the McGovern Institute.

Ila Fiete wins Swartz Prize for Theoretical and Computational Neuroscience

The Society for Neuroscience (SfN) has awarded the Swartz Prize for Theoretical and Computational Neuroscience to Ila Fiete, professor in the Department of Brain and Cognitive Sciences, associate member of the McGovern Institute for Brain Research, and director of the K. Lisa Yang Integrative Computational Neuroscience Center. The SfN, the world’s largest neuroscience organization, announced that Fiete received the prize for her breakthrough research modeling grid cells, a key component of the navigational system of the mammalian brain.

“Fiete’s body of work has already significantly shaped the field of neuroscience and will continue to do so for the foreseeable future,” states the announcement from SfN.

“Fiete is considered one of the strongest theorists of her generation who has conducted highly influential work demonstrating that grid cell networks have attractor-like dynamics,” says Hollis Cline, a professor at the Scripps Research Institute of California and head of the Swartz Prize selection committee.

Grid cells are found in the cortex of all mammals. Their unique firing properties, creating a neural representation of our surroundings, allow us to navigate the world. Fiete and collaborators developed computational models showing how interactions between neurons can lead to the formation of periodic lattice-like firing patterns of grid cells and stabilize these patterns to create spatial memory. They showed that as we move around in space, these neural patterns can integrate velocity signals to provide a constantly updated estimate of our position, as well as detect and correct errors in the estimated position.

Fiete also proposed that multiple copies of these patterns at different spatial scales enabled efficient and high-capacity representation. Fiete and colleagues then worked with multiple collaborators to design experimental tests, establishing rare evidence that these pattern-forming mechanisms underlie memory dynamics in the brain.

“I’m truly honored to receive the Swartz Prize,” says Fiete. “This prize recognizes my group’s efforts to decipher the circuit-level mechanisms of cognitive functions involving navigation, integration, and memory. It also recognizes, in its focus, the bearing-of-fruit of dynamical circuit models from my group and others that explain how individually simple elements combine to generate the longer-lasting memory states and complex computations of the brain. I am proud to be able to represent, in some measure, the work of my incredible students, postdocs, collaborators, and intellectual mentors. I am indebted to them and grateful for the chance to work together.”

According to the SfN announcement, Fiete has contributed to the field in many other ways, including modeling “how entorhinal cortex could interact with the hippocampus to efficiently and robustly store large numbers of memories and developed a remarkable method to discern the structure of intrinsic dynamics in neuronal circuits.” This modeling led to the discovery of an internal compass that tracks the direction of one’s head, even in the absence of external sensory input.

“Recently, Fiete’s group has explored the emergence of modular organization, a line of work that elucidates how grid cell modularity and general cortical modules might self-organize from smooth genetic gradients,” states the SfN announcement. Fiete and her research group have shown that even if the biophysical properties underlying grid cells of different scale are mostly similar, continuous variations in these properties can result in discrete groupings of grid cells, each with a different function.

Fiete was recognized with the Swartz Prize, which includes a $30,000 award, during the SfN annual meeting in San Diego.

Other recent MIT winners of the Swartz Prize include Professor Emery Brown (2020) and Professor Tomaso Poggio (2014).

How touch dampens the brain’s response to painful stimuli

McGovern Investigator Fan Wang. Photo: Caitlin Cunningham

When we press our temples to soothe an aching head or rub an elbow after an unexpected blow, it often brings some relief. It is believed that pain-responsive cells in the brain quiet down when these neurons also receive touch inputs, say scientists at MIT’s McGovern Institute, who for the first time have watched this phenomenon play out in the brains of mice.

The team’s discovery, reported November 16, 2022, in the journal Science Advances, offers researchers a deeper understanding of the complicated relationship between pain and touch and could offer some insights into chronic pain in humans. “We’re interested in this because it’s a common human experience,” says McGovern Investigator Fan Wang. “When some part of your body hurts, you rub it, right? We know touch can alleviate pain in this way.” But, she says, the phenomenon has been very difficult for neuroscientists to study.

Modeling pain relief

Touch-mediated pain relief may begin in the spinal cord, where prior studies have found pain-responsive neurons whose signals are dampened in response to touch. But there have been hints that the brain was involved too. Wang says this aspect of the response has been largely unexplored, because it can be hard to monitor the brain’s response to painful stimuli amidst all the other neural activity happening there—particularly when an animal moves.

So while her team knew that mice respond to a potentially painful stimulus on the cheek by wiping their faces with their paws, they couldn’t follow the specific pain response in the animals’ brains to see if that rubbing helped settle it down. “If you look at the brain when an animal is rubbing the face, movement and touch signals completely overwhelm any possible pain signal,” Wang explains.

She and her colleagues have found a way around this obstacle. Instead of studying the effects of face-rubbing, they have focused their attention on a subtler form of touch: the gentle vibrations produced by the movement of the animals’ whiskers. Mice use their whiskers to explore, moving them back and forth in a rhythmic motion known as whisking to feel out their environment. This motion activates touch receptors in the face and sends information to the brain in the form of vibrotactile signals. The human brain receives the same kind of touch signals when a person shakes their hand as they pull it back from a painfully hot pan—another way we seek touch-mediated pain relief.


Wang and her colleagues found that this whisker movement alters the way mice respond to bothersome heat or a poke on the face—both of which usually lead to face rubbing. “When the unpleasant stimuli were applied in the presence of their self-generated vibrotactile whisking…they respond much less,” she says. Sometimes, she says, whisking animals entirely ignore these painful stimuli.

In the brain’s somatosensory cortex, where touch and pain signals are processed, the team found signaling changes that seem to underlie this effect. “The cells that preferentially respond to heat and poking are less frequently activated when the mice are whisking,” Wang says. “They’re less likely to show responses to painful stimuli.” Even when whisking animals did rub their faces in response to painful stimuli, the team found that neurons in the brain took more time to adopt the firing patterns associated with that rubbing movement. “When there is a pain stimulation, usually the trajectory of the population dynamics quickly moves to wiping. But if you already have whisking, that takes much longer,” Wang says.

Wang notes that even in the fraction of a second before provoked mice begin rubbing their faces, when the animals are relatively still, it can be difficult to sort out which brain signals are related to perceiving heat and poking and which are involved in whisker movement. Her team developed computational tools to disentangle these, and are hoping other neuroscientists will use the new algorithms to make sense of their own data.

Whisking’s effects on pain signaling seem to depend on dedicated touch-processing circuitry that sends tactile information to the somatosensory cortex from a brain region called the ventral posterior thalamus. When the researchers blocked that pathway, whisking no longer dampened the animals’ response to painful stimuli. Now, Wang says, she and her team are eager to learn how this circuitry works with other parts of the brain to modulate the perception and response to painful stimuli.

Wang says the new findings might shed light on a condition called thalamic pain syndrome, a chronic pain disorder that can develop in patients after a stroke that affects the brain’s thalamus. “Such strokes may impair the functions of thalamic circuits that normally relay pure touch signals and dampen painful signals to the cortex,” she says.

Not every reader’s struggle is the same

Many children struggle to learn to read, and studies have shown that students from a lower socioeconomic status (SES) background are more likely to have difficulty than those from a higher SES background.

MIT neuroscientists have now discovered that the types of difficulties that lower-SES students have with reading, and the underlying brain signatures, are, on average, different from those of higher-SES students who struggle with reading.

In a new study, which included brain scans of more than 150 children as they performed tasks related to reading, researchers found that when students from higher SES backgrounds struggled with reading, it could usually be explained by differences in their ability to piece sounds together into words, a skill known as phonological processing.

However, when students from lower SES backgrounds struggled, it was best explained by differences in their ability to rapidly name words or letters, a task associated with orthographic processing, or visual interpretation of words and letters. This pattern was further confirmed by brain activation during phonological and orthographic processing.

These differences suggest that different types of interventions may be needed for different groups of children, the researchers say. The study also highlights the importance of including a wide range of SES levels in studies of reading or other types of academic learning.

“Within the neuroscience realm, we tend to rely on convenience samples of participants, so a lot of our understanding of the neuroscience components of reading in general, and reading disabilities in particular, tends to be based on higher-SES families,” says Rachel Romeo, a former graduate student in the Harvard-MIT Program in Health Sciences and Technology and the lead author of the study. “If we only look at these nonrepresentative samples, we can come away with a relatively biased view of how the brain works.”

Romeo is now an assistant professor in the Department of Human Development and Quantitative Methodology at the University of Maryland. John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT, is the senior author of the paper, which appears today in the journal Developmental Cognitive Neuroscience.

Components of reading

For many years, researchers have known that children’s scores on standardized assessments of reading are correlated with socioeconomic factors such as school spending per student or the number of children at the school who qualify for free or reduced-price lunches.

Studies of children who struggle with reading, mostly done in higher-SES environments, have shown that the aspect of reading they struggle with most is phonological awareness: the understanding of how sounds combine to make a word, and how sounds can be split up and swapped in or out to make new words.

“That’s a key component of reading, and difficulty with phonological processing is often one of the hallmarks of dyslexia or other reading disorders,” Romeo says.

In the new study, the MIT team wanted to explore how SES might affect phonological processing as well as another key aspect of reading, orthographic processing. This relates more to the visual components of reading, including the ability to identify letters and read words.

To do the study, the researchers recruited first and second grade students from the Boston area, making an effort to include a range of SES levels. For the purposes of this study, SES was assessed by parents’ total years of formal education, which is commonly used as a measure of the family’s SES.

“We went into this not necessarily with any hypothesis about how SES might relate to the two types of processing, but just trying to understand whether SES might be impacting one or the other more, or if it affects both types the same,” Romeo says.

The researchers first gave each child a series of standardized tests designed to measure either phonological processing or orthographic processing. Then, they performed fMRI scans of each child while they carried out additional phonological or orthographic tasks.

The initial series of tests allowed the researchers to determine each child’s abilities for both types of processing, and the brain scans allowed them to measure brain activity in parts of the brain linked with each type of processing.

The results showed that at the higher end of the SES spectrum, differences in phonological processing ability accounted for most of the differences between good readers and struggling readers. This is consistent with the findings of previous studies of reading difficulty. In those children, the researchers also found greater differences in activity in the parts of the brain responsible for phonological processing.

However, the outcomes were different when the researchers analyzed the lower end of the SES spectrum. There, the researchers found that variance in orthographic processing ability accounted for most of the differences between good readers and struggling readers. MRI scans of these children revealed greater differences in brain activity in parts of the brain that are involved in orthographic processing.

Optimizing interventions

There are many possible reasons why a lower SES background might lead to difficulties in orthographic processing, the researchers say. These might include less exposure to books at home, or limited access to libraries and other resources that promote literacy. For children from this background who struggle with reading, different types of interventions might benefit them more than the ones typically used for children who have difficulty with phonological processing.

In a 2017 study, Gabrieli, Romeo, and others found that a summer reading intervention that focused on helping students develop the sensory and cognitive processing necessary for reading was more beneficial for students from lower-SES backgrounds than children from higher-SES backgrounds. Those findings also support the idea that tailored interventions may be necessary for individual students, they say.

“There are two major reasons we understand that cause children to struggle as they learn to read in these early grades. One of them is learning differences, most prominently dyslexia, and the other one is socioeconomic disadvantage,” Gabrieli says. “In my mind, schools have to help all these kinds of kids become the best readers they can, so recognizing the source or sources of reading difficulty ought to inform practices and policies that are sensitive to these differences and optimize supportive interventions.”

Gabrieli and Romeo are now working with researchers at the Harvard University Graduate School of Education to evaluate language and reading interventions that could better prepare preschool children from lower SES backgrounds to learn to read. In her new lab at the University of Maryland, Romeo also plans to further delve into how different aspects of low SES contribute to different areas of language and literacy development.

“No matter why a child is struggling with reading, they need the education and the attention to support them. Studies that try to tease out the underlying factors can help us in tailoring educational interventions to what a child needs,” she says.

The research was funded by the Ellison Medical Foundation, the Halis Family Foundation, and the National Institutes of Health.

RNA-activated protein cutter protects bacteria from infection

Our growing understanding of the ways bacteria defend themselves against viruses continues to change the way scientists work and to offer new opportunities to improve human health. Ancient immune systems known as CRISPR systems have already been widely adopted as powerful genome editing tools, and the CRISPR toolkit is continuing to expand. Now, scientists at MIT’s McGovern Institute have uncovered an unexpected and potentially useful tool that some bacteria use to respond to infection: an RNA-activated protein-cutting enzyme.

McGovern Fellows Jonathan Gootenberg and Omar Abudayyeh in their lab. Photo: Caitlin Cunningham

The enzyme is part of a CRISPR system discovered last year by McGovern Fellows Omar Abudayyeh and Jonathan Gootenberg. The system, found in bacteria from Tokyo Bay, originally caught their interest because of the precision with which its RNA-activated enzyme cuts RNA. That enzyme, Cas7-11, is considered a promising tool for editing RNA for both research and potential therapeutics. Now, the same researchers have taken a closer look at this bacterial immune system and found that once Cas7-11 has been activated by the right RNA, it also turns on an enzyme that snips apart a particular bacterial protein.

That makes the Cas7-11 system notably more complex than better-studied CRISPR systems, which protect bacteria simply by chopping up the genetic material of an invading virus. “This is a much more elegant and complex signaling mechanism to really defend the bacteria,” Abudayyeh says. A team led by Abudayyeh, Gootenberg, and collaborator Hiroshi Nishimasu at the University of Tokyo report these findings in the November 3, 2022, issue of the journal Science.

Protease programming

The team’s experiments reveal that in bacteria, activation of the protein-cutting enzyme, known as a protease, triggers a series of events that ultimately slow the organism’s growth. But the components of the CRISPR system can be engineered to achieve different outcomes. Gootenberg and Abudayyeh have already programmed the RNA-activated protease to report on the presence of specific RNAs in mammalian cells. With further adaptations, they say it might one day be used to diagnose or treat disease.

The discovery grew out of the researchers’ curiosity about how bacteria protect themselves from infection using Cas7-11. They knew that the enzyme was capable of cutting viral RNA, but there were hints that something more might be going on. They wondered whether a set of genes that clustered near the Cas7-11 gene might also be involved in the bacteria’s infection response, and when graduate students Cian Schmitt-Ulms and Kaiyi Jiang began experimenting with those proteins, they discovered that they worked with Cas7-11 to execute a surprisingly elaborate response to a target RNA.

One of those proteins was the protease Csx29. In the team’s test tube experiments, Csx29 and Cas7-11 couldn’t cut anything on their own—but in the presence of a target RNA, Cas7-11 switched it on. Even then, when the researchers mixed the protease with Cas7-11 and its RNA target and allowed them to mingle with other proteins, most of the proteins remained intact. But one, a protein called Csx30, was reliably snipped apart by the protein-cutting enzyme.

Their experiments had uncovered an enzyme that cut a specific protein, but only in the presence of its particular target RNA. It was unusual—and potentially useful. “That was when we knew we were onto something,” Abudayyeh says.

As the team continued to explore the system, they found that Csx29’s RNA-activated cut frees a fragment of Csx30 that then works with other bacterial proteins to execute a key aspect of the bacteria’s response to infection—slowing down growth. “Our growth experiments suggest that the cleavage is modulating the bacteria’s stress response in some way,” Gootenberg says.

The scientists quickly recognized that this RNA-activated protease could have uses beyond its natural role in antiviral defense. They have shown that the system can be adapted so that when the protease cuts Csx30 in the presence of its target RNA, it generates an easy-to-detect fluorescent signal. Because Cas7-11 can be directed to recognize any target RNA, researchers can program the system to detect and report on any RNA of interest. And even though the original system evolved in bacteria, this RNA sensor works well in mammalian cells.

Gootenberg and Abudayyeh say understanding this surprisingly elaborate CRISPR system opens new possibilities by adding to scientists’ growing toolkit of RNA-guided enzymes. “We’re excited to see how people use these tools and how they innovate on them,” Gootenberg says. It’s easy to imagine both diagnostic and therapeutic applications, they say. For example, an RNA sensor could detect signatures of disease in patient samples or limit delivery of a potential therapy to specific types of cells, enabling that drug to carry out its work without side effects.

In addition to Gootenberg, Abudayyeh, Schmitt-Ulms, and Jiang, Abudayyeh-Gootenberg lab postdoc Nathan Wenyuan Zhou contributed to the project. This work was supported by NIH grants 1R21-AI149694, R01-EB031957, and R56-HG011857, the McGovern Institute Neurotechnology (MINT) program, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, the G. Harold & Leila Y. Mathers Charitable Foundation, the MIT John W. Jarve (1978) Seed Fund for Science Innovation, the Cystic Fibrosis Foundation, Google Ventures, Impetus Grants, the NHGRI/TDCC Opportunity Fund, and the McGovern Institute.

Study urges caution when comparing neural networks to the brain

Neural networks, a type of computing system loosely modeled on the organization of the human brain, form the basis of many artificial intelligence systems for applications such as speech recognition, computer vision, and medical image analysis.

In the field of neuroscience, researchers often use neural networks to try to model the same kind of tasks that the brain performs, in hopes that the models could suggest new hypotheses regarding how the brain itself performs those tasks. However, a group of researchers at MIT is urging more caution when interpreting these models.

In an analysis of more than 11,000 neural networks that were trained to simulate the function of grid cells — key components of the brain’s navigation system — the researchers found that neural networks only produced grid-cell-like activity when they were given very specific constraints that are not found in biological systems.

“What this suggests is that in order to obtain a result with grid cells, the researchers training the models needed to bake in those results with specific, biologically implausible implementation choices,” says Rylan Schaeffer, a former senior research associate at MIT.

Without those constraints, the MIT team found that very few neural networks generated grid-cell-like activity, suggesting that these models do not necessarily generate useful predictions of how the brain works.

Schaeffer, who is now a graduate student in computer science at Stanford University, is the lead author of the new study, which will be presented at the 2022 Conference on Neural Information Processing Systems this month. Ila Fiete, a professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research, is the senior author of the paper. Mikail Khona, an MIT graduate student in physics, is also an author.

Ila Fiete leads a discussion in her lab at the McGovern Institute. Photo: Steph Stevens

Modeling grid cells

Neural networks, which researchers have been using for decades to perform a variety of computational tasks, consist of thousands or millions of processing units connected to each other. Each unit has connections of varying strengths to other units in the network. As the network analyzes huge amounts of data, the strengths of those connections change as the network learns to perform the desired task.

In this study, the researchers focused on neural networks that have been developed to mimic the function of the brain’s grid cells, which are found in the entorhinal cortex of the mammalian brain. Together with place cells, found in the hippocampus, grid cells form a brain circuit that helps animals know where they are and how to navigate to a different location.

Place cells have been shown to fire whenever an animal is in a specific location, and each place cell may respond to more than one location. Grid cells, on the other hand, work very differently. As an animal moves through a space such as a room, grid cells fire only when the animal is at one of the vertices of a triangular lattice. Different groups of grid cells create lattices of slightly different dimensions, which overlap each other. This allows grid cells to encode a large number of unique positions using a relatively small number of cells.
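The lattice structure can be captured by a standard idealized model in which a grid cell's firing rate is the sum of three cosine plane waves whose wave vectors point 60 degrees apart; the rate peaks on the vertices of a triangular lattice. A sketch of that textbook model (not the paper's code; the `scale` and `orientation` parameters are illustrative):

```python
import numpy as np

def grid_cell_rate(pos, scale=0.5, orientation=0.0):
    """Idealized grid-cell firing rate: the sum of three cosine plane
    waves whose wave vectors are 60 degrees apart, which peaks on the
    vertices of a triangular lattice. `scale` sets the lattice spacing."""
    angles = orientation + np.array([0.0, np.pi / 3, 2 * np.pi / 3])
    k_mag = 4 * np.pi / (np.sqrt(3) * scale)           # wave number
    k = k_mag * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    rate = np.sum(np.cos(k @ np.asarray(pos)))         # in [-1.5, 3.0]
    return (rate + 1.5) / 4.5                          # rescale to [0, 1]

# The origin is a lattice vertex, so the normalized rate there is maximal.
print(grid_cell_rate([0.0, 0.0]))  # → 1.0
```

Varying `scale` mimics the different grid modules whose overlapping lattices jointly encode a large number of unique positions.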

This type of location encoding also makes it possible to predict an animal’s next location based on a given starting point and a velocity. In several recent studies, researchers have trained neural networks to perform this same task, which is known as path integration.

To train a network to perform this task, researchers feed it a starting point and a velocity that varies over time. The model essentially mimics the activity of an animal roaming through a space, calculating updated positions as it moves. As the model performs the task, the activity patterns of different units within the network can be measured. Each unit’s activity can be represented as a firing pattern, similar to the firing patterns of neurons in the brain.
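The target computation the networks must learn is simple to state: integrate velocity over time from the starting point. A minimal sketch of that ground-truth behavior (the 20 ms time step and inputs are illustrative, not the study's training setup):

```python
import numpy as np

def path_integrate(start, velocities, dt=0.02):
    """Ground-truth path integration: starting from `start`, update the
    position by accumulating velocity over time. Networks trained on
    this task must learn the same mapping from the same inputs."""
    positions = [np.asarray(start, dtype=float)]
    for v in velocities:
        positions.append(positions[-1] + dt * np.asarray(v))
    return np.stack(positions)

# Constant 1 m/s eastward velocity for 50 steps of 20 ms moves 1 m east.
trajectory = path_integrate([0.0, 0.0], [[1.0, 0.0]] * 50)
print(trajectory[-1])  # → [1. 0.]
```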

In several previous studies, researchers have reported that their models produced units with activity patterns that closely mimic the firing patterns of grid cells. These studies concluded that grid-cell-like representations would naturally emerge in any neural network trained to perform the path integration task.

However, the MIT researchers found very different results. In an analysis of more than 11,000 neural networks that they trained on path integration, they found that while nearly 90 percent of them learned the task successfully, only about 10 percent of those networks generated activity patterns that could be classified as grid-cell-like. And that figure is generous: it includes networks in which even a single unit achieved a high grid score.

According to the MIT team, the earlier studies were more likely to find grid-cell-like activity only because of the constraints that the researchers built into those models.

“Earlier studies have presented this story that if you train networks to path integrate, you’re going to get grid cells. What we found is that instead, you have to make this long sequence of choices of parameters, which we know are inconsistent with the biology, and then in a small sliver of those parameters, you will get the desired result,” Schaeffer says.

More biological models

One of the constraints found in earlier studies is that the researchers required the model to convert velocity into a unique position, reported by one network unit that corresponds to a place cell. For this to happen, the researchers also required that each place cell correspond to only one location, which is not how biological place cells work: Studies have shown that place cells in the hippocampus can respond to up to 20 different locations, not just one.

When the MIT team adjusted the models so that place cells were more like biological place cells, the models were still able to perform the path integration task, but they no longer produced grid-cell-like activity. Grid-cell-like activity also disappeared when the researchers instructed the models to generate different types of location output, such as location on a grid with X and Y axes, or location as a distance and angle relative to a home point.

“If the only thing that you ask this network to do is path integrate, and you impose a set of very specific, not physiological requirements on the readout unit, then it’s possible to obtain grid cells,” says Fiete, who is also the director of the K. Lisa Yang Integrative Computational Neuroscience Center at MIT. “But if you relax any of these aspects of this readout unit, that strongly degrades the ability of the network to produce grid cells. In fact, usually they don’t, even though they still solve the path integration task.”

Therefore, if the researchers hadn’t already known of the existence of grid cells and guided the models to produce them, grid-cell-like activity would have been very unlikely to appear as a natural consequence of model training.

The researchers say that their findings suggest that more caution is warranted when interpreting neural network models of the brain.

“When you use deep learning models, they can be a powerful tool, but one has to be very circumspect in interpreting them and in determining whether they are truly making de novo predictions, or even shedding light on what it is that the brain is optimizing,” Fiete says.

Kenneth Harris, a professor of quantitative neuroscience at University College London, says he hopes the new study will encourage neuroscientists to be more careful when stating what can be shown by analogies between neural networks and the brain.

“Neural networks can be a useful source of predictions. If you want to learn how the brain solves a computation, you can train a network to perform it, then test the hypothesis that the brain works the same way. Whether the hypothesis is confirmed or not, you will learn something,” says Harris, who was not involved in the study. “This paper shows that ‘postdiction’ is less powerful: Neural networks have many parameters, so getting them to replicate an existing result is not as surprising.”

When using these models to make predictions about how the brain works, it’s important to take into account realistic, known biological constraints when building the models, the MIT researchers say. They are now working on models of grid cells that they hope will generate more accurate predictions of how grid cells in the brain work.

“Deep learning models will give us insight about the brain, but only after you inject a lot of biological knowledge into the model,” Khona says. “If you use the correct constraints, then the models can give you a brain-like solution.”

The research was funded by the Office of Naval Research, the National Science Foundation, the Simons Foundation through the Simons Collaboration on the Global Brain, and the Howard Hughes Medical Institute through the Faculty Scholars Program. Mikail Khona was supported by the MathWorks Science Fellowship.

RNA-sensing system controls protein expression in cells based on specific cell states

Researchers at the Broad Institute of MIT and Harvard and the McGovern Institute for Brain Research at MIT have developed a system that can detect a particular RNA sequence in live cells and produce a protein of interest in response. Using the technology, the team showed how they could identify specific cell types, detect and measure changes in the expression of individual genes, track transcriptional states, and control the production of proteins encoded by synthetic mRNA.

The platform, called Reprogrammable ADAR Sensors, or RADARS, even allowed the team to target and kill a specific cell type. The team said RADARS could one day help researchers detect and selectively kill tumor cells, or edit the genome in specific cells. The study appears today in Nature Biotechnology and was led by co-first authors Kaiyi Jiang (MIT), Jeremy Koob (Broad), Xi Chen (Broad), Rohan Krajeski (MIT), and Yifan Zhang (Broad).

“One of the revolutions in genomics has been the ability to sequence the transcriptomes of cells,” said Fei Chen, a core institute member at the Broad, Merkin Fellow, assistant professor at Harvard University, and co-corresponding author on the study. “That has really allowed us to learn about cell types and states. But, often, we haven’t been able to manipulate those cells specifically. RADARS is a big step in that direction.”

“Right now, the tools that we have to leverage cell markers are hard to develop and engineer,” added Omar Abudayyeh, a McGovern Institute Fellow and co-corresponding author on the study. “We really wanted to make a programmable way of sensing and responding to a cell state.”

Jonathan Gootenberg, who is also a McGovern Institute Fellow and co-corresponding author, says that their team was eager to build a tool to take advantage of all the data provided by single-cell RNA sequencing, which has revealed a vast array of cell types and cell states in the body.

“We wanted to ask how we could manipulate cellular identities in a way that was as easy as editing the genome with CRISPR,” he said. “And we’re excited to see what the field does with it.” 

Study authors (from left to right) Omar Abudayyeh, Jonathan Gootenberg, and Fei Chen. Photo: Namrita Sengupta

Repurposing RNA editing

The RADARS platform generates a desired protein when it detects a specific RNA by taking advantage of RNA editing that occurs naturally in cells.

The system consists of an RNA containing two components: a guide region, which binds to the target RNA sequence that scientists want to sense in cells, and a payload region, which encodes the protein of interest, such as a fluorescent signal or a cell-killing enzyme. When the guide RNA binds to the target RNA, this generates a short double-stranded RNA sequence containing a mismatch between two bases in the sequence — adenosine (A) and cytosine (C). This mismatch attracts a naturally occurring family of RNA-editing proteins called adenosine deaminases acting on RNA (ADARs).

In RADARS, the A-C mismatch appears within a “stop signal” in the guide RNA, which prevents the production of the desired payload protein. The ADARs edit and inactivate the stop signal, allowing for the translation of that protein. The order of these molecular events is key to RADARS’s function as a sensor; the protein of interest is produced only after the guide RNA binds to the target RNA and the ADARs disable the stop signal.
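The sensing logic can be sketched as a toy model. Everything below is illustrative: the six-base guide fragment and mismatch index are hypothetical, and the edit models ADAR's adenosine-to-inosine conversion, which the ribosome reads as guanosine, turning the UAG stop codon into UGG (tryptophan):

```python
GUIDE = "GCUAGC"  # hypothetical guide fragment; UAG stop at indices 2-4
STOP_POS = 3      # index of the stop codon's adenosine (the A-C mismatch)

def complement(base):
    """Watson-Crick RNA pairing partner of a base."""
    return {"A": "U", "U": "A", "G": "C", "C": "G"}[base]

def payload_translated(target):
    """Return True if the payload would be made: the guide must pair
    with the target everywhere except the engineered A-C mismatch,
    which recruits ADAR to edit the stop codon's A to (effectively) G."""
    if len(target) != len(GUIDE):
        return False
    for i, (g, t) in enumerate(zip(GUIDE, target)):
        if i == STOP_POS:
            if t != "C":              # need A opposite C to recruit ADAR
                return False
        elif complement(g) != t:      # all other positions must pair
            return False
    edited = GUIDE[:STOP_POS] + "G" + GUIDE[STOP_POS + 1:]
    return "UAG" not in edited        # stop disabled → translation proceeds

print(payload_translated("CGACCG"))  # matching target with the mismatch
print(payload_translated("AAAAAA"))  # unrelated RNA
```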

The team tested RADARS in different cell types and with different target sequences and protein products. They found that RADARS distinguished between kidney, uterine, and liver cells, and could produce different fluorescent signals as well as a caspase, an enzyme that kills cells. RADARS also measured gene expression over a large dynamic range, demonstrating their utility as sensors.

Most systems successfully detected target sequences using the cell’s native ADAR proteins, but the team found that supplementing the cells with additional ADAR proteins increased the strength of the signal. Abudayyeh says both of these cases are potentially useful; taking advantage of the cell’s native editing proteins would minimize the chance of off-target editing in therapeutic applications, but supplementing them could help produce stronger effects when RADARS are used as a research tool in the lab.

On the radar

Abudayyeh, Chen, and Gootenberg say that because both the guide RNA and payload RNA are modifiable, others can easily redesign RADARS to target different cell types and produce different signals or payloads. They also engineered more complex RADARS, in which cells produced a protein only when they sensed two RNA sequences together, or when they sensed either one of two RNAs. The team adds that similar RADARS could help scientists detect more than one cell type at the same time, as well as complex cell states that can’t be defined by a single RNA transcript.

Ultimately, the researchers hope to develop a set of design rules so that others can more easily develop RADARS for their own experiments. They suggest other scientists could use RADARS to manipulate immune cell states, track neuronal activity in response to stimuli, or deliver therapeutic mRNA to specific tissues.

“We think this is a really interesting paradigm for controlling gene expression,” said Chen. “We can’t even anticipate what the best applications will be. That really comes from the combination of people with interesting biology and the tools you develop.”

This work was supported by the McGovern Institute Neurotechnology (MINT) program, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, the G. Harold & Leila Y. Mathers Charitable Foundation, the Massachusetts Institute of Technology, Impetus Grants, the Cystic Fibrosis Foundation, Google Ventures, FastGrants, the McGovern Institute, the National Institutes of Health, the Burroughs Wellcome Fund, the Searle Scholars Foundation, the Harvard Stem Cell Institute, and the Merkin Institute.

Magnetic sensors track muscle length

Using a simple set of magnets, MIT researchers have come up with a sophisticated way to monitor muscle movements, which they hope will make it easier for people with amputations to control their prosthetic limbs.

In a new pair of papers, the researchers demonstrated the accuracy and safety of their magnet-based system, which can track the length of muscles during movement. The studies, performed in animals, offer hope that this strategy could be used to help people with prosthetic devices control them in a way that more closely mimics natural limb movement.

“These recent results demonstrate that this tool can be used outside the lab to track muscle movement during natural activity, and they also suggest that the magnetic implants are stable and biocompatible and that they don’t cause discomfort,” says Cameron Taylor, an MIT research scientist and co-lead author of both papers.

McGovern Institute Associate Investigator Hugh Herr. Photo: Jimmy Day / MIT Media Lab

In one of the studies, the researchers showed that they could accurately measure the lengths of turkeys’ calf muscles as the birds ran, jumped, and performed other natural movements. In the other study, they showed that the small magnetic beads used for the measurements do not cause inflammation or other adverse effects when implanted in muscle.

“I am very excited for the clinical potential of this new technology to improve the control and efficacy of bionic limbs for persons with limb-loss,” says Hugh Herr, a professor of media arts and sciences, co-director of the K. Lisa Yang Center for Bionics at MIT, and an associate member of MIT’s McGovern Institute for Brain Research.

Herr is a senior author of both papers, which appear today in the journal Frontiers in Bioengineering and Biotechnology. Thomas Roberts, a professor of ecology, evolution, and organismal biology at Brown University, is a senior author of the measurement study.

Tracking movement

Currently, powered prosthetic limbs are usually controlled using an approach known as surface electromyography (EMG). Electrodes attached to the surface of the skin or surgically implanted in the residual muscle of the amputated limb measure electrical signals from a person’s muscles, which are fed into the prosthesis to help it move the way the person wearing the limb intends.

However, that approach does not take into account any information about the muscle length or velocity, which could help to make the prosthetic movements more accurate.

Several years ago, the MIT team began working on a novel way to perform those kinds of muscle measurements, using an approach that they call magnetomicrometry. This strategy takes advantage of the permanent magnetic fields surrounding two small beads implanted in a muscle. Using a credit-card-sized, compass-like sensor attached to the outside of the body, their system can track the distance between the two magnets. When a muscle contracts, the magnets move closer together, and when it extends, they move farther apart.

The new muscle measuring approach takes advantage of the magnetic attraction between two small beads implanted in a muscle. Using a small sensor attached to the outside of the body, the system can track the distances between the two magnets as the muscle contracts and flexes. Image: Hugh Herr
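As a rough illustration of the physics involved, a point dipole's field falls off as the cube of distance, so a field reading can be inverted into a distance estimate. This is only a toy one-magnet, one-sensor version of the idea; the actual system localizes the beads from multiple magnetometer readings, and the dipole moment below is hypothetical:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def axial_field(moment, r):
    """On-axis field magnitude of a point magnetic dipole, in tesla."""
    return MU0 * moment / (2 * math.pi * r ** 3)

def distance_from_field(moment, b):
    """Invert the 1/r^3 dipole law to turn a field reading back into
    a distance estimate. A toy sketch, not the system's algorithm."""
    return (MU0 * moment / (2 * math.pi * b)) ** (1.0 / 3.0)

moment = 0.05                        # hypothetical bead moment, A*m^2
reading = axial_field(moment, 0.03)  # simulate a sensor 3 cm from the bead
print(round(distance_from_field(moment, reading), 6))  # → 0.03
```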

In a study published last year, the researchers showed that this system could be used to accurately measure small ankle movements when the beads were implanted in the calf muscles of turkeys. In one of the new studies, the researchers set out to see if the system could make accurate measurements during more natural movements in a nonlaboratory setting.

To do that, they created an obstacle course of ramps for the turkeys to climb and boxes for them to jump on and off of. The researchers used their magnetic sensor to track muscle movements during these activities, and found that the system could calculate muscle lengths in less than a millisecond.

They also compared their data to measurements taken using a more traditional approach known as fluoromicrometry, a type of X-ray technology that requires much larger equipment than magnetomicrometry. The magnetomicrometry measurements varied from those generated by fluoromicrometry by less than a millimeter, on average.

“We’re able to provide the muscle-length tracking functionality of the room-sized X-ray equipment using a much smaller, portable package, and we’re able to collect the data continuously instead of being limited to the 10-second bursts that fluoromicrometry is limited to,” Taylor says.

Seong Ho Yeon, an MIT graduate student, is also a co-lead author of the measurement study. Other authors include MIT Research Support Associate Ellen Clarrissimeaux and former Brown University postdoc Mary Kate O’Donnell.

Biocompatibility

In the second paper, the researchers focused on the biocompatibility of the implants. They found that the magnets did not generate tissue scarring, inflammation, or other harmful effects. They also showed that the implanted magnets did not alter the turkeys’ gaits, suggesting they did not produce discomfort. William Clark, a postdoc at Brown, is the co-lead author of the biocompatibility study.

The researchers also showed that the implants remained stable for eight months, the length of the study, and did not migrate toward each other, as long as they were implanted at least 3 centimeters apart. The researchers envision that the beads, which consist of a magnetic core coated with gold and a polymer called Parylene, could remain in tissue indefinitely once implanted.

“Magnets don’t require an external power source, and after implanting them into the muscle, they can maintain the full strength of their magnetic field throughout the lifetime of the patient,” Taylor says.

The researchers are now planning to seek FDA approval to test the system in people with prosthetic limbs. They hope to use the sensor to control prostheses similar to the way surface EMG is used now: Measurements regarding the length of muscles will be fed into the control system of a prosthesis to help guide it to the position that the wearer intends.

“The place where this technology fills a need is in communicating those muscle lengths and velocities to a wearable robot, so that the robot can perform in a way that works in tandem with the human,” Taylor says. “We hope that magnetomicrometry will enable a person to control a wearable robot with the same comfort level and the same ease as someone would control their own limb.”

In addition to prosthetic limbs, those wearable robots could include robotic exoskeletons, which are worn outside the body to help people move their legs or arms more easily.

The research was funded by the Salah Foundation, the K. Lisa Yang Center for Bionics at MIT, the MIT Media Lab Consortia, the National Institutes of Health, and the National Science Foundation.

Unlocking the mysteries of how neurons learn

When he matriculated in 2019 as a graduate student, Raúl Mojica Soto-Albors was no stranger to MIT. He’d spent time here on multiple occasions as an undergraduate at the University of Puerto Rico at Mayagüez, including eight months in 2018 as a displaced student after Hurricane Maria in 2017. Those experiences — including participating in the MIT Summer Research Bio Program (MSRP-Bio), which offers a funded summer research experience to underrepresented minorities and other underserved students — not only changed his course of study; they also empowered him to pursue a PhD.

“The summer program eased a lot of my worries about what science would be like, because I had never been immersed in an environment like MIT’s,” he says. “I thought it would be too intense and I wouldn’t be able to make it. But, in reality, it is just a bunch of people following their passions. And so, as long as you are following your passion, you are going to be pretty happy and productive.”

Mojica is now following his passion as a doctoral student in the MIT Department of Brain and Cognitive Sciences, using a complex electrophysiology method termed “patch clamp” to investigate neuronal activity in vivo. “It has all the stuff which we historically have not paid much attention to,” he explains. “Neuroscientists have been very focused on the spiking of the neuron. But I am concentrating instead on patterns in the subthreshold activity of neurons.”

Opening a door to neuroscience

Mojica’s affinity for science blossomed in childhood. Even though his parents encouraged him, he says, “It was a bit difficult as I did not have someone in science in my family. There was no one [like that] who I could go to for guidance.” In college, he became interested in the parameters of human behavior and decided to major in psychology. At the same time, he was curious about biology. “As I was learning about psychology,” he says, “I kept wondering how we, as human beings, emerge from such a mess of interacting neurons.”

His journey at MIT began in January 2017, when he was invited to attend the Center for Brains, Minds and Machines Quantitative Biology Methods Program, an intensive, weeklong program offered to underrepresented students of color to prepare them for scientific careers. Even though he had taken a Python class at the University of Puerto Rico and completed some online courses, he says, “This was the first instance where I had to develop my own tools and learn how to use a programming language to my advantage.”

The program also dramatically changed the course of his undergraduate career, thanks to conversations with Mandana Sassanfar, a biology lecturer and the program’s coordinator, about his future goals. “She advised me to change majors to biology, as the psychology component is a little bit easier to read up on than missing the foundational biology classes,” he says. She also recommended that he apply to MSRP.

Mojica promptly took her advice, and he returned to MIT in the summer of 2017 as an MSRP student working in the lab of Associate Professor Mark Harnett in the Department of Brain and Cognitive Sciences and the McGovern Institute. There, he focused on performing calcium imaging on the retrosplenial cortex to understand the role of neurons in navigating a complex spatial environment. The experience was eye-opening; there are very few specialized programs at UPRM, notes Mojica, which limited his exposure to interdisciplinary subjects. “That was my door into neuroscience, which I otherwise would have never been able to get into.”

Weathering the storm

Mojica returned home to begin his senior year, but shortly thereafter, in September 2017, Hurricane Maria hit Puerto Rico and devastated the community. “The island was dealing with blackouts almost a year after the hurricane, and they are still dealing with them today. It makes it really difficult, for example, for people who rely on electricity for oxygen or to refrigerate their diabetes medicine,” he says. “[My family] was lucky to have electricity reliably four months after the hurricane. But I had a lot of people around me who spent eight, nine, 10 months without electricity,” he says.

The hurricane’s destruction disrupted every aspect of life, including education. MIT offered its educational resources by hosting several 2017 MSRP students from Puerto Rico for the spring semester, including Mojica. He moved back to campus in February 2018, finished up his fall term university exams, and took classes and did research throughout the spring and summer of that year.

“That was when I first got some culture shock and felt homesick,” he notes. Thankfully, he was not alone. He befriended another student from Puerto Rico who helped him through that tough time. They understood and supported each other, as both of their families were navigating the challenges of a post-hurricane island. Mojica says, “We had just come out of this mess of the hurricane, and we came [to MIT] and everything was perfect. … It was jarring.”

Despite the immense upheaval in his life, Mojica was determined to pursue a PhD. “I didn’t want to just consume knowledge for the rest of my life,” he says. “I wanted to produce knowledge. I wanted to be on the cutting-edge of something.”

Paying it forward

Now a fourth-year PhD candidate in the Harnett Lab, he’s doing just that, utilizing a classical method termed “patch clamp electrophysiology” in novel ways to investigate neuronal learning. The patch clamp technique allows him to observe activity below the threshold of neuronal firing in mice, something that no other method can do.

“I am studying how single neurons learn and adapt, or plasticize,” Mojica explains. “If I present something new and unexpected to the animal, how does a cell respond? And if I stimulate the cell, can I make it learn something that it didn’t respond to before?” This research could have implications for patient recovery after severe brain injuries. “Plasticity is a crucial aspect of brain function. If we could figure out how neurons learn, or even how to plasticize them, we could speed up recovery from life-threatening loss of brain tissue, for example,” he says.

In addition to research, Mojica’s passion for mentorship shines through. His voice lifts as he describes one of his undergraduate mentees, Gabriella, who is now a full-time graduate student in the Harnett lab. He currently mentors MSRP students and advises prospective PhD students on their applications. “When I was navigating the PhD process, I did not have people like me serving as my own mentors,” he notes.

Mojica knows firsthand the impact of mentoring. Even though he never had anyone who could provide guidance about science, his childhood music teacher played an extremely influential role in his early career and always encouraged him to pursue his passions. “He had a lot of knowledge in how to navigate the complicated mess of being 17 or 18 and figuring out what you want to devote the rest of your life to,” he recalls fondly.

Although he’s not sure about his future professional plans, one thing is clear for Mojica: “A big part of it will be mentoring the people who come from similar backgrounds to mine who have less access to opportunities. I want to keep that front and center.”

Understanding reality through algorithms

Although Fernanda De La Torre still has several years left in her graduate studies, she’s already dreaming big when it comes to what the future has in store for her.

“I dream of opening up a school one day where I could bring this world of understanding of cognition and perception into places that would never have contact with this,” she says.

It’s that kind of ambitious thinking that’s gotten De La Torre, a doctoral student in MIT’s Department of Brain and Cognitive Sciences, to this point. A recent recipient of the prestigious Paul and Daisy Soros Fellowship for New Americans, De La Torre has found at MIT a supportive, creative research environment that’s allowed her to delve into the cutting-edge science of artificial intelligence. But she’s still driven by an innate curiosity about human imagination and a desire to bring that knowledge to the communities in which she grew up.

An unconventional path to neuroscience

De La Torre’s first exposure to neuroscience wasn’t in the classroom, but in her daily life. As a child, she watched her younger sister struggle with epilepsy. At 12, she crossed into the United States from Mexico illegally to reunite with her mother, exposing her to a whole new language and culture. Once in the States, she had to grapple with her mother’s shifting personality in the midst of an abusive relationship. “All of these different things I was seeing around me drove me to want to better understand how psychology works,” De La Torre says, “to understand how the mind works, and how it is that we can all be in the same environment and feel very different things.”

But finding an outlet for that intellectual curiosity was challenging. As an undocumented immigrant, her access to financial aid was limited. Her high school was also underfunded and lacked elective options. Mentors along the way, though, encouraged the aspiring scientist, and through a program at her school, she was able to take community college courses to fulfill basic educational requirements.

It took an inspiring amount of dedication to her education, but De La Torre made it to Kansas State University for her undergraduate studies, where she majored in computer science and math. At Kansas State, she was able to get her first real taste of research. “I was just fascinated by the questions they were asking and this entire space I hadn’t encountered,” says De La Torre of her experience working in a visual cognition lab and discovering the field of computational neuroscience.

Although Kansas State didn’t have a dedicated neuroscience program, her research experience in cognition led her to a machine learning lab led by William Hsu, a computer science professor. There, De La Torre became enamored by the possibilities of using computation to model the human brain. Hsu’s support also convinced her that a scientific career was a possibility. “He always made me feel like I was capable of tackling big questions,” she says fondly.

With the confidence imparted in her at Kansas State, De La Torre came to MIT in 2019 as a post-baccalaureate student in the lab of Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences and an investigator at the McGovern Institute for Brain Research. With Poggio, also the director of the Center for Brains, Minds and Machines, De La Torre began working on deep-learning theory, an area of machine learning focused on how artificial neural networks modeled on the brain can learn to recognize patterns.

“It’s a very interesting question because we’re starting to use them everywhere,” says De La Torre of neural networks, listing off examples from self-driving cars to medicine. “But, at the same time, we don’t fully understand how these networks can go from knowing nothing and just being a bunch of numbers to outputting things that make sense.”

Her experience as a post-bac was De La Torre’s first real opportunity to apply the technical computer skills she developed as an undergraduate to neuroscience. It was also the first time she could fully focus on research. “That was the first time that I had access to health insurance and a stable salary. That was, in itself, sort of life-changing,” she says. “But on the research side, it was very intimidating at first. I was anxious, and I wasn’t sure that I belonged here.”

Fortunately, De La Torre says she was able to overcome those insecurities, both through a growing unabashed enthusiasm for the field and through the support of Poggio and her other colleagues in MIT’s Department of Brain and Cognitive Sciences. When the opportunity came to apply to the department’s PhD program, she jumped on it. “It was just knowing these kinds of mentors are here and that they cared about their students,” says De La Torre of her decision to stay on at MIT for graduate studies. “That was really meaningful.”

Expanding notions of reality and imagination

In her two years so far in the graduate program, De La Torre’s work has expanded the understanding of neural networks and their applications to the study of the human brain. Working with Guangyu Robert Yang, an associate investigator at the McGovern Institute and an assistant professor in the departments of Brain and Cognitive Sciences and Electrical Engineering and Computer Science, she’s engaged in what she describes as more philosophical questions about how one develops a sense of self as an independent being. She’s interested in how that self-consciousness develops and why it might be useful.

De La Torre’s primary advisor, though, is Professor Josh McDermott, who leads the Laboratory for Computational Audition. With McDermott, De La Torre is attempting to understand how the brain integrates vision and sound. While combining sensory inputs may seem like a basic process, there are many unanswered questions about how our brains combine multiple signals into a coherent impression, or percept, of the world. Many of the questions are raised by audiovisual illusions in which what we hear changes what we see. For example, if one sees a video of two discs passing each other, but the clip contains the sound of a collision, the brain will perceive the discs as bouncing off each other rather than passing through each other. Given an ambiguous image, that simple auditory cue is all it takes to create a different perception of reality.

“There’s something interesting happening where our brains are receiving two signals telling us different things and, yet, we have to combine them somehow to make sense of the world,” she says.

De La Torre is using behavioral experiments to probe how the human brain makes sense of multisensory cues to construct a particular perception. To do so, she’s created various scenes of objects interacting in 3D space over different sounds, asking research participants to describe characteristics of the scene. For example, in one experiment, she combines visuals of a block moving across a surface at different speeds with various scraping sounds, asking participants to estimate how rough the surface is. Eventually she hopes to take the experiment into virtual reality, where participants will physically push blocks in response to how rough they perceive the surface to be, rather than just reporting on what they experience.

Once she’s collected data, she’ll move into the modeling phase of the research, evaluating whether multisensory neural networks perceive illusions the way humans do. “What we want to do is model exactly what’s happening,” says De La Torre. “How is it that we’re receiving these two signals, integrating them and, at the same time, using all of our prior knowledge and inferences of physics to really make sense of the world?”

Although her two strands of research with Yang and McDermott may seem distinct, she sees clear connections between the two. Both projects are about grasping what artificial neural networks are capable of and what they tell us about the brain. At a more fundamental level, she says that how the brain perceives the world from different sensory cues might be part of what gives people a sense of self. Sensory perception is about constructing a cohesive, unitary sense of the world from multiple sources of sensory data. Similarly, she argues, “the sense of self is really a combination of actions, plans, goals, emotions, all of these different things that are components of their own, but somehow create a unitary being.”

It’s a fitting sentiment for De La Torre, who has been working to make sense of and integrate different aspects of her own life. Working in the Computational Audition lab, for example, she’s started experimenting with combining electronic music with folk music from her native Mexico, connecting her “two worlds,” as she says. Having the space to undertake those kinds of intellectual explorations, and colleagues who encourage it, is one of De La Torre’s favorite parts of MIT.

“Beyond professors, there’s also a lot of students whose way of thinking just amazes me,” she says. “I see a lot of goodness and excitement for science and a little bit of — it’s not nerdiness, but a love for very niche things — and I just kind of love that.”