An ancient RNA-guided system could simplify delivery of gene editing therapies

A vast search of natural diversity has led scientists at MIT’s McGovern Institute and the Broad Institute of MIT and Harvard to uncover ancient systems with potential to expand the genome editing toolbox. These systems, which the researchers call TIGR (Tandem Interspaced Guide RNA) systems, use RNA to guide them to specific sites on DNA. TIGR systems can be reprogrammed to target any DNA sequence of interest, and they have distinct functional modules that can act on the targeted DNA. In addition to its modularity, TIGR is very compact compared to other RNA-guided systems, like CRISPR, which is a major advantage for delivering it in a therapeutic context.

These findings are reported online February 27, 2025, in the journal Science.

“This is a very versatile RNA-guided system with a lot of diverse functionalities,” says Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT, who led the research. The TIGR-associated (Tas) proteins that Zhang’s team found share a characteristic RNA-binding component that interacts with an RNA guide, which directs the protein to a specific site in the genome. Some cut the DNA at that site, using an adjacent DNA-cutting segment of the protein. That modularity could facilitate tool development, allowing researchers to swap useful new features into natural Tas proteins.

“Nature is pretty incredible,” says Zhang, who is also an investigator at the McGovern Institute and the Howard Hughes Medical Institute, a core member of the Broad Institute, a professor of brain and cognitive sciences and biological engineering at MIT, and co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT. “It’s got a tremendous amount of diversity, and we have been exploring that natural diversity to find new biological mechanisms and harnessing them for different applications to manipulate biological processes,” he says. Previously, Zhang’s team adapted bacterial CRISPR systems into gene editing tools that have transformed modern biology. His team has also found a variety of programmable proteins, both from CRISPR systems and beyond.

In their new work, to find novel programmable systems, the team began by zeroing in on a structural feature of the CRISPR Cas9 protein that binds to the enzyme’s RNA guide. That is a key feature that has made Cas9 such a powerful tool: “Being RNA-guided makes it relatively easy to reprogram, because we know how RNA binds to other DNA or other RNA,” Zhang explains. His team searched hundreds of millions of biological proteins with known or predicted structures, looking for any that shared a similar domain. To find more distantly related proteins, they used an iterative process: from Cas9, they identified a protein called IS110, which had previously been shown by others to bind RNA. They then zeroed in on the structural features of IS110 that enable RNA binding and repeated their search.

At this point, the search had turned up so many distantly related proteins that the team turned to artificial intelligence to make sense of the list. “When you are doing iterative, deep mining, the resulting hits can be so diverse that they are difficult to analyze using standard phylogenetic methods, which rely on conserved sequence,” explains Guilhem Faure, a computational biologist in Zhang’s lab. With a protein large language model, the team was able to cluster the proteins they had found into groups according to their likely evolutionary relationships. One group stood apart from the rest, and its members were particularly intriguing because they were encoded by genes with regularly spaced repetitive sequences reminiscent of an essential component of CRISPR systems. These were the TIGR-Tas systems.
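The article does not describe the team’s pipeline in code, but the clustering step can be pictured with a loose, hypothetical sketch: each protein is represented by an embedding vector (in real pipelines, from a protein language model; here, random stand-in vectors), and proteins whose vectors point in similar directions are grouped together. The greedy cosine-similarity rule below is purely illustrative, not the authors’ method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "embeddings": two orthogonal centroids plus small noise,
# three hypothetical proteins per group. Real embeddings would come
# from a protein language model.
centroids = np.eye(8)[:2]
embeddings = np.vstack([centroids[i // 3] + 0.02 * rng.normal(size=8)
                        for i in range(6)])

def cosine_cluster(vectors, threshold=0.9):
    """Greedy clustering: join an existing cluster if cosine similarity
    to its first member exceeds the threshold, else start a new one."""
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    labels, reps = [], []
    for v in unit:
        for c, r in enumerate(reps):
            if float(v @ r) >= threshold:
                labels.append(c)
                break
        else:
            labels.append(len(reps))
            reps.append(v)
    return labels

labels = cosine_cluster(embeddings)
print(labels)  # -> [0, 0, 0, 1, 1, 1]
```

The point of the sketch is only that sequence-free, embedding-based grouping can recover families that conventional alignment-based phylogenetics would miss.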

Zhang’s team discovered more than 20,000 different Tas proteins, mostly occurring in bacteria-infecting viruses. Sequences within each gene’s repetitive region—its TIGR array—encode an RNA guide that interacts with the RNA-binding part of the protein. In some, the RNA-binding region is adjacent to a DNA-cutting part of the protein. Others appear to bind to other proteins, which suggests they might help direct those proteins to DNA targets.

Zhang and his team experimented with dozens of Tas proteins, demonstrating that some can be programmed to make targeted cuts to DNA in human cells. As they think about developing TIGR-Tas systems into programmable tools, the researchers are encouraged by features that could make those tools particularly flexible and precise.

They note that CRISPR systems can only be directed to segments of DNA that are flanked by short motifs known as PAMs (protospacer adjacent motifs). TIGR-Tas proteins, in contrast, have no such requirement. “This means theoretically, any site in the genome should be targetable,” says scientific advisor Rhiannon Macrae. The team’s experiments also show that TIGR systems have what Faure calls a “dual-guide system,” interacting with both strands of the DNA double helix to home in on their target sequences, which should ensure they act only where they are directed by their RNA guide. What’s more, Tas proteins are compact—a quarter of the size of Cas9 on average—making them easier to deliver, which could overcome a major obstacle to therapeutic deployment of gene editing tools.

Excited by their discovery, Zhang’s team is now investigating the natural role of TIGR systems in viruses as well as how they can be adapted for research or therapeutics. They have determined the molecular structure of one of the Tas proteins they found to work in human cells, and will use that information to guide their efforts to make it more efficient. Additionally, they note connections between TIGR-Tas systems and certain RNA-processing proteins in human cells. “I think there’s more there to study in terms of what some of those relationships may be, and it may help us better understand how these systems are used in humans,” Zhang says.

This work was supported by the Helen Hay Whitney Foundation, Howard Hughes Medical Institute, K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics, Broad Institute Programmable Therapeutics Gift Donors, Pershing Square Foundation, William Ackman, and Neri Oxman, the Phillips family, J. and P. Poitras, and the BT Charitable Foundation.

How nature organizes itself, from brain cells to ecosystems

McGovern Associate Investigator Ila Fiete. Photo: Caitlin Cunningham

Look around, and you’ll see it everywhere: the way trees form branches, the way cities divide into neighborhoods, the way the brain organizes into regions. Nature loves modularity—a limited number of self-contained units that combine in different ways to perform many functions. But how does this organization arise? Does it follow a detailed genetic blueprint, or can these structures emerge on their own?

A new study from McGovern Associate Investigator Ila Fiete suggests a surprising answer.

In findings published today in Nature, Fiete, a professor of brain and cognitive sciences and director of the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, reports that a mathematical model called peak selection can explain how modules emerge without strict genetic instructions. Her team’s findings, which apply to brain systems and ecosystems, help explain how modularity occurs across nature, no matter the scale.

Joining two big ideas

“Scientists have debated how modular structures form. One hypothesis suggests that various genes are turned on at different locations to begin or end a structure. This explains how insect embryos develop body segments, with genes turning on or off at specific concentrations of a smooth chemical gradient in the insect egg,” says Fiete, who is the senior author of the paper. Mikail Khona, a former graduate student and K. Lisa Yang ICoN Center Graduate Fellow, and postdoctoral associate Sarthak Chandra also led the study.

Another idea, inspired by mathematician Alan Turing, suggests that a structure could emerge from competition—small-scale interactions can create repeating patterns, like the spots on a cheetah or the ripples in sand dunes.

Both ideas work well in some cases, but fail in others. The new research suggests that nature need not pick one approach over the other. The authors propose a simple mathematical principle called peak selection, showing that when a smooth gradient is paired with competitive local interactions, modular structures emerge naturally. “In this way, biological systems can organize themselves into sharp modules without detailed top-down instruction,” says Chandra.
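As a cartoon of the idea (not the authors’ actual model), one can combine a smooth one-dimensional gradient with a periodic profile standing in for local patterned interactions, then apply a winner-take-all rule as a stand-in for competition. Discrete, sharply bounded groups fall out even though every underlying quantity varies smoothly:

```python
import numpy as np

n = 200
x = np.arange(n)
gradient = 1.0 + x / n                                # smooth global gradient
interaction = 1.0 + 0.3 * np.cos(2 * np.pi * x / 25)  # local patterning
drive = gradient * interaction

# Winner-take-all within a local window: a unit survives competition
# only if it is the strongest in its neighborhood.
window = 10
peaks = [i for i in range(n)
         if drive[i] == drive[max(0, i - window):i + window + 1].max()]

# Every unit joins the module of its nearest surviving peak, producing
# sharp boundaries despite the smooth underlying gradient.
labels = np.array([int(np.argmin([abs(i - p) for p in peaks]))
                   for i in range(n)])
```

In this toy, `labels` steps cleanly from one module to the next with no gradual transition zone, which is the qualitative signature of peak selection described in the study.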

Modular systems in the brain

The researchers tested their idea on grid cells, which play a critical role in spatial navigation as well as the storage of episodic memories. Grid cells fire in a repeating triangular pattern as animals move through space, but they don’t all work at the same scale—they are organized into distinct modules, each responsible for mapping space at slightly different resolutions.

A visual depiction of two different modules in grid cells, used to map space at slightly different resolutions. Image: Fiete Lab

No one knows how these modules form, but Fiete’s model shows that gradual variations in cellular properties along one dimension in the brain, combined with local neural interactions, could explain the entire structure. The grid cells naturally sort themselves into distinct groups with clear boundaries, without external maps or genetic programs telling them where to go. “Our work explains how grid cell modules could emerge. The explanation tips the balance toward the possibility of self-organization. It predicts that there might be no gene or intrinsic cell property that jumps when the grid cell scale jumps to another module,” notes Khona.

Modular systems in nature

The same principle applies beyond neuroscience. Imagine a landscape where temperature and rainfall vary gradually across space. You might expect species distributions to vary just as smoothly over this region. But in reality, ecosystems often form species clusters with sharp boundaries—distinct ecological “neighborhoods” that don’t overlap.

Fiete’s study suggests why: Local competition, cooperation, and predation between species interact with the global environmental gradients to create natural separations, even when the underlying conditions change gradually. This phenomenon can be explained using peak selection—and suggests that the same principle that shapes brain circuits could also be at play in forests and oceans.

A self-organizing world

One of the researchers’ most striking findings is that modularity in these systems is remarkably robust. Change the size of the system, and the number of modules stays the same—they just scale up or down. That means a mouse brain and a human brain could use the same fundamental rules to form their navigation circuits, just at different sizes.

The model also makes testable predictions. If it’s correct, grid cell modules should follow simple spacing ratios. In ecosystems, species distributions should form distinct clusters even without sharp environmental shifts.

Fiete notes that their work adds another conceptual framework to biology. “Peak selection can inform future experiments, not only in grid cell research but across developmental biology.”

Seeing more in expansion microscopy

In biology, seeing can lead to understanding, and researchers in Edward Boyden’s lab at MIT’s McGovern Institute are committed to bringing life into sharper focus. With a pair of new methods, they are expanding the capabilities of expansion microscopy—a high-resolution imaging technique the group introduced in 2015—so researchers everywhere can see more when they look at cells and tissues under a light microscope.

McGovern Institute Investigator Edward Boyden. Photo: Justin Knight

“We want to see everything, so we’re always trying to improve it,” says Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT.  “A snapshot of all life, down to its fundamental building blocks, is really the goal.” Boyden is also a Howard Hughes Medical Institute investigator and a member of the Yang Tan Collective at MIT.

With new ways of staining their samples and processing images, users of expansion microscopy can now see vivid outlines of the shapes of cells in their images and pinpoint the locations of many different proteins inside a single tissue sample with resolution that far exceeds that of conventional light microscopy. These advances, both reported in the journal Nature Communications, enable new ways of tracing the slender projections of neurons and visualizing spatial relationships between molecules that contribute to health and disease.

Expansion microscopy uses a water-absorbing hydrogel to physically expand biological tissues. After a tissue sample has been permeated by the hydrogel, it is hydrated. The hydrogel swells as it absorbs water, preserving the relative locations of molecules in the tissue as it gently pulls them away from one another. As a result, crowded cellular components appear separate and distinct when the expanded tissue is viewed under a light microscope. The approach, which can be performed using standard laboratory equipment, has made super-resolution imaging accessible to most research teams.

Since first developing expansion microscopy, Boyden and his team have continued to enhance the method—increasing its resolution, simplifying the procedure, devising new features, and integrating it with other tools.

Visualizing cell membranes

One of the team’s latest advances is a method called ultrastructural membrane expansion microscopy (umExM), which they described in the February 12 issue of Nature Communications. With it, biologists can use expansion microscopy to visualize the thin membranes that form the boundaries of cells and enclose the organelles inside them. These membranes, built mostly of molecules called lipids, have been notoriously difficult to densely label in intact tissues for imaging with light microscopy. Now, researchers can use umExM to study cellular ultrastructure and organization within tissues.

Tay Shin, a former graduate student in Boyden’s lab and a J. Douglas Tan Fellow in the Tan-Yang Center for Autism Research at MIT, led the development of umExM. “Our goal was very simple at first: Let’s label membranes in intact tissue, much like how an electron microscope uses osmium tetroxide to label membranes to visualize the membranes in tissue,” he says. “It turns out that it’s extremely hard to achieve this.”

The team first needed to design a label that would make the membranes in tissue samples visible under a light microscope. “We almost had to start from scratch,” Shin says. “We really had to think about the fundamental characteristics of the probe that is going to label the plasma membrane, and then think about how to incorporate them into expansion microscopy.” That meant engineering a molecule that would associate with the lipids that make up the membrane and link it to both the hydrogel used to expand the tissue sample and a fluorescent molecule for visibility.

After optimizing the expansion microscopy protocol for membrane visualization and extensively testing and improving potential probes, Shin found success one late night in the lab. He placed an expanded tissue sample on a microscope and saw sharp outlines of cells.

Traceability of umExM. 3D rendering of 20 manually traced and reconstructed myelinated axons in the corpus callosum. Image: Ed Boyden

Because of the high resolution enabled by expansion, the method allowed Boyden’s team to identify even the tiny dendrites that protrude from neurons and clearly see the long extensions of their slender axons. That kind of clarity could help researchers follow individual neurons’ paths within the densely interconnected networks of the brain, the researchers say.

Boyden calls tracing these neural processes “a top priority of our time in brain science.” Such tracing has traditionally relied heavily on electron microscopy, which requires specialized skills and expensive equipment. Shin says that because expansion microscopy uses a standard light microscope, it is far more accessible to laboratories worldwide.

Shin and Boyden point out that users of expansion microscopy can learn even more about their samples when they pair the new ability to reveal lipid membranes with fluorescent labels that show where specific proteins are located. “That’s important, because proteins do a lot of the work of the cell, but you want to know where they are with respect to the cell’s structure,” Boyden says.

One sample, many proteins

To that end, researchers no longer have to choose just a few proteins to see when they use expansion microscopy. With a new method called multiplexed expansion revealing (multiExR), users can now label and see more than 20 different proteins in a single sample. Biologists can use the method to visualize sets of proteins, see how they are organized with respect to one another, and generate new hypotheses about how they might interact.

A key to the new method, reported November 9, 2024, in Nature Communications, is the ability to repeatedly link fluorescently labeled antibodies to specific proteins in an expanded tissue sample, image them, then strip these away and use a new set of antibodies to reveal a new set of proteins. Postdoctoral fellow Jinyoung Kang fine-tuned each step of this process, ensuring that tissue samples stayed intact and the labeled proteins produced bright signals in each round of imaging.

After capturing many images of a single sample, Boyden’s team faced another challenge: how to ensure those images were in perfect alignment so they could be overlaid with one another, producing a final picture that showed the precise positions of all of the proteins that had been labeled and visualized one by one.

Expansion microscopy lets biologists visualize some of cells’ tiniest features—but to find the same features over and over again during multiple rounds of imaging, Boyden’s team first needed to home in on a larger structure. “These fields of view are really tiny, and you’re trying to find this really tiny field of view in a gel that’s actually become quite large once you’ve expanded it,” explains Margaret Schroeder, a graduate student in Boyden’s lab who, with Kang, led the development of multiExR.

“Here’s one of the most famous receptors in all of neuroscience, hiding out in one of the most famous molecular hallmarks of pathology in neuroscience.” – Ed Boyden

To navigate to the right spot every time, the team decided to label the blood vessels that pass through each tissue sample and use these as a guide. To enable precise alignment, certain fine details also needed to consistently appear in every image; for this, the team labeled several structural proteins. With these reference points and customized image-processing software, the team was able to integrate all of their images of a sample into one, revealing how proteins that had been visualized separately were arranged relative to one another.

The team used multiExR to look at amyloid plaques—the aberrant protein clusters that notoriously develop in brains affected by Alzheimer’s disease. “We could look inside those amyloid plaques and ask, what’s inside of them? And because we can stain for many different proteins, we could do a high throughput exploration,” Boyden says. The team chose 23 different proteins to view in their images. The approach revealed some surprises, such as the presence of certain neurotransmitter receptors (AMPARs). “Here’s one of the most famous receptors in all of neuroscience, and there it is, hiding out in one of the most famous molecular hallmarks of pathology in neuroscience,” says Boyden. It’s unclear what role, if any, the receptors play in Alzheimer’s disease—but the finding illustrates how the ability to see more inside cells can expose unexpected aspects of biology and raise new questions for research.

Funding for this work came from MIT, Lisa Yang and Y. Eva Tan, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, the US Army, Cancer Research UK, the New York Stem Cell Foundation, the National Institutes of Health, Lore McGovern, Good Ventures, Schmidt Futures, Samsung, MathWorks, the Collamore-Rogers Fellowship, the National Science Foundation, Alana Foundation USA, the Halis Family Foundation, Lester A. Gimpelson, Donald and Glenda Mattes, David B. Emmes, Thomas A. Stocky, Avni U. Shah, Kathleen Octavio, Good Ventures/Open Philanthropy, and the European Union’s Horizon 2020 program.

Evelina Fedorenko receives Troland Award from National Academy of Sciences

The National Academy of Sciences (NAS) announced today that McGovern Investigator Evelina Fedorenko will receive a 2025 Troland Research Award for her groundbreaking contributions towards understanding the language network in the human brain.

The Troland Research Award is given annually to recognize unusual achievement by early-career researchers within the broad spectrum of experimental psychology.

McGovern Investigator Ev Fedorenko (center) looks at a young subject’s brain scan in the Martinos Imaging Center at MIT. Photo: Alexandra Sokhina

Fedorenko, who is an associate professor of brain and cognitive sciences at MIT, is interested in how minds and brains create language. Her lab is unpacking the internal architecture of the brain’s language system and exploring the relationship between language and various cognitive, perceptual, and motor systems.  Her novel methods combine precise measures of an individual’s brain organization with innovative computational modeling to make fundamental discoveries about the computations that underlie the uniquely human ability for language.

Fedorenko has shown that the language network is selective for language processing over diverse non-linguistic processes that have been argued to share computational demands with language, such as math, music, and social reasoning. Her work has also demonstrated that syntactic processing is not localized to a particular region within the language network, and every brain region that responds to syntactic processing is at least as sensitive to word meanings.

She has also shown that representations from neural network language models, such as ChatGPT, are similar to those in the human language brain areas. Fedorenko also highlighted that although language models can master linguistic rules and patterns, they are less effective at using language in real-world situations. In the human brain, that kind of functional competence is distinct from formal language competence, she says, requiring not just language-processing circuits but also brain areas that store knowledge of the world, reason, and interpret social interactions. Contrary to a prominent view that language is essential for thinking, Fedorenko argues that language is not the medium of thought and is primarily a tool for communication.

A probabilistic atlas of the human language network based on >800 individuals (center) and sample individual language networks, which illustrate inter-individual variability in the precise locations and shapes of the language areas. Image: Ev Fedorenko

Ultimately, Fedorenko’s cutting-edge work is uncovering the computations and representations that fuel language processing in the brain. She will receive the Troland Award this April, during the annual meeting of the NAS in Washington, DC.


Scientists engineer CRISPR enzymes that evade the immune system

The core components of CRISPR-based genome-editing therapies are bacterial proteins called nucleases that can stimulate unwanted immune responses in people, increasing the chances of side effects and making these therapies potentially less effective.

Researchers at the Broad Institute of MIT and Harvard and Cyrus Biotechnology have now engineered two CRISPR nucleases, Cas9 and Cas12, to mask them from the immune system. The team identified protein sequences on each nuclease that trigger the immune system and used computational modeling to design new versions that evade immune recognition. The engineered enzymes had similar gene-editing efficiency and reduced immune responses compared to standard nucleases in mice.

Appearing today in Nature Communications, the findings could help pave the way for safer, more efficient gene therapies. The study was led by Feng Zhang, a core institute member at the Broad and an Investigator at the McGovern Institute for Brain Research at MIT.

“As CRISPR therapies enter the clinic, there is a growing need to ensure that these tools are as safe as possible, and this work tackles one aspect of that challenge,” said Zhang, who is also a co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics, the James and Patricia Poitras Professor of Neuroscience, and a professor at MIT. He is an Investigator at the Howard Hughes Medical Institute.

Rumya Raghavan, a graduate student in Zhang’s lab when the study began, and Mirco Julian Friedrich, a postdoctoral scholar in Zhang’s lab, were co-first authors on the study.

“People have known for a while that Cas9 causes an immune response, but we wanted to pinpoint which parts of the protein were being recognized by the immune system and then engineer the proteins to get rid of those parts while retaining its function,” said Raghavan.

“Our goal was to use this information to create not only a safer therapy, but one that is potentially even more effective because it is not being eliminated by the immune system before it can do its job,” added Friedrich.

In search of immune triggers

Many CRISPR-based therapies use nucleases derived from bacteria. About 80 percent of people have pre-existing immunity to these proteins through everyday exposure to these bacteria, but scientists didn’t know which parts of the nucleases the immune system recognized.

To find out, Zhang’s team used a specialized type of mass spectrometry to identify and analyze the Cas9 and Cas12 protein fragments recognized by immune cells. For each of the two nucleases — Cas9 from Streptococcus pyogenes and Cas12 from Staphylococcus aureus — they identified three short sequences, about eight amino acids long, that evoked an immune response. They then partnered with Cyrus Biotechnology, a company co-founded by University of Washington biochemist David Baker that develops structure-based computational tools to design proteins that evade the immune response. After Zhang’s team identified immunogenic sequences in Cas9 and Cas12, Cyrus used these computational approaches to design versions of the nucleases that did not include the immune-triggering sequences.

Zhang’s lab used prediction software to validate that the new nucleases were less likely to trigger immune responses. Next, the team engineered a panel of new nucleases informed by these predictions and tested the most promising candidates in human cells and in mice that were genetically modified to bear key components of the human immune system. In both cases, they found that the engineered enzymes resulted in significantly reduced immune responses compared to the original nucleases, but still cut DNA at the same efficiency.

Minimally immunogenic nucleases are just one part of safer gene therapies, Zhang’s team says. In the future, they hope their methods may also help scientists design delivery vehicles to evade the immune system.

This study was funded in part by the Poitras Center for Psychiatric Disorders Research, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, and the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT.

For healthy hearing, timing matters

When soundwaves reach the inner ear, neurons there pick up the vibrations and alert the brain. Encoded in their signals is a wealth of information that enables us to follow conversations, recognize familiar voices, appreciate music, and quickly locate a ringing phone or crying baby.

McGovern Institute Associate Investigator Josh McDermott. Photo: Justin Knight

Neurons send signals by emitting spikes, also known as action potentials—brief changes in voltage that propagate along nerve fibers. Remarkably, auditory neurons can fire hundreds of spikes per second, and time their spikes with exquisite precision to match the oscillations of incoming soundwaves.

With powerful new models of human hearing, scientists at MIT’s McGovern Institute have determined that this precise timing is vital for some of the most important ways we make sense of auditory information, including recognizing voices and localizing sounds.

The findings, reported December 4, 2024, in the journal Nature Communications, show how machine learning can help neuroscientists understand how the brain uses auditory information in the real world. McGovern Investigator Josh McDermott, who led the research, explains that his team’s models better equip researchers to study the consequences of different types of hearing impairment and devise more effective interventions.

Science of sound

The nervous system’s auditory signals are timed so precisely that researchers have long suspected timing is important to our perception of sound. Soundwaves oscillate at rates that determine their pitch: low-pitched sounds travel in slow waves, whereas high-pitched sound waves oscillate more frequently. The auditory nerve that relays information from sound-detecting hair cells in the ear to the brain generates electrical spikes that correspond to the frequency of these oscillations. “The action potentials in an auditory nerve get fired at very particular points in time relative to the peaks in the stimulus waveform,” explains McDermott, who is also an associate professor of brain and cognitive sciences at MIT.

This relationship, known as phase-locking, requires neurons to time their spikes with sub-millisecond precision. But scientists haven’t really known how informative these temporal patterns are to the brain. Beyond being scientifically intriguing, McDermott says, the question has important clinical implications: “If you want to design a prosthesis that provides electrical signals to the brain to reproduce the function of the ear, it’s arguably pretty important to know what kinds of information in the normal ear actually matter,” he says.
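To make phase-locking concrete, here is a small illustrative simulation (not taken from the study): an idealized neuron that fires exactly at each positive peak of a 500 Hz tone produces spikes whose intervals mirror the tone’s period, so the stimulus frequency can be read directly from the spike train.

```python
import numpy as np

fs = 10_000                      # sampling rate, Hz
t = np.arange(0, 0.01, 1 / fs)   # 10 ms of signal
freq = 500                       # a 500 Hz pure tone
wave = np.sin(2 * np.pi * freq * t)

# An idealized, perfectly phase-locked neuron fires once per cycle,
# at the positive peak of the waveform.
spike_times = t[np.isclose(wave, wave.max())]
inter_spike_intervals = np.diff(spike_times)  # each interval = 1/freq = 2 ms
```

Five cycles fit in 10 ms, so this caricature neuron fires five spikes at 2 ms intervals; degrading that timing, as the researchers did in their simulated ear, destroys the frequency information the intervals carry.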

This has been difficult to study experimentally: Animal models can’t offer much insight into how the human brain extracts structure in language or music, and the auditory nerve is inaccessible for study in humans. So McDermott and graduate student Mark Saddler turned to artificial neural networks.

Artificial hearing

Neuroscientists have long used computational models to explore how sensory information might be decoded by the brain, but until recent advances in computing power and machine learning methods, these models were limited to simulating simple tasks. “One of the problems with these prior models is that they’re often way too good,” says Saddler, who is now at the Technical University of Denmark. For example, a computational model tasked with identifying the higher pitch in a pair of simple tones is likely to perform better than people who are asked to do the same thing. “This is not the kind of task that we do every day in hearing,” Saddler points out. “The brain is not optimized to solve this very artificial task.” This mismatch limited the insights that could be drawn from this prior generation of models.

To better understand the brain, Saddler and McDermott wanted to challenge a hearing model to do things that people use their hearing for in the real world, like recognizing words and voices. That meant developing an artificial neural network to simulate the parts of the brain that receive input from the ear. The network was given input from some 32,000 simulated sound-detecting sensory neurons and then optimized for various real-world tasks.

The researchers showed that their model replicated human hearing well—better than any previous model of auditory behavior, McDermott says. In one test, the artificial neural network was asked to recognize words and voices within dozens of types of background noise, from the hum of an airplane cabin to enthusiastic applause. Under every condition, the model performed very similarly to humans.

“The ability to link patterns of firing in the auditory nerve with behavior opens a lot of doors.” – Josh McDermott

When the team degraded the timing of the spikes in the simulated ear, however, their model could no longer match humans’ ability to recognize voices or identify the locations of sounds. For example, while McDermott’s team had previously shown that people use pitch to help them identify people’s voices, the model revealed that this ability is lost without precisely timed signals. “You need quite precise spike timing in order to both account for human behavior and to perform well on the task,” Saddler says. That suggests that the brain uses precisely timed auditory signals because they aid these practical aspects of hearing.
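To picture what “degrading the timing of the spikes” means computationally, the sketch below jitters the spike times of a hypothetical phase-locked auditory nerve fiber and measures how much timing information survives, using the standard vector-strength metric. The spike train, tone frequency, and jitter values are invented for illustration and are not taken from the model in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical spike times (seconds) from one simulated auditory nerve fiber,
# phase-locked to a 200 Hz tone: one spike per 5 ms cycle.
spike_times = np.arange(0.0, 0.1, 0.005)

def degrade_timing(spikes, jitter_sd):
    """Add Gaussian jitter to each spike time, blurring temporal precision."""
    return np.sort(spikes + rng.normal(0.0, jitter_sd, size=spikes.shape))

# Mild jitter (0.1 ms) preserves sub-millisecond precision; coarse jitter
# (5 ms, a full cycle) destroys it.
mild = degrade_timing(spike_times, 0.0001)
coarse = degrade_timing(spike_times, 0.005)

def vector_strength(spikes, freq):
    """Phase-locking strength: 1 = perfectly locked, ~0 = random phases."""
    phases = 2 * np.pi * freq * spikes
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(spikes)

print(vector_strength(mild, 200.0))    # near 1: timing information intact
print(vector_strength(coarse, 200.0))  # much lower: timing information lost
```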

The team’s findings demonstrate how artificial neural networks can help neuroscientists understand how the information extracted by the ear influences our perception of the world, both when hearing is intact and when it is impaired. “The ability to link patterns of firing in the auditory nerve with behavior opens a lot of doors,” McDermott says.

“Now that we have these models that link neural responses in the ear to auditory behavior, we can ask, ‘If we simulate different types of hearing loss, what effect is that going to have on our auditory abilities?’” McDermott says. “That will help us better diagnose hearing loss, and we think there are also extensions of that to help us design better hearing aids or cochlear implants.” For example, he says, “The cochlear implant is limited in various ways—it can do some things and not others. What’s the best way to set up that cochlear implant to enable you to mediate behaviors? You can, in principle, use the models to tell you that.”

Personal interests can influence how children’s brains respond to language

A new study from the McGovern Institute shows how interests can modulate language processing in children’s brains and paves the way for personalized brain research.

The paper, which appears in Imaging Neuroscience, was conducted in the lab of McGovern Institute Investigator John Gabrieli, and led by senior author Anila D’Mello, a former McGovern postdoctoral fellow and current assistant professor at the University of Texas Southwestern Medical Center and the University of Texas at Dallas.

“Traditional studies give subjects identical stimuli to avoid confounding the results,” says Gabrieli, who is also the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT.

“However, our research tailored stimuli to each child’s interest, eliciting stronger—and more consistent—activity patterns in the brain’s language regions across individuals.” – John Gabrieli

Funded by the Hock E. Tan and K. Lisa Yang Center for Autism Research in MIT’s Yang Tan Collective, this work unveils a new paradigm that challenges current methods and shows how personalization can be a powerful strategy in neuroscience. The paper’s co-first authors are Halie Olson, a postdoctoral associate at the McGovern Institute, and Kristina Johnson, an assistant professor at Northeastern University and former doctoral student at the MIT Media Lab. “Our research integrates participants’ lived experiences into the study design,” says Johnson. “This approach not only enhances the validity of our findings but also captures the diversity of individual perspectives, often overlooked in traditional research.”

Taking interest into account

When it comes to language, our interests are like operators behind the switchboard. They guide what we talk about and who we talk to. Research suggests that interests are also potent motivators and can help improve language skills. For instance, children score higher on reading tests when the material covers topics that are interesting to them.

But neuroscience has shied away from using personal interests to study the brain, especially in the realm of language. This is mainly because interests, which vary between people, could throw a wrench into experimental control—a core principle that drives scientists to limit factors that can muddle the results.

Gabrieli, D’Mello, Olson, and Johnson ventured into this unexplored territory. The team wondered if tailoring language stimuli to children’s interests might lead to higher responses in language regions of the brain. “Our study is unique in its approach to control the kind of brain activity our experiments yield, rather than control the stimuli we give subjects,” says D’Mello. “This stands in stark contrast to most neuroimaging studies that control the stimuli but might introduce differences in each subject’s level of interest in the material.”

Three women posing for photo with brain images in background.
Researchers Halie Olson (left), Kristina Johnson (center), and Anila D’Mello (right). Photo: Caitlin Cunningham

In their recent study, the authors recruited a cohort of 20 children to investigate how personal interests affected the way the brain processes language. Caregivers described their child’s interests to the researchers, spanning baseball, train lines, Minecraft, and musicals. During the study, children listened to audio stories tuned to their unique interests. They were also presented with audio stories about nature (this was not an interest among the children) for comparison. To capture brain activity patterns, the team used functional magnetic resonance imaging (fMRI), which measures changes in blood flow caused by underlying neural activity.

New insights into the brain

“We found that, when children listened to stories about topics they were really interested in, they showed stronger neural responses in language areas than when they listened to generic stories that weren’t tailored to their interests,” says Olson. “Not only does this tell us how interests affect the brain, but it also shows that personalizing our experimental stimuli can have a profound impact on neuroimaging results.”

The researchers noticed a particularly striking result. “Even though the children listened to completely different stories, their brain activation patterns were more overlapping with their peers when they listened to idiosyncratic stories compared to when they listened to the same generic stories about nature,” says D’Mello. This, she notes, points to how interests can boost both the magnitude and consistency of signals in language regions across subjects without changing how these areas communicate with each other.
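One common way to quantify the kind of across-subject overlap D’Mello describes is to correlate activation maps between every pair of subjects. The sketch below illustrates the idea with invented activation vectors and plain Pearson correlation; it is not the authors’ actual analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical activation maps: one value per voxel in a language region,
# for four subjects under two conditions (all values invented).
n_voxels = 500
shared_signal = rng.normal(size=n_voxels)

# Personalized stories: strong shared engagement plus subject-specific noise.
personalized = [1.5 * shared_signal + rng.normal(size=n_voxels) for _ in range(4)]
# Generic stories: weaker shared engagement, same noise level.
generic = [0.5 * shared_signal + rng.normal(size=n_voxels) for _ in range(4)]

def mean_pairwise_corr(maps):
    """Average Pearson correlation across all subject pairs."""
    corrs = [np.corrcoef(maps[i], maps[j])[0, 1]
             for i in range(len(maps)) for j in range(i + 1, len(maps))]
    return float(np.mean(corrs))

print(mean_pairwise_corr(personalized))  # higher: more consistent across subjects
print(mean_pairwise_corr(generic))       # lower
```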


Individual activation maps from three participants showing increased engagement of language regions for personally interesting versus generic narratives. Image courtesy of the researchers.

Gabrieli noted another finding: “In addition to the stronger engagement of language regions for content of interest, there was also stronger activation in brain regions associated with reward and also with self-reflection.” Personal interests are individually relevant and can be rewarding, potentially driving higher activation in these regions during personalized stories.

These personalized paradigms might be particularly well-suited to studies of the brain in unique or neurodivergent populations. Indeed, the team is already applying these methods to study language in the brains of autistic children.

This study breaks new ground in neuroscience and serves as a prototype for future work that personalizes research to unearth further knowledge of the brain. In doing so, scientists can compile a more complete understanding of the type of information that is processed by specific brain circuits and more fully grasp complex functions such as language.

Season’s Greetings from the McGovern Institute

For this year’s holiday greeting, we asked the McGovern Institute community what comes to mind when they think of the winter holidays. More than 100 words were submitted for the project. The words were fed into ChatGPT to generate our holiday “prediction.” And a text-to-music generator (Udio) converted the words into a holiday song.

With special thanks to Jarrod Hicks and Jamal Williams from the McDermott lab for the inspiration…and to AI for pushing the boundaries of science and imagination.

Video credits:
Jacob Pryor (animation)
JR Narrows, Space Lute (sound design)

Revisiting reinforcement learning

MIT Institute Professor Ann Graybiel. Photo: Justin Knight

Dopamine is a powerful signal in the brain, influencing our moods, motivations, movements, and more. The neurotransmitter is crucial for reward-based learning, a function that may be disrupted in a number of psychiatric conditions, from mood disorders to addiction. Now, researchers led by Ann Graybiel, an investigator at MIT’s McGovern Institute, have found surprising patterns of dopamine signaling that suggest neuroscientists may need to refine their model of how reinforcement learning occurs in the brain. The team’s findings were published October 14, 2024, in the journal Nature Communications.

Dopamine plays a critical role in teaching people and other animals about the cues and behaviors that portend both positive and negative outcomes; the classic example of this type of learning is the dog that Ivan Pavlov trained to anticipate food at the sound of a bell. Graybiel explains that according to the standard model of reinforcement learning, when an animal is exposed to a cue paired with a reward, dopamine-producing cells initially fire in response to the reward. As animals learn the association between the cue and the reward, the timing of dopamine release shifts, so it becomes associated with the cue instead of the reward itself.
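The shift Graybiel describes is the textbook prediction of temporal-difference (TD) learning, and it can be reproduced in a few lines of simulation. The sketch below is a generic TD(0) model with invented parameters (trial length, cue and reward timing, learning rate), not the analysis from the Nature Communications paper:

```python
import numpy as np

# Textbook TD(0) simulation of the standard model: a cue appears at step 2
# and a reward arrives at step 6 of each trial. All parameters are generic.
n_steps, cue_step, reward_step = 10, 2, 6
alpha = 0.2                 # learning rate
V = np.zeros(n_steps + 1)   # learned value of each step within the trial

def run_trial():
    """Run one trial; return the prediction error (the 'dopamine signal') per step."""
    deltas = np.zeros(n_steps)
    for t in range(n_steps):
        r = 1.0 if t == reward_step else 0.0
        deltas[t] = r + V[t + 1] - V[t]
        # Cue onset is unpredictable across trials, so pre-cue steps keep value 0.
        if t >= cue_step:
            V[t] += alpha * deltas[t]
    return deltas

first = run_trial()         # before learning: error spikes at the reward
for _ in range(500):
    last = run_trial()      # after learning: error spikes at cue onset

print(int(np.argmax(first)))  # 6: the reward step
print(int(np.argmax(last)))   # 1: the transition into the cue state
```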

But with new tools enabling more detailed analyses of when and where dopamine is released in the brain, Graybiel’s team is finding that this model doesn’t completely hold up. The group started picking up clues that the field’s model of reinforcement learning was incomplete more than ten years ago, when Mark Howe, a graduate student in the lab, noticed that the dopamine signals associated with reward were released not in a sudden burst the moment a reward was obtained, but instead before that, building gradually as a rat got closer to its treat. Dopamine might actually be communicating to the rest of the brain the proximity of the reward, they reasoned. “That didn’t fit at all with the standard, canonical model,” Graybiel says.

Dopamine dynamics

As other neuroscientists considered how a model of reinforcement learning could take those findings into account, Graybiel and postdoctoral researcher Min Jung Kim decided it was time to take a closer look at dopamine dynamics.

“We thought, let’s go back to the most basic kind of experiment and start all over again,” Graybiel says.

That meant using sensitive new dopamine sensors to track the neurotransmitter’s release in the brains of mice as they learned to associate a blue light with a satisfying sip of water. The team focused its attention on the striatum, a region within the brain’s basal ganglia, where neurons use dopamine to influence neural circuits involved in a variety of processes, including reward-based learning.

The researchers found that the timing of dopamine release varied in different parts of the striatum. But nowhere did Graybiel’s team find a transition in dopamine release timing from the time of the reward to the time of the cue—the key transition predicted by the standard model of reinforcement learning.

In the team’s simplest experiments, where every time a mouse saw a light it was paired with a reward, the lateral part of the striatum reliably released dopamine when animals were given their water. This strong response to the reward never diminished, even as the mice learned to expect the reward when they saw a light. In the medial part of the striatum, in contrast, dopamine was never released at the time of the reward. Cells there always fired when a mouse saw the light, even early in the learning process. This was puzzling, Graybiel says, because at the beginning of learning, dopamine would have been predicted to respond to the reward itself.

The patterns of dopamine release became even more unexpected when Graybiel’s team introduced a second light into its experimental setup. The new light, in a different position than the first, did not signal a reward. Mice watched as either light was given as the cue, one at a time, with water accompanying only the original cue.

In these experiments, when the mice saw the reward-associated light, dopamine release went up in the centromedial striatum and, surprisingly, stayed up until the reward was delivered. In the lateral part of the region, dopamine signaling also showed a sustained plateau.

Graybiel says she was surprised to see how much dopamine responses changed when the experimenters introduced the second light. The responses to the rewarded light were different when the other light could be shown in other trials, even though the mice saw only one light at a time. “There must be a cognitive aspect to this that comes into play,” she says. “The brain wants to hold onto the information that the cue has come on for a while.” Cells in the striatum seem to achieve this through the sustained dopamine release that continued during the brief delay between the light and the reward in the team’s experiments. Indeed, Graybiel said, while this kind of sustained dopamine release has not previously been linked to reinforcement learning, it is reminiscent of sustained signaling that has been tied to working memory in other parts of the brain.

Reinforcement learning, reconsidered

Ultimately, Graybiel says, “many of our results didn’t fit reinforcement learning models as traditionally—and by now canonically—considered.” That suggests neuroscientists’ understanding of this process will need to evolve as part of the field’s deepening understanding of the brain. “But this is just one step to help us all refine our understanding and to have reformulations of the models of how basal ganglia influence movement and thought and emotion. These reformulations will have to include surprises about the reinforcement learning system vis-à-vis these plateaus, but they could possibly give us insight into how a single experience can linger in this reinforcement-related part of our brains,” she says.

This study was funded by the National Institutes of Health, the William N. & Bernice E. Bumpus Foundation, the Saks Kavanaugh Foundation, the CHDI Foundation, Joan and Jim Schattinger, and Lisa Yang.

Illuminating the architecture of the mind

This story also appears in the Winter 2025 issue of BrainScan

___

McGovern investigator Nancy Kanwisher and her team have big questions about the nature of the human mind. Energized by Kanwisher’s enthusiasm for finding out how and why the brain works as it does, her team collaborates broadly and embraces various tools of neuroscience. But their core discoveries tend to emerge from pictures of the brain in action. For Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT, “there’s nothing like looking inside.”

Kanwisher and her colleagues have scanned the brains of hundreds of volunteers using functional magnetic resonance imaging (fMRI). With each scan, they collect a piece of insight into how the brain is organized.

Male and female researchers sitting in an imaging center with an MRI in the background.
Nancy Kanwisher (right), whose unfaltering support for students and trainees has earned her awards for outstanding teaching and mentorship, is now working with research scientist RT Pramod to find the brain’s “physics network.” Photo: Steph Stevens

Recognizing faces

By visualizing the parts of the brain that get involved in various mental activities — and, importantly, which do not — they’ve discovered that certain parts of the brain specialize in surprisingly specific tasks. Earlier this year Kanwisher was awarded the prestigious Kavli Prize in Neuroscience for the discovery of one of these hyper-specific regions: a small spot within the brain’s neocortex that recognizes faces.

Kanwisher found that this region, which she named the fusiform face area (FFA), is highly sensitive to images of faces and appears to be largely uninterested in other objects. Without the FFA, the brain struggles with facial recognition — an impairment seen in patients who have experienced damage to this part of the brain.

Beyond the FFA

Not everything in the brain is so specialized. Many areas participate in a range of cognitive processes, and even the most specialized modules, like the FFA, must work with other brain regions to process and use information. Plus, Kanwisher and her team have tracked brain activity during many functions without finding regions devoted exclusively to those tasks. (There doesn’t appear to be a part of the brain dedicated to recognizing snakes, for example).

Still, work in the Kanwisher lab demonstrates that as a specialized functional module within the brain, the FFA is not unique. In collaboration with McGovern colleagues Josh McDermott and Evelina Fedorenko, the group has found areas devoted to perceiving music and using language. There’s even a region dedicated to thinking about other people’s thoughts, identified by Rebecca Saxe in work she started as a graduate student in Kanwisher’s lab.

Brain with colored blobs.
Kanwisher’s team has found several hyperspecific regions of the brain, including those dedicated to using language (red-orange), perceiving music (yellow), thinking about other people’s thoughts (blue), recognizing bodies (green), and our intuitive sense of physics (teal). (This is an artistic adaptation of Kanwisher’s data.)

Having established these regions’ roles, Kanwisher and her collaborators are now looking at how and why they become so specialized. Meanwhile, the group has also turned its attention to a more complex function that seems to largely take place within a defined network: our intuitive sense of physics.

The brain’s game engine

Early in life, we begin to understand the nature of objects and materials, such as the fact that objects can support but not move through each other. Later, we intuitively understand how it feels to move on a slippery floor, what happens when moving objects collide, and where a tossed ball will fall. “You can’t do anything at all in the world without some understanding of the physics of the world you’re acting on,” Kanwisher says.

Kanwisher says MIT colleague Josh Tenenbaum first sparked her interest in intuitive physical reasoning. Tenenbaum and his students had been arguing that humans understand
the physical world using a simulation system, much like the physics engines that video games use to generate realistic movement and interactions within virtual environments. Kanwisher decided to team up with Tenenbaum to test whether there really is a game engine in the head, and if so, what it computes and represents.
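A physics engine makes predictions by stepping simulated dynamics forward in time. As a toy illustration of the idea (not anything from Kanwisher’s or Tenenbaum’s work), the sketch below predicts where a tossed ball lands by forward-simulating simple projectile motion:

```python
# Toy "game engine in the head": predict where a tossed ball lands by
# stepping projectile dynamics forward. All numbers are arbitrary.
def predict_landing(x, y, vx, vy, g=9.81, dt=0.001):
    """Step the ball forward in small time increments until it hits the
    ground (y = 0); return the horizontal landing position."""
    while y > 0.0:
        x += vx * dt       # horizontal motion: constant velocity
        vy -= g * dt       # vertical motion: gravity decelerates the ball
        y += vy * dt
    return x

# Ball thrown from 1.5 m height at 3 m/s horizontally and 4 m/s upward.
landing_x = predict_landing(0.0, 1.5, 3.0, 4.0)
print(round(landing_x, 2))  # lands roughly 3.3 m away
```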

An unstable column of blue and yellow blocks piled on top of a table that is half red, half green.
By asking subjects in an MRI scanner to predict which way this block tower might fall, Kanwisher’s team is zeroing in on the location of the brain’s “physics network.” Image: RT Pramod, Nancy Kanwisher

To find out, Kanwisher and her team have asked volunteers to evaluate various scenarios while in an MRI scanner — some that require physical reasoning and some that do not. They found sizable parts of the brain that participate in physical reasoning tasks but stay quiet during other kinds of thinking.

Research scientist RT Pramod says he was initially skeptical the brain would dedicate special circuitry to the diverse tasks involved in our intuitive sense of physics — but he’s been convinced by the data he’s found. “I see consistent evidence that if you’re reasoning, if you’re thinking, or even if you’re looking at anything sort of ‘physics-y’ about the world, you will see activations in these regions and only in these regions — not anywhere else,” he says.

Pramod’s experiments also show that these regions are called on to make predictions about the physical world. When volunteers watch videos of objects whose trajectories portend a crash — but do not actually depict that crash — it is the physics network that signals what is about to happen. “Only these regions have this information, suggesting that maybe there is some truth to the physics engine hypothesis,” Pramod says.

Kanwisher says she doesn’t expect physical reasoning, which her group has tied to sizable swaths of the brain’s frontal and parietal cortex, to be executed by a module as distinct as the FFA. “It’s not going to be like one hyper-specific region and that’s all that happens there,” she says. “I think ultimately it’s much more interesting than that.”

To figure out what these regions can and cannot do, Kanwisher’s team has broadened the ways in which they ask volunteers to think about physics inside the MRI scanner. So far, Kanwisher says, the group’s tests have focused on rigid objects. But what about soft, squishy ones, or liquids?

A red liquid sloshes inside a clear container.
Kanwisher’s team is exploring whether non-rigid materials, like the liquid in this image, engage the brain’s “physics network” in the same way as rigid objects. Image: Vivian Paulun

Vivian Paulun, a postdoc working jointly with Kanwisher and Tenenbaum, is investigating whether our innate expectations about these kinds of materials occur within the network that they have linked to physical reasoning about rigid objects. Another set of experiments will explore whether we use sounds, like that of a bouncing ball or a screeching car, to predict physical events with the same network that interprets visual cues.

Meanwhile, she is also excited about an opportunity to find out what happens when the brain’s physics network is damaged. With collaborators in England, the group plans to find out whether patients in whom stroke has affected this part of the brain have specific deficits in physical reasoning.

Probing these questions could reveal fundamental truths about the human mind and intelligence. Pramod points out that it could also help advance artificial intelligence, which so far has been unable to match humans when it comes to physical reasoning. “Inferences that are sort of easy for us are still really difficult for even state-of-the art computer vision,” he says. “If we want to get to a stage where we have really good machine learning algorithms that can interact with the world the way we do, I think we should first understand how the brain does it.”