Rational engineering generates a compact new tool for gene therapy

Scientists at the McGovern Institute and the Broad Institute of MIT and Harvard have reengineered a compact RNA-guided enzyme they found in bacteria into an efficient, programmable editor of human DNA. The protein they created, called NovaIscB, can be adapted to make precise changes to the genetic code, modulate the activity of specific genes, or carry out other editing tasks. Because its small size simplifies delivery to cells, NovaIscB’s developers say it is a promising candidate for developing gene therapies to treat or prevent disease.

The study was led by McGovern Institute investigator Feng Zhang, who is also the James and Patricia Poitras Professor of Neuroscience at MIT, a Howard Hughes Medical Institute investigator, and a core member of the Broad Institute. Zhang and his team reported their work today in the journal Nature Biotechnology.

Compact tools

NovaIscB is derived from a bacterial DNA cutter that belongs to a family of proteins called IscBs, which Zhang’s lab discovered in 2021. IscBs are a type of OMEGA system, the evolutionary ancestors of Cas9, which is part of the bacterial CRISPR system that Zhang and others have developed into powerful genome-editing tools. Like Cas9, IscB enzymes cut DNA at sites specified by an RNA guide. By reprogramming that guide, researchers can redirect the enzymes to target sequences of their choosing.

IscBs had caught the team’s attention not only because they share key features of CRISPR’s DNA-cutting Cas9, but also because they are a third of its size. That would be an advantage for potential gene therapies: Compact tools are easier to deliver to cells, and with a small enzyme, researchers would have more flexibility to tinker, potentially adding new functionalities without creating tools that were too bulky for clinical use.

From their initial studies of IscBs, researchers in Zhang’s lab knew that some members of the family could cut DNA targets in human cells. None of the bacterial proteins worked well enough to be deployed therapeutically, however: The team would have to modify an IscB to ensure it could edit targets in human cells efficiently without disturbing the rest of the genome.

To begin that engineering process, Soumya Kannan, a graduate student in Zhang’s lab who is now a junior fellow at the Harvard Society of Fellows, and postdoctoral fellow Shiyou Zhu first searched for an IscB that would make a good starting point. They tested nearly 400 different IscB enzymes found in bacteria. Ten were capable of editing DNA in human cells.

Even the most active of those enzymes would need to be enhanced to become a useful genome-editing tool. The challenge would be to increase the enzyme’s activity, but only at the sequences specified by its RNA guide. If the enzyme became more active, but indiscriminately so, it would cut DNA in unintended places. “The key is to balance the improvement of both activity and specificity at the same time,” explains Zhu.

Zhu notes that bacterial IscBs are directed to their target sequences by relatively short RNA guides, which makes it difficult to restrict the enzyme’s activity to a specific part of the genome. If an IscB could be engineered to accommodate a longer guide, it would be less likely to act on sequences beyond its intended target.

To optimize IscB for human genome editing, the team leveraged information that graduate student Han Altae-Tran, who is now a postdoctoral fellow at the University of Washington, had learned about the diversity of bacterial IscBs and how they evolved. For instance, the researchers noted that IscBs that worked in human cells included a segment they called REC, which was absent in other IscBs. They suspected the enzyme might need that segment to interact with the DNA in human cells. When they took a closer look at the region, structural modeling suggested that by slightly expanding part of the protein, REC might also enable IscBs to recognize longer RNA guides.

Based on these observations, the team experimented with swapping in parts of REC domains from different IscBs and Cas9s, evaluating how each change impacted the protein’s function. Guided by their understanding of how IscBs and Cas9s interact with both DNA and their RNA guides, the researchers made additional changes, aiming to optimize both efficiency and specificity.

In the end, they generated a protein they called NovaIscB, which was over 100 times more active in human cells than the IscB they had started with, while maintaining good specificity for its targets.

Kannan and Zhu constructed and screened hundreds of new IscBs before arriving at NovaIscB—and every change they made to the original protein was strategic. Their efforts were guided by their team’s knowledge of IscBs’ natural evolution as well as predictions of how each alteration would impact the protein’s structure, made using an artificial intelligence tool called AlphaFold2. Compared to traditional methods of introducing random changes into a protein and screening for their effects, this rational engineering approach greatly accelerated the team’s ability to identify a protein with the features they were looking for.
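
To make that triage step concrete, here is a minimal sketch, in Python, of what a structure-guided variant-ranking loop can look like. It is illustrative only: predict_structure and score_confidence are hypothetical stand-ins for a structure predictor such as AlphaFold2 and a confidence metric such as mean pLDDT, and nothing here reproduces the team’s actual pipeline.

from typing import Callable

def rank_variants(
    wild_type: str,
    substitutions: list[tuple[int, str]],        # (position, new amino acid)
    predict_structure: Callable[[str], object],  # hypothetical AlphaFold2 wrapper
    score_confidence: Callable[[object], float], # hypothetical mean-pLDDT scorer
    top_k: int = 20,
) -> list[tuple[str, float]]:
    """Score candidate substitutions in silico and keep only the most
    structurally plausible ones for experimental testing in cells."""
    scored = []
    for pos, aa in substitutions:
        variant = wild_type[:pos] + aa + wild_type[pos + 1:]
        structure = predict_structure(variant)   # the slow step in practice
        scored.append((variant, score_confidence(structure)))
    scored.sort(key=lambda pair: pair[1], reverse=True)  # best first
    return scored[:top_k]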

The team demonstrated that NovaIscB is a good scaffold for a variety of genome editing tools. “It biochemically functions very similarly to Cas9, and that makes it easy to port over tools that were already optimized with the Cas9 scaffold,” Kannan says. With different modifications, the researchers used NovaIscB to replace specific letters of the DNA code in human cells and to change the activity of targeted genes.

Importantly, the NovaIscB-based tools are compact enough to be easily packaged inside a single adeno-associated virus (AAV)—the vector most commonly used to safely deliver gene therapy to patients. Because they are bulkier, tools developed using Cas9 can require a more complicated delivery strategy.

Demonstrating NovaIscB’s potential for therapeutic use, Zhang’s team created a tool called OMEGAoff that adds chemical markers to DNA to dial down the activity of specific genes. They programmed OMEGAoff to repress a gene involved in cholesterol regulation, then used AAV to deliver the system to the livers of mice, leading to lasting reductions in cholesterol levels in the animals’ blood.

The team expects that NovaIscB can be used to target genome editing tools to most human genes, and looks forward to seeing how other labs deploy the new technology. They also hope others will adopt their evolution-guided approach to rational protein engineering. “Nature has such diversity and its systems have different advantages and disadvantages,” Zhu says. “By learning about that natural diversity, we can make the systems we are trying to engineer better and better.”

This study was funded in part by the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT, Broad Institute Programmable Therapeutics Gift Donors, Pershing Square Foundation, William Ackman, Neri Oxman, the Phillips family, and J. and P. Poitras.

Daily mindfulness practice reduces anxiety for autistic adults

Just 10 to 15 minutes of mindfulness practice a day led to reduced stress and anxiety for autistic adults who participated in a study led by scientists at MIT’s McGovern Institute. Participants in the study used a free smartphone app to guide their practice, giving them the flexibility to practice when and where they chose.

Mindfulness is a state in which the mind is focused only on the present moment. It is a way of thinking that can be cultivated with practice, often through meditation or breathing exercises—and evidence is accumulating that practicing mindfulness has positive effects on mental health. The new study, reported April 8, 2025, in the journal Mindfulness, adds to that evidence, demonstrating clear benefits for autistic adults.

“Everything you want from this on behalf of somebody you care about happened: reduced reports of anxiety, reduced reports of stress, reduced reports of negative emotions, and increased reports of positive emotions,” says McGovern Investigator John Gabrieli, who led the research with Liron Rozenkrantz, an investigator at the Azrieli Faculty of Medicine at Bar-Ilan University in Israel and a research affiliate in Gabrieli’s lab. “Every measure that we had of well-being moved significantly in a positive direction,” adds Gabrieli, who is also the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT.

One of the reported benefits of practicing mindfulness is that it can reduce the symptoms of anxiety disorders. This prompted Gabrieli and his colleagues to wonder whether it might benefit adults with autism, who tend to report above-average levels of anxiety and stress, which can interfere with daily living and quality of life. As many as 65 percent of autistic adults may also have an anxiety disorder.

Gabrieli adds that the opportunity for autistic adults to practice mindfulness with an app, rather than needing to meet with a teacher or class, seemed particularly promising. “The capacity to do it at your own pace in your own home, or any environment you like, might be good for anybody,” he says. “But maybe especially for people for whom social interactions can sometimes be challenging.”

The research team, including first author Cindy Li, the Autism Recruitment and Outreach Coordinator in Gabrieli’s lab, recruited 89 autistic adults to participate in their study. Those individuals were split into two groups: One would try the mindfulness practice for six weeks, while the others would wait and try the intervention later.

Participants were asked to practice daily using an app called Healthy Minds, which guides participants through seated or active meditations, each lasting 10 to 15 minutes. Participants reported that they found the app easy to use and had little trouble making time for the daily practice.

After six weeks, participants reported significant reductions in anxiety and perceived stress. These changes were not experienced by the wait-list group, which served as a control. However, after their own six weeks of practice, people in the wait-list group reported similar benefits. “We replicated the result almost perfectly. Every positive finding we found with the first sample we found with the second sample,” Gabrieli says.

The researchers followed up with study participants after another six weeks. Almost everyone had discontinued their mindfulness practice—but remarkably, their gains in well-being had persisted. Based on this finding, the team is eager to further explore the long-term effects of mindfulness practice in future studies. “There’s a hypothesis that a benefit of gaining mindfulness skills or habits is they stick with you over time—that they become incorporated in your daily life,” Gabrieli says. “If people are using the approach to being in the present and not dwelling on the past or worrying about the future, that’s what you want most of all. It’s a habit of thought that’s powerful and helpful.”

Even as they plan future studies, the researchers say they are already convinced that mindfulness practice can have clear benefits for autistic adults. “It’s possible mindfulness would be helpful at all kinds of ages,” Gabrieli says. But he points out that the need is particularly great for autistic adults, who typically have access to fewer resources and less support than autistic children receive through their schools. Gabrieli is eager for more people with autism to try the Healthy Minds app. “Having scientifically proven resources for adults who are no longer in school systems might be a valuable thing,” he says.

This research was funded in part by The Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT and the Yang Tan Collective.

A visual pathway in the brain may do more than recognize objects

When visual information enters the brain, it travels through two pathways that process different aspects of the input. For decades, scientists have hypothesized that one of these pathways, the ventral visual stream, is responsible for recognizing objects, and that it might have been optimized by evolution to do just that.

Consistent with this, in the past decade, MIT scientists have found that when computational models of the anatomy of the ventral stream are optimized to solve the task of object recognition, they are remarkably good predictors of the neural activities in the ventral stream.

However, in a new study, MIT researchers have shown that when they train these types of models on spatial tasks instead, the resulting models are also quite good predictors of the ventral stream’s neural activities. This suggests that the ventral stream may not be exclusively optimized for object recognition.

“This leaves wide open the question about what the ventral stream is being optimized for. I think the dominant perspective a lot of people in our field believe is that the ventral stream is optimized for object recognition, but this study provides a new perspective that the ventral stream could be optimized for spatial tasks as well,” says MIT graduate student Yudi Xie.

Xie is the lead author of the study, which will be presented at the International Conference on Learning Representations. Other authors of the paper include Weichen Huang, a visiting student through MIT’s Research Science Institute program; Esther Alter, a software engineer at the MIT Quest for Intelligence; Jeremy Schwartz, a sponsored research technical staff member; Joshua Tenenbaum, a professor of brain and cognitive sciences; and James DiCarlo, the Peter de Florez Professor of Brain and Cognitive Sciences, director of the Quest for Intelligence, and a member of the McGovern Institute for Brain Research at MIT.

Beyond object recognition

When we look at an object, our visual system can not only identify the object, but also determine other features such as its location, its distance from us, and its orientation in space. Since the early 1980s, neuroscientists have hypothesized that the primate visual system is divided into two pathways: the ventral stream, which performs object-recognition tasks, and the dorsal stream, which processes features related to spatial location.

Over the past decade, researchers have worked to model the ventral stream using a type of deep-learning model known as a convolutional neural network (CNN). Researchers can train these models to perform object-recognition tasks by feeding them datasets containing thousands of images along with category labels describing the images.

The state-of-the-art versions of these CNNs have high success rates at categorizing images. Additionally, researchers have found that the internal activations of the models are very similar to the activities of neurons that process visual information in the ventral stream. Furthermore, the more similar these models are to the ventral stream, the better they perform at object-recognition tasks. This has led many researchers to hypothesize that the dominant function of the ventral stream is recognizing objects.

However, experimental studies, especially a study from the DiCarlo lab in 2016, have found that the ventral stream appears to encode spatial features as well. These features include the object’s size, its orientation (how much it is rotated), and its location within the field of view. Based on these studies, the MIT team aimed to investigate whether the ventral stream might serve additional functions beyond object recognition.

“Our central question in this project was, is it possible that we can think about the ventral stream as being optimized for doing these spatial tasks instead of just categorization tasks?” Xie says.

To test this hypothesis, the researchers set out to train a CNN to identify one or more spatial features of an object, including rotation, location, and distance. To train the models, they created a new dataset of synthetic images. These images show objects such as tea kettles or calculators superimposed on different backgrounds, in locations and orientations that are labeled to help the model learn them.
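
As a rough illustration of that training setup, the sketch below (PyTorch) attaches a regression head to a standard CNN backbone so that, instead of predicting a category, the network predicts four spatial numbers per image: rotation, x and y location, and distance. The backbone choice and label format are assumptions for the example, not details from the paper.

import torch
import torch.nn as nn
from torchvision.models import resnet18

class SpatialCNN(nn.Module):
    """A generic CNN backbone with a regression head for spatial labels."""
    def __init__(self, n_targets: int = 4):  # rotation, x, y, distance
        super().__init__()
        self.backbone = resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, n_targets)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.backbone(images)

model = SpatialCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(images: torch.Tensor, spatial_labels: torch.Tensor) -> float:
    """One gradient step; images: (B, 3, H, W), spatial_labels: (B, 4)."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), spatial_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

Swapping the head back to a classification layer and a cross-entropy loss recovers the object-recognition setup described earlier.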

The researchers found that CNNs that were trained on just one of these spatial tasks showed a high level of “neuro-alignment” with the ventral stream — very similar to the levels seen in CNN models trained on object recognition.

The researchers measured neuro-alignment using a technique that DiCarlo’s lab has developed, which involves asking the models, once trained, to predict the neural activity that a particular image would generate in the brain. The researchers found that the better the models performed on the spatial task they had been trained on, the more neuro-alignment they showed.
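
A simplified stand-in for that kind of metric (the lab’s actual benchmarking pipeline is more involved) is to fit a linear map from a model layer’s activations to recorded neural responses, then score how well the map predicts responses to held-out images:

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

def neuro_alignment(activations: np.ndarray,  # (n_images, n_features)
                    neural_data: np.ndarray   # (n_images, n_neurons)
                    ) -> float:
    """Median held-out correlation between predicted and measured responses."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        activations, neural_data, test_size=0.25, random_state=0)
    pred = Ridge(alpha=1.0).fit(X_tr, y_tr).predict(X_te)
    r_per_neuron = [np.corrcoef(pred[:, i], y_te[:, i])[0, 1]
                    for i in range(y_te.shape[1])]
    return float(np.median(r_per_neuron))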

“I think we cannot assume that the ventral stream is just doing object categorization, because many of these other functions, such as spatial tasks, also can lead to this strong correlation between models’ neuro-alignment and their performance,” Xie says. “Our conclusion is that you can optimize either through categorization or doing these spatial tasks, and they both give you a ventral-stream-like model, based on our current metrics to evaluate neuro-alignment.”

Comparing models

The researchers then investigated why these two approaches — training for object recognition and training for spatial features — led to similar degrees of neuro-alignment. To do that, they performed an analysis known as centered kernel alignment (CKA), which allows them to measure the degree of similarity between representations in different CNNs. This analysis showed that in the early to middle layers of the models, the representations that the models learn are nearly indistinguishable.
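
For readers who want the mechanics: linear CKA, as formulated by Kornblith and colleagues in 2019, compares two sets of activations recorded for the same images; a score of 1.0 means the representations match up to a linear transformation. A minimal implementation, not code from this paper, looks like this:

import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between activations X: (n_images, d1) and Y: (n_images, d2)."""
    X = X - X.mean(axis=0)  # center each feature dimension
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return float(cross / (norm_x * norm_y))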

“In these early layers, essentially you cannot tell these models apart by just looking at their representations,” Xie says. “It seems like they learn some very similar or unified representation in the early to middle layers, and in the later stages they diverge to support different tasks.”

The researchers hypothesize that even when models are trained to analyze just one feature, they also take into account “non-target” features — those that they are not trained on. When objects have greater variability in non-target features, the models tend to learn representations more similar to those learned by models trained on other tasks. This suggests that the models are using all of the information available to them, which may result in different models coming up with similar representations, the researchers say.

“More non-target variability actually helps the model learn a better representation, instead of learning a representation that’s ignorant of them,” Xie says. “It’s possible that the models, although they’re trained on one target, are simultaneously learning other things due to the variability of these non-target features.”

In future work, the researchers hope to develop new ways to compare different models, in hopes of learning more about how each one develops internal representations of objects based on differences in training tasks and training data.

“There could be still slight differences between these models, even though our current way of measuring how similar these models are to the brain tells us they’re on a very similar level. That suggests maybe there’s still some work to be done to improve upon how we can compare the model to the brain, so that we can better understand what exactly the ventral stream is optimized for,” Xie says.

The research was funded by the Semiconductor Research Corporation and the U.S. Defense Advanced Research Projects Agency.

Twenty-five years after its founding, the McGovern Institute is shaping brain science and improving human lives at a global scale

In 2000, Patrick J. McGovern ’59 and Lore Harp McGovern made an extraordinary gift to establish the McGovern Institute for Brain Research at MIT, driven by their deep curiosity about the human mind and their belief in the power of science to change lives. Their $350 million pledge began with a simple yet audacious vision: to understand the human brain in all its complexity and to leverage that understanding for the betterment of humanity.

Twenty-five years later, the McGovern Institute stands as a testament to the power of interdisciplinary collaboration, continuing to shape our understanding of the brain and improve the quality of life for people worldwide.

In the Beginning

“This is by any measure a truly historic moment for MIT,” said MIT’s 15th President Charles M. Vest during his opening remarks at an event in 2000 to celebrate the McGovern gift agreement. “The creation of the McGovern Institute will launch one of the most profound and important scientific ventures of this century in what surely will be a cornerstone of MIT scientific contributions from the decades ahead.”

Vest tapped Phillip A. Sharp, MIT Institute Professor Emeritus of Biology and Nobel laureate, to lead the institute and appointed six MIT professors — Emilio Bizzi, Martha Constantine-Paton, Ann Graybiel PhD ’71, H. Robert Horvitz ’68, Nancy Kanwisher ’80, PhD ’86, and Tomaso Poggio — to represent its founding faculty. Construction began in 2003 on Building 46, a 376,000-square-foot research complex at the northeastern edge of campus. MIT’s new “gateway from the north” would eventually house the McGovern Institute, the Picower Institute for Learning and Memory, and MIT’s Department of Brain and Cognitive Sciences.

Patrick J. McGovern ’59 and Lore Harp McGovern gather with faculty members and MIT administration at the groundbreaking of MIT Building 46 in 2003. Photo: Donna Coveney

Robert Desimone, the Doris and Don Berkey Professor of Neuroscience at MIT, succeeded Sharp as director of the McGovern Institute in 2005, and assembled a distinguished roster of 22 faculty members, including a Nobel laureate, a Breakthrough Prize winner, two National Medal of Science/Technology awardees, and 15 members of the American Academy of Arts and Sciences.

A Quarter Century of Innovation

On April 11, 2025, the McGovern Institute celebrated its 25th anniversary with a half-day symposium featuring presentations by MIT Institute Professor Robert Langer, alumni speakers from various McGovern labs, and Desimone, who is in his twentieth year as director of the institute.

Desimone highlighted the institute’s recent discoveries, including the development of the CRISPR genome-editing system, which has culminated in the world’s first CRISPR gene therapy approved for humans — a remarkable achievement that is ushering in a new era of transformative medicine. In other milestones, McGovern researchers developed the first prosthetic limb fully controlled by the body’s nervous system; a flexible probe that taps into gut-brain communication; an expansion microscopy technique that paves the way for biology labs around the world to perform nanoscale imaging; and advanced computational models that demonstrate how we see, hear, use language, and even think about what others are thinking. Equally transformative has been the McGovern Institute’s work in neuroimaging, uncovering the architecture of human thought and establishing markers that signal the early emergence of mental illness, before symptoms even appear.

Synergy and Open Science

“I am often asked what makes us different from other neuroscience institutes and programs around the world,” says Desimone. “My answer is simple. At the McGovern Institute, the whole is greater than the sum of its parts.”

Many discoveries at the McGovern Institute have depended on collaborations across multiple labs, ranging from biological engineering to human brain imaging and artificial intelligence. In modern brain research, significant advances often require the joint expertise of people working in neurophysiology, behavior, computational analysis, neuroanatomy, and molecular biology. More than a dozen different MIT departments are represented by McGovern faculty and graduate students, and this synergy has led to insights and innovations that are far greater than what any single discipline could achieve alone.

Also baked into the McGovern ethos is a spirit of open science, where newly developed technologies are shared with colleagues around the world. Through hospital partnerships, for example, McGovern researchers are testing their tools and therapeutic interventions in clinical settings, accelerating the translation of their discoveries into real-world solutions.

The McGovern Legacy  

Hundreds of scientific papers have emerged from McGovern labs over the past 25 years, but most faculty would argue that it’s the people, the young researchers, who truly define the McGovern Institute. Award-winning faculty often attract the brightest young minds, but many McGovern faculty also serve as mentors, creating a diverse and vibrant scientific community that is setting the global standard for brain research and its applications. Nancy Kanwisher ’80, PhD ’86, for example, has guided more than 70 doctoral students and postdocs who have gone on to become leading scientists around the world. Three of her former students, Evelina Fedorenko PhD ’07, Josh McDermott PhD ’06, and Rebecca Saxe PhD ’03, the John W. Jarve (1978) Professor of Brain and Cognitive Sciences, are now her colleagues at the McGovern Institute. Other McGovern alumni shared stories of mentorship, science, and real-world impact at the 25th anniversary symposium.

Nancy Kanwisher (center) with former students-turned-colleagues Evelina Fedorenko (left), Josh McDermott, and Rebecca Saxe (right). Photo: Steph Stevens

Looking to the future, the McGovern community is more committed than ever to unraveling the mysteries of the brain and making a meaningful difference in the lives of individuals on a global scale.

“By promoting team science, open communication, and cross-discipline partnerships,” says institute co-founder Lore Harp McGovern, “our culture demonstrates how individual expertise can be amplified through collective effort. I am honored to be the co-founder of this incredible institution – onward to the next 25 years!”

Looking under the hood at the brain’s language system

As a young girl growing up in the former Soviet Union, Evelina Fedorenko PhD ’07 studied several languages, including English, as her mother hoped that it would give her the chance to eventually move abroad for better opportunities.

Her language studies not only helped her establish a new life in the United States as an adult, but also led to a lifelong interest in linguistics and how the brain processes language. Now an associate professor of brain and cognitive sciences at MIT, Fedorenko studies the brain’s language-processing regions: how they arise, whether they are shared with other mental functions, and how each region contributes to language comprehension and production.

Fedorenko’s early work helped to identify the precise locations of the brain’s language-processing regions, and she has been building on that work to generate insight into how different neuronal populations in those regions implement linguistic computations.

“It took a while to develop the approach and figure out how to quickly and reliably find these regions in individual brains, given this standard problem of the brain being a little different across people,” she says. “Then we just kept going, asking questions like: Does language overlap with other functions that are similar to it? How is the system organized internally? Do different parts of this network do different things? There are dozens and dozens of questions you can ask, and many directions that we have pushed on.”

Among some of the more recent directions, she is exploring how the brain’s language-processing regions develop early in life, through studies of very young children, people with unusual brain architecture, and computational models known as large language models.

From Russia to MIT

Fedorenko grew up in the Russian city of Volgograd, which was then part of the Soviet Union. When the Soviet Union broke up in 1991, her mother, a mechanical engineer, lost her job, and the family struggled to make ends meet.

“It was a really intense and painful time,” Fedorenko recalls. “But one thing that was always very stable for me is that I always had a lot of love, from my parents, my grandparents, and my aunt and uncle. That was really important and gave me the confidence that if I worked hard and had a goal, that I could achieve whatever I dreamed about.”

Fedorenko did work hard in school, studying English, French, German, Polish, and Spanish, and she also participated in math competitions. As a 15-year-old, she spent a year attending high school in Alabama, as part of a program that placed students from the former Soviet Union with American families. She had been thinking about applying to universities in Europe but changed her plans when she realized the American higher education system offered more academic flexibility.

After being admitted to Harvard University with a full scholarship, she returned to the United States in 1998 and earned her bachelor’s degree in psychology and linguistics, while also working multiple jobs to send money home to help her family.

While at Harvard, she also took classes at MIT and ended up deciding to apply to the Institute for graduate school. For her PhD research at MIT, she worked with Ted Gibson, a professor of brain and cognitive sciences, and later, Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience. She began by using functional magnetic resonance imaging (fMRI) to study brain regions that appeared to respond preferentially to music, but she soon switched to studying brain responses to language.

She found that working with Kanwisher, who studies the functional organization of the human brain but hadn’t worked much on language before, helped her to build a research program free of potential biases baked into some of the early work on language processing in the brain.

“We really kind of started from scratch,” Fedorenko says, “combining the knowledge of language processing I have gained by working with Gibson and the rigorous neuroscience approaches that Kanwisher had developed when studying the visual system.”

After finishing her PhD in 2007, Fedorenko stayed at MIT for a few years as a postdoc funded by the National Institutes of Health, continuing her research with Kanwisher. During that time, she and Kanwisher developed techniques to identify language-processing regions in different people, and discovered new evidence that certain parts of the brain respond selectively to language. Fedorenko then spent five years as a research faculty member at Massachusetts General Hospital, before receiving an offer to join the faculty at MIT in 2019.

How the brain processes language

Since starting her lab at MIT’s McGovern Institute for Brain Research, Fedorenko and her trainees have made several discoveries that have helped to refine neuroscientists’ understanding of the brain’s language-processing regions, which are spread across the left frontal and temporal lobes of the brain.

In a series of studies, her lab showed that these regions are highly selective for language and are not engaged by activities such as listening to music, reading computer code, or interpreting facial expressions, all of which have been argued to share similarities with language processing.

“We’ve separated the language-processing machinery from various other systems, including the system for general fluid thinking, and the systems for social perception and reasoning, which support the processing of communicative signals, like facial expressions and gestures, and reasoning about others’ beliefs and desires,” Fedorenko says. “So that was a significant finding, that this system really is its own thing.”

More recently, Fedorenko has turned her attention to figuring out, in more detail, the functions of different parts of the language processing network. In one recent study, she identified distinct neuronal populations within these regions that appear to have different temporal windows for processing linguistic content, ranging from just one word up to six words.

She is also studying how language-processing circuits arise in the brain, with ongoing studies in which she and a postdoc in her lab are using fMRI to scan the brains of young children, observing how their language regions behave even before the children have fully learned to speak and understand language.

Large language models (similar to ChatGPT) can help with these types of developmental questions, as the researchers can better control the language inputs to the model and have continuous access to its abilities and representations at different stages of learning.

“You can train models in different ways, on different kinds of language, in different kinds of regimens. For example, training on simpler language first and then more complex language, or on language combined with some visual inputs. Then you can look at the performance of these language models on different tasks, and also examine changes in their internal representations across the training trajectory, to test which model best captures the trajectory of human language learning,” Fedorenko says.

To gain another window into how the brain develops language ability, Fedorenko launched the Interesting Brains Project several years ago. Through this project, she is studying people who experienced some type of brain damage early in life, such as a prenatal stroke, or brain deformation as a result of a congenital cyst. In some of these individuals, their conditions destroyed or significantly deformed the brain’s typical language-processing areas, but all of these individuals are cognitively indistinguishable from individuals with typical brains: They still learned to speak and understand language normally, and in some cases, they didn’t even realize that their brains were in some way atypical until they were adults.

“That study is all about plasticity and redundancy in the brain, trying to figure out what brains can cope with, and how,” Fedorenko says. “Are there many solutions to build a human mind, even when the neural infrastructure is so different-looking?”

To the brain, Esperanto and Klingon appear the same as English or Mandarin

Within the human brain, a network of regions has evolved to process language. These regions are consistently activated whenever people listen to their native language or any language in which they are proficient.

A new study by MIT researchers finds that this network also responds to languages that are completely invented, such as Esperanto, which was created in the late 1800s as a way to promote international communication, and even to languages made up for television shows such as “Star Trek” and “Game of Thrones.”

To study how the brain responds to these artificial languages, MIT neuroscientists convened nearly 50 speakers of these languages over a single weekend. Using functional magnetic resonance imaging (fMRI), the researchers found that when participants listened to a constructed language in which they were proficient, the same brain regions lit up as those activated when they processed their native language.

“We find that constructed languages very much recruit the same system as natural languages, which suggests that the key feature that is necessary to engage the system may have to do with the kinds of meanings that both kinds of languages can express,” says Evelina Fedorenko, an associate professor of neuroscience at MIT, a member of MIT’s McGovern Institute for Brain Research and the senior author of the study.

The findings help to define some of the key properties of language, the researchers say, and suggest that it’s not necessary for languages to have naturally evolved over a long period of time or to have a large number of speakers.

“It helps us narrow down this question of what a language is, and do it empirically, by testing how our brain responds to stimuli that might or might not be language-like,” says Saima Malik-Moraleda, an MIT postdoc and the lead author of the paper, which appears this week in the Proceedings of the National Academy of Sciences.

Convening the conlang community

Unlike natural languages, which evolve within communities and are shaped over time, constructed languages, or “conlangs,” are typically created by one person who decides what sounds will be used, how to label different concepts, and what the grammatical rules are.

Esperanto, the most widely spoken conlang, was created in 1887 by L.L. Zamenhof, who intended it to be used as a universal language for international communication. Currently, it is estimated that around 60,000 people worldwide are proficient in Esperanto.

In previous work, Fedorenko and her students have found that computer programming languages, such as Python — another type of invented language — do not activate the brain network that is used to process natural language. Instead, people who read computer code rely on the so-called multiple demand network, a brain system that is often recruited for difficult cognitive tasks.

Fedorenko and others have also investigated how the brain responds to other stimuli that share features with language, including music and nonverbal communication such as gestures and facial expressions.

“We spent a lot of time looking at all these various kinds of stimuli, finding again and again that none of them engage the language-processing mechanisms,” Fedorenko says. “So then the question becomes, what is it that natural languages have that none of those other systems do?”

That led the researchers to wonder if artificial languages like Esperanto would be processed more like programming languages or more like natural languages. Similar to programming languages, constructed languages are created by an individual for a specific purpose, without natural evolution within a community. However, unlike programming languages, both conlangs and natural languages can be used to convey meanings about the state of the external world or the speaker’s internal state.

To explore how the brain processes conlangs, the researchers invited speakers of Esperanto and several other constructed languages to MIT for a weekend conference in November 2022. The other languages included Klingon (from “Star Trek”), Na’vi (from “Avatar”), and two languages from “Game of Thrones” (High Valyrian and Dothraki). For all of these languages, there are texts available for people who want to learn the language, and for Esperanto, Klingon, and High Valyrian, there is even a Duolingo app available.

“It was a really fun event where all the communities came to participate, and over a weekend, we collected all the data,” says Malik-Moraleda, who co-led the data collection effort with former MIT postbac Maya Taliaferro, now a PhD student at New York University.

During that event, which also featured talks from several of the conlang creators, the researchers used fMRI to scan 44 conlang speakers as they listened to sentences from the constructed language in which they were proficient. The creators of these languages — who are co-authors on the paper — helped construct the sentences that were presented to the participants.

While in the scanner, the participants also either listened to or read sentences in their native language, and performed some nonlinguistic tasks for comparison. The researchers found that when people listened to a conlang, the same language regions in the brain were activated as when they listened to their native language.

Common features

The findings help to identify some of the key features that are necessary to recruit the brain’s language processing areas, the researchers say. One of the main characteristics driving language responses seems to be the ability to convey meanings about the interior and exterior world — a trait that is shared by natural and constructed languages, but not programming languages.

“All of the languages, both natural and constructed, express meanings related to inner and outer worlds. They refer to objects in the world, to properties of objects, to events,” Fedorenko says. “Whereas programming languages are much more similar to math. A programming language is a symbolic generative system that allows you to express complex meanings, but it’s a self-contained system: The meanings are highly abstract and mostly relational, and not connected to the real world that we experience.”

Some other characteristics of natural languages, which are not shared by constructed languages, don’t seem to be necessary to generate a response in the language network.

“It doesn’t matter whether the language is created and shaped over time by a community of speakers, because these constructed languages are not,” Malik-Moraleda says. “It doesn’t matter how old they are, because conlangs that are just a decade old engage the same brain regions as natural languages that have been around for many hundreds of years.”

To further refine the features of language that activate the brain’s language network, Fedorenko’s lab is now planning to study how the brain responds to a conlang called Lojban, which was created by the Logical Language Group in the 1990s and was designed to prevent ambiguity of meanings and promote more efficient communication.

The research was funded by MIT’s McGovern Institute for Brain Research, Brain and Cognitive Sciences Department, the Simons Center for the Social Brain, the Frederick A. and Carole J. Middleton Career Development Professorship, and the U.S. National Institutes of Health.

Ten years of bigger samples, better views

Nearly 150 years ago, scientists began to imagine how information might flow through the brain based on the shapes of neurons they had seen under the microscopes of the time. With today’s imaging technologies, scientists can zoom in much further, seeing the tiny synapses through which neurons communicate with one another and even the molecules the cells use to relay their messages. These inside views can spark new ideas about how healthy brains work and reveal important changes that contribute to disease.

McGovern Institute Investigator Edward Boyden. Photo: Justin Knight

This sharper view of biology is not just about the advances that have made microscopes more powerful than ever before. Using methodology developed in the lab of McGovern investigator Edward Boyden, researchers around the world are imaging samples that have been swollen to as much as 20 times their original size so their finest features can be seen more clearly.

“It’s a very different way to do microscopy,” says Boyden, who is also a Howard Hughes Medical Institute investigator and a member of the Yang Tan Collective at MIT. “In contrast to the last 300 years of bioimaging, where you use a lens to magnify an image of light from an object, we physically magnify objects themselves.” Once a tissue is expanded, Boyden says, researchers can see more even with widely available, conventional microscopy hardware.

Boyden’s team introduced this approach, which they named expansion microscopy (ExM), in 2015. Since then, they have been refining the method and adding to its capabilities, while researchers at MIT and beyond deploy it to learn about life on the smallest of scales.

“It’s spreading very rapidly throughout biology and medicine,” Boyden says. “It’s being applied to kidney disease, the fruit fly brain, plant seeds, the microbiome, Alzheimer’s disease, viruses, and more.”

Origins of ExM 

To develop expansion microscopy, Boyden and his team turned to hydrogels, materials with remarkable water-absorbing properties that had already been put to practical use: they’re layered inside disposable diapers to keep babies dry. Boyden’s lab hypothesized that hydrogels could retain their structure while they absorbed hundreds of times their original weight in water, expanding the space between their chemical components as they swell.

After some experimentation, Boyden’s team settled on four key steps to enlarging tissue samples for better imaging. First, the tissue must be infused with a hydrogel. The tissue’s components, its biomolecules, are anchored to the gel’s web-like matrix, linking them directly to the molecules that make up the gel. Then the tissue is chemically softened and water is added. As the hydrogel absorbs the water, it swells and the tissue expands, growing evenly so the relative positions of its components are preserved.

The first report on expansion microscopy, by Boyden and graduate students Fei Chen and Paul Tillberg, was published in the journal Science in 2015. In it, the team demonstrated that by spreading apart molecules that had been crowded inside cells, features that would have blurred together under a standard light microscope became separate and distinct. Light microscopes can discriminate between objects that are separated by about 300 nanometers—a limit imposed by the laws of physics. With expansion microscopy, Boyden’s group reported an effective resolution of about 70 nanometers, for a four-fold expansion.
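
The arithmetic behind those numbers is simple: physically expanding a sample divides the microscope’s effective resolution limit by the expansion factor. A quick back-of-the-envelope check in Python, using the figures quoted in this article:

def effective_resolution_nm(diffraction_limit_nm: float,
                            expansion_factor: float) -> float:
    """Expanding a sample N-fold resolves features originally N times closer."""
    return diffraction_limit_nm / expansion_factor

print(effective_resolution_nm(300, 4))   # 75.0 -> the ~70 nm reported in 2015
print(effective_resolution_nm(300, 20))  # 15.0 -> consistent with the <20 nm
                                         #         separations described below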

Boyden says this is a level of clarity that biologists need. “Biology is fundamentally, in the end, a nanoscale science,” he says. “Biomolecules are nanoscale, and the interactions between biomolecules are over nanoscale distances. Many of the most important problems in biology and medicine involve nanoscale questions.” Several kinds of sophisticated microscopes, each with its own advantages and disadvantages, can bring this kind of detail to light. But those methods are costly and require specialized skills, making them inaccessible for most researchers. “Expansion microscopy democratizes nanoimaging,” Boyden says. “Now anybody can go look at the building blocks of life and how they relate to each other.”

Empowering scientists

Since Boyden’s team introduced expansion microscopy in 2015, research groups around the world have published hundreds of papers reporting on discoveries they have made using expansion microscopy. For neuroscientists, the technique has lit up the intricacies of elaborate neural circuits, exposed how particular proteins organize themselves at and across synapses to facilitate communication between neurons, and uncovered changes associated with aging and disease.

It has been equally empowering for studies beyond the brain. Sabrina Absalon uses expansion microscopy every week in her lab at Indiana University School of Medicine to study the malaria parasite, a single-celled organism packed with specialized structures that enable it to infect and live inside its hosts. The parasite is so small that most of those structures can’t be seen with ordinary light microscopy. “So as a cell biologist, I’m losing the biggest tool to infer protein function, organelle architecture, morphology linked to function, and all those things—which is my eye,” she says. With expansion, she can not only see the organelles inside a malaria parasite, she can watch them assemble and follow what happens to them when the parasite divides. Understanding those processes, she says, could help drug developers find new ways to interfere with the parasite’s life cycle.

Longitudinally opened mosquito midguts prepared using MoTissU-ExM. Image: Sabrina Absalon

Absalon adds that the accessibility of expansion microscopy is particularly important in the field of parasitology, where a lot of research is happening in parts of the world where resources are limited. Workshops and training programs in Africa, South America, and Asia are ensuring the technology reaches scientists whose communities are directly impacted by malaria and other parasites. “Now they can get super-resolution imaging without very fancy equipment,” Absalon says.

Always Improving

Since 2015, Boyden’s interdisciplinary lab group has found a variety of creative ways to improve expansion microscopy and use it in new ways. Their standard technique today enables better labeling, bigger expansion factors, and higher resolution imaging. Cellular features less than 20 nanometers from one another can now be separated enough to appear distinct under a light microscope.

They’ve also adapted their protocols to work with a range of important sample types, from entire roundworms (popular among neuroscientists, developmental biologists, and other researchers) to clinical samples. For the latter, they’ve shown that expansion can help reveal subtle signs of disease, which could enable earlier or less costly diagnoses.

Originally, the group optimized its protocol for visualizing proteins inside cells, by labeling proteins of interest and anchoring them to the hydrogel prior to expansion. With a new way of processing samples, users can now restain their expanded samples with new labels for multiple rounds of imaging, so they can pinpoint the positions of dozens of different proteins in the same tissue. That means researchers can visualize how molecules are organized with respect to one another and how they might interact, or survey large sets of proteins to see, for example, what changes with disease.

Synaptic proteins and their associations to neuronal processes in the mouse primary somatosensory cortex imaged using expansion microscopy. Image: Boyden lab

But better views of proteins were just the beginning for expansion microscopy. “We want to see everything,” Boyden says. “We’d love to see every biomolecule there is, with precision down to atomic scale.” They’re not there yet—but with new probes and modified procedures, it’s now possible to see not just proteins, but also RNA and lipids in expanded tissue samples.

Labeling lipids, including those that form the membranes surrounding cells, means researchers can now see clear outlines of cells in expanded tissues. With the enhanced resolution afforded by expansion, even the slender projections of neurons can be traced through an image. Typically, researchers have relied on electron microscopy, which generates exquisitely detailed pictures but requires expensive equipment, to map the brain’s circuitry. “Now you can get images that look a lot like electron microscopy images, but on regular old light microscopes—the kind that everybody has access to,” Boyden says.

Boyden says expansion can be powerful in combination with other cutting-edge tools. When expanded samples are used with an ultra-fast imaging method called lattice light-sheet microscopy, developed by Eric Betzig, an HHMI investigator at the University of California, Berkeley, the entire brain of a fruit fly can be imaged at high resolution in just a few days.

And when RNA molecules are anchored within a hydrogel network and then sequenced in place, scientists can see exactly where inside cells the instructions for building specific proteins are positioned, which Boyden’s team demonstrated in a collaboration with Harvard University geneticist George Church and then-MIT professor Aviv Regev. “Expansion basically upgrades many other technologies’ resolutions,” Boyden says. “You’re doing mass-spec imaging, X-ray imaging, or Raman imaging? Expansion just improved your instrument.”

Expanding Possibilities

Ten years after the first demonstration of expansion microscopy’s power, Boyden and his team remain committed to making the method more powerful still. “We want to optimize it for different kinds of problems, and making technologies faster, better, and cheaper is always important,” he says. But the future of expansion microscopy will be propelled by innovators outside the Boyden lab, too. “Expansion is not only easy to do, it’s easy to modify—so lots of other people are improving expansion in collaboration with us, or even on their own,” Boyden says.

Boyden points to a group led by Silvio Rizzoli at the University Medical Center Göttingen in Germany that, collaborating with Boyden, has adapted the expansion protocol to discern the physical shapes of proteins. At the Korea Advanced Institute of Science and Technology, researchers led by Jae-Byum Chang, a former postdoctoral researcher in Boyden’s group, have worked out how to expand entire bodies of mouse embryos and young zebrafish, collaborating with Boyden to set the stage for examining developmental processes and long-distance neural connections with a new level of detail. And mapping connections within the brain’s dense neural circuits could become easier with light-microscopy based connectomics, an approach developed by Johann Danzl and colleagues at the Institute of Science and Technology in Austria that takes advantage of both the high resolution and molecular information that expansion microscopy can reveal.

“The beauty of expansion is that it lets you see a biological system down to its smallest building blocks,” Boyden says.

His team is intent on pushing the method to its physical limits, and anticipates new opportunities for discovery as they do. “If you can map the brain or any biological system at the level of individual molecules, you might be able to see how they all work together as a network—how life really operates,” he says.

Leslie Vosshall awarded the 2025 Scolnick Prize in Neuroscience

Today the McGovern Institute at MIT announces that the 2025 Edward M. Scolnick Prize in Neuroscience will be awarded to Leslie Vosshall, the Robin Chemers Neustein Professor at The Rockefeller University and Vice President and Chief Scientific Officer of the Howard Hughes Medical Institute. Vosshall is being recognized for her discovery of the neural mechanisms underlying mosquito host-seeking behavior. The Scolnick Prize is awarded annually by the McGovern Institute for outstanding achievements in neuroscience.

“Leslie Vosshall’s vision to apply decades of scientific know-how in a model insect to bear on one of the greatest human health threats, the mosquito, is awe-inspiring,” says McGovern Institute Director and chair of the selection committee, Robert Desimone. “Vosshall brought together academic and industry scientists to create the first fully annotated genome of the deadly Aedes aegypti mosquito and she became the first to apply powerful CRISPR-Cas9 editing to study this species.”

Vosshall was born in Switzerland, moved to the US as a child and worked throughout high school and college in her uncle’s laboratory, alongside Gerald Weissman, at the Marine Biological Laboratory at Woods Hole. During this time, she published a number of papers on cell aggregation and neutrophil signaling and received a BA in 1987 from Columbia University. She went to graduate school at The Rockefeller University where she first began working on the genetic model organism, the fruit fly Drosophila. Her mentor was Michael Young, who had just recently cloned the circadian rhythm gene period, work for which he later shared the Nobel Prize. Vosshall contributed to this work by showing that the gene timeless is required for rhythmic cycling of the PERIOD protein in and out of a cell’s nucleus and that this is required in only a subset of brain cells to drive circadian behaviors.

For her postdoctoral research, Vosshall returned to Columbia University in 1993 to join the laboratory of Richard Axel, also a future Nobel Laureate. There, Vosshall began her studies of olfaction and was one of the first to clone olfactory receptors in fruit flies. She mapped the expression pattern of each of the fly’s 60 or so olfactory receptors to individual sensory neurons and showed that each sensory neuron has a stereotyped projection into the brain. This work revealed that there is a topological map of brain activity responses for different odorants.

Vosshall started her own laboratory to study the mechanisms of olfaction and olfactory behavior in 2000, at The Rockefeller University. She rose through the ranks to receive tenure in 2006 and full professorship in 2010. Vosshall’s group was initially focused on the classic fruit fly model organism Drosophila but, in 2013, they showed that some of the same molecular mechanisms for olfaction in fruit flies are used by mosquitoes to find human hosts. From that point on, Vosshall rapidly applied her vast expertise in bioengineering to unravel the brain circuits underlying the behavior of the mosquito Aedes aegypti. This mosquito is responsible for transmission of yellow fever, dengue fever, Zika fever, and more, making it one of the deadliest animals to humankind.

Vosshall identified oils produced by the skin of some people that make them “mosquito magnets.” Photo: Alex Wild

Mosquitoes have evolved to prey specifically on humans and transmit millions of cases of deadly diseases around the globe. Vosshall’s laboratory is filled with mosquitoes in which her team induces various gene mutations to identify the molecular circuits that mosquitoes use to hunt and feed on humans. In 2022, Vosshall received press around the world for identifying oils produced by the skin of some people that make them “mosquito magnets.” Vosshall further showed that olfactory receptors have an unusual distribution pattern within mosquito antennae that allows the insects to detect a whole slew of human scents, in addition to their ability to detect humans’ warmth and breath. Her team has also unraveled the molecular basis of mosquitoes’ avoidance of DEET, identified a novel repellent, and found genes governing how mosquitoes choose where to lay eggs and how they resist drought. Vosshall’s brilliant application of genome engineering to understand a wide range of mosquito behaviors has profound implications for human health. Moreover, since she shifted her research to the mosquito, seven postdoctoral researchers that Vosshall mentored have established their own mosquito research laboratories at Boston University, Columbia University, Yale University, Johns Hopkins University, Princeton University, Florida International University, and the University of British Columbia.

Vosshall’s professional service is remarkable: she has served on innumerable committees at The Rockefeller University and has participated in outreach activities around the globe, even starring in the feature-length film “The Fly Room.” She became Vice President and Chief Scientific Officer of HHMI in 2022, and previously served as Associate Director and then Director of the Kavli Neural Systems Institute from 2015 to 2021. She has served as an editor for numerous journals, sat on the boards of directors of the Helen Hay Whitney Foundation and the McKnight Foundation, among others, and co-organized over a dozen conferences. Her achievements have been recognized by the Dickson Prize in Medicine (2024), the Perl-UNC Neuroscience Prize (2022), and the Pradel Research Award (2020). She is an elected member of the National Academy of Medicine, the National Academy of Sciences, the American Philosophical Society, and the American Association for the Advancement of Science.

The McGovern Institute will award the Scolnick Prize to Vosshall on May 9, 2025. At 4:00 pm she will deliver a lecture titled “Mosquitoes: neurobiology of the world’s most dangerous animal” to be followed by a reception at the McGovern Institute, 43 Vassar Street (building 46, room 3002) in Cambridge. The event is free and open to the public.

An ancient RNA-guided system could simplify delivery of gene editing therapies

A vast search of natural diversity has led scientists at MIT’s McGovern Institute and the Broad Institute of MIT and Harvard to uncover ancient systems with potential to expand the genome editing toolbox. These systems, which the researchers call TIGR (Tandem Interspaced Guide RNA) systems, use RNA to guide them to specific sites on DNA. TIGR systems can be reprogrammed to target any DNA sequence of interest, and they have distinct functional modules that can act on the targeted DNA. In addition to its modularity, TIGR is very compact compared to other RNA-guided systems, like CRISPR, which is a major advantage for delivering it in a therapeutic context.

These findings are reported online February 27, 2025, in the journal Science.

“This is a very versatile RNA-guided system with a lot of diverse functionalities,” says Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT, who led the research. The TIGR-associated (Tas) proteins that Zhang’s team found share a characteristic RNA-binding component that interacts with an RNA guide, which directs the protein to a specific site in the genome. Some Tas proteins cut the DNA at that site, using an adjacent DNA-cutting segment of the protein. That modularity could facilitate tool development, allowing researchers to swap useful new features into natural Tas proteins.

“Nature is pretty incredible,” says Zhang, who is also an investigator at the McGovern Institute and the Howard Hughes Medical Institute, a core member of the Broad Institute, a professor of brain and cognitive sciences and biological engineering at MIT, and co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT. “It’s got a tremendous amount of diversity, and we have been exploring that natural diversity to find new biological mechanisms and harnessing them for different applications to manipulate biological processes,” he says. Previously, Zhang’s team adapted bacterial CRISPR systems into gene-editing tools that have transformed modern biology. His team has also found a variety of programmable proteins, both from CRISPR systems and beyond.

In their new work, to find novel programmable systems, the team began by zeroing in on a structural feature of the CRISPR Cas9 protein that binds the enzyme’s RNA guide. That feature is key to what has made Cas9 such a powerful tool: “Being RNA-guided makes it relatively easy to reprogram, because we know how RNA binds to other DNA or other RNA,” Zhang explains. His team searched hundreds of millions of biological proteins with known or predicted structures, looking for any that shared a similar domain. To find more distantly related proteins, they used an iterative process: from Cas9, they identified a protein called IS110, which had previously been shown by others to bind RNA. They then zeroed in on the structural features of IS110 that enable RNA binding and repeated their search.
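
To make that search strategy concrete, here is a minimal, hypothetical sketch of such an iterative structural mining loop in Python. It assumes a Foldseek-style command-line tool for structure similarity search; the seed file, database name, and number of rounds are placeholders rather than details from the study.

```python
import subprocess
from pathlib import Path

def structural_hits(query_pdb: str, db: str, out_tsv: str) -> list[str]:
    """Run a Foldseek structural-similarity search and return hit IDs.

    Assumes Foldseek is installed; easy-search writes a tab-separated
    file whose first two columns are the query and target IDs.
    """
    subprocess.run(
        ["foldseek", "easy-search", query_pdb, db, out_tsv, "tmp"],
        check=True,
    )
    return [line.split("\t")[1] for line in Path(out_tsv).read_text().splitlines()]

# Iterative deep mining: seed with the Cas9 RNA-binding domain, then
# re-seed each round with the newly found, more distant relatives.
seeds = ["cas9_rna_binding_domain.pdb"]  # hypothetical seed structure
found: set[str] = set()
for round_idx in range(3):  # a few rounds of expansion, chosen arbitrarily
    next_seeds = []
    for seed in seeds:
        for hit in structural_hits(seed, "structure_db", f"hits_{round_idx}.tsv"):
            if hit not in found:
                found.add(hit)
                next_seeds.append(f"{hit}.pdb")  # assumes structures can be fetched by ID
    seeds = next_seeds
```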

At this point, the search had turned up so many distantly related proteins that the team turned to artificial intelligence to make sense of the list. “When you are doing iterative, deep mining, the resulting hits can be so diverse that they are difficult to analyze using standard phylogenetic methods, which rely on conserved sequence,” explains Guilhem Faure, a computational biologist in Zhang’s lab. With a protein large language model, the team was able to cluster the proteins they had found into groups according to their likely evolutionary relationships. One group stood apart from the rest, and its members were particularly intriguing because they were encoded by genes with regularly spaced repetitive sequences reminiscent of an essential component of CRISPR systems. These were the TIGR-Tas systems.
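
As a rough illustration of that clustering step, the sketch below groups proteins by their language-model embeddings rather than by sequence alignment. It assumes the per-protein embedding vectors have already been computed and saved (embeddings.npy is a placeholder), and the distance threshold is an arbitrary choice, not a value from the paper.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Hypothetical (n_proteins, d) array of mean-pooled protein language
# model embeddings, one vector per candidate protein.
embeddings = np.load("embeddings.npy")

# Cluster in embedding space, since the hits are too divergent for
# alignment-based phylogenetics. (scikit-learn >= 1.2 uses `metric`;
# older versions called this parameter `affinity`.)
labels = AgglomerativeClustering(
    n_clusters=None,
    distance_threshold=0.5,  # placeholder cutoff, tuned in practice
    metric="cosine",
    linkage="average",
).fit_predict(embeddings)

for group in np.unique(labels):
    members = np.flatnonzero(labels == group)
    print(f"cluster {group}: {len(members)} proteins")
```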

Zhang’s team discovered more than 20,000 different Tas proteins, mostly occurring in bacteria-infecting viruses. Sequences within each gene’s repetitive region—its TIGR array—encode RNA guides that interact with the RNA-binding part of the protein. In some proteins, the RNA-binding region is adjacent to a DNA-cutting part of the protein. Others appear to bind to other proteins, which suggests they might help direct those proteins to DNA targets.
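
The repeat-and-spacer logic of such an array can be pictured with a small parsing example: given a known repeat sequence, the variable segments between repeat copies are the candidate guide-encoding sequences. The sequences below are invented for illustration and are not from the study.

```python
def extract_guides(array_seq: str, repeat: str) -> list[str]:
    """Split a tandem-repeat array into the variable segments between
    repeat copies, i.e., the stretches that would encode distinct RNA
    guides. Assumes the repeat sequence is already known exactly.
    """
    segments = array_seq.split(repeat)
    return [s for s in segments[1:-1] if s]  # drop sequence flanking the array

# Hypothetical array: three repeat copies separated by two guide-encoding segments.
repeat = "GTTTAAGAG"
array_seq = repeat + "ACGTACGTACGT" + repeat + "TTGGCCAATT" + repeat
print(extract_guides(array_seq, repeat))  # ['ACGTACGTACGT', 'TTGGCCAATT']
```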

Zhang and his team experimented with dozens of Tas proteins, demonstrating that some can be programmed to make targeted cuts to DNA in human cells. As they think about developing TIGR-Tas systems into programmable tools, the researchers are encouraged by features that could make those tools particularly flexible and precise.

They note that CRISPR systems can only be directed to segments of DNA that are flanked by short motifs known as PAMs (protospacer adjacent motifs). TIGR-Tas proteins, in contrast, have no such requirement. “This means theoretically, any site in the genome should be targetable,” says scientific advisor Rhiannon Macrae. The team’s experiments also show that TIGR systems have what Faure calls a “dual-guide system,” interacting with both strands of the DNA double helix to home in on their target sequences, which should ensure they act only where they are directed by their RNA guide. What’s more, Tas proteins are compact—a quarter the size of Cas9, on average—making them easier to deliver, which could overcome a major obstacle to the therapeutic deployment of gene-editing tools.

Excited by their discovery, Zhang’s team is now investigating the natural role of TIGR systems in viruses as well as how they can be adapted for research or therapeutics. They have determined the molecular structure of one of the Tas proteins they found to work in human cells, and will use that information to guide their efforts to make it more efficient. Additionally, they note connections between TIGR-Tas systems and certain RNA-processing proteins in human cells. “I think there’s more there to study in terms of what some of those relationships may be, and it may help us better understand how these systems are used in humans,” Zhang says.

This work was supported by the Helen Hay Whitney Foundation, Howard Hughes Medical Institute, K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics, Broad Institute Programmable Therapeutics Gift Donors, Pershing Square Foundation, William Ackman, and Neri Oxman, the Phillips family, J. and P. Poitras, and the BT Charitable Foundation.

How nature organizes itself, from brain cells to ecosystems

McGovern Associate Investigator Ila Fiete. Photo: Caitlin Cunningham

Look around, and you’ll see it everywhere: the way trees form branches, the way cities divide into neighborhoods, the way the brain organizes into regions. Nature loves modularity—a limited number of self-contained units that combine in different ways to perform many functions. But how does this organization arise? Does it follow a detailed genetic blueprint, or can these structures emerge on their own?

A new study from McGovern Associate Investigator Ila Fiete suggests a surprising answer.

In findings published today in Nature, Fiete, a professor of brain and cognitive sciences and director of the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, reports that a mathematical model called peak selection can explain how modules emerge without strict genetic instructions. Her team’s findings, which apply to brain systems and ecosystems, help explain how modularity occurs across nature, no matter the scale.

Joining two big ideas

“Scientists have debated how modular structures form. One hypothesis suggests that various genes are turned on at different locations to begin or end a structure. This explains how insect embryos develop body segments, with genes turning on or off at specific concentrations of a smooth chemical gradient in the insect egg,” says Fiete, who is the senior author of the paper. Mikail Khona, a former graduate student and K. Lisa Yang ICoN Center Graduate Fellow, and postdoctoral associate Sarthak Chandra also led the study.

Another idea, inspired by mathematician Alan Turing, suggests that a structure could emerge from competition—small-scale interactions can create repeating patterns, like the spots on a cheetah or the ripples in sand dunes.

Both ideas work well in some cases, but fail in others. The new research suggests that nature need not pick one approach over the other. The authors propose a simple mathematical principle called peak selection, showing that when a smooth gradient is paired with local interactions that are competitive, modular structures emerge naturally. “In this way, biological systems can organize themselves into sharp modules without detailed top-down instruction,” says Chandra.
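
A loose toy model in Python, emphatically not the model in the paper, can make the principle tangible: give each unit along a line a noisy, smoothly graded preference, restrict it to a few discrete allowed “scales,” and let neighboring units pull one another toward agreement. All quantities below (number of units, scales, coupling strength) are arbitrary choices for illustration.

```python
import numpy as np

# Toy illustration: a smooth but noisy gradient, a discrete set of
# allowed "scales," and local neighbor interactions together yield
# sharp, contiguous modules.
rng = np.random.default_rng(0)
n_units, n_scales = 200, 4
gradient = np.linspace(0.1, 0.9, n_units) + rng.normal(0, 0.08, n_units)
scales = np.linspace(0.1, 0.9, n_scales)  # the discrete allowed scales

# Start from each unit's individually preferred scale; the noise makes
# the boundaries between regions ragged.
state = np.argmin((gradient[:, None] - scales[None, :]) ** 2, axis=1)
coupling = 0.02  # weight on agreeing with neighbors

def local_energy(i: int, s: int) -> float:
    mismatch = (scales[s] - gradient[i]) ** 2
    neighbors = [j for j in (i - 1, i + 1) if 0 <= j < n_units]
    disagreement = sum(state[j] != s for j in neighbors)
    return mismatch + coupling * disagreement

for _ in range(20):  # greedy relaxation sweeps
    for i in range(n_units):
        state[i] = min(range(n_scales), key=lambda s: local_energy(i, s))

boundaries = np.flatnonzero(np.diff(state)) + 1
print("sharp module boundaries at units:", boundaries)
```

Quantizing the noisy gradient on its own produces ragged, speckled regions; it is the local agreement term that sharpens them into contiguous blocks with clean boundaries, which is the flavor of the gradient-plus-local-interaction argument above.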

Modular systems in the brain

The researchers tested their idea on grid cells, which play a critical role in spatial navigation as well as the storage of episodic memories. Grid cells fire in a repeating triangular pattern as animals move through space, but they don’t all work at the same scale—they are organized into distinct modules, each responsible for mapping space at slightly different resolutions.

A visual depiction of two different modules in grid cells, used to map space at slightly different resolutions. Image: Fiete Lab

No one knows how these modules form, but Fiete’s model shows that gradual variations in cellular properties along one dimension in the brain, combined with local neural interactions, could explain the entire structure. The grid cells naturally sort themselves into distinct groups with clear boundaries, without external maps or genetic programs telling them where to go. “Our work explains how grid cell modules could emerge. The explanation tips the balance toward the possibility of self-organization. It predicts that there might be no gene or intrinsic cell property that jumps when the grid cell scale jumps to another module,” notes Khona.

Modular systems in nature

The same principle applies beyond neuroscience. Imagine a landscape where temperature and rainfall vary gradually across space. You might expect species to be spread across the region just as smoothly. But in reality, ecosystems often form species clusters with sharp boundaries—distinct ecological “neighborhoods” that don’t overlap.

Fiete’s study suggests why: Local competition, cooperation, and predation between species interact with the global environmental gradients to create natural separations, even when the underlying conditions change gradually. This phenomenon can be explained using peak selection—and suggests that the same principle that shapes brain circuits could also be at play in forests and oceans.

A self-organizing world

One of the researchers’ most striking findings is that modularity in these systems is remarkably robust. Change the size of the system, and the number of modules stays the same; they just scale up or down. That means a mouse brain and a human brain could use the same fundamental rules to form their navigation circuits, just at different sizes.

The model also makes testable predictions. If it’s correct, grid cell modules should follow simple spacing ratios. In ecosystems, species distributions should form distinct clusters even without sharp environmental shifts.

Fiete notes that their work adds another conceptual framework to biology. “Peak selection can inform future experiments, not only in grid cell research but across developmental biology.”