Ten years of bigger samples, better views

Nearly 150 years ago, scientists began to imagine how information might flow through the brain based on the shapes of neurons they had seen under the microscopes of the time. With today’s imaging technologies, scientists can zoom in much further, seeing the tiny synapses through which neurons communicate with one another and even the molecules the cells use to relay their messages. These inside views can spark new ideas about how healthy brains work and reveal important changes that contribute to disease.

McGovern Institute Investigator Edward Boyden. Photo: Justin Knight

This sharper view of biology is not just about the advances that have made microscopes more powerful than ever before. Using methodology developed in the lab of McGovern investigator Edward Boyden, researchers around the world are imaging samples that have been swollen to as much as 20 times their original size so their finest features can be seen more clearly.

“It’s a very different way to do microscopy,” says Boyden, who is also a Howard Hughes Medical Institute investigator and a member of the Yang Tan Collective at MIT. “In contrast to the last 300 years of bioimaging, where you use a lens to magnify an image of light from an object, we physically magnify objects themselves.” Once a tissue is expanded, Boyden says, researchers can see more even with widely available, conventional microscopy hardware.

Boyden’s team introduced this approach, which they named expansion microscopy (ExM), in 2015. Since then, they have been refining the method and adding to its capabilities, while researchers at MIT and beyond deploy it to learn about life on the smallest of scales.

“It’s spreading very rapidly throughout biology and medicine,” Boyden says. “It’s being applied to kidney disease, the fruit fly brain, plant seeds, the microbiome, Alzheimer’s disease, viruses, and more.”

Origins of ExM 

To develop expansion microscopy, Boyden and his team turned to hydrogels: water-absorbing materials that had already been put to practical use inside disposable diapers, where they keep babies dry. Boyden’s lab hypothesized that hydrogels could retain their structure as they absorbed hundreds of times their original weight in water, expanding the space between their chemical components as they swelled.

After some experimentation, Boyden’s team settled on four key steps for enlarging tissue samples for better imaging. First, the tissue is infused with a hydrogel. Second, the tissue’s biomolecules are anchored to the gel’s web-like matrix, linking them directly to the molecules that make up the gel. Third, the tissue is chemically softened. Finally, water is added: as the hydrogel absorbs it, the gel swells and the tissue expands, growing evenly so that the relative positions of its components are preserved.

The first report on expansion microscopy, by Boyden and graduate students Fei Chen and Paul Tillberg, was published in the journal Science in 2015. In it, the team demonstrated that by spreading apart molecules that had been crowded inside cells, features that would have blurred together under a standard light microscope became separate and distinct. Light microscopes can discriminate between objects separated by about 300 nanometers, a limit imposed by the laws of physics. With a four-fold expansion, Boyden’s group reported an effective resolution of about 70 nanometers.
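The resolution gain follows directly from the expansion factor. As a quick sketch of the arithmetic (the helper function and its name are ours; the 300-nanometer figure is the diffraction limit mentioned in the text):

```python
DIFFRACTION_LIMIT_NM = 300  # approximate resolving limit of a standard light microscope

def effective_resolution_nm(expansion_factor):
    """Smallest pre-expansion feature separation that becomes resolvable after expansion."""
    return DIFFRACTION_LIMIT_NM / expansion_factor

print(effective_resolution_nm(4))   # 75.0 nm, close to the ~70 nm reported in 2015
print(effective_resolution_nm(20))  # 15.0 nm, in line with newer sub-20 nm protocols
```

The same relationship explains why pushing expansion factors from 4x toward 20x brings features separated by tens of nanometers into view.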

Boyden says this is a level of clarity that biologists need. “Biology is fundamentally, in the end, a nanoscale science,” he says. “Biomolecules are nanoscale, and the interactions between biomolecules are over nanoscale distances. Many of the most important problems in biology and medicine involve nanoscale questions.” Several kinds of sophisticated microscopes, each with its own advantages and disadvantages, can bring this kind of detail to light. But those methods are costly and require specialized skills, making them inaccessible to most researchers. “Expansion microscopy democratizes nanoimaging,” Boyden says. “Now anybody can go look at the building blocks of life and how they relate to each other.”

Empowering scientists

Since Boyden’s team introduced expansion microscopy in 2015, research groups around the world have published hundreds of papers reporting on discoveries they have made using expansion microscopy. For neuroscientists, the technique has lit up the intricacies of elaborate neural circuits, exposed how particular proteins organize themselves at and across synapses to facilitate communication between neurons, and uncovered changes associated with aging and disease.

It has been equally empowering for studies beyond the brain. Sabrina Absalon uses expansion microscopy every week in her lab at Indiana University School of Medicine to study the malaria parasite, a single-celled organism packed with specialized structures that enable it to infect and live inside its hosts. The parasite is so small that most of those structures can’t be seen with ordinary light microscopy. “So as a cell biologist, I’m losing the biggest tool to infer protein function, organelle architecture, morphology, linked to function, and all those things–which is my eye,” she says. With expansion, she can not only see the organelles inside a malaria parasite but also watch them assemble and follow what happens to them when the parasite divides. Understanding those processes, she says, could help drug developers find new ways to interfere with the parasite’s life cycle.

Longitudinally opened mosquito midguts prepared using MoTissU-ExM. Image: Sabrina Absalon

Absalon adds that the accessibility of expansion microscopy is particularly important in the field of parasitology, where a lot of research is happening in parts of the world where resources are limited. Workshops and training programs in Africa, South America, and Asia are ensuring the technology reaches scientists whose communities are directly impacted by malaria and other parasites. “Now they can get super-resolution imaging without very fancy equipment,” Absalon says.

Always improving

Since 2015, Boyden’s interdisciplinary lab group has found a variety of creative ways to improve expansion microscopy and use it in new ways. Their standard technique today enables better labeling, bigger expansion factors, and higher resolution imaging. Cellular features less than 20 nanometers from one another can now be separated enough to appear distinct under a light microscope.

They’ve also adapted their protocols to work with a range of important sample types, from entire roundworms (popular among neuroscientists, developmental biologists, and other researchers) to clinical samples. In the latter regard, they’ve shown that expansion can help reveal subtle signs of disease, which could enable earlier or less costly diagnoses.

Originally, the group optimized its protocol for visualizing proteins inside cells, by labeling proteins of interest and anchoring them to the hydrogel prior to expansion. With a new way of processing samples, users can now restain their expanded samples with new labels for multiple rounds of imaging, so they can pinpoint the positions of dozens of different proteins in the same tissue. That means researchers can visualize how molecules are organized with respect to one another and how they might interact, or survey large sets of proteins to see, for example, what changes with disease.

Synaptic proteins and their associations to neuronal processes in the mouse primary somatosensory cortex imaged using expansion microscopy. Image: Boyden lab

But better views of proteins were just the beginning for expansion microscopy. “We want to see everything,” Boyden says. “We’d love to see every biomolecule there is, with precision down to atomic scale.” They’re not there yet—but with new probes and modified procedures, it’s now possible to see not just proteins, but also RNA and lipids in expanded tissue samples.

Labeling lipids, including those that form the membranes surrounding cells, means researchers can now see clear outlines of cells in expanded tissues. With the enhanced resolution afforded by expansion, even the slender projections of neurons can be traced through an image. Typically, researchers have relied on electron microscopy, which generates exquisitely detailed pictures but requires expensive equipment, to map the brain’s circuitry. “Now you can get images that look a lot like electron microscopy images, but on regular old light microscopes—the kind that everybody has access to,” Boyden says.

Boyden says expansion can be powerful in combination with other cutting-edge tools. When expanded samples are imaged with lattice light-sheet microscopy, an ultra-fast imaging method developed by Eric Betzig, an HHMI investigator at the University of California, Berkeley, the entire brain of a fruit fly can be captured at high resolution in just a few days.

And when RNA molecules are anchored within a hydrogel network and then sequenced in place, scientists can see exactly where inside cells the instructions for building specific proteins are positioned, which Boyden’s team demonstrated in a collaboration with Harvard University geneticist George Church and then MIT professor Aviv Regev. “Expansion basically upgrades many other technologies’ resolutions,” Boyden says. “You’re doing mass-spec imaging, X-ray imaging, or Raman imaging? Expansion just improved your instrument.”

Expanding possibilities

Ten years after the first demonstration of expansion microscopy’s power, Boyden and his team are committed to making the method more powerful still. “We want to optimize it for different kinds of problems, and making technologies faster, better, and cheaper is always important,” he says. But the future of expansion microscopy will be propelled by innovators outside the Boyden lab, too. “Expansion is not only easy to do, it’s easy to modify—so lots of other people are improving expansion in collaboration with us, or even on their own,” Boyden says.

Boyden points to a group led by Silvio Rizzoli at the University Medical Center Göttingen in Germany that, collaborating with Boyden, has adapted the expansion protocol to discern the physical shapes of proteins. At the Korea Advanced Institute of Science and Technology, researchers led by Jae-Byum Chang, a former postdoctoral researcher in Boyden’s group, have worked out how to expand entire bodies of mouse embryos and young zebrafish, collaborating with Boyden to set the stage for examining developmental processes and long-distance neural connections with a new level of detail. And mapping connections within the brain’s dense neural circuits could become easier with light-microscopy-based connectomics, an approach developed by Johann Danzl and colleagues at the Institute of Science and Technology Austria that takes advantage of both the high resolution and molecular information that expansion microscopy can reveal.

“The beauty of expansion is that it lets you see a biological system down to its smallest building blocks,” Boyden says.

His team is intent on pushing the method to its physical limits, and anticipates new opportunities for discovery as they do. “If you can map the brain or any biological system at the level of individual molecules, you might be able to see how they all work together as a network—how life really operates,” he says.

Leslie Vosshall awarded the 2025 Scolnick Prize in Neuroscience

Today the McGovern Institute at MIT announces that the 2025 Edward M. Scolnick Prize in Neuroscience will be awarded to Leslie Vosshall, the Robin Chemers Neustein Professor at The Rockefeller University and Vice President and Chief Scientific Officer of the Howard Hughes Medical Institute. Vosshall is being recognized for her discovery of the neural mechanisms underlying mosquito host-seeking behavior. The Scolnick Prize is awarded annually by the McGovern Institute for outstanding achievements in neuroscience.

“Leslie Vosshall’s vision to bring decades of scientific know-how in a model insect to bear on one of the greatest human health threats, the mosquito, is awe-inspiring,” says Robert Desimone, McGovern Institute director and chair of the selection committee. “Vosshall brought together academic and industry scientists to create the first fully annotated genome of the deadly Aedes aegypti mosquito, and she became the first to apply powerful CRISPR-Cas9 editing to study this species.”

Vosshall was born in Switzerland, moved to the US as a child, and worked throughout high school and college in her uncle’s laboratory, alongside Gerald Weissmann, at the Marine Biological Laboratory in Woods Hole. During this time, she published a number of papers on cell aggregation and neutrophil signaling, and she received a BA from Columbia University in 1987. She went to graduate school at The Rockefeller University, where she first began working on the genetic model organism the fruit fly Drosophila. Her mentor was Michael Young, who had recently cloned the circadian rhythm gene period, work for which he later shared the Nobel Prize. Vosshall contributed to this work by showing that the gene timeless is required for rhythmic cycling of the PERIOD protein in and out of a cell’s nucleus, and that this cycling is required in only a subset of brain cells to drive circadian behaviors.

For her postdoctoral research, Vosshall returned to Columbia University in 1993 to join the laboratory of Richard Axel, also a future Nobel Laureate. There, Vosshall began her studies of olfaction and was one of the first to clone olfactory receptors in fruit flies. She mapped the expression pattern of each of the fly’s 60 or so olfactory receptors to individual sensory neurons and showed that each sensory neuron has a stereotyped projection into the brain. This work revealed that there is a topological map of brain activity responses for different odorants.

Vosshall started her own laboratory to study the mechanisms of olfaction and olfactory behavior in 2000, at The Rockefeller University. She rose through the ranks to receive tenure in 2006 and full professorship in 2010. Vosshall’s group was initially focused on the classic fruit fly model organism Drosophila but, in 2013, they showed that some of the same molecular mechanisms for olfaction in fruit flies are used by mosquitoes to find human hosts. From that point on, Vosshall rapidly applied her vast expertise in bioengineering to unravel the brain circuits underlying the behavior of the mosquito Aedes aegypti. This mosquito is responsible for transmission of yellow fever, dengue fever, Zika fever and more, making it one of the deadliest animals to humankind.

Close-up of mosquito on human skin.
Vosshall identified oils produced by the skin of some people that make them “mosquito magnets.” Photo: Alex Wild

Mosquitoes have evolved to specifically prey on humans and transmit millions of cases of deadly diseases around the globe. Vosshall’s laboratory is filled with mosquitoes in which her team induces various gene mutations to identify the molecular circuits that mosquitoes use to hunt and feed on humans. In 2022, Vosshall received press around the world for identifying oils produced by the skin of some people that make them “mosquito magnets.” Vosshall further showed that olfactory receptors have an unusual distribution pattern within the antennae of mosquitoes that allows them to detect a whole slew of human scents, in addition to sensing the warmth and breath of humans. Vosshall’s team has also unraveled the molecular basis of mosquitoes’ avoidance of DEET, identified a novel repellent, and identified genes that govern where mosquitoes choose to lay their eggs and how they resist drought. Vosshall’s brilliant application of genome engineering to understand a wide range of mosquito behaviors has profound implications for human health. Moreover, since shifting her research to the mosquito, seven postdoctoral researchers that Vosshall mentored have established their own mosquito research laboratories at Boston University, Columbia University, Yale University, Johns Hopkins University, Princeton University, Florida International University, and the University of British Columbia.

Vosshall’s professional service is remarkable – she has served on innumerable committees at Rockefeller University and has participated in outreach activities around the globe, even starring in the feature-length film “The Fly Room.” She began serving as the Vice President and Chief Scientific Officer of HHMI in 2022 and previously served as Associate Director and Director of the Kavli Neural Systems Institute from 2015 to 2021. She has served as an editor for numerous journals, on the Board of Directors for the Helen Hay Whitney Foundation, the McKnight Foundation and more, and co-organized over a dozen conferences. Her achievements have been recognized by the Dickson Prize in Medicine (2024), the Perl-UNC Neuroscience Prize (2022), and the Pradel Research Award (2020). She is an elected member of the National Academy of Medicine, National Academy of Sciences, American Philosophical Society, and American Association for the Advancement of Science.

The McGovern Institute will award the Scolnick Prize to Vosshall on May 9, 2025. At 4:00 pm she will deliver a lecture titled “Mosquitoes: neurobiology of the world’s most dangerous animal” to be followed by a reception at the McGovern Institute, 43 Vassar Street (building 46, room 3002) in Cambridge. The event is free and open to the public.

An ancient RNA-guided system could simplify delivery of gene editing therapies

A vast search of natural diversity has led scientists at MIT’s McGovern Institute and the Broad Institute of MIT and Harvard to uncover ancient systems with potential to expand the genome editing toolbox. These systems, which the researchers call TIGR (Tandem Interspaced Guide RNA) systems, use RNA to guide them to specific sites on DNA. TIGR systems can be reprogrammed to target any DNA sequence of interest, and they have distinct functional modules that can act on the targeted DNA. In addition to its modularity, TIGR is very compact compared to other RNA-guided systems, like CRISPR, which is a major advantage for delivering it in a therapeutic context.

These findings were reported online on February 27, 2025, in the journal Science.

“This is a very versatile RNA-guided system with a lot of diverse functionalities,” says Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT who led the research. The TIGR-associated (Tas) proteins that Zhang’s team found share a characteristic RNA-binding component that interacts with an RNA guide that directs it to a specific site in the genome. Some cut the DNA at that site, using an adjacent DNA-cutting segment of the protein. That modularity could facilitate tool development, allowing researchers to swap useful new features into natural Tas proteins.

“Nature is pretty incredible,” says Zhang, who is also an investigator at the McGovern Institute and the Howard Hughes Medical Institute, a core member of the Broad Institute, a professor of brain and cognitive sciences and biological engineering at MIT, and co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT. “It’s got a tremendous amount of diversity, and we have been exploring that natural diversity to find new biological mechanisms and harnessing them for different applications to manipulate biological processes,” he says. Previously, Zhang’s team adapted bacterial CRISPR systems into gene editing tools that have transformed modern biology. His team has also found a variety of programmable proteins, both from CRISPR systems and beyond.

In their new work, to find novel programmable systems, the team began by zeroing in on a structural feature of the CRISPR Cas9 protein that binds to the enzyme’s RNA guide. That is a key feature that has made Cas9 such a powerful tool: “Being RNA-guided makes it relatively easy to reprogram, because we know how RNA binds to other DNA or other RNA,” Zhang explains. His team searched hundreds of millions of biological proteins with known or predicted structures, looking for any that shared a similar domain. To find more distantly related proteins, they used an iterative process: from Cas9, they identified a protein called IS110, which had previously been shown by others to bind RNA. They then zeroed in on the structural features of IS110 that enable RNA binding and repeated their search.

At this point, the search had turned up so many distantly related proteins that the team turned to artificial intelligence to make sense of the list. “When you are doing iterative, deep mining, the resulting hits can be so diverse that they are difficult to analyze using standard phylogenetic methods, which rely on conserved sequence,” explains Guilhem Faure, a computational biologist in Zhang’s lab. With a protein large language model, the team was able to cluster the proteins they had found into groups according to their likely evolutionary relationships. One group stood apart from the rest, and its members were particularly intriguing because they were encoded by genes with regularly spaced repetitive sequences reminiscent of an essential component of CRISPR systems. These were the TIGR-Tas systems.
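The embedding-and-clustering step can be illustrated with a toy sketch. In the real pipeline, each protein sequence would be embedded by a protein large language model; here the vectors are synthetic stand-ins, and a minimal k-means routine stands in for whatever clustering method the team actually used:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-ins for protein-language-model embeddings: two
# "families" of proteins whose embedding vectors sit around different
# centers in an 8-dimensional space (synthetic values, not real data).
family_a = rng.normal(loc=0.0, scale=0.3, size=(10, 8))
family_b = rng.normal(loc=3.0, scale=0.3, size=(10, 8))
embeddings = np.vstack([family_a, family_b])

def kmeans(X, k, iters=25):
    """Minimal k-means over the rows of X, with deterministic seeding from the data."""
    centers = X[[0, len(X) - 1]] if k == 2 else X[:k].copy()
    for _ in range(iters):
        # assign each point to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

labels = kmeans(embeddings, k=2)
```

With embeddings this well separated, the two synthetic families fall cleanly into two clusters; real protein embeddings are far messier, which is why grouping distant homologs required the language-model representation.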

Zhang’s team discovered more than 20,000 different Tas proteins, mostly occurring in bacteria-infecting viruses. Sequences within each gene’s repetitive region—its TIGR array—encode an RNA guide that interacts with the RNA-binding part of the protein. In some, the RNA-binding region is adjacent to a DNA-cutting part of the protein. Others appear to bind to other proteins, which suggests they might help direct those proteins to DNA targets.

Zhang and his team experimented with dozens of Tas proteins, demonstrating that some can be programmed to make targeted cuts to DNA in human cells. As they think about developing TIGR-Tas systems into programmable tools, the researchers are encouraged by features that could make those tools particularly flexible and precise.

They note that CRISPR systems can only be directed to segments of DNA that are flanked by short motifs known as PAMs (protospacer adjacent motifs). TIGR-Tas proteins, in contrast, have no such requirement. “This means theoretically, any site in the genome should be targetable,” says scientific advisor Rhiannon Macrae. The team’s experiments also show that TIGR systems have what Faure calls a “dual-guide system,” interacting with both strands of the DNA double helix to home in on their target sequences, which should ensure they act only where they are directed by their RNA guide. What’s more, Tas proteins are compact—a quarter of the size of Cas9 on average—making them easier to deliver, which could overcome a major obstacle to therapeutic deployment of gene editing tools.
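The delivery advantage comes down to simple size arithmetic. As a back-of-the-envelope sketch (the SpCas9 length and the AAV capacity below are commonly cited approximations, not figures from the study; only the quarter-size ratio comes from the text):

```python
# Back-of-the-envelope payload arithmetic (illustrative approximations:
# SpCas9 is roughly 1,368 amino acids; Tas proteins average about a
# quarter of Cas9's size, per the text).
CAS9_AA = 1368
TAS_AA = CAS9_AA // 4
NT_PER_AA = 3  # three nucleotides of coding sequence per amino acid

cas9_gene_kb = CAS9_AA * NT_PER_AA / 1000
tas_gene_kb = TAS_AA * NT_PER_AA / 1000

print(f"Cas9 coding sequence: ~{cas9_gene_kb:.1f} kb")       # ~4.1 kb
print(f"Typical Tas coding sequence: ~{tas_gene_kb:.1f} kb")  # ~1.0 kb
```

A coding sequence around 1 kb leaves far more room within size-limited vectors such as AAV (roughly 4.7 kb of packaging capacity) for guides, promoters, and other regulatory elements.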

Excited by their discovery, Zhang’s team is now investigating the natural role of TIGR systems in viruses as well as how they can be adapted for research or therapeutics. They have determined the molecular structure of one of the Tas proteins they found to work in human cells, and will use that information to guide their efforts to make it more efficient. Additionally, they note connections between TIGR-Tas systems and certain RNA-processing proteins in human cells. “I think there’s more there to study in terms of what some of those relationships may be, and it may help us better understand how these systems are used in humans,” Zhang says.

This work was supported by the Helen Hay Whitney Foundation, Howard Hughes Medical Institute, K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics, Broad Institute Programmable Therapeutics Gift Donors, Pershing Square Foundation, William Ackman, and Neri Oxman, the Phillips family, J. and P. Poitras, and the BT Charitable Foundation.

How nature organizes itself, from brain cells to ecosystems

McGovern Associate Investigator Ila Fiete. Photo: Caitlin Cunningham

Look around, and you’ll see it everywhere: the way trees form branches, the way cities divide into neighborhoods, the way the brain organizes into regions. Nature loves modularity—a limited number of self-contained units that combine in different ways to perform many functions. But how does this organization arise? Does it follow a detailed genetic blueprint, or can these structures emerge on their own?

A new study from McGovern Associate Investigator Ila Fiete suggests a surprising answer.

In findings published today in Nature, Fiete, a professor of brain and cognitive sciences and director of the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, reports that a mathematical model called peak selection can explain how modules emerge without strict genetic instructions. Her team’s findings, which apply to brain systems and ecosystems, help explain how modularity occurs across nature, no matter the scale.

Joining two big ideas

“Scientists have debated how modular structures form. One hypothesis suggests that various genes are turned on at different locations to begin or end a structure. This explains how insect embryos develop body segments, with genes turning on or off at specific concentrations of a smooth chemical gradient in the insect egg,” says Fiete, who is the senior author of the paper. Mikail Khona, a former graduate student and K. Lisa Yang ICoN Center Graduate Fellow, and postdoctoral associate Sarthak Chandra also led the study.

Another idea, inspired by mathematician Alan Turing, suggests that a structure could emerge from competition—small-scale interactions can create repeating patterns, like the spots on a cheetah or the ripples in sand dunes.

Both ideas work well in some cases, but fail in others. The new research suggests that nature need not pick one approach over the other. The authors propose a simple mathematical principle called peak selection, showing that when a smooth gradient is paired with local interactions that are competitive, modular structures emerge naturally. “In this way, biological systems can organize themselves into sharp modules without detailed top-down instruction,” says Chandra.
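A toy simulation conveys the flavor of peak selection (this is our illustrative sketch, not the paper's actual model): each unit along a line reads a smooth but noisy gradient, while local interactions repeatedly pull it toward the consensus of its neighbors, snapped to the nearest of a few candidate values. Sharp, plateau-like modules emerge.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
cue = np.linspace(1.0, 2.0, n) + rng.normal(0, 0.05, n)  # smooth but noisy gradient
peaks = np.array([1.0, 1.25, 1.5, 1.75, 2.0])            # candidate module values

def snap(values):
    """Give each unit the candidate peak value nearest to it."""
    return peaks[np.argmin(np.abs(values[:, None] - peaks), axis=1)]

state = snap(cue)  # the gradient alone yields modules with ragged, noisy borders
kernel = np.ones(9)
norm = np.convolve(np.ones(n), kernel, mode="same")  # neighborhood sizes at the edges
for _ in range(30):
    # local competition: each unit conforms to its neighborhood average,
    # then snaps back to the nearest candidate peak
    state = snap(np.convolve(state, kernel, mode="same") / norm)
```

The gradient sets roughly where each module sits; the local interactions sharpen the borders. That division of labor between a global cue and local competition is the essence of the peak-selection principle.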

Modular systems in the brain

The researchers tested their idea on grid cells, which play a critical role in spatial navigation as well as the storage of episodic memories. Grid cells fire in a repeating triangular pattern as animals move through space, but they don’t all work at the same scale—they are organized into distinct modules, each responsible for mapping space at slightly different resolutions.

A visual depiction of two different modules in grid cells, used to map space at slightly different resolutions. Image: Fiete Lab

No one knows how these modules form, but Fiete’s model shows that gradual variations in cellular properties along one dimension in the brain, combined with local neural interactions, could explain the entire structure. The grid cells naturally sort themselves into distinct groups with clear boundaries, without external maps or genetic programs telling them where to go. “Our work explains how grid cell modules could emerge. The explanation tips the balance toward the possibility of self-organization. It predicts that there might be no gene or intrinsic cell property that jumps when the grid cell scale jumps to another module,” notes Khona.

Modular systems in nature

The same principle applies beyond neuroscience. Imagine a landscape where temperature and rainfall vary gradually across space. You might expect species to be spread out and to vary just as smoothly over this region. But in reality, ecosystems often form species clusters with sharp boundaries—distinct ecological “neighborhoods” that don’t overlap.

Fiete’s study suggests why: Local competition, cooperation, and predation between species interact with the global environmental gradients to create natural separations, even when the underlying conditions change gradually. This phenomenon can be explained using peak selection—and suggests that the same principle that shapes brain circuits could also be at play in forests and oceans.

A self-organizing world

One of the researchers’ most striking findings is that modularity in these systems is remarkably robust. Change the size of the system, and the number of modules stays the same; the modules simply scale up or down. That means a mouse brain and a human brain could use the same fundamental rules to form their navigation circuits, just at different sizes.

The model also makes testable predictions. If it’s correct, grid cell modules should follow simple spacing ratios. In ecosystems, species distributions should form distinct clusters even without sharp environmental shifts.

Fiete notes that their work adds another conceptual framework to biology. “Peak selection can inform future experiments, not only in grid cell research but across developmental biology.”

Seeing more in expansion microscopy

In biology, seeing can lead to understanding, and researchers in Edward Boyden’s lab at MIT’s McGovern Institute are committed to bringing life into sharper focus. With a pair of new methods, they are expanding the capabilities of expansion microscopy—a high-resolution imaging technique the group introduced in 2015—so researchers everywhere can see more when they look at cells and tissues under a light microscope.

McGovern Institute Investigator Edward Boyden. Photo: Justin Knight

“We want to see everything, so we’re always trying to improve it,” says Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT.  “A snapshot of all life, down to its fundamental building blocks, is really the goal.” Boyden is also a Howard Hughes Medical Institute investigator and a member of the Yang Tan Collective at MIT.

With new ways of staining their samples and processing images, users of expansion microscopy can now see vivid outlines of the shapes of cells in their images and pinpoint the locations of many different proteins inside a single tissue sample with resolution that far exceeds that of conventional light microscopy. These advances, both reported in the journal Nature Communications, enable new ways of tracing the slender projections of neurons and visualizing spatial relationships between molecules that contribute to health and disease.

Expansion microscopy uses a water-absorbing hydrogel to physically expand biological tissues. After a tissue sample has been permeated by the hydrogel, water is added. The hydrogel swells as it absorbs the water, gently pulling the tissue’s molecules away from one another while preserving their relative locations. As a result, crowded cellular components appear separate and distinct when the expanded tissue is viewed under a light microscope. The approach, which can be performed using standard laboratory equipment, has made super-resolution imaging accessible to most research teams.

Since first developing expansion microscopy, Boyden and his team have continued to enhance the method—increasing its resolution, simplifying the procedure, devising new features, and integrating it with other tools.

Visualizing cell membranes

One of the team’s latest advances is a method called ultrastructural membrane expansion microscopy (umExM), which they described in the February 12 issue of Nature Communications. With it, biologists can use expansion microscopy to visualize the thin membranes that form the boundaries of cells and enclose the organelles inside them. These membranes, built mostly of molecules called lipids, have been notoriously difficult to densely label in intact tissues for imaging with light microscopy. Now, researchers can use umExM to study cellular ultrastructure and organization within tissues.

Tay Shin, a former graduate student in Boyden’s lab and a J. Douglas Tan Fellow in the Tan-Yang Center for Autism Research at MIT, led the development of umExM. “Our goal was very simple at first: Let’s label membranes in intact tissue, much like how an electron microscope uses osmium tetroxide to label membranes to visualize the membranes in tissue,” he says. “It turns out that it’s extremely hard to achieve this.”

The team first needed to design a label that would make the membranes in tissue samples visible under a light microscope. “We almost had to start from scratch,” Shin says. “We really had to think about the fundamental characteristics of the probe that is going to label the plasma membrane, and then think about how to incorporate them into expansion microscopy.” That meant engineering a molecule that would associate with the lipids that make up the membrane and link it to both the hydrogel used to expand the tissue sample and a fluorescent molecule for visibility.

After optimizing the expansion microscopy protocol for membrane visualization and extensively testing and improving potential probes, Shin found success one late night in the lab. He placed an expanded tissue sample on a microscope and saw sharp outlines of cells.

Traceability of umExM. 3D rendering of 20 manually traced and reconstructed myelinated axons in the corpus callosum. Image: Ed Boyden

Because of the high resolution enabled by expansion, the method allowed Boyden’s team to identify even the tiny dendrites that protrude from neurons and clearly see the long extensions of their slender axons. That kind of clarity could help researchers follow individual neurons’ paths within the densely interconnected networks of the brain, the researchers say.

Boyden calls tracing these neural processes “a top priority of our time in brain science.” Such tracing has traditionally relied heavily on electron microscopy, which requires specialized skills and expensive equipment. Shin says that because expansion microscopy uses a standard light microscope, it is far more accessible to laboratories worldwide.

Shin and Boyden point out that users of expansion microscopy can learn even more about their samples when they pair the new ability to reveal lipid membranes with fluorescent labels that show where specific proteins are located. “That’s important, because proteins do a lot of the work of the cell, but you want to know where they are with respect to the cell’s structure,” Boyden says.

One sample, many proteins

To that end, researchers no longer have to choose just a few proteins to see when they use expansion microscopy. With a new method called multiplexed expansion revealing (multiExR), users can now label and see more than 20 different proteins in a single sample. Biologists can use the method to visualize sets of proteins, see how they are organized with respect to one another, and generate new hypotheses about how they might interact.

A key to the new method, reported November 9, 2024, in Nature Communications, is the ability to repeatedly link fluorescently labeled antibodies to specific proteins in an expanded tissue sample, image them, then strip these away and use a new set of antibodies to reveal a new set of proteins. Postdoctoral fellow Jinyoung Kang fine-tuned each step of this process, ensuring tissue samples stayed intact and the labeled proteins produced bright signals in each round of imaging.

After capturing many images of a single sample, Boyden’s team faced another challenge: how to ensure those images were in perfect alignment so they could be overlaid with one another, producing a final picture that showed the precise positions of all of the proteins that had been labeled and visualized one by one.

Expansion microscopy lets biologists visualize some of cells’ tiniest features—but to find the same features over and over again during multiple rounds of imaging, Boyden’s team first needed to home in on a larger structure. “These fields of view are really tiny, and you’re trying to find this really tiny field of view in a gel that’s actually become quite large once you’ve expanded it,” explains Margaret Schroeder, a graduate student in Boyden’s lab who, with Kang, led the development of multiExR.

“Here’s one of the most famous receptors in all of neuroscience, hiding out in one of the most famous molecular hallmarks of pathology in neuroscience.” – Ed Boyden

To navigate to the right spot every time, the team decided to label the blood vessels that pass through each tissue sample and use these as a guide. To enable precise alignment, certain fine details also needed to consistently appear in every image; for this, the team labeled several structural proteins. With these reference points and customized imaging processing software, the team was able to integrate all of their images of a sample into one, revealing how proteins that had been visualized separately were arranged relative to one another.
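For readers curious how images from successive staining rounds can be brought into register, here is a minimal sketch of the general idea; it is not the team's actual software. It estimates the translation between two images of the same reference channel (such as labeled blood vessels) by FFT-based phase correlation:

```python
# A minimal sketch (assumed approach, not the multiExR pipeline) of
# aligning two imaging rounds using a shared reference channel:
# estimate the pixel shift between the images by phase correlation.
import numpy as np

def estimate_shift(ref: np.ndarray, moving: np.ndarray) -> tuple:
    """Return the (row, col) shift to apply to `moving` so that it
    aligns with `ref`, estimated from the cross-power spectrum."""
    f = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative offsets
    shifts = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

# Toy example: a bright "vessel" landmark, shifted between rounds
ref = np.zeros((64, 64)); ref[30, 30] = 1.0
mov = np.zeros((64, 64)); mov[33, 28] = 1.0
print(estimate_shift(ref, mov))  # shift that re-aligns the second round
```

In practice, alignment across rounds also has to handle rotation and local distortion, which is why the team labeled structural proteins as additional fine-grained reference points.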

The team used multiExR to look at amyloid plaques—the aberrant protein clusters that notoriously develop in brains affected by Alzheimer’s disease. “We could look inside those amyloid plaques and ask, what’s inside of them? And because we can stain for many different proteins, we could do a high throughput exploration,” Boyden says. The team chose 23 different proteins to view in their images. The approach revealed some surprises, such as the presence of certain neurotransmitter receptors (AMPARs). “Here’s one of the most famous receptors in all of neuroscience, and there it is, hiding out in one of the most famous molecular hallmarks of pathology in neuroscience,” says Boyden. It’s unclear what role, if any, the receptors play in Alzheimer’s disease—but the finding illustrates how the ability to see more inside cells can expose unexpected aspects of biology and raise new questions for research.

Funding for this work came from MIT, Lisa Yang and Y. Eva Tan, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, the US Army, Cancer Research UK, the New York Stem Cell Foundation, the National Institutes of Health, Lore McGovern, Good Ventures, Schmidt Futures, Samsung, MathWorks, the Collamore-Rogers Fellowship, the National Science Foundation, Alana Foundation USA, the Halis Family Foundation, Lester A. Gimpelson, Donald and Glenda Mattes, David B. Emmes, Thomas A. Stocky, Avni U. Shah, Kathleen Octavio, Good Ventures/Open Philanthropy, and the European Union’s Horizon 2020 program.

Evelina Fedorenko receives Troland Award from National Academy of Sciences

The National Academy of Sciences (NAS) announced today that McGovern Investigator Evelina Fedorenko will receive a 2025 Troland Research Award for her groundbreaking contributions towards understanding the language network in the human brain.

The Troland Research Award is given annually to recognize unusual achievement by early-career researchers within the broad spectrum of experimental psychology.

McGovern Investigator Ev Fedorenko (center) looks at a young subject’s brain scan in the Martinos Imaging Center at MIT. Photo: Alexandra Sokhina

Fedorenko, who is an associate professor of brain and cognitive sciences at MIT, is interested in how minds and brains create language. Her lab is unpacking the internal architecture of the brain’s language system and exploring the relationship between language and various cognitive, perceptual, and motor systems. Her novel methods combine precise measures of an individual’s brain organization with innovative computational modeling to make fundamental discoveries about the computations that underlie the uniquely human ability for language.

Fedorenko has shown that the language network is selective for language processing over diverse non-linguistic processes that have been argued to share computational demands with language, such as math, music, and social reasoning. Her work has also demonstrated that syntactic processing is not localized to a particular region within the language network, and that every brain region that responds to syntactic processing is at least as sensitive to word meanings.

She has also shown that representations from neural network language models, such as ChatGPT, are similar to those in the human brain’s language areas. At the same time, Fedorenko has highlighted that although language models can master linguistic rules and patterns, they are less effective at using language in real-world situations. In the human brain, that kind of functional competence is distinct from formal language competence, she says, requiring not just language-processing circuits but also brain areas that store knowledge of the world, reason, and interpret social interactions. Contrary to a prominent view that language is essential for thinking, Fedorenko argues that language is not the medium of thought and is primarily a tool for communication.

A probabilistic atlas of the human language network based on >800 individuals (center) and sample individual language networks, which illustrate inter-individual variability in the precise locations and shapes of the language areas. Image: Ev Fedorenko

Ultimately, Fedorenko’s cutting-edge work is uncovering the computations and representations that fuel language processing in the brain. She will receive the Troland Award this April, during the annual meeting of the NAS in Washington DC.


Scientists engineer CRISPR enzymes that evade the immune system

The core components of CRISPR-based genome-editing therapies are bacterial proteins called nucleases that can stimulate unwanted immune responses in people, increasing the chances of side effects and making these therapies potentially less effective.

Researchers at the Broad Institute of MIT and Harvard and Cyrus Biotechnology have now engineered two CRISPR nucleases, Cas9 and Cas12, to mask them from the immune system. The team identified protein sequences on each nuclease that trigger the immune system and used computational modeling to design new versions that evade immune recognition. The engineered enzymes had similar gene-editing efficiency and reduced immune responses compared to standard nucleases in mice.

Appearing today in Nature Communications, the findings could help pave the way for safer, more efficient gene therapies. The study was led by Feng Zhang, a core institute member at the Broad and an Investigator at the McGovern Institute for Brain Research at MIT.

“As CRISPR therapies enter the clinic, there is a growing need to ensure that these tools are as safe as possible, and this work tackles one aspect of that challenge,” said Zhang, who is also a co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics, the James and Patricia Poitras Professor of Neuroscience, and a professor at MIT. He is an Investigator at the Howard Hughes Medical Institute.

Rumya Raghavan, a graduate student in Zhang’s lab when the study began, and Mirco Julian Friedrich, a postdoctoral scholar in Zhang’s lab, were co-first authors on the study.

“People have known for a while that Cas9 causes an immune response, but we wanted to pinpoint which parts of the protein were being recognized by the immune system and then engineer the proteins to get rid of those parts while retaining its function,” said Raghavan.

“Our goal was to use this information to create not only a safer therapy, but one that is potentially even more effective because it is not being eliminated by the immune system before it can do its job,” added Friedrich.

In search of immune triggers

Many CRISPR-based therapies use nucleases derived from bacteria. About 80 percent of people have pre-existing immunity to these proteins through everyday exposure to the bacteria that produce them, but scientists didn’t know which parts of the nucleases the immune system recognized.

To find out, Zhang’s team used a specialized type of mass spectrometry to identify and analyze the Cas9 and Cas12 protein fragments recognized by immune cells. For each of two nucleases — Cas9 from Streptococcus pyogenes and Cas12 from Staphylococcus aureus — they identified three short sequences, about eight amino acids long, that evoked an immune response. They then partnered with Cyrus Biotechnology, a company co-founded by University of Washington biochemist David Baker that develops structure-based computational tools to design proteins that evade the immune response. After Zhang’s team identified immunogenic sequences in Cas9 and Cas12, Cyrus used these computational approaches to design versions of the nucleases that did not include the immune-triggering sequences.
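The screening step can be pictured with a toy sketch. Both the protein sequence and the "known epitope" below are invented for illustration; they are not Cas9's actual sequence or the peptides the team identified:

```python
# A toy sketch of the screening idea: scan a protein for short peptides
# known to be recognized by immune cells, so those regions can be
# flagged for redesign. All sequences here are hypothetical.
def find_epitopes(protein: str, epitopes: set, k: int = 8) -> list:
    """Return (position, peptide) for each k-mer in `protein` that
    matches a known immunogenic peptide."""
    return [(i, protein[i:i+k]) for i in range(len(protein) - k + 1)
            if protein[i:i+k] in epitopes]

protein = "MKRNYILGLDIGITSVGYGIID"   # made-up sequence
known = {"DIGITSVG"}                 # made-up immunogenic 8-mer
print(find_epitopes(protein, known))
```

The real work, of course, lies in what comes next: redesigning the flagged regions so the protein no longer triggers immunity while still cutting DNA efficiently.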

Zhang’s lab used prediction software to validate that the new nucleases were less likely to trigger immune responses. Next, the team engineered a panel of new nucleases informed by these predictions and tested the most promising candidates in human cells and in mice that were genetically modified to bear key components of the human immune system. In both cases, they found that the engineered enzymes resulted in significantly reduced immune responses compared to the original nucleases, but still cut DNA at the same efficiency.

Minimally immunogenic nucleases are just one part of safer gene therapies, Zhang’s team says. In the future, they hope their methods may also help scientists design delivery vehicles to evade the immune system.

This study was funded in part by the Poitras Center for Psychiatric Disorders Research, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, and the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT.

For healthy hearing, timing matters

When soundwaves reach the inner ear, neurons there pick up the vibrations and alert the brain. Encoded in their signals is a wealth of information that enables us to follow conversations, recognize familiar voices, appreciate music, and quickly locate a ringing phone or crying baby.

McGovern Institute Associate Investigator Josh McDermott. Photo: Justin Knight

Neurons send signals by emitting spikes, also known as action potentials—brief changes in voltage that propagate along nerve fibers. Remarkably, auditory neurons can fire hundreds of spikes per second, and time their spikes with exquisite precision to match the oscillations of incoming soundwaves.

With powerful new models of human hearing, scientists at MIT’s McGovern Institute have determined that this precise timing is vital for some of the most important ways we make sense of auditory information, including recognizing voices and localizing sounds.

The findings, reported December 4, 2024, in the journal Nature Communications, show how machine learning can help neuroscientists understand how the brain uses auditory information in the real world. McGovern Investigator Josh McDermott, who led the research, explains that his team’s models better equip researchers to study the consequences of different types of hearing impairment and devise more effective interventions.

Science of sound

The nervous system’s auditory signals are timed so precisely that researchers have long suspected timing is important to our perception of sound. Soundwaves oscillate at rates that determine their pitch: low-pitched sounds travel in slow waves, whereas high-pitched sound waves oscillate more frequently. The auditory nerve, which relays information from sound-detecting hair cells in the ear to the brain, generates electrical spikes that correspond to the frequency of these oscillations. “The action potentials in an auditory nerve get fired at very particular points in time relative to the peaks in the stimulus waveform,” explains McDermott, who is also an associate professor of brain and cognitive sciences at MIT.

This relationship, known as phase-locking, requires neurons to time their spikes with sub-millisecond precision. But scientists haven’t really known how informative these temporal patterns are to the brain. Beyond being scientifically intriguing, McDermott says, the question has important clinical implications: “If you want to design a prosthesis that provides electrical signals to the brain to reproduce the function of the ear, it’s arguably pretty important to know what kinds of information in the normal ear actually matter,” he says.
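Phase-locking can be illustrated with a toy simulation. The tone frequency, preferred phase, and jitter values below are assumed for illustration, not parameters from the study:

```python
# A toy illustration of phase-locking: a simulated auditory nerve fiber
# fires near a fixed phase of each cycle of a pure tone, so its spike
# times carry the tone's frequency with sub-millisecond precision.
# All parameters are assumed, illustrative values.
import numpy as np

freq_hz = 440.0              # an A4 tone; period ~2.27 ms
preferred_phase = np.pi / 2  # fire near each waveform peak
jitter_ms = 0.1              # sub-millisecond timing jitter

rng = np.random.default_rng(0)
cycles = np.arange(20)
spike_times_ms = (cycles + preferred_phase / (2 * np.pi)) / freq_hz * 1000
spike_times_ms += rng.normal(0, jitter_ms, size=cycles.size)

intervals = np.diff(spike_times_ms)
print(f"mean inter-spike interval: {intervals.mean():.2f} ms "
      f"(stimulus period: {1000 / freq_hz:.2f} ms)")
```

Because each spike lands near the same phase of every cycle, the intervals between spikes recover the period of the stimulus—exactly the temporal structure the team later degraded in their model to test its importance.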

This has been difficult to study experimentally: Animal models can’t offer much insight into how the human brain extracts structure in language or music, and the auditory nerve is inaccessible for study in humans. So McDermott and graduate student Mark Saddler turned to artificial neural networks.

Artificial hearing

Neuroscientists have long used computational models to explore how sensory information might be decoded by the brain, but until recent advances in computing power and machine learning methods, these models were limited to simulating simple tasks. “One of the problems with these prior models is that they’re often way too good,” says Saddler, who is now at the Technical University of Denmark. For example, a computational model tasked with identifying the higher pitch in a pair of simple tones is likely to perform better than people who are asked to do the same thing. “This is not the kind of task that we do every day in hearing,” Saddler points out. “The brain is not optimized to solve this very artificial task.” This mismatch limited the insights that could be drawn from this prior generation of models.

To better understand the brain, Saddler and McDermott wanted to challenge a hearing model to do things that people use their hearing for in the real world, like recognizing words and voices. That meant developing an artificial neural network to simulate the parts of the brain that receive input from the ear. The network was given input from some 32,000 simulated sound-detecting sensory neurons and then optimized for various real-world tasks.

The researchers showed that their model replicated human hearing well—better than any previous model of auditory behavior, McDermott says. In one test, the artificial neural network was asked to recognize words and voices within dozens of types of background noise, from the hum of an airplane cabin to enthusiastic applause. Under every condition, the model performed very similarly to humans.

“The ability to link patterns of firing in the auditory nerve with behavior opens a lot of doors.” – Josh McDermott

When the team degraded the timing of the spikes in the simulated ear, however, their model could no longer match humans’ ability to recognize voices or identify the locations of sounds. For example, while McDermott’s team had previously shown that people use pitch to help them identify people’s voices, the model revealed that this ability is lost without precisely timed signals. “You need quite precise spike timing in order to both account for human behavior and to perform well on the task,” Saddler says. That suggests that the brain uses precisely timed auditory signals because they aid these practical aspects of hearing.

The team’s findings demonstrate how artificial neural networks can help neuroscientists understand how the information extracted by the ear influences our perception of the world, both when hearing is intact and when it is impaired. “The ability to link patterns of firing in the auditory nerve with behavior opens a lot of doors,” McDermott says.

“Now that we have these models that link neural responses in the ear to auditory behavior, we can ask, ‘If we simulate different types of hearing loss, what effect is that going to have on our auditory abilities?’” McDermott says. “That will help us better diagnose hearing loss, and we think there are also extensions of that to help us design better hearing aids or cochlear implants.” For example, he says, “The cochlear implant is limited in various ways—it can do some things and not others. What’s the best way to set up that cochlear implant to enable you to mediate behaviors? You can, in principle, use the models to tell you that.”

Personal interests can influence how children’s brains respond to language

A new study from the McGovern Institute shows how interests can modulate language processing in children’s brains and paves the way for personalized brain research.

The paper, which appears in Imaging Neuroscience, was conducted in the lab of McGovern Institute Investigator John Gabrieli, and led by senior author Anila D’Mello, a former McGovern postdoctoral fellow and current assistant professor at the University of Texas Southwestern Medical Center and the University of Texas at Dallas.

“Traditional studies give subjects identical stimuli to avoid confounding the results,” says Gabrieli, who is also the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT.

“However, our research tailored stimuli to each child’s interest, eliciting stronger—and more consistent—activity patterns in the brain’s language regions across individuals.” – John Gabrieli

Funded by the Hock E. Tan and K. Lisa Yang Center for Autism Research in MIT’s Yang Tan Collective, this work unveils a new paradigm that challenges current methods and shows how personalization can be a powerful strategy in neuroscience. The paper’s co-first authors are Halie Olson, a postdoctoral associate at the McGovern Institute, and Kristina Johnson, an assistant professor at Northeastern University and former doctoral student at the MIT Media Lab. “Our research integrates participants’ lived experiences into the study design,” says Johnson. “This approach not only enhances the validity of our findings but also captures the diversity of individual perspectives, often overlooked in traditional research.”

Taking interest into account

When it comes to language, our interests are like operators behind the switchboard. They guide what we talk about and who we talk to. Research suggests that interests are also potent motivators and can help improve language skills. For instance, children score higher on reading tests when the material covers topics that are interesting to them.

But neuroscience has shied away from using personal interests to study the brain, especially in the realm of language. This is mainly because interests, which vary between people, could throw a wrench into experimental control—a core principle that drives scientists to limit factors that can muddle the results.

Gabrieli, D’Mello, Olson, and Johnson ventured into this unexplored territory. The team wondered if tailoring language stimuli to children’s interests might lead to higher responses in language regions of the brain. “Our study is unique in its approach to control the kind of brain activity our experiments yield, rather than control the stimuli we give subjects,” says D’Mello. “This stands in stark contrast to most neuroimaging studies that control the stimuli but might introduce differences in each subject’s level of interest in the material.”

Researchers Halie Olson (left), Kristina Johnson (center), and Anila D’Mello (right). Photo: Caitlin Cunningham

In their recent study, the authors recruited a cohort of 20 children to investigate how personal interests affected the way the brain processes language. Caregivers described their child’s interests to the researchers, spanning baseball, train lines, Minecraft, and musicals. During the study, children listened to audio stories tuned to their unique interests. They were also presented with audio stories about nature (this was not an interest among the children) for comparison. To capture brain activity patterns, the team used functional magnetic resonance imaging (fMRI), which measures changes in blood flow caused by underlying neural activity.

New insights into the brain

“We found that, when children listened to stories about topics they were really interested in, they showed stronger neural responses in language areas than when they listened to generic stories that weren’t tailored to their interests,” says Olson. “Not only does this tell us how interests affect the brain, but it also shows that personalizing our experimental stimuli can have a profound impact on neuroimaging results.”

The researchers noticed a particularly striking result. “Even though the children listened to completely different stories, their brain activation patterns were more overlapping with their peers when they listened to idiosyncratic stories compared to when they listened to the same generic stories about nature,” says D’Mello. This, she notes, points to how interests can boost both the magnitude and consistency of signals in language regions across subjects without changing how these areas communicate with each other.
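The notion of "more overlapping" activation can be made concrete with a simulation. The data below are entirely hypothetical—this is a sketch of how inter-subject consistency might be quantified, not the study's analysis code:

```python
# A minimal sketch (hypothetical data, not the study's analysis) of one
# way to quantify inter-subject consistency: correlate voxelwise
# activation maps between pairs of subjects and compare conditions.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels = 20, 500
shared_signal = rng.normal(size=n_voxels)  # response common across children

def simulate_maps(shared_weight: float) -> np.ndarray:
    """Each row is one subject's activation map: a shared component
    plus subject-specific noise."""
    noise = rng.normal(size=(n_subjects, n_voxels))
    return shared_weight * shared_signal + noise

def mean_pairwise_corr(maps: np.ndarray) -> float:
    corr = np.corrcoef(maps)  # subject-by-subject correlation matrix
    upper = corr[np.triu_indices(n_subjects, k=1)]
    return float(upper.mean())

interest = simulate_maps(shared_weight=1.0)  # stronger shared response
generic = simulate_maps(shared_weight=0.3)   # weaker shared response
print(f"interest: {mean_pairwise_corr(interest):.2f}, "
      f"generic: {mean_pairwise_corr(generic):.2f}")
```

In this toy setup, a stronger shared component yields higher pairwise correlations across subjects—mirroring the study's finding that personalized stories produced more consistent activation patterns than identical generic ones.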


Individual activation maps from three participants showing increased engagement of language regions for personally interesting versus generic narratives. Image courtesy of the researchers.

Gabrieli noted another finding: “In addition to the stronger engagement of language regions for content of interest, there was also stronger activation in brain regions associated with reward and also with self-reflection.” Personal interests are individually relevant and can be rewarding, potentially driving higher activation in these regions during personalized stories.

These personalized paradigms might be particularly well-suited to studies of the brain in unique or neurodivergent populations. Indeed, the team is already applying these methods to study language in the brains of autistic children.

This study breaks new ground in neuroscience and serves as a prototype for future work that personalizes research to unearth further knowledge of the brain. In doing so, scientists can compile a more complete understanding of the type of information that is processed by specific brain circuits and more fully grasp complex functions such as language.

Season’s Greetings from the McGovern Institute

For this year’s holiday greeting, we asked the McGovern Institute community what comes to mind when they think of the winter holidays. More than 100 words were submitted for the project. The words were fed into ChatGPT to generate our holiday “prediction.” And a text-to-music generator (Udio) converted the words into a holiday song.

With special thanks to Jarrod Hicks and Jamal Williams from the McDermott lab for the inspiration…and to AI for pushing the boundaries of science and imagination.

Video credits:
Jacob Pryor (animation)
JR Narrows, Space Lute (sound design)