Ten years of bigger samples, better views

Nearly 150 years ago, scientists began to imagine how information might flow through the brain based on the shapes of neurons they had seen under the microscopes of the time. With today’s imaging technologies, scientists can zoom in much further, seeing the tiny synapses through which neurons communicate with one another and even the molecules the cells use to relay their messages. These inside views can spark new ideas about how healthy brains work and reveal important changes that contribute to disease.

McGovern Institute Investigator Edward Boyden. Photo: Justin Knight

This sharper view of biology is not just about the advances that have made microscopes more powerful than ever before. Using methodology developed in the lab of McGovern investigator Edward Boyden, researchers around the world are imaging samples that have been swollen to as much as 20 times their original size so their finest features can be seen more clearly.

“It’s a very different way to do microscopy,” says Boyden, who is also a Howard Hughes Medical Institute investigator and a member of the Yang Tan Collective at MIT. “In contrast to the last 300 years of bioimaging, where you use a lens to magnify an image of light from an object, we physically magnify objects themselves.” Once a tissue is expanded, Boyden says, researchers can see more even with widely available, conventional microscopy hardware.

Boyden’s team introduced this approach, which they named expansion microscopy (ExM), in 2015. Since then, they have been refining the method and adding to its capabilities, while researchers at MIT and beyond deploy it to learn about life on the smallest of scales.

“It’s spreading very rapidly throughout biology and medicine,” Boyden says. “It’s being applied to kidney disease, the fruit fly brain, plant seeds, the microbiome, Alzheimer’s disease, viruses, and more.”

Origins of ExM 

To develop expansion microscopy, Boyden and his team turned to hydrogels, water-absorbing materials that had already been put to practical use: they’re layered inside disposable diapers to keep babies dry. Boyden’s lab hypothesized that hydrogels could retain their structure while absorbing hundreds of times their original weight in water, expanding the space between their chemical components as they swell.

After some experimentation, Boyden’s team settled on four key steps to enlarging tissue samples for better imaging. First, the tissue is infused with a hydrogel. The tissue’s biomolecules are then anchored to the gel’s web-like matrix, linking them directly to the molecules that make up the gel. Then the tissue is chemically softened and water is added. As the hydrogel absorbs the water, it swells and the tissue expands, growing evenly so the relative positions of its components are preserved.

Boyden and graduate students Fei Chen and Paul Tillberg’s first report on expansion microscopy was published in the journal Science in 2015. In it, the team demonstrated that by spreading apart molecules that had been crowded inside cells, features that would have blurred together under a standard light microscope became separate and distinct. Light microscopes can discriminate between objects that are separated by about 300 nanometers—a limit imposed by the laws of physics. With expansion microscopy, Boyden’s group reported an effective resolution of about 70 nanometers, for a four-fold expansion.
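
That arithmetic is easy to check: physically enlarging the specimen divides the smallest resolvable feature size by the expansion factor. A minimal sketch of the calculation (the function and numbers here are illustrative):

```python
def effective_resolution(diffraction_limit_nm: float, expansion_factor: float) -> float:
    """Expanding a specimen N-fold lets a lens resolve features that were
    originally N times smaller than its diffraction limit."""
    return diffraction_limit_nm / expansion_factor

# ~300 nm diffraction limit of conventional light microscopy,
# with the roughly four-fold expansion reported in 2015:
print(effective_resolution(300, 4.0))  # ~75 nm, on the order of the reported ~70 nm
```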

Boyden says this is a level of clarity that biologists need. “Biology is fundamentally, in the end, a nanoscale science,” he says. “Biomolecules are nanoscale, and the interactions between biomolecules are over nanoscale distances. Many of the most important problems in biology and medicine involve nanoscale questions.” Several kinds of sophisticated microscopes, each with their own advantages and disadvantages, can bring this kind of detail to light. But those methods are costly and require specialized skills, making them inaccessible for most researchers. “Expansion microscopy democratizes nanoimaging,” Boyden says. “Now anybody can go look at the building blocks of life and how they relate to each other.”

Empowering scientists

Since Boyden’s team introduced expansion microscopy in 2015, research groups around the world have published hundreds of papers reporting discoveries made with the technique. For neuroscientists, it has lit up the intricacies of elaborate neural circuits, exposed how particular proteins organize themselves at and across synapses to facilitate communication between neurons, and uncovered changes associated with aging and disease.

It has been equally empowering for studies beyond the brain. Sabrina Absalon uses expansion microscopy every week in her lab at Indiana University School of Medicine to study the malaria parasite, a single-celled organism packed with specialized structures that enable it to infect and live inside its hosts. The parasite is so small, most of those structures can’t be seen with ordinary light microscopy. “So as a cell biologist, I’m losing the biggest tool to infer protein function, organelle architecture, morphology, linked to function, and all those things–which is my eye,” she says. With expansion, she can not only see the organelles inside a malaria parasite, she can watch them assemble and follow what happens to them when the parasite divides. Understanding those processes, she says, could help drug developers find new ways to interfere with the parasite’s life cycle.

Longitudinally opened mosquito midguts prepared using MoTissU-ExM. Image: Sabrina Absalon

Absalon adds that the accessibility of expansion microscopy is particularly important in the field of parasitology, where a lot of research is happening in parts of the world where resources are limited. Workshops and training programs in Africa, South America, and Asia are ensuring the technology reaches scientists whose communities are directly impacted by malaria and other parasites. “Now they can get super-resolution imaging without very fancy equipment,” Absalon says.

Always improving

Since 2015, Boyden’s interdisciplinary lab group has found a variety of creative ways to improve expansion microscopy and use it in new ways. Their standard technique today enables better labeling, bigger expansion factors, and higher resolution imaging. Cellular features less than 20 nanometers from one another can now be separated enough to appear distinct under a light microscope.

They’ve also adapted their protocols to work with a range of important sample types, from entire roundworms (popular among neuroscientists, developmental biologists, and other researchers) to clinical samples. In the latter regard, they’ve shown that expansion can help reveal subtle signs of disease, which could enable earlier or less costly diagnoses.

Originally, the group optimized its protocol for visualizing proteins inside cells, by labeling proteins of interest and anchoring them to the hydrogel prior to expansion. With a new way of processing samples, users can now restain their expanded samples with new labels for multiple rounds of imaging, so they can pinpoint the positions of dozens of different proteins in the same tissue. That means researchers can visualize how molecules are organized with respect to one another and how they might interact, or survey large sets of proteins to see, for example, what changes with disease.

Synaptic proteins and their associations to neuronal processes in the mouse primary somatosensory cortex imaged using expansion microscopy. Image: Boyden lab

But better views of proteins were just the beginning for expansion microscopy. “We want to see everything,” Boyden says. “We’d love to see every biomolecule there is, with precision down to atomic scale.” They’re not there yet—but with new probes and modified procedures, it’s now possible to see not just proteins, but also RNA and lipids in expanded tissue samples.

Labeling lipids, including those that form the membranes surrounding cells, means researchers can now see clear outlines of cells in expanded tissues. With the enhanced resolution afforded by expansion, even the slender projections of neurons can be traced through an image. Typically, researchers have relied on electron microscopy, which generates exquisitely detailed pictures but requires expensive equipment, to map the brain’s circuitry. “Now you can get images that look a lot like electron microscopy images, but on regular old light microscopes—the kind that everybody has access to,” Boyden says.

Boyden says expansion can be powerful in combination with other cutting-edge tools. When expanded samples are paired with lattice light-sheet microscopy, an ultra-fast imaging method developed by Eric Betzig, an HHMI investigator at the University of California, Berkeley, the entire brain of a fruit fly can be imaged at high resolution in just a few days.

And when RNA molecules are anchored within a hydrogel network and then sequenced in place, scientists can see exactly where inside cells the instructions for building specific proteins are positioned, which Boyden’s team demonstrated in a collaboration with Harvard University geneticist George Church and then-MIT professor Aviv Regev. “Expansion basically upgrades many other technologies’ resolutions,” Boyden says. “You’re doing mass-spec imaging, X-ray imaging, or Raman imaging? Expansion just improved your instrument.”

Expanding possibilities

Ten years after its first demonstration, Boyden and his team are committed to making expansion microscopy even more powerful. “We want to optimize it for different kinds of problems, and making technologies faster, better, and cheaper is always important,” he says. But the future of expansion microscopy will be propelled by innovators outside the Boyden lab, too. “Expansion is not only easy to do, it’s easy to modify—so lots of other people are improving expansion in collaboration with us, or even on their own,” Boyden says.

Boyden points to a group led by Silvio Rizzoli at the University Medical Center Göttingen in Germany that, collaborating with Boyden, has adapted the expansion protocol to discern the physical shapes of proteins. At the Korea Advanced Institute of Science and Technology, researchers led by Jae-Byum Chang, a former postdoctoral researcher in Boyden’s group, have worked out how to expand entire bodies of mouse embryos and young zebrafish, collaborating with Boyden to set the stage for examining developmental processes and long-distance neural connections with a new level of detail. And mapping connections within the brain’s dense neural circuits could become easier with light-microscopy-based connectomics, an approach developed by Johann Danzl and colleagues at the Institute of Science and Technology Austria that takes advantage of both the high resolution and molecular information that expansion microscopy can reveal.

“The beauty of expansion is that it lets you see a biological system down to its smallest building blocks,” Boyden says.

His team is intent on pushing the method to its physical limits, and anticipates new opportunities for discovery as they do. “If you can map the brain or any biological system at the level of individual molecules, you might be able to see how they all work together as a network—how life really operates,” he says.

Leslie Vosshall awarded the 2025 Scolnick Prize in Neuroscience

Today the McGovern Institute at MIT announces that the 2025 Edward M. Scolnick Prize in Neuroscience will be awarded to Leslie Vosshall, the Robin Chemers Neustein Professor at The Rockefeller University and Vice President and Chief Scientific Officer of the Howard Hughes Medical Institute. Vosshall is being recognized for her discovery of the neural mechanisms underlying mosquito host-seeking behavior. The Scolnick Prize is awarded annually by the McGovern Institute for outstanding achievements in neuroscience.

“Leslie Vosshall’s vision to apply decades of scientific know-how in a model insect to bear on one of the greatest human health threats, the mosquito, is awe-inspiring,” says McGovern Institute Director and chair of the selection committee, Robert Desimone. “Vosshall brought together academic and industry scientists to create the first fully annotated genome of the deadly Aedes aegypti mosquito and she became the first to apply powerful CRISPR-Cas9 editing to study this species.”

Vosshall was born in Switzerland, moved to the US as a child, and worked throughout high school and college in her uncle’s laboratory, alongside Gerald Weissmann, at the Marine Biological Laboratory at Woods Hole. During this time, she published a number of papers on cell aggregation and neutrophil signaling and received a BA in 1987 from Columbia University. She went to graduate school at The Rockefeller University, where she first began working on the genetic model organism, the fruit fly Drosophila. Her mentor was Michael Young, who had just recently cloned the circadian rhythm gene period, work for which he later shared the Nobel Prize. Vosshall contributed to this work by showing that the gene timeless is required for rhythmic cycling of the PERIOD protein in and out of a cell’s nucleus and that this is required in only a subset of brain cells to drive circadian behaviors.

For her postdoctoral research, Vosshall returned to Columbia University in 1993 to join the laboratory of Richard Axel, also a future Nobel Laureate. There, Vosshall began her studies of olfaction and was one of the first to clone olfactory receptors in fruit flies. She mapped the expression pattern of each of the fly’s 60 or so olfactory receptors to individual sensory neurons and showed that each sensory neuron has a stereotyped projection into the brain. This work revealed that there is a topological map of brain activity responses for different odorants.

Vosshall started her own laboratory to study the mechanisms of olfaction and olfactory behavior in 2000, at The Rockefeller University. She rose through the ranks to receive tenure in 2006 and full professorship in 2010. Vosshall’s group was initially focused on the classic fruit fly model organism Drosophila but, in 2013, they showed that some of the same molecular mechanisms for olfaction in fruit flies are used by mosquitoes to find human hosts. From that point on, Vosshall rapidly applied her vast expertise in bioengineering to unravel the brain circuits underlying the behavior of the mosquito Aedes aegypti. This mosquito is responsible for transmission of yellow fever, dengue fever, Zika fever, and more, making it one of the deadliest animals to humankind.

Vosshall identified oils produced by the skin of some people that make them “mosquito magnets.” Photo: Alex Wild

Mosquitoes have evolved to specifically prey on humans and transmit millions of cases of deadly diseases around the globe. Vosshall’s laboratory is filled with mosquitoes in which her team induces various gene mutations to identify the molecular circuits that mosquitoes use to hunt and feed on humans. In 2022, Vosshall received press around the world for identifying oils produced by the skin of some people that make them “mosquito magnets.” Vosshall further showed that olfactory receptors have an unusual distribution pattern within the antennae of mosquitoes that allows them to detect a whole slew of human scents, in addition to their ability to detect humans’ warmth and breath. Vosshall’s team has also unraveled the molecular basis for mosquitoes’ avoidance of DEET, identified a novel repellent, and identified genes that govern where mosquitoes choose to lay eggs and how they resist drought. Vosshall’s brilliant application of genome engineering to understand a wide range of mosquito behaviors has profound implications for human health. Moreover, since shifting her research to the mosquito, seven postdoctoral researchers whom Vosshall mentored have established their own mosquito research laboratories at Boston University, Columbia University, Yale University, Johns Hopkins University, Princeton University, Florida International University, and the University of British Columbia.

Vosshall’s professional service is remarkable – she has served on innumerable committees at Rockefeller University and has participated in outreach activities around the globe, even starring in the feature-length film “The Fly Room.” She began serving as the Vice President and Chief Scientific Officer of HHMI in 2022 and previously served as Associate Director and Director of the Kavli Neural Systems Institute from 2015 to 2021. She has served as an editor for numerous journals, on the Board of Directors for the Helen Hay Whitney Foundation, the McKnight Foundation, and more, and co-organized over a dozen conferences. Her achievements have been recognized by the Dickson Prize in Medicine (2024), the Perl-UNC Neuroscience Prize (2022), and the Pradel Research Award (2020). She is an elected member of the National Academy of Medicine, National Academy of Sciences, American Philosophical Society, and American Association for the Advancement of Science.

The McGovern Institute will award the Scolnick Prize to Vosshall on May 9, 2025. At 4:00 pm she will deliver a lecture titled “Mosquitoes: neurobiology of the world’s most dangerous animal” to be followed by a reception at the McGovern Institute, 43 Vassar Street (building 46, room 3002) in Cambridge. The event is free and open to the public.

An ancient RNA-guided system could simplify delivery of gene editing therapies

A vast search of natural diversity has led scientists at MIT’s McGovern Institute and the Broad Institute of MIT and Harvard to uncover ancient systems with potential to expand the genome editing toolbox. These systems, which the researchers call TIGR (Tandem Interspaced Guide RNA) systems, use RNA to guide them to specific sites on DNA. TIGR systems can be reprogrammed to target any DNA sequence of interest, and they have distinct functional modules that can act on the targeted DNA. In addition to its modularity, TIGR is very compact compared to other RNA-guided systems, like CRISPR, which is a major advantage for delivering it in a therapeutic context.

These findings are reported online February 27, 2025, in the journal Science.

“This is a very versatile RNA-guided system with a lot of diverse functionalities,” says Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT who led the research. The TIGR-associated (Tas) proteins that Zhang’s team found share a characteristic RNA-binding component that interacts with an RNA guide that directs it to a specific site in the genome. Some cut the DNA at that site, using an adjacent DNA-cutting segment of the protein. That modularity could facilitate tool development, allowing researchers to swap useful new features into natural Tas proteins.

“Nature is pretty incredible,” said Zhang, who is also an investigator at the McGovern Institute and the Howard Hughes Medical Institute, a core member of the Broad Institute, a professor of brain and cognitive sciences and biological engineering at MIT, and co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT. “It’s got a tremendous amount of diversity, and we have been exploring that natural diversity to find new biological mechanisms and harnessing them for different applications to manipulate biological processes,” he says. Previously, Zhang’s team adapted bacterial CRISPR systems into gene editing tools that have transformed modern biology. His team has also found a variety of programmable proteins, both from CRISPR systems and beyond.

In their new work, to find novel programmable systems, the team began by zeroing in on a structural feature of the CRISPR Cas9 protein that binds to the enzyme’s RNA guide. That is a key feature that has made Cas9 such a powerful tool: “Being RNA-guided makes it relatively easy to reprogram, because we know how RNA binds to other DNA or other RNA,” Zhang explains. His team searched hundreds of millions of biological proteins with known or predicted structures, looking for any that shared a similar domain. To find more distantly related proteins, they used an iterative process: from Cas9, they identified a protein called IS110, which had previously been shown by others to bind RNA. They then zeroed in on the structural features of IS110 that enable RNA binding and repeated their search.

At this point, the search had turned up so many distantly related proteins that the team turned to artificial intelligence to make sense of the list. “When you are doing iterative, deep mining, the resulting hits can be so diverse that they are difficult to analyze using standard phylogenetic methods, which rely on conserved sequence,” explains Guilhem Faure, a computational biologist in Zhang’s lab. With a protein large language model, the team was able to cluster the proteins they had found into groups according to their likely evolutionary relationships. One group stood apart from the rest, and its members were particularly intriguing because they were encoded by genes with regularly spaced repetitive sequences reminiscent of an essential component of CRISPR systems. These were the TIGR-Tas systems.
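
The paper’s exact pipeline isn’t spelled out here, but the general strategy of grouping hits by learned embeddings rather than sequence alignment can be sketched roughly as follows. The composition-based embedding below is a runnable stand-in for a real protein language model (such as a mean-pooled ESM representation), and the sequences are made up:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def toy_embedding(seq: str) -> np.ndarray:
    # Stand-in for a protein language-model embedding; amino-acid
    # composition keeps this sketch self-contained and runnable.
    counts = np.array([seq.count(aa) for aa in AMINO_ACIDS], dtype=float)
    return counts / len(seq)

# Hypothetical mining hits; real inputs would be thousands of full-length proteins.
hits = ["MKKLLPTAAAG", "MKRLLPSAAAG", "GGHHEEDDRRK", "GGHHEEDTRRK"]
X = np.stack([toy_embedding(s) for s in hits])

# Group hits by embedding similarity instead of a conserved-sequence alignment.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
print(labels)  # e.g., [0 0 1 1]: two families, one of which might stand apart
```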

Zhang’s team discovered more than 20,000 different Tas proteins, mostly occurring in bacteria-infecting viruses. Sequences within each gene’s repetitive region—its TIGR arrays—encode an RNA guide that interacts with the RNA-binding part of the protein. In some, the RNA-binding region is adjacent to a DNA-cutting part of the protein. Others appear to bind to other proteins, which suggests they might help direct those proteins to DNA targets.

Zhang and his team experimented with dozens of Tas proteins, demonstrating that some can be programmed to make targeted cuts to DNA in human cells. As they think about developing TIGR-Tas systems into programmable tools, the researchers are encouraged by features that could make those tools particularly flexible and precise.

They note that CRISPR systems can only be directed to segments of DNA that are flanked by short motifs known as PAMs (protospacer adjacent motifs). TIGR-Tas proteins, in contrast, have no such requirement. “This means theoretically, any site in the genome should be targetable,” says scientific advisor Rhiannon Macrae. The team’s experiments also show that TIGR systems have what Faure calls a “dual-guide system,” interacting with both strands of the DNA double helix to home in on their target sequences, which should ensure they act only where they are directed by their RNA guide. What’s more, Tas proteins are compact—a quarter of the size of Cas9, on average—making them easier to deliver, which could overcome a major obstacle to therapeutic deployment of gene editing tools.
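
To make the PAM constraint concrete: Streptococcus pyogenes Cas9 can only target sequences followed by an NGG motif, so much of a genome is off limits, whereas a PAM-independent system could in principle address any window. A toy comparison (the sequence is made up, and only the forward strand is scanned for brevity):

```python
import re

genome = "ATGCGGTACCTTAGGCATTACGAGT"  # toy sequence for illustration

def cas9_targets(dna: str, guide_len: int = 20) -> list[int]:
    # SpCas9 requires an NGG protospacer-adjacent motif (PAM)
    # immediately 3' of the target; only those windows qualify.
    return [m.start() - guide_len
            for m in re.finditer(r"(?=[ACGT]GG)", dna)
            if m.start() >= guide_len]

def pam_free_targets(dna: str, guide_len: int = 20) -> list[int]:
    # A PAM-independent system could in principle target every window.
    return list(range(len(dna) - guide_len + 1))

print(len(cas9_targets(genome)), "PAM-constrained sites vs",
      len(pam_free_targets(genome)), "unconstrained windows")
```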

Excited by their discovery, Zhang’s team is now investigating the natural role of TIGR systems in viruses as well as how they can be adapted for research or therapeutics. They have determined the molecular structure of one of the Tas proteins they found to work in human cells, and will use that information to guide their efforts to make it more efficient. Additionally, they note connections between TIGR-Tas systems and certain RNA-processing proteins in human cells. “I think there’s more there to study in terms of what some of those relationships may be, and it may help us better understand how these systems are used in humans,” Zhang says.

This work was supported by the Helen Hay Whitney Foundation, Howard Hughes Medical Institute, K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics, Broad Institute Programmable Therapeutics Gift Donors, Pershing Square Foundation, William Ackman, and Neri Oxman, the Phillips family, J. and P. Poitras, and the BT Charitable Foundation.

How nature organizes itself, from brain cells to ecosystems

McGovern Associate Investigator Ila Fiete. Photo: Caitlin Cunningham

Look around, and you’ll see it everywhere: the way trees form branches, the way cities divide into neighborhoods, the way the brain organizes into regions. Nature loves modularity—a limited number of self-contained units that combine in different ways to perform many functions. But how does this organization arise? Does it follow a detailed genetic blueprint, or can these structures emerge on their own?

A new study from McGovern Associate Investigator Ila Fiete suggests a surprising answer.

In findings published today in Nature, Fiete, a professor of brain and cognitive sciences and director of the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, reports that a mathematical model called peak selection can explain how modules emerge without strict genetic instructions. Her team’s findings, which apply to brain systems and ecosystems, help explain how modularity occurs across nature, no matter the scale.

Joining two big ideas

“Scientists have debated how modular structures form. One hypothesis suggests that various genes are turned on at different locations to begin or end a structure. This explains how insect embryos develop body segments, with genes turning on or off at specific concentrations of a smooth chemical gradient in the insect egg,” says Fiete, who is the senior author of the paper. Mikail Khona, a former graduate student and K. Lisa Yang ICoN Center Graduate Fellow, and postdoctoral associate Sarthak Chandra also led the study.

Another idea, inspired by mathematician Alan Turing, suggests that a structure could emerge from competition—small-scale interactions can create repeating patterns, like the spots on a cheetah or the ripples in sand dunes.

Both ideas work well in some cases, but fail in others. The new research suggests that nature need not pick one approach over the other. The authors propose a simple mathematical principle called peak selection, showing that when a smooth gradient is paired with local interactions that are competitive, modular structures emerge naturally. “In this way, biological systems can organize themselves into sharp modules without detailed top-down instruction,” says Chandra.
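
A deliberately stripped-down, one-dimensional illustration of the idea: a smooth gradient runs along the tissue, and each unit snaps to one of a discrete set of locally favored states, producing sharp module boundaries. In the actual model the discrete states emerge from the competitive pattern-forming dynamics; here they are hard-coded just to show the staircase effect:

```python
import numpy as np

# Smooth global gradient: a parameter (e.g., a preferred spatial scale)
# varies gradually along a 1-D tissue axis.
n = 200
gradient = np.linspace(1.0, 3.0, n)

# Discrete states favored by local competition (hard-coded for brevity;
# in the model these are selected by the lateral interactions themselves).
allowed = np.array([1.0, 1.5, 2.25, 3.375])

# Peak selection: every unit adopts the allowed state nearest its
# local gradient value.
state = allowed[np.abs(gradient[:, None] - allowed[None, :]).argmin(axis=1)]

# The continuous gradient becomes a staircase: discrete modules with
# sharp boundaries, with no gene needing to "jump" at each boundary.
boundaries = np.flatnonzero(np.diff(state)) + 1
print("module boundaries at units:", boundaries)
print("module scales:", np.unique(state))
```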

Modular systems in the brain

The researchers tested their idea on grid cells, which play a critical role in spatial navigation as well as the storage of episodic memories. Grid cells fire in a repeating triangular pattern as animals move through space, but they don’t all work at the same scale—they are organized into distinct modules, each responsible for mapping space at slightly different resolutions.

A visual depiction of two different modules in grid cells, used to map space at slightly different resolutions. Image: Fiete Lab

No one knows how these modules form, but Fiete’s model shows that gradual variations in cellular properties along one dimension in the brain, combined with local neural interactions, could explain the entire structure. The grid cells naturally sort themselves into distinct groups with clear boundaries, without external maps or genetic programs telling them where to go. “Our work explains how grid cell modules could emerge. The explanation tips the balance toward the possibility of self-organization. It predicts that there might be no gene or intrinsic cell property that jumps when the grid cell scale jumps to another module,” notes Khona.

Modular systems in nature

The same principle applies beyond neuroscience. Imagine a landscape where temperature and rainfall vary gradually across space. You might expect species distributions to vary just as smoothly across the region. But in reality, ecosystems often form species clusters with sharp boundaries—distinct ecological “neighborhoods” that don’t overlap.

Fiete’s study suggests why: Local competition, cooperation, and predation between species interact with the global environmental gradients to create natural separations, even when the underlying conditions change gradually. This phenomenon can be explained using peak selection—and suggests that the same principle that shapes brain circuits could also be at play in forests and oceans.

A self-organizing world

One of the researchers’ most striking findings is that modularity in these systems is remarkably robust. Change the size of the system, and the number of modules stays the same; the modules simply scale up or down. That means a mouse brain and a human brain could use the same fundamental rules to form their navigation circuits, just at different sizes.

The model also makes testable predictions. If it’s correct, grid cell modules should follow simple spacing ratios. In ecosystems, species distributions should form distinct clusters even without sharp environmental shifts.

Fiete notes that their work adds another conceptual framework to biology. “Peak selection can inform future experiments, not only in grid cell research but across developmental biology.”

Seeing more in expansion microscopy

In biology, seeing can lead to understanding, and researchers in Edward Boyden’s lab at MIT’s McGovern Institute are committed to bringing life into sharper focus. With a pair of new methods, they are expanding the capabilities of expansion microscopy—a high-resolution imaging technique the group introduced in 2015—so researchers everywhere can see more when they look at cells and tissues under a light microscope.

McGovern Institute Investigator Edward Boyden. Photo: Justin Knight

“We want to see everything, so we’re always trying to improve it,” says Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT. “A snapshot of all life, down to its fundamental building blocks, is really the goal.” Boyden is also a Howard Hughes Medical Institute investigator and a member of the Yang Tan Collective at MIT.

With new ways of staining their samples and processing images, users of expansion microscopy can now see vivid outlines of the shapes of cells in their images and pinpoint the locations of many different proteins inside a single tissue sample with resolution that far exceeds that of conventional light microscopy. These advances, both reported in the journal Nature Communications, enable new ways of tracing the slender projections of neurons and visualizing spatial relationships between molecules that contribute to health and disease.

Expansion microscopy uses a water-absorbing hydrogel to physically expand biological tissues. After a tissue sample has been permeated by the hydrogel, it is hydrated. The hydrogel swells as it absorbs water, preserving the relative locations of molecules in the tissue as it gently pulls them away from one another. As a result, crowded cellular components appear separate and distinct when the expanded tissue is viewed under a light microscope. The approach, which can be performed using standard laboratory equipment, has made super-resolution imaging accessible to most research teams.

Since first developing expansion microscopy, Boyden and his team have continued to enhance the method—increasing its resolution, simplifying the procedure, devising new features, and integrating it with other tools.

Visualizing cell membranes

One of the team’s latest advances is a method called ultrastructural membrane expansion microscopy (umExM), which they described in the February 12 issue of Nature Communications. With it, biologists can use expansion microscopy to visualize the thin membranes that form the boundaries of cells and enclose the organelles inside them. These membranes, built mostly of molecules called lipids, have been notoriously difficult to densely label in intact tissues for imaging with light microscopy. Now, researchers can use umExM to study cellular ultrastructure and organization within tissues.

Tay Shin, a former graduate student in Boyden’s lab and a J. Douglas Tan Fellow in the Tan-Yang Center for Autism Research at MIT, led the development of umExM. “Our goal was very simple at first: Let’s label membranes in intact tissue, much like how an electron microscope uses osmium tetroxide to label membranes to visualize the membranes in tissue,” he says. “It turns out that it’s extremely hard to achieve this.”

The team first needed to design a label that would make the membranes in tissue samples visible under a light microscope. “We almost had to start from scratch,” Shin says. “We really had to think about the fundamental characteristics of the probe that is going to label the plasma membrane, and then think about how to incorporate them into expansion microscopy.” That meant engineering a molecule that would associate with the lipids that make up the membrane and link it to both the hydrogel used to expand the tissue sample and a fluorescent molecule for visibility.

After optimizing the expansion microscopy protocol for membrane visualization and extensively testing and improving potential probes, Shin found success one late night in the lab. He placed an expanded tissue sample on a microscope and saw sharp outlines of cells.

Traceability of umExM. 3D rendering of 20 manually traced and reconstructed myelinated axons in the corpus callosum. Image: Ed Boyden

Because of the high resolution enabled by expansion, the method allowed Boyden’s team to identify even the tiny dendrites that protrude from neurons and clearly see the long extensions of their slender axons. That kind of clarity could help researchers follow individual neurons’ paths within the densely interconnected networks of the brain, the researchers say.

Boyden calls tracing these neural processes “a top priority of our time in brain science.” Such tracing has traditionally relied heavily on electron microscopy, which requires specialized skills and expensive equipment. Shin says that because expansion microscopy uses a standard light microscope, it is far more accessible to laboratories worldwide.

Shin and Boyden point out that users of expansion microscopy can learn even more about their samples when they pair the new ability to reveal lipid membranes with fluorescent labels that show where specific proteins are located. “That’s important, because proteins do a lot of the work of the cell, but you want to know where they are with respect to the cell’s structure,” Boyden says.

One sample, many proteins

To that end, researchers no longer have to choose just a few proteins to see when they use expansion microscopy. With a new method called multiplexed expansion revealing (multiExR), users can now label and see more than 20 different proteins in a single sample. Biologists can use the method to visualize sets of proteins, see how they are organized with respect to one another, and generate new hypotheses about how they might interact.

A key to the new method, reported November 9, 2024, in Nature Communications, is the ability to repeatedly link fluorescently labeled antibodies to specific proteins in an expanded tissue sample, image them, then strip these away and use a new set of antibodies to reveal a new set of proteins. Postdoctoral fellow Jinyoung Kang fine-tuned each step of this process, ensuring that tissue samples stayed intact and the labeled proteins produced bright signals in each round of imaging.

After capturing many images of a single sample, Boyden’s team faced another challenge: how to ensure those images were in perfect alignment so they could be overlaid with one another, producing a final picture that showed the precise positions of all of the proteins that had been labeled and visualized one by one.

Expansion microscopy lets biologists visualize some of cells’ tiniest features—but to find the same features over and over again during multiple rounds of imaging, Boyden’s team first needed to home in on a larger structure. “These fields of view are really tiny, and you’re trying to find this really tiny field of view in a gel that’s actually become quite large once you’ve expanded it,” explains Margaret Schroeder, a graduate student in Boyden’s lab who, with Kang, led the development of multiExR.

“Here’s one of the most famous receptors in all of neuroscience, hiding out in one of the most famous molecular hallmarks of pathology in neuroscience.” – Ed Boyden

To navigate to the right spot every time, the team decided to label the blood vessels that pass through each tissue sample and use these as a guide. To enable precise alignment, certain fine details also needed to consistently appear in every image; for this, the team labeled several structural proteins. With these reference points and customized image-processing software, the team was able to integrate all of their images of a sample into one, revealing how proteins that had been visualized separately were arranged relative to one another.
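
The core computational step, registering each imaging round to a reference using the shared fiducial channel, resembles standard image registration. The team’s pipeline is customized, so the following is only a rough sketch built on an off-the-shelf phase-correlation routine, and it handles translation only (real expanded gels can also require nonrigid corrections):

```python
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def align_round(fiducial_ref, fiducial_round, protein_round):
    """Register one imaging round to the reference using the shared
    fiducial channel (e.g., labeled blood vessels), then apply the
    same shift to that round's protein channel."""
    offset, _, _ = phase_cross_correlation(fiducial_ref, fiducial_round,
                                           upsample_factor=10)
    return nd_shift(protein_round, offset)

# Each round provides (fiducial image, protein image); aligning every
# round to one reference yields an overlayable multi-protein stack:
# aligned = [align_round(ref_fid, fid, prot) for fid, prot in rounds]
```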

The team used multiExR to look at amyloid plaques—the aberrant protein clusters that notoriously develop in brains affected by Alzheimer’s disease. “We could look inside those amyloid plaques and ask, what’s inside of them? And because we can stain for many different proteins, we could do a high throughput exploration,” Boyden says. The team chose 23 different proteins to view in their images. The approach revealed some surprises, such as the presence of certain neurotransmitter receptors (AMPARs). “Here’s one of the most famous receptors in all of neuroscience, and there it is, hiding out in one of the most famous molecular hallmarks of pathology in neuroscience,” says Boyden. It’s unclear what role, if any, the receptors play in Alzheimer’s disease—but the finding illustrates how the ability to see more inside cells can expose unexpected aspects of biology and raise new questions for research.

Funding for this work came from MIT, Lisa Yang and Y. Eva Tan, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, the US Army, Cancer Research UK, the New York Stem Cell Foundation, the National Institutes of Health, Lore McGovern, Good Ventures, Schmidt Futures, Samsung, MathWorks, the Collamore-Rogers Fellowship, the National Science Foundation, Alana Foundation USA, the Halis Family Foundation, Lester A. Gimpelson, Donald and Glenda Mattes, David B. Emmes, Thomas A. Stocky, Avni U. Shah, Kathleen Octavio, Good Ventures/Open Philanthropy, and the European Union’s Horizon 2020 program.

Evelina Fedorenko receives Troland Award from National Academy of Sciences

The National Academy of Sciences (NAS) announced today that McGovern Investigator Evelina Fedorenko will receive a 2025 Troland Research Award for her groundbreaking contributions towards understanding the language network in the human brain.

The Troland Research Award is given annually to recognize unusual achievement by early-career researchers within the broad spectrum of experimental psychology.

McGovern Investigator Ev Fedorenko (center) looks at a young subject’s brain scan in the Martinos Imaging Center at MIT. Photo: Alexandra Sokhina

Fedorenko, who is an associate professor of brain and cognitive sciences at MIT, is interested in how minds and brains create language. Her lab is unpacking the internal architecture of the brain’s language system and exploring the relationship between language and various cognitive, perceptual, and motor systems. Her novel methods combine precise measures of an individual’s brain organization with innovative computational modeling to make fundamental discoveries about the computations that underlie the uniquely human ability for language.

Fedorenko has shown that the language network is selective for language processing over diverse non-linguistic processes that have been argued to share computational demands with language, such as math, music, and social reasoning. Her work has also demonstrated that syntactic processing is not localized to a particular region within the language network, and every brain region that responds to syntactic processing is at least as sensitive to word meanings.

She has also shown that representations from neural network language models, such as ChatGPT, are similar to those in the human language brain areas. Fedorenko also highlighted that although language models can master linguistic rules and patterns, they are less effective at using language in real-world situations. In the human brain, that kind of functional competence is distinct from formal language competence, she says, requiring not just language-processing circuits but also brain areas that store knowledge of the world, reason, and interpret social interactions. Contrary to a prominent view that language is essential for thinking, Fedorenko argues that language is not the medium of thought and is primarily a tool for communication.

A probabilistic atlas of the human language network based on >800 individuals (center) and sample individual language networks, which illustrate inter-individual variability in the precise locations and shapes of the language areas. Image: Ev Fedorenko

Ultimately, Fedorenko’s cutting-edge work is uncovering the computations and representations that fuel language processing in the brain. She will receive the Troland Award this April, during the annual meeting of the NAS in Washington DC.

How one brain circuit encodes memories of both places and events

Nearly 50 years ago, neuroscientists discovered cells within the brain’s hippocampus that store memories of specific locations. These cells also play an important role in storing memories of events, known as episodic memories. While the mechanism of how place cells encode spatial memory has been well-characterized, it has remained a puzzle how they encode episodic memories.

A new model developed by MIT researchers explains how those place cells can be recruited to form episodic memories, even when there’s no spatial component. According to this model, place cells, along with grid cells found in the entorhinal cortex, act as a scaffold that can be used to anchor memories as a linked series.

“This model is a first-draft model of the entorhinal-hippocampal episodic memory circuit. It’s a foundation to build on to understand the nature of episodic memory. That’s the thing I’m really excited about,” says Ila Fiete, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.

The model accurately replicates several features of biological memory systems, including the large storage capacity, gradual degradation of older memories, and the ability of people who compete in memory competitions to store enormous amounts of information in “memory palaces.”

MIT Research Scientist Sarthak Chandra and Sugandha Sharma PhD ’24 are the lead authors of the study, which appears today in Nature. Rishidev Chaudhuri, an assistant professor at the University of California at Davis, is also an author of the paper.

An index of memories

To encode spatial memory, place cells in the hippocampus work closely with grid cells — a special type of neuron that fires at many different locations, arranged geometrically in a regular pattern of repeating triangles. Together, a population of grid cells forms a lattice of triangles representing a physical space.
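
This lattice has a standard mathematical idealization in the grid-cell modeling literature: summing three cosine gratings whose orientations differ by 60 degrees produces firing fields arranged on a triangular lattice. A small sketch of one idealized grid cell’s firing map (parameters are illustrative):

```python
import numpy as np

def grid_cell_rate(x, y, scale=1.0, phase=(0.0, 0.0)):
    """Idealized grid-cell firing rate: three plane waves oriented
    60 degrees apart sum to a triangular lattice of firing fields."""
    k = 4 * np.pi / (np.sqrt(3) * scale)  # wave number for field spacing `scale`
    thetas = np.deg2rad([0, 60, 120])
    total = sum(np.cos(k * ((x - phase[0]) * np.cos(t) + (y - phase[1]) * np.sin(t)))
                for t in thetas)
    return np.maximum(total, 0)  # rectify: firing rates are non-negative

xs, ys = np.meshgrid(np.linspace(0, 5, 200), np.linspace(0, 5, 200))
rate_map = grid_cell_rate(xs, ys, scale=1.0)  # peaks sit on a triangular lattice
```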

In addition to helping us recall places where we’ve been, these hippocampal-entorhinal circuits also help us navigate new locations. From human patients, it’s known that these circuits are also critical for forming episodic memories, which might have a spatial component but mainly consist of events, such as how you celebrated your last birthday or what you had for lunch yesterday.

“The same hippocampal and entorhinal circuits are used not just for spatial memory, but also for general episodic memory,” says Fiete, who is also the director of the K. Lisa Yang ICoN Center at MIT. “The question you can ask is what is the connection between spatial and episodic memory that makes them live in the same circuit?”

Two hypotheses have been proposed to account for this overlap in function. One is that the circuit is specialized to store spatial memories because those types of memories — remembering where food was located or where predators were seen — are important to survival. Under this hypothesis, this circuit encodes episodic memories as a byproduct of spatial memory.

An alternative hypothesis suggests that the circuit is specialized to store episodic memories, but also encodes spatial memory because location is one aspect of many episodic memories.

In this work, Fiete and her colleagues proposed a third option: that the peculiar tiling structure of grid cells and their interactions with hippocampus are equally important for both types of memory — episodic and spatial. To develop their new model, they built on computational models that her lab has been developing over the past decade, which mimic how grid cells encode spatial information.

“We reached the point where I felt like we understood on some level the mechanisms of the grid cell circuit, so it felt like the time to try to understand the interactions between the grid cells and the larger circuit that includes the hippocampus,” Fiete says.

In the new model, the researchers hypothesized that grid cells interacting with hippocampal cells can act as a scaffold for storing either spatial or episodic memory. Each activation pattern within the grid defines a “well,” and these wells are spaced out at regular intervals. The wells don’t store the content of a specific memory, but each one acts as a pointer to a specific memory, which is stored in the synapses between the hippocampus and the sensory cortex.

When the memory is triggered later from fragmentary pieces, grid and hippocampal cell interactions drive the circuit state into the nearest well, and the state at the bottom of the well connects to the appropriate part of the sensory cortex to fill in the details of the memory. The sensory cortex is much larger than the hippocampus and can store vast amounts of memory.

“Conceptually, we can think about the hippocampus as a pointer network. It’s like an index that can be pattern-completed from a partial input, and that index then points toward sensory cortex, where those inputs were experienced in the first place,” Fiete says. “The scaffold doesn’t contain the content, it only contains this index of abstract scaffold states.”

Furthermore, events that occur in sequence can be linked together: Each well in the grid cell-hippocampal network efficiently stores the information that is needed to activate the next well, allowing memories to be recalled in the right order.
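
In software terms, the proposed circuit behaves like a content-addressable index over external storage: a noisy cue is pattern-completed to the nearest well, the well keys into content held elsewhere, and each well stores a link to the next so sequences replay in order. The sketch below is a deliberately simplified analogy to that flow, not the paper’s neural dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, dim = 5, 64

# Scaffold "wells": fixed states set by grid-hippocampal dynamics
# (random binary patterns stand in for attractor states here).
wells = rng.choice([-1, 1], size=(n_states, dim))

# Content lives outside the scaffold (in the model, in sensory cortex).
content = {i: f"event {i}" for i in range(n_states)}
next_well = {i: (i + 1) % n_states for i in range(n_states)}  # sequence links

def recall(cue, steps=3):
    """Pattern-complete a noisy cue to the nearest well, read out its
    content, then follow the links to replay subsequent events."""
    idx = int(np.argmax(wells @ cue))  # fall into the nearest well
    events = []
    for _ in range(steps):
        events.append(content[idx])
        idx = next_well[idx]
    return events

noisy_cue = wells[2] * rng.choice([1, 1, 1, -1], size=dim)  # corrupted fragment
print(recall(noisy_cue))  # ['event 2', 'event 3', 'event 4']
```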

Modeling memory cliffs and palaces

The researchers’ new model replicates several memory-related phenomena much more accurately than existing models that are based on Hopfield networks — a type of neural network that can store and recall patterns.

While Hopfield networks offer insight into how memories can be formed by strengthening connections between neurons, they don’t perfectly model how biological memory works. In Hopfield models, every memory is recalled in perfect detail until capacity is reached. At that point, no new memories can form, and worse, attempting to add more memories erases all prior ones. This “memory cliff” doesn’t accurately mimic what happens in the biological brain, which tends to gradually forget the details of older memories while new ones are continually added.
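
The cliff is easy to reproduce in a few lines. A classic Hopfield network with Hebbian weights recalls corrupted patterns almost perfectly below its capacity (roughly 0.14 patterns per neuron) and fails across the board once that load is exceeded; a minimal demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # neurons

def recall_accuracy(num_patterns):
    patterns = rng.choice([-1, 1], size=(num_patterns, n))
    W = (patterns.T @ patterns) / n          # Hebbian outer-product weights
    np.fill_diagonal(W, 0)
    correct = 0
    for p in patterns:
        state = p.copy()
        state[: n // 10] *= -1               # corrupt 10% of the bits
        for _ in range(20):                  # relax toward a stored attractor
            state = np.sign(W @ state)
            state[state == 0] = 1
        correct += np.array_equal(state, p)
    return correct / num_patterns

for load in (10, 25, 60):                    # capacity is roughly 0.14 * n = 28
    print(load, recall_accuracy(load))       # near-perfect, then off a cliff
```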

The new MIT model captures findings from decades of recordings of grid and hippocampal cells in rodents made as the animals explore and forage in various environments. It also helps to explain the underlying mechanisms for a memorization strategy known as a memory palace. One of the tasks in memory competitions is to memorize the shuffled sequence of cards in one or several card decks. Competitors usually do this by assigning each card to a particular spot in a memory palace—a memory of a childhood home or another environment they know well. When they need to recall the cards, they mentally stroll through the house, visualizing each card in its spot as they go along. Counterintuitively, adding the memory burden of associating cards with locations makes recall stronger and more reliable.

The MIT team’s computational model was able to perform such tasks very well, suggesting that memory palaces take advantage of the memory circuit’s own strategy of associating inputs with a scaffold in the hippocampus, but one level down: Long-acquired memories reconstructed in the larger sensory cortex can now be pressed into service as a scaffold for new memories. This allows for the storage and recall of many more items in a sequence than would otherwise be possible.

The researchers now plan to build on their model to explore how episodic memories could become converted to cortical “semantic” memory, or the memory of facts dissociated from the specific context in which they were acquired (for example, Paris is the capital of France), how episodes are defined, and how brain-like memory models could be integrated into modern machine learning.

The research was funded by the U.S. Office of Naval Research, the National Science Foundation under the Robust Intelligence program, the ARO-MURI award, the Simons Foundation, and the K. Lisa Yang ICoN Center.

Scientists engineer CRISPR enzymes that evade the immune system

The core components of CRISPR-based genome-editing therapies are bacterial proteins called nucleases that can stimulate unwanted immune responses in people, increasing the chances of side effects and making these therapies potentially less effective.

Researchers at the Broad Institute of MIT and Harvard and Cyrus Biotechnology have now engineered two CRISPR nucleases, Cas9 and Cas12, to mask them from the immune system. The team identified protein sequences on each nuclease that trigger the immune system and used computational modeling to design new versions that evade immune recognition. In mice, the engineered enzymes triggered reduced immune responses while matching the gene-editing efficiency of standard nucleases.

Appearing today in Nature Communications, the findings could help pave the way for safer, more efficient gene therapies. The study was led by Feng Zhang, a core institute member at the Broad and an Investigator at the McGovern Institute for Brain Research at MIT.

“As CRISPR therapies enter the clinic, there is a growing need to ensure that these tools are as safe as possible, and this work tackles one aspect of that challenge,” said Zhang, who is also a co-director of the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics, the James and Patricia Poitras Professor of Neuroscience, and a professor at MIT. He is an Investigator at the Howard Hughes Medical Institute.

Rumya Raghavan, a graduate student in Zhang’s lab when the study began, and Mirco Julian Friedrich, a postdoctoral scholar in Zhang’s lab, were co-first authors on the study.

“People have known for a while that Cas9 causes an immune response, but we wanted to pinpoint which parts of the protein were being recognized by the immune system and then engineer the proteins to get rid of those parts while retaining its function,” said Raghavan.

“Our goal was to use this information to create not only a safer therapy, but one that is potentially even more effective because it is not being eliminated by the immune system before it can do its job,” added Friedrich.

In search of immune triggers

Many CRISPR-based therapies use nucleases derived from bacteria. About 80 percent of people have pre-existing immunity to these proteins through everyday exposure to these bacteria, but scientists didn’t know which parts of the nucleases the immune system recognized.

To find out, Zhang’s team used a specialized type of mass spectrometry to identify and analyze the Cas9 and Cas12 protein fragments recognized by immune cells. For each of two nucleases — Cas9 from Streptococcus pyogenes and Cas12 from Staphylococcus aureus — they identified three short sequences, about eight amino acids long, that evoked an immune response. They then partnered with Cyrus Biotechnology, a company co-founded by University of Washington biochemist David Baker that develops structure-based computational tools to design proteins that evade the immune response. After Zhang’s team identified immunogenic sequences in Cas9 and Cas12, Cyrus used these computational approaches to design versions of the nucleases that did not include the immune-triggering sequences.

Zhang’s lab used prediction software to validate that the new nucleases were less likely to trigger immune responses. Next, the team engineered a panel of new nucleases informed by these predictions and tested the most promising candidates in human cells and in mice that were genetically modified to bear key components of the human immune system. In both cases, they found that the engineered enzymes resulted in significantly reduced immune responses compared to the original nucleases, but still cut DNA at the same efficiency.

Minimally immunogenic nucleases are just one part of safer gene therapies, Zhang’s team says. In the future, they hope their methods may also help scientists design delivery vehicles to evade the immune system.

This study was funded in part by the Poitras Center for Psychiatric Disorders Research, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, and the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT.

Feng Zhang awarded 2024 National Medal of Technology

This post is adapted from an MIT News story.

***

Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT and an Investigator at the McGovern Institute, has won the National Medal of Technology and Innovation, the nation’s highest recognition for scientists and engineers. The prestigious award recognizes “American innovators whose vision, intellect, creativity, and determination have strengthened America’s economy and improved our quality of life.”

Zhang, who is also a professor of brain and cognitive sciences and biological engineering at MIT, a core member of the Broad Institute of MIT and Harvard, and an investigator with the Howard Hughes Medical Institute, was recognized for his work developing molecular tools, including the CRISPR genome-editing system, that have accelerated biomedical research and led to the first FDA-approved gene editing therapy.

This year, the White House awarded the National Medal of Science to 14 recipients and named nine individual awardees of the National Medal of Technology and Innovation, along with two organizations. Zhang is among four MIT faculty members who were awarded the nation’s highest honors for exemplary achievement and leadership in science and technology.

Designing molecular tools

Zhang, who earned his undergraduate degree from Harvard University in 2004, has contributed to the development of multiple molecular tools to accelerate the understanding of human disease. While a graduate student at Stanford University, from which he received his PhD in 2009, Zhang worked in the lab of Professor Karl Deisseroth. There, he worked on a protein called channelrhodopsin, which he and Deisseroth believed held potential for engineering mammalian cells to respond to light.

The resulting technique, known as optogenetics, is now widely used in neuroscience and other fields. By engineering neurons to express light-sensitive proteins such as channelrhodopsin, researchers can either stimulate or silence the cells’ electrical impulses by shining different wavelengths of light on them. This has allowed for detailed study of the roles of specific populations of neurons in the brain, and the mapping of neural circuits that control a variety of behaviors.

In 2011, about a month after joining the MIT faculty, Zhang attended a talk by Harvard Medical School Professor Michael Gilmore, who studies the pathogenic bacterium Enterococcus. The scientist mentioned that these bacteria protect themselves from viruses with DNA-cutting enzymes known as nucleases, which are part of a defense system known as CRISPR.

“I had no idea what CRISPR was, but I was interested in nucleases,” Zhang told MIT News in 2016. “I went to look up CRISPR, and that’s when I realized you might be able to engineer it for use for genome editing.”

In January 2013, Zhang and members of his lab reported that they had successfully used CRISPR to edit genes in mammalian cells. The CRISPR system includes a nuclease called Cas9, which can be directed to cut a specific genetic target by RNA molecules known as guide strands.

Since then, scientists in fields from medicine to plant biology have used CRISPR to study gene function and modify faulty genes that cause disease. More recently, Zhang’s lab has devised many enhancements to the original CRISPR system, such as making the targeting more precise and preventing unintended cuts in the wrong locations. In 2023, the FDA approved Casgevy, a CRISPR gene therapy based on Zhang’s discoveries, for the treatment of sickle cell disease and beta thalassemia.

The National Medal of Technology and Innovation was established in 1980 and is administered for the White House by the U.S. Department of Commerce’s Patent and Trademark Office. The award recognizes those who have made lasting contributions to America’s competitiveness and quality of life and helped strengthen the nation’s technological workforce.

How the brain prevents us from falling

This post is adapted from an MIT research news story.

***

As we navigate the world, we adapt our movement in response to changes in the environment. From rocky terrain to moving escalators, we seamlessly modify our movements to maximize energy efficiency and reduce our risk of falling. The computational principles underlying this phenomenon, however, are not well understood.

In a recent paper published in the journal Nature Communications, MIT researchers proposed a model that explains how humans continuously adapt yet remain stable during complex tasks like walking.

“Much of our prior theoretical understanding of adaptation has been limited to episodic tasks, such as reaching for an object in a novel environment,” says senior author Nidhi Seethapathi, the Frederick A. (1971) and Carole J. Middleton Career Development Assistant Professor of Brain and Cognitive Sciences at MIT. “This new theoretical model captures adaptation phenomena in continuous long-horizon tasks in multiple locomotor settings.”

Barrett Clark, a robotics software engineer at Bright Minds Inc, and Manoj Srinivasan, an associate professor in the Department of Mechanical and Aerospace Engineering at Ohio State University, are also authors on the paper.

Principles of locomotor adaptation

In episodic tasks, like reaching for an object, errors during one episode do not affect the next episode. In tasks like locomotion, errors can have a cascade of short-term and long-term consequences for stability unless they are controlled. This makes the challenge of adapting locomotion in a new environment more complex.

To build the model, the researchers identified general principles of locomotor adaptation across a variety of task settings, and developed a unified, modular, and hierarchical model, with each component having its own unique mathematical structure.

The resulting model successfully encapsulates how humans adapt their walking in novel settings, such as on a split-belt treadmill with each foot moving at a different speed, wearing asymmetric leg weights, or wearing an exoskeleton. The authors report that the model reproduced human locomotor adaptation phenomena across novel settings in 10 prior studies and correctly predicted the adaptation behavior observed in two new experiments conducted as part of the study.

The model has potential applications in sensorimotor learning, rehabilitation, and wearable robotics.

“Having a model that can predict how a person will adapt to a new environment has immense utility for engineering better rehabilitation paradigms and wearable robot control,” says Seethapathi, who is also an associate investigator at MIT’s McGovern Institute. “You can think of a wearable robot itself as a new environment for the person to move in, and our model can be used to predict how a person will adapt for different robot settings. Understanding such human-robot adaptation is currently an experimentally intensive process, and our model could help speed up the process by narrowing the search space.”