Microscopy technique could enable more informative biopsies

MIT and Harvard Medical School researchers have devised a way to image biopsy samples with much higher resolution — an advance that could help doctors develop more accurate and inexpensive diagnostic tests.

For more than 100 years, conventional light microscopes have been vital tools for pathology. However, fine-scale details of cells cannot be seen with these scopes. The new technique relies on an approach known as expansion microscopy, developed originally in Edward Boyden’s lab at MIT, in which the researchers expand a tissue sample to 100 times its original volume before imaging it.

This expansion allows researchers to see features with a conventional light microscope that ordinarily could be seen only with an expensive, high-resolution electron microscope. It also reveals additional molecular information that the electron microscope cannot provide.

“It’s a technique that could have very broad application,” says Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT. He is also a member of MIT’s Media Lab and McGovern Institute for Brain Research, and an HHMI-Simons Faculty Scholar.

In a paper appearing in the July 17 issue of Nature Biotechnology, Boyden and his colleagues used this technique to distinguish early-stage breast lesions with high or low risk of progressing to cancer — a task that is challenging for human observers. This approach can also be applied to other diseases: In an analysis of kidney tissue, the researchers found that images of expanded samples revealed signs of kidney disease that can normally only be seen with an electron microscope.

“Using expansion microscopy, we are able to diagnose diseases that were previously impossible to diagnose with a conventional light microscope,” says Octavian Bucur, an instructor at Harvard Medical School, Beth Israel Deaconess Medical Center (BIDMC), and the Ludwig Center at Harvard, and one of the paper’s lead authors.

MIT postdoc Yongxin Zhao is the paper’s co-lead author. Boyden and Andrew Beck, a former associate professor at Harvard Medical School and BIDMC, are the paper’s senior authors.


“A few chemicals and a light microscope”

Boyden’s original expansion microscopy technique is based on embedding tissue samples in a dense, evenly generated polymer that swells when water is added. Before the swelling occurs, the researchers anchor to the polymer gel the molecules that they want to image, and they digest other proteins that normally hold tissue together.

This tissue enlargement allows researchers to obtain images with a resolution of around 70 nanometers, which was previously possible only with very specialized and expensive microscopes.
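
The resolution gain follows from the cube root of the volume expansion, since expansion is what matters per linear dimension. As a back-of-the-envelope check (the ~300 nm diffraction limit below is an assumed typical value, not a figure from the study):

```python
# A 100x volume expansion is 100**(1/3) ≈ 4.64x in each linear dimension.
# Dividing an assumed ~300 nm diffraction limit by that factor gives the
# effective resolution in the original, unexpanded tissue.
volume_expansion = 100.0
linear_expansion = volume_expansion ** (1.0 / 3.0)   # ≈ 4.64
diffraction_limit_nm = 300.0                         # typical light microscope
effective_resolution_nm = diffraction_limit_nm / linear_expansion
print(f"linear expansion: {linear_expansion:.2f}x")
print(f"effective resolution: ~{effective_resolution_nm:.0f} nm")
```

With these assumed numbers the result lands in the tens of nanometers, consistent with the resolution quoted above.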

In the new study, the researchers set out to adapt the expansion process for biopsy tissue samples, which are usually embedded in paraffin wax, flash frozen, or stained with a chemical that makes cellular structures more visible.

The MIT/Harvard team devised a process to convert these samples into a state suitable for expansion. For example, they remove the chemical stain or paraffin by exposing the tissues to a chemical solvent called xylene. Then, they heat up the sample in another chemical called citrate. After that, the tissues go through an expansion process similar to the original version of the technique, but with stronger digestion steps to compensate for the strong chemical fixation of the samples.

During this procedure, the researchers can also add fluorescent labels for molecules of interest, including proteins that mark particular types of cells, or DNA or RNA with a specific sequence.

“The work of Zhao et al. describes a very clever way of extending the resolution of light microscopy to resolve detail beyond that seen with conventional methods,” says David Rimm, a professor of pathology at the Yale University School of Medicine, who was not involved in the research.

The researchers tested this approach on tissue samples from patients with early-stage breast lesions. One way to predict whether these lesions will become malignant is to evaluate the appearance of the cells’ nuclei. Benign lesions with atypical nuclei have about a fivefold higher probability of progressing to cancer than those with typical nuclei.

However, studies have revealed significant discrepancies between the assessments of nuclear atypia performed by different pathologists, which can potentially lead to an inaccurate diagnosis and unnecessary surgery. An improved system for differentiating benign lesions with atypical and typical nuclei could potentially prevent 400,000 misdiagnoses and save hundreds of millions of dollars every year in the United States, according to the researchers.

After expanding the tissue samples, the MIT/Harvard team analyzed them with a machine learning algorithm that can rate the nuclei based on dozens of features, including orientation, diameter, and how much they deviate from true circularity. This algorithm was able to distinguish between lesions that were likely to become invasive and those that were not, with an accuracy of 93 percent on expanded samples compared to only 71 percent on the pre-expanded tissue.
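
As an illustrative sketch only (synthetic data and a simple nearest-centroid rule, not the paper’s actual machine learning pipeline), classifying nuclei from shape features such as diameter and circularity might look like this:

```python
# Hypothetical sketch: separate "typical" from "atypical" nuclei using two
# shape features. Data are synthetic; the real pipeline measures dozens of
# features from segmented nuclei in the expanded images.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic nuclei: "atypical" nuclei drawn larger and less circular on average
n = 200
diam = np.concatenate([rng.normal(6.0, 1.0, n), rng.normal(9.0, 1.5, n)])
circ = np.concatenate([rng.normal(0.92, 0.03, n), rng.normal(0.78, 0.06, n)])
X = np.column_stack([diam, circ])
y = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = typical, 1 = atypical

# Standardize the features, then classify by the nearest class centroid
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
c0 = Xs[y == 0].mean(axis=0)
c1 = Xs[y == 1].mean(axis=0)
pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
accuracy = (pred == (y == 1)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In the study itself, the features came from expanded tissue images and the accuracy was evaluated against known lesion outcomes; the point here is only the shape-features-to-classifier structure of the approach.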

“These two types of lesions look highly similar to the naked eye, but one has much less risk of cancer,” Zhao says.

The researchers also analyzed kidney tissue samples from patients with nephrotic syndrome, which impairs the kidneys’ ability to filter blood. In these patients, tiny finger-like projections that filter the blood are lost or damaged. These structures are spaced about 200 nanometers apart and therefore can usually be seen only with an electron microscope or expensive super-resolution microscopes.

When the researchers showed the images of the expanded tissue samples to a group of scientists that included pathologists and nonpathologists, the group was able to identify the diseased tissue with 90 percent accuracy overall, compared to only 65 percent accuracy with unexpanded tissue samples.

“Now you can diagnose nephrotic kidney disease without needing an electron microscope, a very expensive machine,” Boyden says. “You can do it with a few chemicals and a light microscope.”

Uncovering patterns

Using this approach, the researchers anticipate that scientists could develop more precise diagnostics for many other diseases. To do that, scientists and doctors will need to analyze many more patient samples, allowing them to discover patterns that would be impossible to see otherwise.

“If you can expand a tissue by one-hundredfold in volume, all other things being equal, you’re getting 100 times the information,” Boyden says.

For example, researchers could distinguish cancer cells based on how many copies of a particular gene they have. Extra copies of genes such as HER2, which the researchers imaged in one part of this study, indicate a subtype of breast cancer that is eligible for specific treatments.

Scientists could also look at the architecture of the genome, or at how cell shapes change as they become cancerous and interact with other cells of the body. Another possible application is identifying proteins that are expressed specifically on the surface of cancer cells, allowing researchers to design immunotherapies that mark those cells for destruction by the patient’s immune system.

Boyden and his colleagues run training courses several times a month at MIT, where visitors can watch demonstrations of expansion microscopy, and they have made their protocols available on their website. They hope that many more people will begin using this approach to study a variety of diseases.

“Cancer biopsies are just the beginning,” Boyden says. “We have a new pipeline for taking clinical samples and expanding them, and we are finding that we can apply expansion to many different diseases. Expansion will enable computational pathology to take advantage of more information in a specimen than previously possible.”

Humayun Irshad, a research fellow at Harvard/BIDMC and an author of the study, agrees: “Expanded images result in more informative features, which in turn result in higher-performing classification models.”

Other authors include Harvard pathologist Astrid Weins, who helped oversee the kidney study; Fei Chen of MIT; and Andreea Stancu, Eun-Young Oh, Marcello DiStasio, Vanda Torous, Benjamin Glass, Isaac E. Stillman, and Stuart J. Schnitt of BIDMC/Harvard.

The research was funded, in part, by the New York Stem Cell Foundation Robertson Investigator Award, the National Institutes of Health Director’s Pioneer Award, the Department of Defense Multidisciplinary University Research Initiative, the Open Philanthropy Project, the Ludwig Center at Harvard, and Harvard Catalyst.

A Google map of the brain

At the start of the twentieth century, Santiago Ramón y Cajal’s drawings of brain cells under the microscope revealed a remarkable diversity of cell types within the brain. Through sketch after sketch, Cajal showed that the brain was not, as many believed, a web of self-similar material, but rather that it was composed of billions of cells of many different sizes, shapes, and interconnections.

Yet more than a hundred years later, we still do not know how many cell types make up the human brain. Despite decades of study, the challenge remains daunting, as the brain’s complexity has overwhelmed attempts to describe it systematically or to catalog its parts.

Now, however, this appears about to change, thanks to an explosion of new technical advances in areas ranging from DNA sequencing to microfluidics to computing and microscopy. For the first time, a parts list for the human brain appears to be within reach.

Why is this important? “Until we know all the cell types, we won’t fully understand how they are connected together,” explains McGovern Investigator Guoping Feng. “We know that the brain’s wiring is incredibly complicated, and that the connections are key to understanding how it works, but we don’t yet have the full picture. That’s what we are aiming for. It’s like making a Google map of the brain.”

Identifying the cell types is also important for understanding disease. As genetic risk factors for different disorders are identified, researchers need to know where they act within the brain, and which cell types and connections are disrupted as a result. “Once we know that, we can start to think about new therapeutic approaches,” says Feng, who is also an institute member of the Broad Institute, where he leads the neurobiology program at the Stanley Center for Psychiatric Disorders Research.

Drop by drop

In 2012, computational biologist Naomi Habib arrived from the Hebrew University of Jerusalem to join the labs of McGovern Investigator Feng Zhang and his collaborator Aviv Regev at the Broad Institute. Habib’s plan was to learn new RNA methods as they were emerging. “I wanted to use these powerful tools to understand this fascinating system that is our brain,” she says.

Her rationale was simple, at least in theory. All cells of an organism carry the same DNA instructions, but the instructions are read out differently in each cell type. Stretches of DNA corresponding to individual genes are copied, sometimes thousands of times, into RNA molecules that in turn direct the synthesis of proteins. Differences in which sequences get copied are what give cells their identities: brain cells express RNAs that encode brain proteins, while blood cells express different RNAs, and so on. A given cell can express thousands of genes, providing a molecular “fingerprint” for each cell type.

Analyzing these RNAs can provide a great deal of information about the brain, including potentially the identities of its constituent cell types. But doing this is not easy, because the different cell types are mixed together like salt and pepper within the brain. For many years, studying brain RNA meant grinding up the tissue—an approach that has been compared to studying smoothies to learn about fruit salad.

As methods improved, it became possible to study the tiny quantities of RNA contained within single cells. This opened the door to studying differences between individual cells, but it required painstaking manipulation of many samples, a slow and laborious process.

A breakthrough came in 2015, with the development of automated methods based on microfluidics. One of these, known as Drop-seq (droplet-based sequencing), was pioneered by Steve McCarroll at Harvard, in collaboration with Regev’s lab at Broad. In this method, individual cells are captured in tiny water droplets suspended in oil. Vast numbers of droplets are automatically pumped through tiny channels, where each undergoes its own separate sequencing reactions. By running multiple samples in parallel, the machines can process tens of thousands of cells and billions of sequences, within hours rather than weeks or months. The power of the method became clear when, in an experiment on mouse retina, the researchers were able to identify almost every cell type that had ever been described in the retina, effectively recapitulating decades of work in a single experiment.
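
In spirit (with made-up reads and barcodes; this is not the actual Drop-seq software), the downstream analysis groups sequencing reads by the barcode of the droplet they came from, recovering a per-cell expression profile:

```python
# Toy sketch of droplet-based sequencing analysis: each read carries the
# barcode of its droplet (i.e., its cell), so grouping reads by barcode
# yields per-cell gene counts. The reads below are invented for illustration.
from collections import Counter, defaultdict

# (cell_barcode, gene) pairs, as might come off the sequencer
reads = [("AAAC", "Rho"), ("AAAC", "Rho"), ("AAAC", "Gnat1"),
         ("TTGC", "Opn1sw"), ("TTGC", "Arr3"), ("TTGC", "Arr3")]

cells = defaultdict(Counter)
for barcode, gene in reads:
    cells[barcode][gene] += 1

for barcode, counts in sorted(cells.items()):
    print(barcode, dict(counts))
```

Clustering cells with similar count profiles is then what reveals the distinct cell types, as in the retina experiment described above.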

Drop-seq works well for many tissues, but Habib wanted to apply it to the adult brain, which posed a unique challenge. Mature neurons often bear elaborate branches that become intertwined like tree roots in a forest, making it impossible to separate individual cells without damage.

Nuclear option

So Habib turned to another idea. RNA is made in the nucleus before moving to the cytoplasm, and because nuclei are compact and robust it is easy to recover them intact in large numbers, even from difficult tissues such as brain. The amount of RNA contained in a single nucleus is tiny, and Habib didn’t know if it would be enough to be informative, but Zhang and Regev encouraged her to keep going. “You have to be optimistic,” she says. “You have to try.”

Fortunately, the experiment worked. In a paper with Zhang and Regev, she was able to isolate nuclei from newly formed neurons in the adult mouse hippocampus (a brain structure involved in memory), and by analyzing their RNA profiles individually she could order them in a series according to their age, revealing their developmental history from birth to maturity.

Now, after much further experimentation, Habib and her colleagues have managed to apply the droplet method to nuclei, making it possible for the first time to analyze huge numbers of cells from adult brain—at least ten times more than with previous methods.

This opens up many new avenues, including the study of human postmortem tissue, given that RNA in nuclei can survive for years in frozen samples. Habib is already starting to examine tissue taken at autopsy from patients with Alzheimer’s and other neurodegenerative diseases. “The neurons are degenerating, but the other cells around them could also be contributing to the degenerative process,” she says. “Now we have these tools, we can look at what happens during the progression of the disease.”

Computing cells

Once the sequencing is completed, the results are analyzed using sophisticated computational methods. When the results emerge, data from individual cells are visualized as colored dots, clustered on a graph according to their statistical similarities. But because the cells were dissociated at the start of the experiment, information about their appearance and origin within the brain is lost.

To find out how these abstract displays correspond to the visible cells of the brain, Habib teamed up with Yinqing Li, a former graduate student with Zhang who is now a postdoc in the lab of Guoping Feng. Li began with existing maps from the Allen Institute, a public repository with thousands of images showing expression patterns for individual genes within mouse brain. By comparing these maps with the molecular fingerprints from Habib’s nuclear RNA sequencing experiments, Li was able to make a map of where in the brain each cell was likely to have come from.

It was a good first step, but still not perfect. “What we really need,” he says, “is a method that allows us to see every RNA in individual cells. If we are studying a brain disease, we want to know which neurons are involved in the disease process, where they are, what they are connected to, and which special genes might be involved so that we can start thinking about how to design a drug that could alter the disease.”

Expanding horizons

So Li partnered with Asmamaw (Oz) Wassie, a graduate student in the lab of McGovern Investigator Ed Boyden, to tackle the problem. Wassie had previously studied bioengineering as an MIT undergraduate, where he had helped build an electronic “artificial nose” for detecting trace chemicals in air. With support from a prestigious Hertz Fellowship, he joined Boyden’s lab, where he is now working on the development of a method known as expansion microscopy.

In this method, a sample of tissue is embedded with a polymer that swells when water is added. The entire sample expands in all directions, allowing scientists to see fine details such as connections between neurons, using an ordinary microscope. Wassie recently helped develop a way to anchor RNA molecules to the polymer matrix, allowing them to be physically secured during the expansion process. Now, within the expanded samples he can see the individual molecules using a method called fluorescent in situ hybridization (FISH), in which each RNA appears as a glowing dot under the microscope. Currently, he can label only a handful of RNA types at once, but by using special sets of probes, applied sequentially, he thinks it will soon be possible to distinguish thousands of different RNA sequences.

“That will help us to see what each cell looks like, how they are connected to each other, and what RNAs they contain,” says Wassie. By combining this information with the RNA expression data generated by Li and Habib, it will be possible to reveal the organization and fine structure of complex brain areas and perhaps to identify new cell types that have not yet been recognized.

Looking ahead

Li plans to apply these methods to a brain structure known as the thalamic reticular nucleus (TRN) – a sheet of tissue, about ten neurons thick in mice, that sits on top of the thalamus and close to the cortex. The TRN is not well understood, but it is important for controlling sleep, attention and sensory processing, and it has caught the interest of Feng and other neuroscientists because it expresses a disproportionate number of genes implicated in disorders such as autism, attention deficit hyperactivity disorder, and intelligence deficits. Together with Joshua Levin’s group at Broad, Li has already used nuclear RNA sequencing to identify the cell types in the TRN, and he has begun to examine them within intact brain using the expansion techniques. “When you map these precise cell types back to the tissue, you can integrate the gene expression information with everything else, like electrophysiology, connectivity, morphology,” says Li. “Then we can start to ask what’s going wrong in disease.”

Meanwhile, Feng is already looking beyond the TRN, and planning how to scale the approach to other structures and eventually to the entire brain. He returns to the metaphor of a Google map. “Microscopic images are like satellite photos,” he says. “Now with expansion microscopy we can add another layer of information, like property boundaries and individual buildings. And knowing which RNAs are in each cell will be like seeing who lives in those buildings. I think this will completely change how we view the brain.”

A noninvasive method for deep brain stimulation

Delivering an electrical current to a part of the brain involved in movement control has proven successful in treating many Parkinson’s disease patients. This approach, known as deep brain stimulation, requires implanting electrodes in the brain — a complex procedure that carries some risk to the patient.

Now, MIT researchers, collaborating with investigators at Beth Israel Deaconess Medical Center (BIDMC) and the IT’IS Foundation, have come up with a way to stimulate regions deep within the brain using electrodes placed on the scalp. This approach could make deep brain stimulation noninvasive, less risky, less expensive, and more accessible to patients.

“Traditional deep brain stimulation requires opening the skull and implanting an electrode, which can have complications. Secondly, only a small number of people can do this kind of neurosurgery,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, and the senior author of the study, which appears in the June 1 issue of Cell.

Doctors also use deep brain stimulation to treat some patients with obsessive compulsive disorder, epilepsy, and depression, and are exploring the possibility of using it to treat other conditions such as autism. The new, noninvasive approach could make it easier to adapt deep brain stimulation to treat additional disorders, the researchers say.

“With the ability to stimulate brain structures noninvasively, we hope that we may help discover new targets for treating brain disorders,” says the paper’s lead author, Nir Grossman, a former Wellcome Trust-MIT postdoc working at MIT and BIDMC, who is now a research fellow at Imperial College London.

Deep locations

Electrodes for treating Parkinson’s disease are usually placed in the subthalamic nucleus, a lens-shaped structure located below the thalamus, deep within the brain. For many Parkinson’s patients, delivering electrical impulses in this brain region can improve symptoms, but the surgery to implant the electrodes carries risks, including brain hemorrhage and infection.

Other researchers have tried to noninvasively stimulate the brain using techniques such as transcranial magnetic stimulation (TMS), which is FDA-approved for treating depression. Since TMS is noninvasive, it has also been used in normal human subjects to study the basic science of cognition, emotion, sensation, and movement. However, using TMS to stimulate deep brain structures can also result in surface regions being strongly stimulated, resulting in modulation of multiple brain networks.

The MIT team devised a way to deliver electrical stimulation deep within the brain, via electrodes placed on the scalp, by taking advantage of a phenomenon known as temporal interference.

This strategy requires generating two high-frequency electrical currents using electrodes placed outside the brain. These currents oscillate too rapidly to drive neurons. However, they interfere with one another in such a way that where they intersect, deep in the brain, a small region of low-frequency current is generated inside neurons. This low-frequency current can be used to drive neurons’ electrical activity, while the high-frequency current passes through surrounding tissue with no effect.
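
The arithmetic behind the interference is simple: summing two sinusoids at nearby high frequencies yields a signal whose amplitude envelope oscillates at their difference frequency. A numerical sketch with illustrative frequencies (not the study’s actual parameters):

```python
# Temporal interference sketch: two kHz-range signals, each too fast for
# neurons to follow, sum to a carrier modulated at the 10 Hz difference
# frequency — slow enough to drive neural activity where they overlap.
import numpy as np

f1, f2 = 2000.0, 2010.0
t = np.linspace(0.0, 1.0, 200_000)
summed = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Trig identity: sin(a) + sin(b) = 2 * sin((a+b)/2) * cos((a-b)/2),
# i.e., a fast carrier times a slow envelope at |f1 - f2| Hz.
envelope = 2 * np.cos(np.pi * (f1 - f2) * t)
carrier = np.sin(np.pi * (f1 + f2) * t)
print(f"max identity error: {np.max(np.abs(summed - envelope * carrier)):.1e}")
print(f"envelope frequency: {abs(f1 - f2):.0f} Hz")
```

Moving the electrodes, or rebalancing the two currents, shifts where the envelope is strongest, which is how the stimulation site can be steered without surgery.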

By tuning the frequency of these currents and changing the number and location of the electrodes, the researchers can control the size and location of the brain tissue that receives the low-frequency stimulation. They can target locations deep within the brain without affecting any of the surrounding brain structures. They can also steer the location of stimulation, without moving the electrodes, by altering the currents. In this way, deep targets could be stimulated, both for therapeutic use and basic science investigations.

“You can go for deep targets and spare the overlying neurons, although the spatial resolution is not yet as good as that of deep brain stimulation,” says Boyden, who is a member of MIT’s Media Lab and McGovern Institute for Brain Research.

Targeted stimulation

Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory, and researchers in her lab tested this technique in mice and found that they could stimulate small regions deep within the brain, including the hippocampus. They were also able to shift the site of stimulation, allowing them to activate different parts of the motor cortex and prompt the mice to move their limbs, ears, or whiskers.

“We showed that we can very precisely target a brain region to elicit not just neuronal activation but behavioral responses,” says Tsai, who is an author of the paper. “I think it’s very exciting because Parkinson’s disease and other movement disorders seem to originate from a very particular region of the brain, and if you can target that, you have the potential to reverse it.”

Significantly, in the hippocampus experiments, the technique did not activate the neurons in the cortex, the region lying between the electrodes on the skull and the target deep inside the brain. The researchers also found no harmful effects in any part of the brain.

Last year, Tsai showed that using light to visually induce brain waves of a particular frequency could substantially reduce the beta amyloid plaques seen in Alzheimer’s disease in the brains of mice. She now plans to explore whether this type of electrical stimulation could offer a new way to generate the same type of beneficial brain waves.

Other authors of the paper are MIT research scientist David Bono; former MIT postdocs Suhasa Kodandaramaiah and Andrii Rudenko; MIT postdoc Nina Dedic; MIT grad student Ho-Jun Suk; Beth Israel Deaconess Medical Center and Harvard Medical School Professor Alvaro Pascual-Leone; and IT’IS Foundation researchers Antonino Cassara, Esra Neufeld, and Niels Kuster.

The research was funded in part by the Wellcome Trust, a National Institutes of Health Director’s Pioneer Award, an NIH Director’s Transformative Research Award, the New York Stem Cell Foundation Robertson Investigator Award, the MIT Center for Brains, Minds, and Machines, Jeremy and Joyce Wertheimer, Google, a National Science Foundation Career Award, the MIT Synthetic Intelligence Project, and Harvard Catalyst: The Harvard Clinical and Translational Science Center.

Making brain implants smaller could prolong their lifespan

Many diseases, including Parkinson’s disease, can be treated with electrical stimulation from an electrode implanted in the brain. However, the electrodes can produce scarring, which diminishes their effectiveness and can necessitate additional surgeries to replace them.

MIT researchers have now demonstrated that making these electrodes much smaller can essentially eliminate this scarring, potentially allowing the devices to remain in the brain for much longer.

“What we’re doing is changing the scale and making the procedure less invasive,” says Michael Cima, the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering, a member of MIT’s Koch Institute for Integrative Cancer Research, and the senior author of the study, which appears in the May 16 issue of Scientific Reports.

Cima and his colleagues are now designing brain implants that can not only deliver electrical stimulation but also record brain activity or deliver drugs to very targeted locations.

The paper’s lead author is former MIT graduate student Kevin Spencer. Other authors are former postdoc Jay Sy, graduate student Khalil Ramadi, Institute Professor Ann Graybiel, and David H. Koch Institute Professor Robert Langer.

Effects of size

Many Parkinson’s patients have benefited from treatment with low-frequency electrical current delivered to a part of the brain involved in movement control. The electrodes used for this deep brain stimulation are a few millimeters in diameter. After being implanted, they gradually generate scar tissue through the constant rubbing of the electrode against the surrounding brain tissue. This process, known as gliosis, contributes to the high failure rate of such devices: About half stop working within the first six months.

Previous studies have suggested that making the implants smaller or softer could reduce the amount of scarring, so the MIT team set out to measure the effects of both reducing the size of the implants and coating them with a soft polyethylene glycol (PEG) hydrogel.

The hydrogel coating was designed to have an elasticity very similar to that of the brain. The researchers could also control the thickness of the coating. They found that when coated electrodes were pushed into the brain, the soft coating would fall off, so they devised a way to apply the hydrogel and then dry it, so that it becomes a hard, thin film. After the electrode is inserted, the film soaks up water and becomes soft again.

In mice, the researchers tested both coated and uncoated glass fibers with varying diameters and found that there is a tradeoff between size and softness. Coated fibers produced much less scarring than uncoated fibers of the same diameter. However, as the electrode fibers became smaller, down to about 30 microns (0.03 millimeters) in diameter, the uncoated versions produced less scarring, because the coatings increase the diameter.

This suggests that a 30-micron, uncoated fiber is the optimal design for implantable devices in the brain.

“Before this paper, no one really knew the effects of size,” Cima says. “Softer is better, but not if it makes the electrode larger.”

New devices

The question now is whether fibers that are only 30 microns in diameter can be adapted for electrical stimulation, drug delivery, and recording electrical activity in the brain. Cima and his colleagues have had some initial success developing such devices.

“It’s one of those things that at first glance seems impossible. If you have 30-micron glass fibers, that’s slightly thicker than a piece of hair. But it is possible to do,” Cima says.

Such devices could potentially be useful for treating Parkinson’s disease or other neurological disorders. They could also be used to remove fluid from the brain to monitor whether treatments are having the intended effect, or to measure brain activity that might indicate when an epileptic seizure is about to occur.

The research was funded by the National Institutes of Health and MIT’s Institute for Soldier Nanotechnologies.

High-resolution imaging with conventional microscopes

MIT researchers have developed a way to make extremely high-resolution images of tissue samples, at a fraction of the cost of other techniques that offer similar resolution.

The new technique relies on expanding tissue before imaging it with a conventional light microscope. Two years ago, the MIT team showed that it was possible to expand tissue volumes 100-fold, resulting in an image resolution of about 60 nanometers. Now, the researchers have shown that expanding the tissue a second time before imaging can boost the resolution to about 25 nanometers.
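
Because expansions multiply along each linear dimension, the effective resolution is the diffraction limit divided by the total linear factor. Illustrative arithmetic only (the ~300 nm limit and the second-round factor below are assumed numbers, not figures from the paper):

```python
# Iterative expansion scaling sketch: each round multiplies the linear
# expansion factor, and resolution improves by the same factor.
diffraction_limit_nm = 300.0           # assumed typical diffraction limit
first_linear = 100.0 ** (1.0 / 3.0)    # ~4.6x linear from a 100x volume expansion
after_first = diffraction_limit_nm / first_linear

# A second expansion of the already-expanded gel multiplies the factor again;
# an assumed further ~2.5x linear gain brings ~65 nm down to the ~25 nm scale.
second_linear = 2.5
after_second = after_first / second_linear
print(f"after one expansion:  ~{after_first:.0f} nm")
print(f"after two expansions: ~{after_second:.0f} nm")
```

The same logic explains why a second round of expansion, rather than a better microscope, is what pushes the resolution toward the size of individual protein complexes.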

This level of resolution allows scientists to see, for example, the proteins that cluster together in complex patterns at brain synapses, helping neurons to communicate with each other. It could also help researchers to map neural circuits, says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT.

“We want to be able to trace the wiring of complete brain circuits,” says Boyden, the study’s senior author. “If you could reconstruct a complete brain circuit, maybe you could make a computational model of how it generates complex phenomena like decisions and emotions. Since you can map out the biomolecules that generate electrical pulses within cells and that exchange chemicals between cells, you could potentially model the dynamics of the brain.”

This approach could also be used to image other phenomena such as the interactions between cancer cells and immune cells, to detect pathogens without expensive equipment, and to map the cell types of the body.

Former MIT postdoc Jae-Byum Chang is the first author of the paper, which appears in the April 17 issue of Nature Methods.

Double expansion

To expand tissue samples, the researchers embed them in a dense, uniformly formed gel made of polyacrylate, a very absorbent material that’s also used in diapers. Before the gel is formed, the researchers label the cell proteins they want to image, using antibodies that bind to specific targets. These antibodies bear “barcodes” made of DNA, which in turn are attached to cross-linking molecules that bind to the polymers that make up the expandable gel. The researchers then break down the proteins that normally hold the tissue together, allowing the DNA barcodes to expand away from each other as the gel swells.

These enlarged samples can then be labeled with fluorescent probes that bind the DNA barcodes, and imaged with commercially available confocal microscopes, whose resolution is usually limited to hundreds of nanometers.

Using that approach, the researchers were previously able to achieve a resolution of about 60 nanometers. However, “individual biomolecules are much smaller than that, say 5 nanometers or even smaller,” Boyden says. “The original versions of expansion microscopy were useful for many scientific questions but couldn’t equal the performance of the highest-resolution imaging methods such as electron microscopy.”

In their original expansion microscopy study, the researchers found that they could expand the tissue more than 100-fold in volume by reducing the number of cross-linking molecules that hold the polymer in an orderly pattern. However, this made the tissue unstable.

“If you reduce the cross-linker density, the polymers no longer retain their organization during the expansion process,” says Boyden, who is a member of MIT’s Media Lab and McGovern Institute for Brain Research. “You lose the information.”

Instead, in their latest study, the researchers modified their technique so that after the first tissue expansion, they can create a new gel that swells the tissue a second time — an approach they call “iterative expansion.”

Mapping circuits

Using iterative expansion, the researchers were able to image tissues with a resolution of about 25 nanometers, which is similar to that achieved by high-resolution techniques such as stochastic optical reconstruction microscopy (STORM). However, expansion microscopy is much cheaper and simpler to perform because no specialized equipment or chemicals are required, Boyden says. The method is also much faster and thus compatible with large-scale, 3-D imaging.

The resolution of expansion microscopy does not yet match that of scanning electron microscopy (about 5 nanometers) or transmission electron microscopy (about 1 nanometer). However, electron microscopes are very expensive and not widely available, and with those microscopes, it is difficult for researchers to label specific proteins.

In the Nature Methods paper, the MIT team used iterative expansion to image synapses — the connections between neurons that allow them to communicate with each other. In their original expansion microscopy study, the researchers were able to image scaffolding proteins, which help to organize the hundreds of other proteins found in synapses. With the new, enhanced resolution, the researchers were also able to see finer-scale structures, such as the locations of neurotransmitter receptors on the surfaces of the “postsynaptic” cells on the receiving side of the synapse.

“My hope is that we can, in the coming years, really start to map out the organization of these scaffolding and signaling proteins at the synapse,” Boyden says.

Combining expansion microscopy with a new tool called temporal multiplexing should help to achieve that, he believes. Currently, only a limited number of colored probes can be used to image different molecules in a tissue sample. With temporal multiplexing, researchers can label one molecule with a fluorescent probe, take an image, and then wash the probe away. This can then be repeated many times, each time using the same colors to label different molecules.
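
The bookkeeping behind temporal multiplexing is simple: the number of distinguishable targets becomes the number of available colors times the number of label–image–wash rounds. A minimal sketch, using hypothetical target names:

```python
# Sketch of a temporal-multiplexing schedule: each round labels up to
# `dyes_per_round` molecules with distinct colors, images them, washes the
# probes away, then reuses the same colors for the next batch. Target
# names are hypothetical placeholders.
def imaging_rounds(targets, dyes_per_round):
    rounds = []
    for i in range(0, len(targets), dyes_per_round):
        rounds.append(targets[i:i + dyes_per_round])
    return rounds

targets = [f"protein_{i}" for i in range(10)]
schedule = imaging_rounds(targets, dyes_per_round=4)
# 10 targets with 4 colors -> 3 rounds of label/image/wash
```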

“By combining iterative expansion with temporal multiplexing, we could in principle have essentially infinite-color, nanoscale-resolution imaging over large 3-D volumes,” Boyden says. “Things are getting really exciting now that these different technologies may soon connect with each other.”

The researchers also hope to achieve a third round of expansion, which they believe could, in principle, enable resolution of about 5 nanometers. However, right now the resolution is limited by the size of the antibodies used to label molecules in the cell. These antibodies are about 10 to 20 nanometers long, so to get resolution below that, researchers would need to create smaller tags or expand the proteins away from each other first and then deliver the antibodies after expansion.

This study was funded by the National Institutes of Health Director’s Pioneer Award, the New York Stem Cell Foundation Robertson Award, the HHMI-Simons Faculty Scholars Award, and the Open Philanthropy Project.

Sensor traces dopamine released by single cells

MIT chemical engineers have developed an extremely sensitive detector that can track single cells’ secretion of dopamine, a brain chemical responsible for carrying messages involved in reward-motivated behavior, learning, and memory.

Using arrays of up to 20,000 tiny sensors, the researchers can monitor dopamine secretion of single neurons, allowing them to explore critical questions about dopamine dynamics. Until now, that has been very difficult to do.

“Now, in real-time, and with good spatial resolution, we can see exactly where dopamine is being released,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering and the senior author of a paper describing the research, which appears in the Proceedings of the National Academy of Sciences the week of Feb. 6.

Strano and his colleagues have already demonstrated that dopamine release occurs differently than scientists expected in a type of neural progenitor cell, helping to shed light on how dopamine may exert its effects in the brain.

The paper’s lead author is Sebastian Kruss, a former MIT postdoc who is now at Göttingen University, in Germany. Other authors are Daniel Salem and Barbara Lima, both MIT graduate students; Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences, as well as a member of the MIT Media Lab and the McGovern Institute for Brain Research; Lela Vukovic, an assistant professor of chemistry at the University of Texas at El Paso; and Emma Vander Ende, a graduate student at Northwestern University.

“A global effect”

Dopamine is a neurotransmitter that plays important roles in learning, memory, and feelings of reward, which reinforce positive experiences.

Neurotransmitters allow neurons to relay messages to nearby neurons through connections known as synapses. However, unlike most other neurotransmitters, dopamine can exert its effects beyond the synapse: Not all dopamine released into a synapse is taken up by the target cell, allowing some of the chemical to diffuse away and affect other nearby cells.

“It has a local effect, which controls the signaling through the neurons, but also it has a global effect,” Strano says. “If dopamine is in the region, it influences all the neurons nearby.”

Tracking this dopamine diffusion in the brain has proven difficult. Neuroscientists have tried using electrodes that are specialized to detect dopamine, but even using the smallest electrodes available, they can place only about 20 near any given cell.

“We’re at the infancy of really understanding how these packets of chemicals move and their directionality,” says Strano, who decided to take a different approach.

Strano’s lab has previously developed sensors made from arrays of carbon nanotubes — hollow, nanometer-thick cylinders made of carbon, which naturally fluoresce when exposed to laser light. By wrapping these tubes in different proteins or DNA strands, scientists can customize them to bind to different types of molecules.

The carbon nanotube sensors used in this study are coated with a DNA sequence that makes the sensors interact with dopamine. When dopamine binds to the carbon nanotubes, they fluoresce more brightly, allowing the researchers to see exactly where the dopamine was released. The researchers deposited more than 20,000 of these nanotubes on a glass slide, creating an array that detects any dopamine secreted by a cell placed on the slide.
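
In principle, reading out such an array reduces to asking which sensor’s fluorescence trace brightens first. The sketch below simulates that logic on synthetic data; the sensor indices, threshold, and noise level are invented for illustration, and the study’s actual analysis is more involved:

```python
import numpy as np

# Synthetic array readout: each row is one sensor's fluorescence trace,
# and the release site is taken to be the sensor whose signal crosses
# threshold earliest. All numbers here are illustrative.
rng = np.random.default_rng(0)

n_sensors, n_frames = 100, 50
traces = rng.normal(1.0, 0.02, size=(n_sensors, n_frames))  # baseline
traces[42, 20:] += 0.5  # sensor 42 brightens at frame 20 (simulated release)
traces[43, 25:] += 0.3  # a neighbor brightens later, as dopamine diffuses

threshold = 1.2
crossed = traces > threshold
# First frame at which each sensor crosses threshold (n_frames if never).
first_cross = np.where(crossed.any(axis=1), crossed.argmax(axis=1), n_frames)
release_site = int(first_cross.argmin())  # -> 42
```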

Dopamine diffusion

In the new PNAS study, the researchers used these dopamine sensors to explore a longstanding question about dopamine release in the brain: From which part of the cell is dopamine secreted?

To help answer that question, the researchers placed individual neural progenitor cells known as PC-12 cells onto the sensor arrays. PC-12 cells, which develop into neuron-like cells under the right conditions, have a starfish-like shape with several protrusions that resemble axons, which form synapses with other cells.

After stimulating the cells to release dopamine, the researchers found that certain dopamine sensors near the cells lit up immediately, while those farther away turned on later as the dopamine diffused away. Tracking those patterns over many seconds allowed the researchers to trace how dopamine spreads away from the cells.
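
The timing of that spread is consistent with simple diffusion, where the time to cover a distance r scales as r squared. A back-of-the-envelope estimate, assuming a literature-scale diffusion coefficient for dopamine in solution (not a value from this study):

```python
# Back-of-the-envelope diffusion timing: for free diffusion, a molecule
# takes roughly t = r^2 / (4D) to spread a distance r. The diffusion
# coefficient is an assumed order-of-magnitude value for dopamine.
D = 400.0  # um^2/s, assumed dopamine diffusion coefficient

def spread_time(r_um, D=D):
    return r_um ** 2 / (4 * D)

near = spread_time(5)   # ~0.016 s for a sensor 5 um from the release site
far = spread_time(50)   # ~1.6 s for a sensor 50 um away
```

The hundredfold difference between the two estimates explains why sensors near the cell light up almost immediately while distant ones turn on seconds later.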

Strano says one might expect to see that most of the dopamine would be released from the tips of the arms extending out from the cells. However, the researchers found that in fact more dopamine came from the sides of the arms.

“We have falsified the notion that dopamine should only be released at these regions that will eventually become the synapses,” Strano says. “This observation is counterintuitive, and it’s a new piece of information you can only obtain with a nanosensor array like this one.”

The team also showed that most of the dopamine traveled away from the cell, through protrusions extending in opposite directions. “Even though dopamine is not necessarily being released only at the tip of these protrusions, the direction of release is associated with them,” Salem says.

Other questions that could be explored using these sensors include how dopamine release is affected by the direction of input to the cell, and how the presence of nearby cells influences each cell’s dopamine release.

The research was funded by the National Science Foundation, the National Institutes of Health, a University of Illinois Center for the Physics of Living Cells Postdoctoral Fellowship, the German Research Foundation, and a Liebig Fellowship.

A radiation-free approach to imaging molecules in the brain

Scientists hoping to get a glimpse of molecules that control brain activity have devised a new probe that allows them to image these molecules without using any chemical or radioactive labels.

Currently the gold standard approach to imaging molecules in the brain is to tag them with radioactive probes. However, these probes offer low resolution and they can’t easily be used to watch dynamic events, says Alan Jasanoff, an MIT professor of biological engineering.

Jasanoff and his colleagues have developed new sensors consisting of proteins designed to detect a particular target, which causes them to dilate blood vessels in the immediate area. This produces a change in blood flow that can be imaged with magnetic resonance imaging (MRI) or other imaging techniques.

“This is an idea that enables us to detect molecules that are in the brain at biologically low levels, and to do that with these imaging agents or contrast agents that can ultimately be used in humans,” Jasanoff says. “We can also turn them on and off, and that’s really key to trying to detect dynamic processes in the brain.”

In a paper appearing in the Dec. 2 issue of Nature Communications, Jasanoff and his colleagues used these probes to detect enzymes called proteases, but their ultimate goal is to use them to monitor the activity of neurotransmitters, which act as chemical messengers between brain cells.

The paper’s lead authors are postdoc Mitul Desai and former MIT graduate student Adrian Slusarczyk. Recent MIT graduate Ashley Chapin and postdoc Mariya Barch are also authors of the paper.

Indirect imaging

To make their probes, the researchers modified a naturally occurring peptide called calcitonin gene-related peptide (CGRP), which is active primarily during migraines or inflammation. The researchers engineered the peptides so that they are trapped within a protein cage that keeps them from interacting with blood vessels. When the peptides encounter proteases in the brain, the proteases cut the cages open and the CGRP causes nearby blood vessels to dilate. Imaging this dilation with MRI allows the researchers to determine where the proteases were detected.

“These are molecules that aren’t visualized directly, but instead produce changes in the body that can then be visualized very effectively by imaging,” Jasanoff says.

Proteases are sometimes used as biomarkers to diagnose diseases such as cancer and Alzheimer’s disease. However, Jasanoff’s lab used them in this study mainly to demonstrate the validity of their approach. Now, they are working on adapting these imaging agents to monitor neurotransmitters, such as dopamine and serotonin, that are critical to cognition and processing emotions.

To do that, the researchers plan to modify the cages surrounding the CGRP so that they can be removed by interaction with a particular neurotransmitter.

“What we want to be able to do is detect levels of neurotransmitter that are 100-fold lower than what we’ve seen so far. We also want to be able to use far less of these molecular imaging agents in organisms. That’s one of the key hurdles to trying to bring this approach into people,” Jasanoff says.

Jeff Bulte, a professor of radiology and radiological science at the Johns Hopkins School of Medicine, described the technique as “original and innovative,” while adding that its safety and long-term physiological effects will require more study.

“It’s interesting that they have designed a reporter without using any kind of metal probe or contrast agent,” says Bulte, who was not involved in the research. “An MRI reporter that works really well is the holy grail in the field of molecular and cellular imaging.”

Tracking genes

Another possible application for this type of imaging is to engineer cells so that the gene for CGRP is turned on at the same time that a gene of interest is turned on. That way, scientists could use the CGRP-induced changes in blood flow to track which cells are expressing the target gene, which could help them determine the roles of those cells and genes in different behaviors. Jasanoff’s team demonstrated the feasibility of this approach by showing that implanted cells expressing CGRP could be recognized by imaging.

“Many behaviors involve turning on genes, and you could use this kind of approach to measure where and when the genes are turned on in different parts of the brain,” Jasanoff says.

His lab is also working on ways to deliver the peptides without injecting them, which would require finding a way to get them to pass through the blood-brain barrier. This barrier separates the brain from circulating blood and prevents large molecules from entering the brain.

The research was funded by the National Institutes of Health BRAIN Initiative, the MIT Simons Center for the Social Brain, and fellowships from the Boehringer Ingelheim Fonds and the Friends of the McGovern Institute.

Researchers create synthetic cells to isolate genetic circuits

Synthetic biology allows scientists to design genetic circuits that can be placed in cells, giving them new functions such as producing drugs or other useful molecules. However, as these circuits become more complex, the genetic components can interfere with each other, making it difficult to achieve more complicated functions.

MIT researchers have now demonstrated that these circuits can be isolated within individual synthetic “cells,” preventing them from disrupting each other. The researchers can also control communication between these cells, allowing for circuits or their products to be combined at specific times.

“It’s a way of having the power of multicomponent genetic cascades, along with the ability to build walls between them so they won’t have cross-talk. They won’t interfere with each other in the way they would if they were all put into a single cell or into a beaker,” says Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT. Boyden is also a member of MIT’s Media Lab and McGovern Institute for Brain Research, and an HHMI-Simons Faculty Scholar.

This approach could allow researchers to design circuits that manufacture complex products or act as sensors that respond to changes in their environment, among other applications.

Boyden is the senior author of a paper describing this technique in the Nov. 14 issue of Nature Chemistry. The paper’s lead authors are former MIT postdoc Kate Adamala, who is now an assistant professor at the University of Minnesota, and former MIT grad student Daniel Martin-Alarcon. Katriona Guthrie-Honea, a former MIT research assistant, is also an author of the paper.

Circuit control

The MIT team encapsulated their genetic circuits in droplets known as liposomes, which have a fatty membrane similar to cell membranes. These synthetic cells are not alive but are equipped with much of the cellular machinery necessary to read DNA and manufacture proteins.

By segregating circuits within their own liposomes, the researchers are able to create separate circuit subroutines that could not run in the same container at the same time, but can run in parallel to each other, communicating in controlled ways. This approach also allows scientists to repurpose the same genetic tools, including genes and transcription factors (proteins that turn genes on or off), to do different tasks within a network.

“If you separate circuits into two different liposomes, you could have one tool doing one job in one liposome, and the same tool doing a different job in the other liposome,” Martin-Alarcon says. “It expands the number of things that you can do with the same building blocks.”

This approach also enables communication between circuits from different types of organisms, such as bacteria and mammals.

As a demonstration, the researchers created a circuit that uses bacterial genetic parts to respond to a molecule known as theophylline, a drug similar to caffeine. When this molecule is present, it triggers another molecule known as doxycycline to leave the liposome and enter another set of liposomes containing a mammalian genetic circuit. In those liposomes, doxycycline activates a genetic cascade that produces luciferase, a protein that generates light.
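
The logic of that demonstration can be summarized as two composable stages coupled only by the membrane-crossing small molecule. The toy model below is purely illustrative — Boolean stand-ins for the genetic parts, not the paper’s actual circuits:

```python
# Toy model of the compartmentalized cascade: each "liposome" is a function
# from input signal to output signal, and the only coupling between them is
# the small molecule (doxycycline) that crosses between compartments.
def bacterial_liposome(theophylline_present: bool) -> bool:
    # Bacterial parts sense theophylline and release doxycycline.
    return theophylline_present

def mammalian_liposome(doxycycline_present: bool) -> bool:
    # Mammalian parts respond to doxycycline by expressing luciferase.
    return doxycycline_present

def hybrid_circuit(theophylline_present: bool) -> bool:
    doxycycline = bacterial_liposome(theophylline_present)
    return mammalian_liposome(doxycycline)  # True -> light output
```

Because each stage lives in its own compartment, either function could be swapped out or reused elsewhere without rewriting the other — which is the modularity argument the researchers make.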

Using a modified version of this approach, scientists could create circuits that work together to produce biological therapeutics such as antibodies, after sensing a particular molecule emitted by a brain cell or other cell.

“If you think of the bacterial circuit as encoding a computer program, and the mammalian circuit is encoding the factory, you could combine the computer code of the bacterial circuit and the factory of the mammalian circuit into a unique hybrid system,” Boyden says.

The researchers also designed liposomes that can fuse with each other in a controlled way. To do that, they programmed the cells with proteins called SNAREs, which insert themselves into the cell membrane. There, they bind to corresponding SNAREs found on surfaces of other liposomes, causing the synthetic cells to fuse. The timing of this fusion can be controlled to bring together liposomes that produce different molecules. When the cells fuse, these molecules are combined to generate a final product.

More modularity

The researchers believe this approach could be used for nearly any application that synthetic biologists are already working on. It could also allow scientists to pursue potentially useful applications that have been tried before but abandoned because the genetic circuits interfered with each other too much.

“The way that we wrote this paper was not oriented toward just one application,” Boyden says. “The basic question is: Can you make these circuits more modular? If you have everything mishmashed together in the cell, but you find out that the circuits are incompatible or toxic, then putting walls between those reactions and giving them the ability to communicate with each other could be very useful.”

Vincent Noireaux, an associate professor of physics at the University of Minnesota, described the MIT approach as “a rather novel method to learn how biological systems work.”

“Using cell-free expression has several advantages: Technically the work is reduced to cloning (nowadays fast and easy), we can link information processing to biological function like living cells do, and we work in isolation with no other gene expression occurring in the background,” says Noireaux, who was not involved in the research.

Another possible application for this approach is to help scientists explore how the earliest cells may have evolved billions of years ago. By engineering simple circuits into liposomes, researchers could study how cells might have evolved the ability to sense their environment, respond to stimuli, and reproduce.

“This system can be used to model the behavior and properties of the earliest organisms on Earth, as well as help establish the physical boundaries of Earth-type life for the search of life elsewhere in the solar system and beyond,” Adamala says.

A new player in appetite control

MIT neuroscientists have discovered that brain cells called glial cells play a critical role in controlling appetite and feeding behavior. In a study of mice, the researchers found that activating these cells stimulates overeating, and that when the cells are suppressed, appetite is also suppressed.

The findings could offer scientists a new target for developing drugs against obesity and other appetite-related disorders, the researchers say. The study is also the latest in recent years to implicate glial cells in important brain functions. Until about 10 years ago, glial cells were believed to play more of a supporting role for neurons.

“In the last few years, abnormal glial cell activities have been strongly implicated in neurodegenerative disorders. There is more and more evidence to point to the importance of glial cells in modulating neuronal function and in mediating brain disorders,” says Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience. Feng is also a member of MIT’s McGovern Institute for Brain Research and the Stanley Center for Psychiatric Research at the Broad Institute.

Feng is one of the senior authors of the study, which appears in the Oct. 18 edition of the journal eLife. The other senior author is Weiping Han, head of the Laboratory of Metabolic Medicine at the Singapore Bioimaging Consortium in Singapore. Naiyan Chen, a postdoc at the Singapore Bioimaging Consortium and the McGovern Institute, is the lead author.

Turning on appetite

It has long been known that the hypothalamus, an almond-sized structure located deep within the brain, controls appetite as well as energy expenditure, body temperature, and circadian rhythms including sleep cycles. While performing studies on glial cells in other parts of the brain, Chen noticed that the hypothalamus also appeared to have a lot of glial cell activity.

“I was very curious at that point what glial cells would be doing in the hypothalamus, since glial cells have been shown in other brain areas to have an influence on regulation of neuronal function,” she says.

Within the hypothalamus, scientists have identified two key groups of neurons that regulate appetite, known as AgRP neurons and POMC neurons. AgRP neurons stimulate feeding, while POMC neurons suppress appetite.

Until recently it has been difficult to study the role of glial cells in controlling appetite or any other brain function, because scientists haven’t developed many techniques for silencing or stimulating these cells, as they have for neurons. Glial cells, which make up about half of the cells in the brain, have many supporting roles, including cushioning neurons and helping them form connections with one another.

In this study, the research team used a new technique developed at the University of North Carolina to study a type of glial cell known as an astrocyte. Using this strategy, researchers can engineer specific cells to produce a surface receptor that binds to a chemical compound known as CNO, a derivative of clozapine. Then, when CNO is given, it activates the glial cells.

The MIT team found that turning on astrocyte activity with just a single dose of CNO had a significant effect on feeding behavior.

“When we gave the compound that specifically activated the receptors, we saw a robust increase in feeding,” Chen says. “Mice are not known to eat very much in the daytime, but when we gave drugs to these animals that express a particular receptor, they were eating a lot.”

The researchers also found that in the short term (three days), the mice did not gain extra weight, even though they were eating more.

“This raises the possibility that glial cells may also be modulating neurons that control energy expenditures, to compensate for the increased food intake,” Chen says. “They might have multiple neuronal partners and modulate multiple energy homeostasis functions all at the same time.”

When the researchers silenced activity in the astrocytes, they found that the mice ate less than normal.

Suzanne Dickson, a professor of neuroendocrinology at the University of Gothenburg in Sweden, described the study as part of a “paradigm shift” toward the idea that glial cells play a more active role than previously believed.

“We tend to think of glial cells as providing a support network for neuronal processes and that their activation is also important in certain forms of brain trauma or inflammation,” says Dickson, who was not involved in the research. “This study adds to the emerging evidence base that glial cells may also exert specific effects to control nerve cell function in normal physiology.”

Unknown interactions

Still unknown is how the astrocytes exert their effects on neurons. Some recent studies have suggested that glial cells can secrete chemical messengers such as glutamate and ATP; if so, these “gliotransmitters” could influence neuron activity.

Another hypothesis is that instead of secreting chemicals, astrocytes exert their effects by controlling the uptake of neurotransmitters from the space surrounding neurons, thereby affecting neuron activity indirectly.

Feng now plans to develop new research tools that could help scientists learn more about astrocyte-neuron interactions and how astrocytes contribute to modulation of appetite and feeding. He also hopes to learn more about whether there are different types of astrocytes that may contribute differently to feeding behavior, especially abnormal behavior.

“We really know very little about how astrocytes contribute to the modulation of appetite, eating, and metabolism,” he says. “In the future, dissecting out these functional differences will be critical for our understanding of these disorders.”

Finding a way in

Our perception of the world arises within the brain, based on sensory information that is sometimes ambiguous, allowing more than one interpretation. Familiar demonstrations of this point include the famous Necker cube and the “duck-rabbit” drawing, in which two different interpretations flip back and forth over time.

Another example is binocular rivalry, in which the two eyes are presented with different images that are perceived in alternation. Several years ago, this phenomenon caught the eye of Caroline Robertson, who is now a Harvard Fellow working in the lab of McGovern Investigator Nancy Kanwisher. Back when she was a graduate student at Cambridge University, Robertson realized that binocular rivalry might be used to probe the basis of autism, among the most mysterious of all brain disorders.

Robertson’s idea was based on the hypothesis that autism involves an imbalance between excitation and inhibition within the brain. Although widely supported by indirect evidence, this has been very difficult to test directly in human patients. Robertson realized that binocular rivalry might provide a way to perform such a test. The perceptual switches that occur during rivalry are thought to involve competition between different groups of neurons in the visual cortex, each group reinforcing its own interpretation via excitatory connections while suppressing the alternative interpretation through inhibitory connections. Thus, if the balance is altered in the brains of people with autism, the frequency of switching might also be different, providing a simple and easily measurable marker of the disease state.
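
That reasoning can be made concrete with a textbook rivalry model: two populations, one per eye, that inhibit each other, each with slow adaptation that fatigues the dominant population and produces alternation. This is a generic illustration of the excitation/inhibition argument, not the model used in the study, and all parameters are invented:

```python
import numpy as np

# Minimal mutual-inhibition-plus-adaptation rivalry model (illustrative).
# Returns the number of dominance switches and the fraction of time spent
# in "mixed perception" (neither population clearly dominant).
def simulate_rivalry(w_inh, T=200.0, dt=0.01):
    tau, tau_a, g, I = 0.02, 1.0, 2.0, 1.0  # fast rates, slow adaptation
    r = np.array([0.1, 0.0])                # firing rates, one per eye
    a = np.zeros(2)                         # adaptation variables
    dominant, switches, margin = 0, 0, 0.1
    mixed_steps, n_steps = 0, int(T / dt)
    for _ in range(n_steps):
        drive = np.maximum(I - w_inh * r[::-1] - g * a, 0.0)
        r = r + dt / tau * (-r + drive)
        a = a + dt / tau_a * (-a + r)
        delta = r[0] - r[1]
        if abs(delta) < margin:
            mixed_steps += 1                  # mixed perception
        elif (delta > 0) != (dominant == 0):  # clear flip of dominance
            dominant, switches = (0 if delta > 0 else 1), switches + 1
    return switches, mixed_steps / n_steps

strong_switches, strong_mixed = simulate_rivalry(w_inh=2.0)
weak_switches, weak_mixed = simulate_rivalry(w_inh=1.0)
```

In this sketch, halving the inhibitory weight largely abolishes clean alternations and leaves the network in a prolonged mixed state — qualitatively the pattern of slower switching and longer mixed perception described below for individuals with autism.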

To test this idea, Robertson recruited adults with and without autism, and presented them with two distinct and differently colored images in each eye. As expected, their perceptions switched back and forth between the two images, with short periods of mixed perception in between. This was true for both groups, but when she measured the timing of these switches, Robertson found that individuals with autism do indeed see the world in a measurably different way than people without the disorder. Individuals with autism cycle between the left and right images more slowly, with the intervening periods of mixed perception lasting longer than in people without autism. The more severe their autistic symptoms, as determined by a standard clinical behavioral evaluation, the greater the difference.

Robertson had found a marker for autism that is more objective than current methods that involve one person assessing the behavior of another. The measure is immediate and relies on brain activity that happens automatically, without people thinking about it. “Sensation is a very simple place to probe,” she says.

A top-down approach

When she arrived in Kanwisher’s lab, Robertson wanted to use brain imaging to probe the basis for the perceptual phenomenon that she had discovered. With Kanwisher’s encouragement, she began by repeating the behavioral experiment with a new group of subjects, to check that her previous results were not a fluke. Having confirmed that the finding was real, she then scanned the subjects using an imaging method called magnetic resonance spectroscopy (MRS), in which an MRI scanner is reprogrammed to measure concentrations of neurotransmitters and other chemicals in the brain. Kanwisher had never used MRS before, but when Robertson proposed the experiment, she was happy to try it. “Nancy’s the kind of mentor who could support the idea of using a new technique and guide me to approach it rigorously,” says Robertson.

Robertson scanned each subject’s brain to measure the amounts of two key neurotransmitters: glutamate, which is the main excitatory transmitter in the brain, and GABA, which is the main source of inhibition. When she compared the brain chemistry to the behavioral results in the binocular rivalry task, she saw something intriguing and unexpected. In people without autism, the amount of GABA in the visual cortex was correlated with the strength of the suppression, consistent with the idea that GABA enables signals from one eye to inhibit those from the other eye. But surprisingly, there was no such correlation in the autistic individuals—suggesting that GABA was somehow unable to exert its normal suppressive effect. It isn’t yet clear exactly what is going wrong in the brains of these subjects, but it’s an early flag, says Robertson. “The next step is figuring out which part of the pathway is disrupted.”

A bottom-up approach

Robertson’s approach starts from the top-down, working backward from a measurable behavior to look for brain differences, but it isn’t the only way in. Another approach is to start with genes that are linked to autism in humans, and to understand how they affect neurons and brain circuits. This is the bottom-up approach of McGovern Investigator Guoping Feng, who studies a gene called Shank3 that codes for a protein that helps build synapses, the connections through which neurons send signals to each other. Several years ago Feng knocked out Shank3 in mice, and found that the mice exhibited behaviors reminiscent of human autism, including repetitive grooming, anxiety, and impaired social interaction and motor control.

These earlier studies involved a variety of different mutations that disabled the Shank3 gene. But when postdoc Yang Zhou joined Feng’s lab, he brought a new perspective. Zhou had come from a medical background and wanted to do an experiment more directly connected to human disease. So he suggested making a mouse version of a Shank3 mutation seen in human patients, and testing its effects.

Zhou’s experiment would require precise editing of the mouse Shank3 gene, previously a difficult and time-consuming task. But help was at hand, in the form of a collaboration with McGovern Investigator Feng Zhang, a pioneer in the development of genome-editing methods.

Using Zhang’s techniques, Zhou was able to generate mice with two different mutations: one that had been linked to human autism, and another that had been discovered in a few patients with schizophrenia.

The researchers found that mice with the autism-related mutation exhibited behavioral changes at a young age that paralleled behaviors seen in children with autism. They also found early changes in synapses within a brain region called the striatum. In contrast, mice with the schizophrenia-related mutation appeared normal until adolescence, and then began to exhibit changes in behavior, as well as changes in the prefrontal cortex, a brain region that is implicated in human schizophrenia. “The consequences of the two different Shank3 mutations were quite different in certain aspects, which was very surprising to us,” says Zhou.

The fact that different mutations in just one gene can produce such different results illustrates exactly how complex these neuropsychiatric disorders can be. “Not only do we need to study different genes, but we also have to understand different mutations and which brain regions have what defects,” says Feng, who received funding from the Poitras Center for Affective Disorders Research and the Simons Center for the Social Brain. Robertson and Kanwisher were also supported by the Simons Center.

Surprising plasticity

The brain alterations that lead to autism are thought to arise early in development, long before the condition is diagnosed, raising concerns that it may be difficult to reverse the effects once the damage is done. With the Shank3 knockout mice, Feng and his team were able to approach this question in a new way, asking what would happen if the missing gene were to be restored in adulthood.

To find the answer, lab members Yuan Mei and Patricia Monteiro, along with Zhou, studied another strain of mice, in which the Shank3 gene was switched off but could be reactivated at any time by adding a drug to their diet. When adult mice were tested six weeks after the gene was switched back on, they no longer showed repetitive grooming behaviors, and they also showed normal levels of social interaction with other mice, despite having grown up without a functioning Shank3 gene. Examination of their brains confirmed that many of the synaptic alterations were also rescued when the gene was restored.

Not every symptom was reversed by this treatment; even after six weeks or more of restored Shank3 expression, the mice continued to show heightened anxiety and impaired motor control. But even these deficits could be prevented if the Shank3 gene was restored earlier in life, soon after birth.

The results are encouraging because they indicate a surprising degree of brain plasticity, persisting into adulthood. If the results can be extrapolated to human patients, they suggest that even in adulthood, autism may be at least partially reversible if the right treatment can be found. “This shows us the possibility,” says Zhou. “If we could somehow put back the gene in patients who are missing it, it could help improve their life quality.”

Converging paths

Robertson and Feng are approaching the challenge of autism from different starting points, but already there are signs of convergence. Feng is finding early signs that his Shank3 mutant mice may have an altered balance of inhibitory and excitatory circuits, consistent with what Robertson and Kanwisher have found in humans.

Feng is continuing to study these mice, and he also hopes to study the effects of a similar mutation in non-human primates, whose brains and behaviors are more similar to those of humans than rodents. Robertson, meanwhile, is planning to establish a version of the binocular rivalry test in animal models, where it is possible to alter the balance between inhibition and excitation experimentally (for example, via a genetic mutation or a drug treatment). If this leads to changes in binocular rivalry, it would strongly support the link to the perceptual changes seen in humans.

One challenge, says Robertson, will be to develop new methods to measure the perceptions of mice and other animals. “The mice can’t tell us what they are seeing,” she says. “But it would also be useful in humans, because it would allow us to study young children and patients who are non-verbal.”

A multi-pronged approach

The imbalance hypothesis is a promising lead, but no single explanation is likely to encompass all of autism, according to McGovern director Bob Desimone. “Autism is a notoriously heterogeneous condition,” he explains. “We need to try multiple approaches in order to maximize the chance of success.”

McGovern researchers are doing exactly that, with projects underway that range from scanning children to developing new molecular and microscopic methods for examining brain changes in animal disease models. Although genetic studies provide some of the strongest clues, Desimone notes that there is also evidence for environmental contributions to autism and other brain disorders. “One that’s especially interesting to us is maternal infection and inflammation, which in mice at least can affect brain development in ways we’re only beginning to understand.”

The ultimate goal, says Desimone, is to connect the dots and to understand how these diverse human risk factors affect brain function. “Ultimately, we want to know what these different pathways have in common,” he says. “Then we can come up with rational strategies for the development of new treatments.”