Biologists discover function of gene linked to familial ALS

MIT biologists have discovered a function of a gene that is believed to account for up to 40 percent of all familial cases of amyotrophic lateral sclerosis (ALS). Studies of ALS patients have shown that an abnormally expanded stretch of DNA within this gene can cause the disease.

In a study of the microscopic worm Caenorhabditis elegans, the researchers found that the gene has a key role in helping cells to remove waste products via structures known as lysosomes. When the gene is mutated, these unwanted substances build up inside cells. The researchers believe that if this also happens in neurons of human ALS patients, it could account for some of those patients’ symptoms.

“Our studies indicate what happens when the activities of such a gene are inhibited — defects in lysosomal function. Certain features of ALS are consistent with their being caused by defects in lysosomal function, such as inflammation,” says H. Robert Horvitz, the David H. Koch Professor of Biology at MIT, a member of the McGovern Institute for Brain Research and the Koch Institute for Integrative Cancer Research, and the senior author of the study.

Mutations in this gene, known as C9orf72, have also been linked to another neurodegenerative brain disorder known as frontotemporal dementia (FTD), which is estimated to affect about 60,000 people in the United States.

“ALS and FTD are now thought to be aspects of the same disease, with different presentations. There are genes that when mutated cause only ALS, and others that cause only FTD, but there are a number of other genes in which mutations can cause either ALS or FTD or a mixture of the two,” says Anna Corrionero, an MIT postdoc and the lead author of the paper, which appears in the May 3 issue of the journal Current Biology.

Genetic link

Scientists have identified dozens of genes linked to familial ALS, which occurs when two or more family members suffer from the disease. Doctors believe that genetics may also be a factor in nonfamilial cases of the disease, which are much more common, accounting for 90 percent of cases.

Of all ALS-linked mutations identified so far, the C9orf72 mutation is the most prevalent, and it is also found in about 25 percent of frontotemporal dementia patients. The MIT team set out to study the gene’s function in C. elegans, which has an equivalent gene known as alfa-1.

In studies of worms that lack alfa-1, the researchers discovered that defects became apparent early in embryonic development. C. elegans embryos have a yolk that helps to sustain them before they hatch, and in embryos missing alfa-1, the researchers found “blobs” of yolk floating in the fluid surrounding the embryos.

This led the researchers to discover that the gene mutation impairs the lysosomal degradation of yolk once it is absorbed into cells. Lysosomes, which also remove cellular waste products, are cell structures that carry enzymes capable of breaking down many kinds of molecules.

When lysosomes degrade their contents — such as yolk — they are reformed into tubular structures that split, after which they are able to degrade other materials. The MIT team found that in cells with the alfa-1 mutation and impaired lysosomal degradation, lysosomes were unable to reform and could not be used again, disrupting the cell’s waste removal process.

“It seems that lysosomes do not reform as they should, and material accumulates in the cells,” Corrionero says.

For C. elegans embryos, that meant that they could not properly absorb the nutrients found in yolk, which made it harder for them to survive under starvation conditions. The embryos that did survive appeared to be normal, the researchers say.

Robert Brown, chair of the Department of Neurology at the University of Massachusetts Medical School, describes the study as a major contribution to scientists’ understanding of the normal function of the C9orf72 gene.

“They used the power of worm genetics to dissect very fully the stages of vesicle maturation at which this gene seems to play a major role,” says Brown, who was not involved in the study.

Neuronal effects

The researchers were able to partially reverse the effects of alfa-1 loss in the C. elegans embryos by expressing the human protein encoded by the C9orf72 gene. “This suggests that the worm and human proteins are performing the same molecular function,” Corrionero says.

If loss of C9orf72 affects lysosome function in human neurons, it could lead to a slow, gradual buildup of waste products in those cells. ALS usually affects cells of the motor cortex, which controls movement, and motor neurons in the spinal cord, while frontotemporal dementia affects the frontal areas of the brain’s cortex.

“If you cannot degrade things properly in cells that live for very long periods of time, like neurons, that might well affect the survival of the cells and lead to disease,” Corrionero says.

Many pharmaceutical companies are now researching drugs that would block the expression of the mutant C9orf72. The new study suggests certain possible side effects to watch for in studies of such drugs.

“If you generate drugs that decrease C9orf72 expression, you might cause problems in lysosomal homeostasis,” Corrionero says. “In developing any drug, you have to be careful to watch for possible side effects. Our observations suggest some things to look for in studying drugs that inhibit C9orf72 in ALS/FTD patients.”

The research was funded by an EMBO postdoctoral fellowship, an ALS Therapy Alliance grant, a gift from Rose and Douglas Barnard ’79 to the McGovern Institute, and a gift from the Halis Family Foundation to the MIT Aging Brain Initiative.

Viral tool traces long-term neuron activity

For the past decade, neuroscientists have been using a modified version of the rabies virus to label neurons and trace the connections between them. Although this technique has proven very useful, it has one major drawback: The virus is toxic to cells and can’t be used for studies longer than about two weeks.

Researchers at MIT and the Allen Institute for Brain Science have now developed a new version of this virus that stops replicating once it infects a cell, allowing it to deliver its genetic cargo without harming the cell. Using this technique, scientists should be able to study the infected neurons for several months, enabling longer-term studies of neuron functions and connections.

“With the first-generation vectors, the virus is replicating like crazy in the infected neurons, and that’s not good for them,” says Ian Wickersham, a principal research scientist at MIT’s McGovern Institute for Brain Research and the senior author of the new study. “With the second generation, infected cells look normal and act normal for at least four months — which was as long as we tracked them — and probably for the lifetime of the animal.”

Soumya Chatterjee of the Allen Institute is the lead author of the paper, which appears in the March 5 issue of Nature Neuroscience.

Viral tracing

Rabies viruses are well-suited for tracing neural connections because they have evolved to spread from neuron to neuron through junctions known as synapses. The viruses can also spread from the terminals of axons back to the cell body of the same neuron. Neuroscientists can engineer the viruses to carry genes for fluorescent proteins, which are useful for imaging, or for light-sensitive proteins that can be used to manipulate neuron activity.

In 2007, Wickersham demonstrated that a modified version of the rabies virus could be used to trace synaptic connections only between directly connected neurons. Before that, researchers had been using the rabies virus for similar studies, but they were unable to keep it from spreading throughout the entire brain.

By deleting one of the virus’ five genes, which codes for a glycoprotein normally found on the surface of infected cells, Wickersham was able to create a version that can only spread to neurons in direct contact with the initially infected cell. This 2007 modification enabled scientists to perform “monosynaptic tracing,” a technique that allows them to identify connections between the infected neuron and any neuron that provides input to it.

This first generation of the modified rabies virus is also used for a related technique known as retrograde targeting, in which the virus can be injected into a cluster of axon terminals and then travel back to the cell bodies of those axons. This can help researchers discover the location of neurons that send impulses to the site of the virus injection.

Researchers at MIT have used retrograde targeting to identify populations of neurons of the basolateral amygdala that project to either the nucleus accumbens or the central medial amygdala. In that type of study, researchers can deliver optogenetic proteins that allow them to manipulate the activity of each population of cells. By selectively stimulating or shutting off these two separate cell populations, researchers can determine their functions.

Reduced toxicity

To create the second-generation version of this viral tool, Wickersham and his colleagues deleted the gene for the polymerase enzyme, which is necessary for transcribing viral genes. Without this gene, the virus becomes less harmful and infected cells can survive much longer. In the new study, the researchers found that neurons were still functioning normally for up to four months after infection.

“The second-generation virus enters a cell with its own few copies of the polymerase protein and is able to start transcribing its genes, including the transgene that we put into it. But then because it’s not able to make more copies of the polymerase, it doesn’t have this exponential takeover of the cell, and in practice it seems to be totally nontoxic,” Wickersham says.

The lack of polymerase also greatly reduces the expression of whichever gene the researchers engineer into the virus, so they need to employ a little extra genetic trickery to achieve their desired outcome. Instead of having the virus deliver a gene for a fluorescent or optogenetic protein, they engineer it to deliver a gene for an enzyme called Cre recombinase, which can delete target DNA sequences in the host cell’s genome.

This virus can then be used to study neurons in mice whose genomes have been engineered to include a gene that is turned on when the recombinase cuts out a small segment of DNA. Only a small amount of recombinase enzyme is needed to turn on the target gene, which could code for a fluorescent protein or another type of labeling molecule. The second-generation viruses can also work in regular mice if the researchers simultaneously inject another virus carrying a recombinase-activated gene for a fluorescent protein.

The new paper shows that the second-generation virus works well for retrograde labeling, though not for tracing synapses between cells; however, the researchers have now begun adapting it for monosynaptic tracing as well.

The research was funded by the National Institute of Mental Health, the National Institute on Aging, and the National Eye Institute.

Seeing the brain’s electrical activity

Neurons in the brain communicate via rapid electrical impulses that allow the brain to coordinate behavior, sensation, thoughts, and emotion. Scientists who want to study this electrical activity usually measure these signals with electrodes inserted into the brain, a task that is notoriously difficult and time-consuming.

MIT researchers have now come up with a completely different approach to measuring electrical activity in the brain, which they believe will prove much easier and more informative. They have developed a light-sensitive protein that can be embedded into neuron membranes, where it emits a fluorescent signal that indicates how much voltage a particular cell is experiencing. This could allow scientists to study how neurons behave, millisecond by millisecond, as the brain performs a particular function.

“If you put an electrode in the brain, it’s like trying to understand a phone conversation by hearing only one person talk,” says Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT. “Now we can record the neural activity of many cells in a neural circuit and hear them as they talk to each other.”

Boyden, who is also a member of MIT’s Media Lab, McGovern Institute for Brain Research, and Koch Institute for Integrative Cancer Research, and an HHMI-Simons Faculty Scholar, is the senior author of the study, which appears in the Feb. 26 issue of Nature Chemical Biology. The paper’s lead authors are MIT postdocs Kiryl Piatkevich and Erica Jung.

Imaging voltage

For the past two decades, scientists have sought a way to monitor electrical activity in the brain through imaging instead of recording with electrodes. Finding fluorescent molecules that can be used for this kind of imaging has been difficult; not only do the proteins have to be very sensitive to changes in voltage, they must also respond quickly and be resistant to photobleaching (fading that can be caused by exposure to light).

Boyden and his colleagues came up with a new strategy for finding a molecule that would fulfill everything on this wish list: They built a robot that could screen millions of proteins, generated through a process called directed protein evolution, for the traits they wanted.

“You take a gene, then you make millions and millions of mutant genes, and finally you pick the ones that work the best,” Boyden says. “That’s the way that evolution works in nature, but now we’re doing it in the lab with robots so we can pick out the genes with the properties we want.”

The researchers made 1.5 million mutated versions of a light-sensitive protein called QuasAr2, which was previously engineered by Adam Cohen’s lab at Harvard University. (That work, in turn, was based on the molecule Arch, which the Boyden lab reported in 2010.) The researchers put each of those genes into mammalian cells (one mutant per cell), then grew the cells in lab dishes and used an automated microscope to take pictures of the cells. The robot was able to identify cells with proteins that met the criteria the researchers were looking for, the most important being the protein’s location within the cell and its brightness.

The research team then selected five of the best candidates and did another round of mutation, generating 8 million new candidates. The robot picked out the seven best of these, which the researchers then narrowed down to one top performer, which they called Archon1.
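
Conceptually, the robot’s selection step is a ranking problem: score each imaged cell on the screening criteria described above, here localization within the cell and brightness, and carry the top scorers into the next round of mutagenesis. The Python sketch below illustrates only that winnowing logic; the class, feature names, weights, and values are hypothetical stand-ins, not the actual screening software.

```python
# Hypothetical sketch of the selection logic in a robotic directed-evolution
# screen: rank imaged mutant cells by membrane localization and brightness,
# then keep the best candidates for the next round. Names, weights, and
# numbers are illustrative, not the study's real pipeline.
from dataclasses import dataclass

@dataclass
class CandidateCell:
    mutant_id: str
    membrane_localization: float  # fraction of fluorescence at the membrane (0-1)
    brightness: float             # mean fluorescence intensity (arbitrary units)

def screen(cells, n_keep):
    """Return the n_keep highest-scoring candidates."""
    def score(cell):
        # Simple weighted combination; a real screen would use many more criteria.
        return 0.6 * cell.membrane_localization + 0.4 * (cell.brightness / 1000.0)
    return sorted(cells, key=score, reverse=True)[:n_keep]

# Example: pick the best mutants from a (tiny) mock library.
library = [
    CandidateCell("mut_001", 0.85, 820.0),
    CandidateCell("mut_002", 0.40, 950.0),
    CandidateCell("mut_003", 0.92, 610.0),
]
print([c.mutant_id for c in screen(library, n_keep=2)])
```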

Mapping the brain

A key feature of Archon1 is that once the gene is delivered into a cell, the Archon1 protein embeds itself into the cell membrane, which is the best place to obtain an accurate measurement of a cell’s voltage.

Using this protein, the researchers were able to measure electrical activity in mouse brain tissue, as well as in brain cells of zebrafish larvae and the worm Caenorhabditis elegans. The latter two organisms are transparent, so it is easy to expose them to light and image the resulting fluorescence. When the cells are exposed to a certain wavelength of reddish-orange light, the protein sensor emits a longer wavelength of red light, and the brightness of the light corresponds to the voltage of that cell at a given moment in time.

The researchers also showed that Archon1 can be used in conjunction with light-sensitive proteins that are commonly used to silence or stimulate neuron activity — these are known as optogenetic proteins — as long as those proteins respond to colors other than red. In experiments with C. elegans, the researchers demonstrated that they could stimulate one neuron using blue light and then use Archon1 to measure the resulting effect in neurons that receive input from that cell.

Cohen, the Harvard professor who developed the predecessor to Archon1, says the new MIT protein brings scientists closer to the goal of imaging millisecond-timescale electrical activity in live brains.

“Traditionally, it has been excruciatingly labor-intensive to engineer fluorescent voltage indicators, because each mutant had to be cloned individually and then tested through a slow, manual patch-clamp electrophysiology measurement. The Boyden lab developed a very clever high-throughput screening approach to this problem,” says Cohen, who was not involved in this study. “Their new reporter looks really great in fish and worms and in brain slices. I’m eager to try it in my lab.”

The researchers are now working on using this technology to measure brain activity in mice as they perform various tasks, which Boyden believes should allow them to map neural circuits and discover how they produce specific behaviors.

“We will be able to watch a neural computation happen,” he says. “Over the next five years or so we’re going to try to solve some small brain circuits completely. Such results might take a step toward understanding what a thought or a feeling actually is.”

The research was funded by the HHMI-Simons Faculty Scholars Program, the IET Harvey Prize, the MIT Media Lab, the New York Stem Cell Foundation Robertson Award, the Open Philanthropy Project, John Doerr, the Human Frontier Science Program, the Department of Defense, the National Science Foundation, and the National Institutes of Health, including an NIH Director’s Pioneer Award.

Study reveals molecular mechanisms of memory formation

MIT neuroscientists have uncovered a cellular pathway that allows specific synapses to become stronger during memory formation. The findings provide the first glimpse of the molecular mechanism by which long-term memories are encoded in a region of the hippocampus called CA3.

The researchers found that a protein called Npas4, previously identified as a master controller of gene expression triggered by neuronal activity, controls the strength of connections between neurons in CA3 and those in another part of the hippocampus called the dentate gyrus. Without Npas4, long-term memories cannot form.

“Our study identifies an experience-dependent synaptic mechanism for memory encoding in CA3, and provides the first evidence for a molecular pathway that selectively controls it,” says Yingxi Lin, an associate professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

Lin is the senior author of the study, which appears in the Feb. 8 issue of Neuron. The paper’s lead author is McGovern Institute research scientist Feng-Ju (Eddie) Weng.

Synaptic strength

Neuroscientists have long known that the brain encodes memories by altering the strength of synapses, or connections between neurons. This requires interactions of many proteins found in both presynaptic neurons, which send information about an event, and postsynaptic neurons, which receive the information.

Neurons in the CA3 region play a critical role in the formation of contextual memories, which are memories that link an event with the location where it took place, or with other contextual information such as timing or emotions. These neurons receive synaptic inputs from three different pathways, and scientists have hypothesized that one of these inputs, from the dentate gyrus, is critical for encoding new contextual memories. However, the mechanism of how this information is encoded was not known.

In a study published in 2011, Lin and colleagues found that Npas4, a gene that is turned on immediately following new experiences, appears to act as a master controller of the program of gene expression required for long-term memory formation. They also found that Npas4 is most active in the CA3 region of the hippocampus during learning. This activity was already known to be required for fast contextual learning, such as is required during a type of task known as contextual fear conditioning. During the conditioning, mice receive a mild electric shock when they enter and explore a specific chamber. Within minutes, the mice learn to fear the chamber, and the next time they enter it, they freeze.

When the researchers knocked out the Npas4 gene, they found that mice could not remember the fearful event. They also found the same effect when they knocked out the gene just in the CA3 region of the hippocampus. Knocking it out in other parts of the hippocampus, however, had no effect on memory.

In the new study, the researchers explored in further detail how Npas4 exerts its effects. Lin’s lab had previously developed a method that makes it possible to fluorescently label CA3 neurons that are activated during this fear conditioning. Using the same fear conditioning process, the researchers showed that during learning, certain synaptic inputs to CA3 neurons are strengthened, but not others. Furthermore, this strengthening requires Npas4.

The inputs that are selectively strengthened come from another part of the hippocampus called the dentate gyrus. These signals convey information about the location where the fearful experience took place.

Without Npas4, synapses coming from the dentate gyrus to CA3 failed to strengthen, and the mice could not form memories of the event. Further experiments revealed that this strengthening is required specifically for memory encoding, not for retrieving memories already formed. The researchers also found that Npas4 loss did not affect synaptic inputs that CA3 neurons receive from other sources.

Kimberly Raab-Graham, an associate professor of physiology and pharmacology at Wake Forest University School of Medicine, says the researchers used an impressive variety of techniques to unequivocally show that contextual memory formation is tightly controlled by Npas4.

“The major finding of the study is that contextual memory is driven by a single circuit and comes down to a single transcription factor,” says Raab-Graham, who was not involved in the study. “When they knocked out the transcription factor, they removed contextual memory formation, and they could restore it by adding the transcription factor.”

Synapse maintenance

The researchers also identified one of the genes that Npas4 controls to exert this effect on synapse strength. This gene, known as plk2, is involved in shrinking postsynaptic structures. Npas4 turns on plk2, thereby reducing synapse size and strength. This suggests that Npas4 itself does not strengthen synapses, but rather keeps them in a state that allows them to be strengthened when necessary. Without Npas4, synapses become too strong and can no longer be strengthened further to encode memories.

“When you take out Npas4, the synaptic strength is almost saturated,” Lin says. “And then when learning takes place, although the memory-encoding cells can be fluorescently labeled, you no longer see the strengthening of those connections.”

In future work, Lin hopes to study how the circuit connecting the dentate gyrus to CA3 interacts with other pathways required for memory retrieval. “Somehow there’s some crosstalk between different pathways so that once the information is stored, it can be retrieved by the other inputs,” she says.

The research was funded by the National Institutes of Health, the James H. Ferry Fund, and a Swedish Brain Foundation Research Fellowship.

Listening to neurons

When McGovern Investigator Mark Harnett gets a text from his collaborator at Massachusetts General Hospital, it’s time to stock up on Red Bull and coffee.

Because very soon—sometimes within a few hours—a chunk of living human brain will arrive at the lab, marking the start of an epic session recording the brain’s internal dialogue. And it continues non-stop until the neurons die.

“That first time, we went for 54 hours straight,” Harnett says.

Now two years old, his lab is trying to answer fundamental questions about how the brain’s basic calculations lead to the experience of daily life. Most neuroscientists consider the neuron to be the brain’s basic computational unit, but Harnett is focusing on the internal workings of individual neurons, and in particular, the role of dendrites, the elaborate branching structures that are the most distinctive feature of these cells.

Years ago, scientists viewed dendrites as essentially passive structures, receiving neurochemical information that they translated into electrical signals and sent to the cell body, or soma. The soma was the calculator, summing up the data and deciding whether or not to produce an output signal, known as an action potential. Now though, evidence has accumulated showing dendrites to be capable of processing information themselves, leading to a new and more expansive view in which each individual neuron contains multiple computational elements.

Due to the enormous technical challenge such work demands, however, scientists still don’t fully understand the biophysical mechanisms behind dendritic computations.

They understand even less how these mechanisms operate in and contribute to an awake, thinking brain—nor how much the mouse models that have defined the field translate to the vastly more powerful computational abilities of the human brain.

Harnett is in an ideal position to untangle some of these questions, owing to a rare combination of the technology and skills needed to record from dendrites—a feat in itself—as well as access to animals and human tissue, and a lab eager for a challenge.

Human interest

Most previous research on dendrites has been done in rats or mice, and Harnett’s collaboration with MGH addresses a deceptively simple question: are the brain cells of rodents really equivalent to those of humans?

Researchers have generally assumed that they are similar, but no one has studied the question in depth. It is known, however, that human dendrites are longer and more structurally complex, and Harnett suspects that these shape differences may reflect the existence of additional computational mechanisms.

To investigate this question, Harnett reached out to Sydney Cash, a neurologist at MGH and Harvard Medical School. Cash was intrigued. He’d been studying epilepsy patients with electrodes implanted in their brains to locate seizures before brain surgery, and he was seeing odd quirks in his data. The neurons seemed to be more connected than animal data would suggest, but he had no way to investigate. “And so I thought this collaboration would be fantastic,” he says. “The amazing electrophysiology that Mark’s group can do would be able to give us that insight into the behavior of these individual human neurons.”

So Cash arranged for Harnett to receive tissue from the brains of patients undergoing lobe resections—removal of chunks of tissue associated with seizures, which often works for patients for whom other treatments have failed.

Logistics were challenging—how to get a living piece of brain from one side of the Charles River to the other before it dies? Harnett initially wanted to use a drone; the legal department shot down that idea. Then he wanted to preserve the delicate tissue in bubbling oxygenated solution. But carting cylinders of hazardous compressed gas around the city was also a non-starter. “So, on the first one, we said to heck with it, we’ll just see if it works at all,” Harnett says. “We threw the brain into a bottle of ice-cold solution, screwed the top on, and told an Uber driver to go fast.”

When the cargo reaches the lab, the team starts the experiments immediately to collect as much data as possible before the neurons fail. This process involves the kind of arduous work that Harnett’s first graduate student, Lou Beaulieu-Laroche, relishes. Indeed, it’s why the young Quebecois wanted to join Harnett’s lab in the first place. “Every time I get to do this recording, I get so excited I don’t even need to sleep,” he says.

First, Beaulieu-Laroche places the precious tissue into a nutrient solution, carefully slicing it at the correct angle to reveal the neurons of interest. Then he begins patch clamp recordings, placing a tiny glass pipette against the surface of a single neuron in order to record its electrical activity. Most labs patch the larger soma; few can successfully patch the far finer dendrites. Beaulieu-Laroche can record two locations on a single dendrite simultaneously.

“It’s tricky experiment on top of tricky experiment,” Harnett says. “If you don’t succeed at every step, you get nothing out of it.” But do it right, and it’s a human neuron laid bare, whirring calculations visible in real-time.

The lab has collected samples from just seven surgeries so far, but a fascinating picture is emerging. For instance, spikes of activity in some human dendrites don’t seem to show up in the main part of the cell, a peculiar decoupling mice don’t show. What it means is still unclear, but it may be a sign of Harnett’s theorized intermediary computations between the distant dendrites and the cell body.

“It could be that the dendrite network of a human neuron is a little more complicated—maybe a little bit smarter,” Beaulieu-Laroche speculates. “And maybe that contributes to our intelligence.”

Active questioning

The human work is inherently limited to studying cells in a dish, and that gets to Harnett’s real focus. “A huge amount of time and effort has been spent identifying what dendrites are capable of doing in brain slices,” he says. Far less effort has gone into studying what they do in the behaving brain. It’s like exhaustively examining a set of tires on a car without ever testing its performance on the road.

To get at this problem, Harnett studies spatial navigation in mice, a task that requires the mouse brain to combine information about vision, motion, and self-orientation into a holistic experience. Scientists don’t know how this integration happens, but Harnett thinks it is an ideal test bed for exploring how dendritic processes contribute to complex behavioral computations. “We know the different types of information must eventually converge, but we think each type could be processed separately in the dendrites before being combined in the cell body,” he says.

The difficult part is catching neurons in the act of computing. This requires a two-pronged approach combining fine-grained dendritic biophysics—like what Beaulieu-Laroche does in human cells—with behavioral studies and imaging in awake mice.

Marie-Sophie van der Goes, Harnett’s second graduate student, took up the challenge when she joined the lab in early 2016. From previous work, she knew spatial integration happened in a structure called the retrosplenial cortex (RSC), but the region was not well studied.

“We didn’t know where the information entering the RSC came from, or how it was organized,” she explains.

She and laboratory technician Derrick Barnagian used reverse tracing methods to identify inputs to the RSC, and teamed up with postdoc Mathieu Lafourcade to figure out how that information was organized and processed. Vision, motor and orientation systems are all connected to the region, as expected, but the inputs are segregated, with visual and motor information, for example, arriving at different locations within the dendritic tree. According to the patch clamp data, this is likely to be very important, since different dendrites appear to process information in different ways.

The next step for Van der Goes will be to record from neurons as mice perform a navigation task in a virtual maze. Two other postdocs, Jakob Voigts and Lukas Fischer, have already begun looking at similar questions. Working with mice genetically engineered so that their neurons light up when activated, the researchers implant a small glass window in the skull, directly over the RSC. Peering in with a two-photon microscope, they can watch, in real time, the activity of individual neurons and dendrites, as the animal processes different stimuli, including visual cues, sugar-water reward, and the sensation of its feet running along the ground.

It’s not a perfect system; the mouse’s head has to be held absolutely still for the scope to work. For now, they use a virtual reality maze and treadmill, although thanks to an ingenious rig Voigts invented, the set-up is poised to undergo a key improvement to make it feel more life-like for the mouse, and thus more accurate for the researchers.

Human questions

As much as the lab has accomplished so far, Harnett considers the people his greatest achievement. “Lab culture’s critical, in my opinion,” Harnett says. “How it manifests can really affect who wants to join your particular pirate crew.”

And his lab, he says, “is a wonderful environment and my team is incredibly successful in getting hard things to work.”

Everyone works on each other’s projects, coming in on Friday nights and weekend mornings, while ongoing jokes, lab memes, and shared meals bind the team together. Even Harnett prefers to bring his laptop to the crowded student and postdoc office rather than work in his own spacious quarters. With only three Americans in the lab—including Harnett—the space is rich in languages and friendly jabs. Canadian Beaulieu-Laroche says France-born Lafourcade speaks French like his grandmother; Lafourcade insists he speaks the best French—and the best Spanish. “But the Germans never speak German,” he wonders.

And there’s another uniting factor—a passion for asking big questions in life. Perhaps it is because many of the lab members are internationally educated and have studied more philosophy and literature than a typical science student. “Marie randomly dropped a Marcus Aurelius quote on me the other day,” Harnett says. He’d been flabbergasted. “But then I wondered, what is it about the fact that they’ve ended up here and we work together so incredibly well? I think it’s that we all think about this stuff—it gives us a shared humanism in the laboratory.”

Next-generation optogenetic molecules control single neurons

Researchers at MIT and Paris Descartes University have developed a new optogenetic technique that sculpts light to target individual cells bearing engineered light-sensitive molecules, so that individual neurons can be precisely stimulated.

Until now, it has been challenging to use optogenetics to target single cells with such precise control over both the timing and location of the activation. This new advance paves the way for studies of how individual cells, and connections among those cells, generate specific behaviors such as initiating a movement or learning a new skill.

“Ideally what you would like to do is play the brain like a piano. You would want to control neurons independently, rather than having them all march in lockstep the way traditional optogenetics works, but which normally the brain doesn’t do,” says Ed Boyden, an associate professor of brain and cognitive sciences and biological engineering at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research.

The new technique relies on a new type of light-sensitive protein that can be embedded in neuron cell bodies, combined with holographic light-shaping that can focus light on a single cell.

Boyden and Valentina Emiliani, a research director at France’s National Center for Scientific Research (CNRS) and director of the Neurophotonics Laboratory at Paris Descartes University, are the senior authors of the study, which appears in the Nov. 13 issue of Nature Neuroscience. The lead authors are MIT postdoc Or Shemesh and CNRS postdocs Dimitrii Tanese and Valeria Zampini.

Precise control

More than 10 years ago, Boyden and his collaborators first pioneered the use of light-sensitive proteins known as microbial opsins to manipulate neuron electrical activity. These opsins can be embedded into the membranes of neurons, and when they are exposed to certain wavelengths of light, they silence or stimulate the cells.

Over the past decade, scientists have used this technique to study how populations of neurons behave during brain tasks such as memory recall or habit formation. Traditionally, many cells are targeted simultaneously because the light shining into the brain strikes a relatively large area. However, as Boyden points out, neurons may have different functions even when they are near each other.

“Two adjacent cells can have completely different neural codes. They can do completely different things, respond to different stimuli, and play different activity patterns during different tasks,” he says.

To achieve independent control of single cells, the researchers combined two new advances: a localized, more powerful opsin and an optimized holographic light-shaping microscope.

For the opsin, the researchers used a protein called CoChR, which the Boyden lab discovered in 2014. They chose this molecule because it generates a very strong electric current in response to light (about 10 times stronger than that produced by channelrhodopsin-2, the first protein used for optogenetics).

They fused CoChR to a small protein that directs the opsin into the cell bodies of neurons and away from axons and dendrites, which extend from the neuron body. This helps to prevent crosstalk between neurons, since light that activates one neuron can also strike axons and dendrites of other neurons that intertwine with the target neuron.

Boyden then worked with Emiliani to combine this approach with a light-stimulation technique that she had previously developed, known as two-photon computer-generated holography (CGH). This can be used to create three-dimensional sculptures of light that envelop a target cell.

Traditional holography is based on reproducing, with light, the shape of a specific object, in the absence of that original object. This is achieved by creating an “interferogram” that contains the information needed to reconstruct an object that was previously illuminated by a reference beam. In computer-generated holography, the interferogram is calculated by a computer without the need for any original object. Years ago, Emiliani’s research group demonstrated that combined with two-photon excitation, CGH can be used to refocus laser light to precisely illuminate a cell or a defined group of cells in the brain.
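
A common way to compute such an interferogram numerically is the Gerchberg-Saxton algorithm, which alternates between the hologram plane and the focal plane, keeping the computed phase at each step while imposing the known amplitude. The NumPy sketch below is meant only to convey this general principle of computer-generated holography; it is not the specific algorithm or optical model used by Emiliani’s group.

```python
# Minimal Gerchberg-Saxton sketch: find a phase mask whose far-field intensity
# approximates a target pattern (an illustration of CGH in general, not the
# study's implementation).
import numpy as np

def gerchberg_saxton(target_intensity, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    target_amp = np.sqrt(target_intensity)
    # Start from a random phase at the hologram plane, unit amplitude.
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(n_iter):
        field_holo = np.exp(1j * phase)                                  # beam leaving the hologram plane
        field_focus = np.fft.fft2(field_holo)                            # propagate to the focal plane
        field_focus = target_amp * np.exp(1j * np.angle(field_focus))    # impose the target amplitude
        field_holo = np.fft.ifft2(field_focus)                           # propagate back
        phase = np.angle(field_holo)                                     # keep only the phase
    return phase

# Example: concentrate light on one small square "cell body" in the focal plane.
target = np.zeros((128, 128))
target[60:68, 60:68] = 1.0
phase_mask = gerchberg_saxton(target)
```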

In the new study, by combining this approach with new opsins that cluster in the cell body, the researchers showed they could stimulate individual neurons with not only precise spatial control but also great control over the timing of the stimulation. When they target a specific neuron, it responds consistently every time, with variability that is less than one millisecond, even when the cell is stimulated many times in a row.

“For the first time ever, we can bring the precision of single-cell control toward the natural timescales of neural computation,” Boyden says.

Mapping connections

Using this technique, the researchers were able to stimulate single neurons in brain slices and then measure the responses from cells that are connected to that cell. This paves the way for possible diagramming of the brain’s connections, and for analyzing how those connections change in real time as the brain performs a task or learns a new skill.

One possible experiment, Boyden says, would be to stimulate neurons connected to each other to try to figure out if one is controlling the others or if they are all receiving input from a far-off controller.

“It’s an open question,” he says. “Is a given function being driven from afar, or is there a local circuit that governs the dynamics and spells out the exact chain of command within a circuit? If you can catch that chain of command in action and then use this technology to prove that that’s actually a causal link of events, that could help you explain how a sensation, or movement, or decision occurs.”

As a step toward that type of study, the researchers now plan to extend this approach into living animals. They are also working on improving their targeting molecules and developing high-current opsins that can silence neuron activity.

Kirill Volynski, a professor at the Institute of Neurology at University College London, who was not involved in the research, plans to use the new technology in his studies of diseases caused by mutations of proteins involved in synaptic communication between neurons.

“This gives us a very nice tool to study those mutations and those disorders,” Volynski says. “We expect this to enable a major improvement in the specificity of stimulating neurons that have mutated synaptic proteins.”

The research was funded by the National Institutes of Health, France’s National Research Agency, the Simons Foundation for the Social Brain, the Human Frontiers Science Program, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, and the Defense Advanced Research Projects Agency.

Robotic system monitors specific neurons

Recording electrical signals from inside a neuron in the living brain can reveal a great deal of information about that neuron’s function and how it coordinates with other cells in the brain. However, performing this kind of recording is extremely difficult, so only a handful of neuroscience labs around the world do it.

To make this technique more widely available, MIT engineers have now devised a way to automate the process, using a computer algorithm that analyzes microscope images and guides a robotic arm to the target cell.

This technology could allow more scientists to study single neurons and learn how they interact with other cells to enable cognition, sensory perception, and other brain functions. Researchers could also use it to learn more about how neural circuits are affected by brain disorders.

“Knowing how neurons communicate is fundamental to basic and clinical neuroscience. Our hope is this technology will allow you to look at what’s happening inside a cell, in terms of neural computation, or in a disease state,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, and a member of MIT’s Media Lab and McGovern Institute for Brain Research.

Boyden is the senior author of the paper, which appears in the Aug. 30 issue of Neuron. The paper’s lead author is MIT graduate student Ho-Jun Suk.

Precision guidance

For more than 30 years, neuroscientists have been using a technique known as patch clamping to record the electrical activity of cells. This method, which involves bringing a tiny, hollow glass pipette in contact with the cell membrane of a neuron, then opening up a small pore in the membrane, usually takes a graduate student or postdoc several months to learn. Learning to perform this on neurons in the living mammalian brain is even more difficult.

There are two types of patch clamping: a “blind” (not image-guided) method, which is limited because researchers cannot see where the cells are and can only record from whatever cell the pipette encounters first, and an image-guided version that allows a specific cell to be targeted.

Five years ago, Boyden and colleagues at MIT and Georgia Tech, including co-author Craig Forest, devised a way to automate the blind version of patch clamping. They created a computer algorithm that could guide the pipette to a cell based on measurements of a property called electrical impedance — which reflects how difficult it is for electricity to flow out of the pipette. If there are no cells around, electricity flows and impedance is low. When the tip hits a cell, electricity can’t flow as well and impedance goes up.

Once the pipette detects a cell, it can stop moving instantly, preventing it from poking through the membrane. A vacuum pump then applies suction to form a seal with the cell’s membrane. Then, the electrode can break through the membrane to record the cell’s internal electrical activity.

The researchers achieved very high accuracy using this technique, but it still could not be used to target a specific cell. For most studies, neuroscientists have a particular cell type they would like to learn about, Boyden says.

“It might be a cell that is compromised in autism, or is altered in schizophrenia, or a cell that is active when a memory is stored. That’s the cell that you want to know about,” he says. “You don’t want to patch a thousand cells until you find the one that is interesting.”

To enable this kind of precise targeting, the researchers set out to automate image-guided patch clamping. This technique is difficult to perform manually because, although the scientist can see the target neuron and the pipette through a microscope, he or she must compensate for the fact that nearby cells will move as the pipette enters the brain.

“It’s almost like trying to hit a moving target inside the brain, which is a delicate tissue,” Suk says. “For machines it’s easier because they can keep track of where the cell is, they can automatically move the focus of the microscope, and they can automatically move the pipette.”

By combining several image-processing techniques, the researchers came up with an algorithm that guides the pipette to within about 25 microns of the target cell. At that point, the system begins to rely on a combination of imagery and impedance, which is more accurate at detecting contact between the pipette and the target cell than either signal alone.
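
The control loop described above can be summarized in a few lines of Python. The sketch below is a toy simulation, not the published autopatcher software: the hardware interface, impedance model, step sizes, and threshold are all illustrative assumptions, but it captures the two stages, an image-guided approach followed by impedance-based contact detection.

```python
# Toy two-stage targeting loop: image-guided approach to ~25 microns, then
# small steps until a rise in pipette impedance signals contact with the cell.
# The "rig" below is a mock stand-in for the real microscope and manipulator.
import math
import random

class MockRig:
    def __init__(self):
        self.pipette = [0.0, 0.0, 0.0]    # pipette tip position (microns)
        self.cell = [40.0, 30.0, 20.0]    # true position of the target cell

    def tracked_cell_position(self):
        # Imaging re-localizes the cell each step, since tissue shifts as the pipette advances.
        return [c + random.uniform(-0.1, 0.1) for c in self.cell]

    def step_toward(self, goal, step):
        d = math.dist(self.pipette, goal)
        if d > 0:
            self.pipette = [p + step * (g - p) / d for p, g in zip(self.pipette, goal)]

    def measure_impedance(self):
        # Toy model: impedance (megaohms) rises sharply as the tip nears the membrane.
        d = math.dist(self.pipette, self.cell)
        return 5.0 * max(1.0, 1.0 / max(d, 0.1))

def target_cell(rig, contact_rise=0.3):
    # Stage 1: image-guided approach to within about 25 microns of the target.
    while math.dist(rig.pipette, rig.tracked_cell_position()) > 25.0:
        rig.step_toward(rig.tracked_cell_position(), step=2.0)
    # Stage 2: creep forward until the impedance rises, signaling contact.
    baseline = rig.measure_impedance()
    while (rig.measure_impedance() - baseline) / baseline <= contact_rise:
        rig.step_toward(rig.tracked_cell_position(), step=0.5)
    return rig.pipette  # in the real system, suction and break-in would follow here

print("contact detected at", target_cell(MockRig()))
```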

The researchers imaged the cells with two-photon microscopy, a commonly used technique that uses a pulsed laser to send infrared light into the brain, lighting up cells that have been engineered to express a fluorescent protein.

Using this automated approach, the researchers were able to successfully target and record from two types of cells — a class of interneurons, which relay messages between other neurons, and a set of excitatory neurons known as pyramidal cells. They achieved a success rate of about 20 percent, which is comparable to the performance of highly trained scientists performing the process manually.

Unraveling circuits

This technology paves the way for in-depth studies of the behavior of specific neurons, which could shed light on both their normal functions and how they go awry in diseases such as Alzheimer’s or schizophrenia. For example, the interneurons that the researchers studied in this paper have previously been linked with Alzheimer’s. In a recent mouse study led by Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory, and conducted in collaboration with Boyden, researchers reported that inducing a specific frequency of brain wave oscillation in hippocampal interneurons could help to clear amyloid plaques similar to those found in Alzheimer’s patients.

“You really would love to know what’s happening in those cells,” Boyden says. “Are they signaling to specific downstream cells, which then contribute to the therapeutic result? The brain is a circuit, and to understand how a circuit works, you have to be able to monitor the components of the circuit while they are in action.”

This technique could also enable studies of fundamental questions in neuroscience, such as how individual neurons interact with each other as the brain makes a decision or recalls a memory.

Bernardo Sabatini, a professor of neurobiology at Harvard Medical School, says he is interested in adapting this technique to use in his lab, where students spend a great deal of time recording electrical activity from neurons growing in a lab dish.

“It’s silly to have amazingly intelligent students doing tedious tasks that could be done by robots,” says Sabatini, who was not involved in this study. “I would be happy to have robots do more of the experimentation so we can focus on the design and interpretation of the experiments.”

To help other labs adopt the new technology, the researchers plan to put the details of their approach on their web site, autopatcher.org.

Other co-authors include Ingrid van Welie, Suhasa Kodandaramaiah, and Brian Allen. The research was funded by Jeremy and Joyce Wertheimer, the National Institutes of Health (including the NIH Single Cell Initiative and the NIH Director’s Pioneer Award), the HHMI-Simons Faculty Scholars Program, and the New York Stem Cell Foundation-Robertson Award.

Microscopy technique could enable more informative biopsies

MIT and Harvard Medical School researchers have devised a way to image biopsy samples with much higher resolution — an advance that could help doctors develop more accurate and inexpensive diagnostic tests.

For more than 100 years, conventional light microscopes have been vital tools for pathology. However, fine-scale details of cells cannot be seen with these scopes. The new technique relies on an approach known as expansion microscopy, developed originally in Edward Boyden’s lab at MIT, in which the researchers expand a tissue sample to 100 times its original volume before imaging it.

This expansion allows researchers to see features with a conventional light microscope that ordinarily could be seen only with an expensive, high-resolution electron microscope. It also reveals additional molecular information that the electron microscope cannot provide.

“It’s a technique that could have very broad application,” says Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT. He is also a member of MIT’s Media Lab and McGovern Institute for Brain Research, and an HHMI-Simons Faculty Scholar.

In a paper appearing in the July 17 issue of Nature Biotechnology, Boyden and his colleagues used this technique to distinguish early-stage breast lesions with high or low risk of progressing to cancer — a task that is challenging for human observers. This approach can also be applied to other diseases: In an analysis of kidney tissue, the researchers found that images of expanded samples revealed signs of kidney disease that can normally only be seen with an electron microscope.

“Using expansion microscopy, we are able to diagnose diseases that were previously impossible to diagnose with a conventional light microscope,” says Octavian Bucur, an instructor at Harvard Medical School, Beth Israel Deaconess Medical Center (BIDMC), and the Ludwig Center at Harvard, and one of the paper’s lead authors.

MIT postdoc Yongxin Zhao is the paper’s co-lead author. Boyden and Andrew Beck, a former associate professor at Harvard Medical School and BIDMC, are the paper’s senior authors.

“A few chemicals and a light microscope”

Boyden’s original expansion microscopy technique is based on embedding tissue samples in a dense, evenly generated polymer that swells when water is added. Before the swelling occurs, the researchers anchor to the polymer gel the molecules that they want to image, and they digest other proteins that normally hold tissue together.

This tissue enlargement allows researchers to obtain images with a resolution of around 70 nanometers, which was previously possible only with very specialized and expensive microscopes.
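
As a rough, back-of-the-envelope check (assuming a diffraction limit of about 300 nanometers for a conventional light microscope; the exact figure depends on wavelength and optics), a 100-fold increase in volume corresponds to a linear expansion factor of

\[
100^{1/3} \approx 4.6, \qquad \text{so the effective resolution is roughly } \frac{300\ \text{nm}}{4.6} \approx 65\ \text{nm},
\]

which is consistent with the approximately 70-nanometer figure quoted above.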

In the new study, the researchers set out to adapt the expansion process for biopsy tissue samples, which are usually embedded in paraffin wax, flash frozen, or stained with a chemical that makes cellular structures more visible.

The MIT/Harvard team devised a process to convert these samples into a state suitable for expansion. For example, they remove the chemical stain or paraffin by exposing the tissues to a chemical solvent called xylene. Then, they heat up the sample in another chemical called citrate. After that, the tissues go through an expansion process similar to the original version of the technique, but with stronger digestion steps to compensate for the strong chemical fixation of the samples.

During this procedure, the researchers can also add fluorescent labels for molecules of interest, including proteins that mark particular types of cells, or DNA or RNA with a specific sequence.

“The work of Zhao et al. describes a very clever way of extending the resolution of light microscopy to resolve detail beyond that seen with conventional methods,” says David Rimm, a professor of pathology at the Yale University School of Medicine, who was not involved in the research.

The researchers tested this approach on tissue samples from patients with early-stage breast lesions. One way to predict whether these lesions will become malignant is to evaluate the appearance of the cells’ nuclei. Benign lesions with atypical nuclei have about a fivefold higher probability of progressing to cancer than those with typical nuclei.

However, studies have revealed significant discrepancies between the assessments of nuclear atypia performed by different pathologists, which can potentially lead to an inaccurate diagnosis and unnecessary surgery. An improved system for differentiating between benign lesions with atypical nuclei and those with typical nuclei could potentially prevent 400,000 misdiagnoses and save hundreds of millions of dollars every year in the United States, according to the researchers.

After expanding the tissue samples, the MIT/Harvard team analyzed them with a machine learning algorithm that can rate the nuclei based on dozens of features, including orientation, diameter, and how much they deviate from true circularity. This algorithm was able to distinguish between lesions that were likely to become invasive and those that were not, with an accuracy of 93 percent on expanded samples compared to only 71 percent on the pre-expanded tissue.
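
The sketch below illustrates, in Python with scikit-learn, the general shape of such a pipeline: per-lesion summaries of nuclear features feed a classifier whose accuracy is estimated by cross-validation. The synthetic data, feature set, and model choice are placeholders for illustration only; they are not the study’s actual algorithm or results.

```python
# Illustrative nuclear-morphology classifier: features summarizing nuclei in
# each lesion (e.g., mean diameter, orientation, deviation from circularity)
# are used to predict whether the lesion is likely to become invasive.
# All data here are synthetic; this is not the study's code.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_lesions = 200
# Columns: [mean nuclear diameter, mean orientation, deviation from circularity]
X = rng.normal(size=(n_lesions, 3))
# Mock labels: 1 = likely to progress, loosely tied to the circularity feature.
y = (X[:, 2] + 0.5 * rng.normal(size=n_lesions) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```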

“These two types of lesions look highly similar to the naked eye, but one has much less risk of cancer,” Zhao says.

The researchers also analyzed kidney tissue samples from patients with nephrotic syndrome, which impairs the kidneys’ ability to filter blood. In these patients, tiny finger-like projections that filter the blood are lost or damaged. These structures are spaced about 200 nanometers apart and therefore can usually be seen only with an electron microscope or expensive super-resolution microscopes.

When the researchers showed the images of the expanded tissue samples to a group of scientists that included pathologists and nonpathologists, the group was able to identify the diseased tissue with 90 percent accuracy overall, compared to only 65 percent accuracy with unexpanded tissue samples.

“Now you can diagnose nephrotic kidney disease without needing an electron microscope, a very expensive machine,” Boyden says. “You can do it with a few chemicals and a light microscope.”

Uncovering patterns

Using this approach, the researchers anticipate that scientists could develop more precise diagnostics for many other diseases. To do that, scientists and doctors will need to analyze many more patient samples, allowing them to discover patterns that would be impossible to see otherwise.

“If you can expand a tissue by one-hundredfold in volume, all other things being equal, you’re getting 100 times the information,” Boyden says.

For example, researchers could distinguish cancer cells based on how many copies of a particular gene they have. Extra copies of genes such as HER2, which the researchers imaged in one part of this study, indicate a subtype of breast cancer that is eligible for specific treatments.

Scientists could also look at the architecture of the genome, or at how cell shapes change as they become cancerous and interact with other cells of the body. Another possible application is identifying proteins that are expressed specifically on the surface of cancer cells, allowing researchers to design immunotherapies that mark those cells for destruction by the patient’s immune system.

Boyden and his colleagues run training courses several times a month at MIT, where visitors can come and watch expansion microscopy techniques, and they have made their protocols available on their website. They hope that many more people will begin using this approach to study a variety of diseases.

“Cancer biopsies are just the beginning,” Boyden says. “We have a new pipeline for taking clinical samples and expanding them, and we are finding that we can apply expansion to many different diseases. Expansion will enable computational pathology to take advantage of more information in a specimen than previously possible.”

Humayun Irshad, a research fellow at Harvard/BIDMC and an author of the study, agrees: “Expanded images result in more informative features, which in turn result in higher-performing classification models.”

Other authors include Harvard pathologist Astrid Weins, who helped oversee the kidney study. Other authors from MIT (Fei Chen) and BIDMC/Harvard (Andreea Stancu, Eun-Young Oh, Marcello DiStasio, Vanda Torous, Benjamin Glass, Isaac E. Stillman, and Stuart J. Schnitt) also contributed to this study.

The research was funded, in part, by the New York Stem Cell Foundation Robertson Investigator Award, the National Institutes of Health Director’s Pioneer Award, the Department of Defense Multidisciplinary University Research Initiative, the Open Philanthropy Project, the Ludwig Center at Harvard, and Harvard Catalyst.

A Google map of the brain

At the start of the twentieth century, Santiago Ramón y Cajal’s drawings of brain cells under the microscope revealed a remarkable diversity of cell types within the brain. Through sketch after sketch, Cajal showed that the brain is not, as many believed, a web of self-similar material, but rather is composed of billions of cells of many different sizes, shapes, and interconnections.

Yet more than a hundred years later, we still do not know how many cell types make up the human brain. Despite decades of study, the challenge remains daunting, as the brain’s complexity has overwhelmed attempts to describe it systematically or to catalog its parts.

Now, however, this may be about to change, thanks to an explosion of new technical advances in areas ranging from DNA sequencing to microfluidics to computing and microscopy. For the first time, a parts list for the human brain appears to be within reach.

Why is this important? “Until we know all the cell types, we won’t fully understand how they are connected together,” explains McGovern Investigator Guoping Feng. “We know that the brain’s wiring is incredibly complicated, and that the connections are key to understanding how it works, but we don’t yet have the full picture. That’s what we are aiming for. It’s like making a Google map of the brain.”

Identifying the cell types is also important for understanding disease. As genetic risk factors for different disorders are identified, researchers need to know where they act within the brain, and which cell types and connections are disrupted as a result. “Once we know that, we can start to think about new therapeutic approaches,” says Feng, who is also an institute member of the Broad Institute, where he leads the neurobiology program at the Stanley Center for Psychiatric Disorders Research.

Drop by drop

In 2012, computational biologist Naomi Habib arrived from the Hebrew University of Jerusalem to join the labs of McGovern Investigator Feng Zhang and his collaborator Aviv Regev at the Broad Institute. Habib’s plan was to learn new RNA methods as they were emerging. “I wanted to use these powerful tools to understand this fascinating system that is our brain,” she says.

Her rationale was simple, at least in theory. All cells of an organism carry the same DNA instructions, but the instructions are read out differently in each cell type. Stretches of DNA corresponding to individual genes are copied, sometimes thousands of times, into RNA molecules that in turn direct the synthesis of proteins. Differences in which sequences get copied are what give cells their identities: brain cells express RNAs that encode brain proteins, while blood cells express different RNAs, and so on. A given cell can express thousands of genes, providing a molecular “fingerprint” for each cell type.
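As a toy illustration of the fingerprint idea (the gene names and numbers below are hypothetical, not data from Habib’s work), a cell’s expression profile can be compared against reference profiles and assigned to the closest match:

```python
import numpy as np

# Hypothetical reference fingerprints: average expression of three genes per cell type.
reference = {
    "neuron":    np.array([9.0, 0.5, 0.2]),
    "astrocyte": np.array([0.3, 8.0, 0.4]),
    "microglia": np.array([0.2, 0.6, 7.5]),
}

def classify(cell_profile):
    """Assign a cell to the reference type whose fingerprint it correlates with best."""
    scores = {name: np.corrcoef(cell_profile, ref)[0, 1]
              for name, ref in reference.items()}
    return max(scores, key=scores.get)

new_cell = np.array([8.1, 0.7, 0.3])   # expression pattern resembling a neuron
print(classify(new_cell))              # -> "neuron"
```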

Analyzing these RNAs can provide a great deal of information about the brain, including potentially the identities of its constituent cell types. But doing this is not easy, because the different cell types are mixed together like salt and pepper within the brain. For many years, studying brain RNA meant grinding up the tissue—an approach that has been compared to studying smoothies to learn about fruit salad.

As methods improved, it became possible to study the tiny quantities of RNA contained within single cells. This opened the door to studying the differences between individual cells, but it required painstaking manipulation of many samples, a slow and laborious process.

A breakthrough came in 2015, with the development of automated methods based on microfluidics. One of these, known as Drop-seq (droplet-based sequencing), was pioneered by Steve McCarroll at Harvard, in collaboration with Regev’s lab at Broad. In this method, individual cells are captured in tiny water droplets suspended in oil. Vast numbers of droplets are automatically pumped through tiny channels, where each undergoes its own separate sequencing reactions. By running multiple samples in parallel, the machines can process tens of thousands of cells and billions of sequences within hours rather than weeks or months. The power of the method became clear when, in an experiment on mouse retina, the researchers identified almost every cell type that had ever been described in that tissue, effectively recapitulating decades of work in a single experiment.
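The essential output of such droplet-based methods is a cell-by-gene count table: each read carries the barcode of the droplet it came from, and reads are grouped by barcode to build one expression profile per cell. The sketch below shows only that bookkeeping step, with made-up barcodes and genes; real pipelines also handle alignment, error correction, and deduplication:

```python
from collections import defaultdict

# Hypothetical reads recovered from sequencing: (droplet barcode, gene) pairs.
reads = [
    ("AAAC", "Snap25"), ("AAAC", "Snap25"), ("AAAC", "Gfap"),
    ("TTGG", "Gfap"),   ("TTGG", "Aqp4"),   ("TTGG", "Gfap"),
]

# Group reads by droplet barcode to build a per-cell expression count table.
counts = defaultdict(lambda: defaultdict(int))
for barcode, gene in reads:
    counts[barcode][gene] += 1

for barcode, genes in counts.items():
    print(barcode, dict(genes))
# AAAC {'Snap25': 2, 'Gfap': 1}
# TTGG {'Gfap': 2, 'Aqp4': 1}
```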

Dropseq works well for many tissues, but Habib wanted to apply it to the adult brain, which posed a unique challenge. Mature neurons often bear elaborate branches that become intertwined like tree roots in a forest, making it impossible to separate individual cells without damage.

Nuclear option

So Habib turned to another idea. RNA is made in the nucleus before moving to the cytoplasm, and because nuclei are compact and robust it is easy to recover them intact in large numbers, even from difficult tissues such as brain. The amount of RNA contained in a single nucleus is tiny, and Habib didn’t know if it would be enough to be informative, but Zhang and Regev encouraged her to keep going. “You have to be optimistic,” she says. “You have to try.”

Fortunately, the experiment worked. In a paper with Zhang and Regev, she was able to isolate nuclei from newly formed neurons in the adult mouse hippocampus (a brain structure involved in memory), and by analyzing their RNA profiles individually she could order them in a series according to their age, revealing their developmental history from birth to maturity.

Now, after much further experimentation, Habib and her colleagues have managed to apply the droplet method to nuclei, making it possible for the first time to analyze huge numbers of cells from adult brain—at least ten times more than with previous methods.

This opens up many new avenues, including the study of human postmortem tissue, given that RNA in nuclei can survive for years in frozen samples. Habib is already starting to examine tissue taken at autopsy from patients with Alzheimer’s and other neurodegenerative diseases. “The neurons are degenerating, but the other cells around them could also be contributing to the degenerative process,” she says. “Now we have these tools, we can look at what happens during the progression of the disease.”

Computing cells

Once the sequencing is completed, the data are analyzed using sophisticated computational methods. Data from individual cells are visualized as colored dots, clustered on a graph according to their statistical similarities. But because the cells were dissociated at the start of the experiment, information about their appearance and origin within the brain is lost.
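A minimal sketch of that analysis step, using a synthetic stand-in for a cell-by-gene matrix and generic tools (scikit-learn) rather than the specific pipeline the researchers used:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic cell-by-gene matrix: 300 "cells" x 50 "genes", drawn from three
# artificial cell types with different average expression levels.
cells = np.vstack([rng.normal(loc=m, size=(100, 50)) for m in (0.0, 2.0, 4.0)])

# Reduce dimensionality, then group cells by statistical similarity.
coords = PCA(n_components=2).fit_transform(cells)
labels = KMeans(n_clusters=3, n_init=10).fit_predict(coords)

# Each cluster corresponds to one putative cell type; plotting coords colored
# by label would give the familiar "colored dots" view described above.
for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} cells")
```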

To find out how these abstract displays correspond to the visible cells of the brain, Habib teamed up with Yinqing Li, a former graduate student with Zhang who is now a postdoc in the lab of Guoping Feng. Li began with existing maps from the Allen Institute, a public repository with thousands of images showing expression patterns for individual genes within mouse brain. By comparing these maps with the molecular fingerprints from Habib’s nuclear RNA sequencing experiments, Li was able to make a map of where in the brain each cell was likely to have come from.

It was a good first step, but still not perfect. “What we really need,” he says, “is a method that allows us to see every RNA in individual cells. If we are studying a brain disease, we want to know which neurons are involved in the disease process, where they are, what they are connected to, and which special genes might be involved so that we can start thinking about how to design a drug that could alter the disease.”

Expanding horizons

So Li partnered with Asmamaw (Oz) Wassie, a graduate student in the lab of McGovern Investigator Ed Boyden, to tackle the problem. Wassie had previously studied bioengineering as an MIT undergraduate, where he had helped build an electronic “artificial nose” for detecting trace chemicals in air. With support from a prestigious Hertz Fellowship, he joined Boyden’s lab, where he is now working on the development of a method known as expansion microscopy.

In this method, a sample of tissue is embedded with a polymer that swells when water is added. The entire sample expands in all directions, allowing scientists to see fine details such as connections between neurons, using an ordinary microscope. Wassie recently helped develop a way to anchor RNA molecules to the polymer matrix, allowing them to be physically secured during the expansion process. Now, within the expanded samples he can see the individual molecules using a method called fluorescent in situ hybridization (FISH), in which each RNA appears as a glowing dot under the microscope. Currently, he can label only a handful of RNA types at once, but by using special sets of probes, applied sequentially, he thinks it will soon be possible to distinguish thousands of different RNA sequences.
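The reason sequential probing scales so quickly is combinatorial: if each round of FISH distinguishes a handful of colors, the number of RNA species that can in principle be told apart grows as the number of colors raised to the number of rounds. The specific numbers below are assumptions for illustration, not figures from the article:

```python
# Combinatorial labeling: each RNA species is assigned a unique color sequence.
colors_per_round = 4
rounds = 6
distinguishable_species = colors_per_round ** rounds
print(distinguishable_species)   # 4096, i.e. thousands of RNA sequences in principle
```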

“That will help us to see what each cell looks like, how they are connected to each other, and what RNAs they contain,” says Wassie. By combining this information with the RNA expression data generated by Li and Habib, it will be possible to reveal the organization and fine structure of complex brain areas and perhaps to identify new cell types that have not yet been recognized.

Looking ahead

Li plans to apply these methods to a brain structure known as the thalamic reticular nucleus (TRN) – a sheet of tissue, about ten neurons thick in mice, that sits on top of the thalamus and close to the cortex. The TRN is not well understood, but it is important for controlling sleep, attention and sensory processing, and it has caught the interest of Feng and other neuroscientists because it expresses a disproportionate number of genes implicated in disorders such as autism, attention deficit hyperactivity disorder, and intelligence deficits. Together with Joshua Levin’s group at Broad, Li has already used nuclear RNA sequencing to identify the cell types in the TRN, and he has begun to examine them within intact brain using the expansion techniques. “When you map these precise cell types back to the tissue, you can integrate the gene expression information with everything else, like electrophysiology, connectivity, morphology,” says Li. “Then we can start to ask what’s going wrong in disease.”

Meanwhile, Feng is already looking beyond the TRN, and planning how to scale the approach to other structures and eventually to the entire brain. He returns to the metaphor of a Google map. “Microscopic images are like satellite photos,” he says. “Now with expansion microscopy we can add another layer of information, like property boundaries and individual buildings. And knowing which RNAs are in each cell will be like seeing who lives in those buildings. I think this will completely change how we view the brain.”

A noninvasive method for deep brain stimulation

Delivering an electrical current to a part of the brain involved in movement control has proven successful in treating many Parkinson’s disease patients. This approach, known as deep brain stimulation, requires implanting electrodes in the brain — a complex procedure that carries some risk to the patient.

Now, MIT researchers, collaborating with investigators at Beth Israel Deaconess Medical Center (BIDMC) and the IT’IS Foundation, have come up with a way to stimulate regions deep within the brain using electrodes placed on the scalp. This approach could make deep brain stimulation noninvasive, less risky, less expensive, and more accessible to patients.

“Traditional deep brain stimulation requires opening the skull and implanting an electrode, which can have complications. Secondly, only a small number of people can do this kind of neurosurgery,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, and the senior author of the study, which appears in the June 1 issue of Cell.

Doctors also use deep brain stimulation to treat some patients with obsessive compulsive disorder, epilepsy, and depression, and are exploring the possibility of using it to treat other conditions such as autism. The new, noninvasive approach could make it easier to adapt deep brain stimulation to treat additional disorders, the researchers say.

“With the ability to stimulate brain structures noninvasively, we hope that we may help discover new targets for treating brain disorders,” says the paper’s lead author, Nir Grossman, a former Wellcome Trust-MIT postdoc working at MIT and BIDMC, who is now a research fellow at Imperial College London.

Deep locations

Electrodes for treating Parkinson’s disease are usually placed in the subthalamic nucleus, a lens-shaped structure located below the thalamus, deep within the brain. For many Parkinson’s patients, delivering electrical impulses in this brain region can improve symptoms, but the surgery to implant the electrodes carries risks, including brain hemorrhage and infection.

Other researchers have tried to noninvasively stimulate the brain using techniques such as transcranial magnetic stimulation (TMS), which is FDA-approved for treating depression. Since TMS is noninvasive, it has also been used in normal human subjects to study the basic science of cognition, emotion, sensation, and movement. However, using TMS to stimulate deep brain structures can also strongly stimulate surface regions, modulating multiple brain networks.

The MIT team devised a way to deliver electrical stimulation deep within the brain, via electrodes placed on the scalp, by taking advantage of a phenomenon known as temporal interference.

This strategy requires generating two high-frequency electrical currents using electrodes placed outside the brain. On their own, these currents oscillate too rapidly to drive neurons. However, the currents interfere with one another in such a way that where they intersect, deep in the brain, a small region of low-frequency current is generated inside neurons. This low-frequency current can be used to drive neurons’ electrical activity, while the high-frequency current passes through surrounding tissue with no effect.
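A minimal numerical sketch of the interference idea, using illustrative frequencies (say, 2.000 kHz and 2.010 kHz): the summed signal still oscillates at about 2 kHz, far too fast for neurons to follow, but its envelope beats at the 10 Hz difference frequency, and that slow envelope is only appreciable where the two fields overlap:

```python
import numpy as np

fs = 100_000                                   # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)                  # one second of signal

f1, f2 = 2000.0, 2010.0                        # two high-frequency currents (illustrative)
i1 = np.sin(2 * np.pi * f1 * t)
i2 = np.sin(2 * np.pi * f2 * t)

total = i1 + i2                                # superposition where the fields overlap
# sin(a) + sin(b) = 2*sin((a+b)/2)*cos((a-b)/2), so the envelope of the sum is:
envelope = 2 * np.abs(np.cos(np.pi * (f2 - f1) * t))

print(f"Carrier near {(f1 + f2) / 2:.0f} Hz; envelope beats at {f2 - f1:.0f} Hz")
```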

By tuning the frequency of these currents and changing the number and location of the electrodes, the researchers can control the size and location of the brain tissue that receives the low-frequency stimulation. They can target locations deep within the brain without affecting any of the surrounding brain structures. They can also steer the location of stimulation, without moving the electrodes, by altering the currents. In this way, deep targets could be stimulated, both for therapeutic use and basic science investigations.

“You can go for deep targets and spare the overlying neurons, although the spatial resolution is not yet as good as that of deep brain stimulation,” says Boyden, who is a member of MIT’s Media Lab and McGovern Institute for Brain Research.

Targeted stimulation

Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory, and researchers in her lab tested this technique in mice and found that they could stimulate small regions deep within the brain, including the hippocampus. They were also able to shift the site of stimulation, allowing them to activate different parts of the motor cortex and prompt the mice to move their limbs, ears, or whiskers.

“We showed that we can very precisely target a brain region to elicit not just neuronal activation but behavioral responses,” says Tsai, who is an author of the paper. “I think it’s very exciting because Parkinson’s disease and other movement disorders seem to originate from a very particular region of the brain, and if you can target that, you have the potential to reverse it.”

Significantly, in the hippocampus experiments, the technique did not activate the neurons in the cortex, the region lying between the electrodes on the skull and the target deep inside the brain. The researchers also found no harmful effects in any part of the brain.

Last year, Tsai showed that using light to visually induce brain waves of a particular frequency could substantially reduce the beta amyloid plaques seen in Alzheimer’s disease in the brains of mice. She now plans to explore whether this type of electrical stimulation could offer a new way to generate the same type of beneficial brain waves.

Other authors of the paper are MIT research scientist David Bono; former MIT postdocs Suhasa Kodandaramaiah and Andrii Rudenko; MIT postdoc Nina Dedic; MIT grad student Ho-Jun Suk; Beth Israel Deaconess Medical Center and Harvard Medical School Professor Alvaro Pascual-Leone; and IT’IS Foundation researchers Antonino Cassara, Esra Neufeld, and Niels Kuster.

The research was funded in part by the Wellcome Trust, a National Institutes of Health Director’s Pioneer Award, an NIH Director’s Transformative Research Award, the New York Stem Cell Foundation Robertson Investigator Award, the MIT Center for Brains, Minds, and Machines, Jeremy and Joyce Wertheimer, Google, a National Science Foundation Career Award, the MIT Synthetic Intelligence Project, and Harvard Catalyst: The Harvard Clinical and Translational Science Center.