MIT team enlarges brain samples, making them easier to image

Beginning with the invention of the first microscope in the late 1500s, scientists have been trying to peer into preserved cells and tissues with ever-greater magnification. The latest generation of so-called “super-resolution” microscopes can see inside cells with resolution better than 250 nanometers.

A team of researchers from MIT has now taken a novel approach to gaining such high-resolution images: Instead of making their microscopes more powerful, they have discovered a method that enlarges tissue samples by embedding them in a polymer that swells when water is added. This allows specimens to be physically magnified, and then imaged at a much higher resolution.

This technique, which uses inexpensive, commercially available chemicals and microscopes commonly found in research labs, should give many more scientists access to super-resolution imaging, the researchers say.

“Instead of acquiring a new microscope to take images with nanoscale resolution, you can take the images on a regular microscope. You physically make the sample bigger, rather than trying to magnify the rays of light that are emitted by the sample,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT.

Boyden is the senior author of a paper describing the new method in the Jan. 15 online edition of Science. Lead authors of the paper are graduate students Fei Chen and Paul Tillberg.

Physical magnification

Most microscopes work by using lenses to focus light emitted from a sample into a magnified image. However, this approach has a fundamental limit known as the diffraction limit, which means that it can’t be used to resolve objects smaller than roughly half the wavelength of the light being used. For example, if you are using blue-green light with a wavelength of 500 nanometers, you can’t see anything smaller than 250 nanometers.
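
In standard optics this floor is often written as the Abbe diffraction limit; plugging in the numbers from the example above (and assuming a numerical aperture of about 1, a value not specified in the article) reproduces the 250-nanometer figure:

\[
d = \frac{\lambda}{2\,\mathrm{NA}} \approx \frac{500~\text{nm}}{2 \times 1.0} = 250~\text{nm}
\]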

“Unfortunately, in biology that’s right where things get interesting,” says Boyden, who is a member of MIT’s Media Lab and McGovern Institute for Brain Research. Protein complexes, molecules that transport payloads in and out of cells, and other cellular activities are all organized at the nanoscale.

Scientists have come up with some “really clever tricks” to overcome this limitation, Boyden says. However, these super-resolution techniques work best with small, thin samples, and take a long time to image large samples. “If you want to map the brain, or understand how cancer cells are organized in a metastasizing tumor, or how immune cells are configured in an autoimmune attack, you have to look at a large piece of tissue with nanoscale precision,” he says.

To achieve this, the MIT team focused its attention on the sample rather than the microscope. Their idea was to make specimens easier to image at high resolution by embedding them in an expandable polymer gel made of polyacrylate, a very absorbent material commonly found in diapers.

Before enlarging the tissue, the researchers first label the cell components or proteins that they want to examine, using an antibody that binds to the chosen targets. This antibody is linked to a fluorescent dye, as well as a chemical anchor that can attach the dye to the polyacrylate chain.

Once the tissue is labeled, the researchers add the precursor to the polyacrylate gel and heat it to form the gel. They then digest the proteins that hold the specimen together, allowing it to expand uniformly. The specimen is then washed in salt-free water to induce a 100-fold expansion in volume. Even though the proteins have been broken apart, the original location of each fluorescent label stays the same relative to the overall structure of the tissue because it is anchored to the polyacrylate gel.

“What you’re left with is a three-dimensional, fluorescent cast of the original material. And the cast itself is swollen, unimpeded by the original biological structure,” Tillberg says.

The MIT team imaged this “cast” with commercially available confocal microscopes, commonly used for fluorescent imaging but usually limited to a resolution of hundreds of nanometers. With their enlarged samples, the researchers achieved resolution down to 70 nanometers. “The expansion microscopy process … should be compatible with many existing microscope designs and systems already in laboratories,” Chen adds.
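
The arithmetic behind that figure is straightforward, assuming a conventional confocal resolution of roughly 300 nanometers (a representative value for the “hundreds of nanometers” mentioned above): a 100-fold gain in volume is a cube-root, or roughly 4.6-fold, gain in each linear dimension, which brings the effective resolution down to about 65 to 70 nanometers:

\[
\sqrt[3]{100} \approx 4.6, \qquad \frac{300~\text{nm}}{4.6} \approx 65~\text{nm}
\]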

Large tissue samples

Using this technique, the MIT team was able to image a section of brain tissue 500 by 200 by 100 microns with a standard confocal microscope. Imaging such large samples would not be feasible with other super-resolution techniques, which require minutes to image a tissue slice only 1 micron thick and are limited in their ability to image large samples by optical scattering and other aberrations.

“The exciting part is that this approach can acquire data at the same high speed per pixel as conventional microscopy, contrary to most other methods that beat the diffraction limit for microscopy, which can be 1,000 times slower per pixel,” says George Church, a professor of genetics at Harvard Medical School who was not part of the research team.

“The other methods currently have better resolution, but are harder to use, or slower,” Tillberg says. “The benefits of our method are the ease of use and, more importantly, compatibility with large volumes, which is challenging with existing technologies.”

The researchers envision that this technology could be very useful to scientists trying to image brain cells and map how they connect to each other across large regions.

“There are lots of biological questions where you have to understand a large structure,” Boyden says. “Especially for the brain, you have to be able to image a large volume of tissue, but also to see where all the nanoscale components are.”

While Boyden’s team is focused on the brain, other possible applications for this technique include studying tumor metastasis and angiogenesis (growth of blood vessels to nourish a tumor), or visualizing how immune cells attack specific organs during autoimmune disease.

The research was funded by the National Institutes of Health, the New York Stem Cell Foundation, Jeremy and Joyce Wertheimer, the National Science Foundation, and the Fannie and John Hertz Foundation.

In one aspect of vision, computers catch up to primate brain

For decades, neuroscientists have been trying to design computer networks that can mimic visual skills such as recognizing objects, which the human brain does very accurately and quickly.

Until now, no computer model has been able to match the primate brain at visual object recognition during a brief glance. However, a new study from MIT neuroscientists has found that one of the latest generation of these so-called “deep neural networks” matches the primate brain.

Because these networks are based on neuroscientists’ current understanding of how the brain performs object recognition, the success of the latest networks suggests that neuroscientists have a fairly accurate grasp of how object recognition works, says James DiCarlo, a professor of neuroscience and head of MIT’s Department of Brain and Cognitive Sciences and the senior author of a paper describing the study in the Dec. 11 issue of the journal PLoS Computational Biology.

“The fact that the models predict the neural responses and the distances of objects in neural population space shows that these models encapsulate our current best understanding as to what is going on in this previously mysterious portion of the brain,” says DiCarlo, who is also a member of MIT’s McGovern Institute for Brain Research.

This improved understanding of how the primate brain works could lead to better artificial intelligence and, someday, new ways to repair visual dysfunction, adds Charles Cadieu, a postdoc at the McGovern Institute and the paper’s lead author.

Other authors are graduate students Ha Hong and Diego Ardila, research scientist Daniel Yamins, former MIT graduate student Nicolas Pinto, former MIT undergraduate Ethan Solomon, and research affiliate Najib Majaj.

Inspired by the brain

Scientists began building neural networks in the 1970s in hopes of mimicking the brain’s ability to process visual information, recognize speech, and understand language.

For vision-based neural networks, scientists were inspired by the hierarchical representation of visual information in the brain. As visual input flows from the retina into primary visual cortex and then inferotemporal (IT) cortex, it is processed at each level and becomes more specific until objects can be identified.

To mimic this, neural network designers create several layers of computation in their models. Each level performs a mathematical operation, such as a linear dot product. At each level, the representations of the visual object become more and more complex, and unneeded information, such as an object’s location or movement, is cast aside.

“Each individual element is typically a very simple mathematical expression,” Cadieu says. “But when you combine thousands and millions of these things together, you get very complicated transformations from the raw signals into representations that are very good for object recognition.”
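
As a rough illustration of the kind of layered computation Cadieu describes, the toy sketch below stacks a few linear dot products and simple nonlinearities; the layer sizes and the specific nonlinearity are arbitrary choices for illustration, not the architecture used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # One stage of the hierarchy: a linear dot product followed by a
    # simple nonlinearity that discards part of the raw signal.
    return np.maximum(0.0, w @ x + b)

# Toy "image" flattened to a vector, passed through three stacked layers.
x = rng.random(256)                      # raw pixel-like input
sizes = [256, 128, 64, 16]               # arbitrary layer widths
weights = [rng.standard_normal((o, i)) * 0.1 for i, o in zip(sizes, sizes[1:])]
biases = [np.zeros(o) for o in sizes[1:]]

rep = x
for w, b in zip(weights, biases):
    rep = layer(rep, w, b)

print(rep.shape)   # (16,) -- a compact representation suited to recognition
```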

For this study, the researchers first measured the brain’s object recognition ability. Led by Hong and Majaj, they implanted arrays of electrodes in the IT cortex as well as in area V4, a part of the visual system that feeds into the IT cortex. This allowed them to see the neural representation — the population of neurons that respond — for every object that the animals looked at.

The researchers could then compare this with representations created by the deep neural networks, which consist of a matrix of numbers produced by each computational element in the system. Each image produces a different array of numbers. The accuracy of the model is determined by whether it groups similar objects into similar clusters within the representation.

“Through each of these computational transformations, through each of these layers of networks, certain objects or images get closer together, while others get further apart,” Cadieu says.
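
One generic way to make such a comparison concrete is to check whether pairs of images that are close together in the model’s representation are also close together in the recorded neural population. The sketch below runs that kind of comparison on made-up data; it illustrates the idea, not the specific analysis reported in the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

n_images, n_neurons, n_units = 40, 100, 64

# Hypothetical responses: rows are images, columns are recorded neurons
# (the IT population) or model units (the network's top layer).
neural = rng.random((n_images, n_neurons))
model = rng.random((n_images, n_units))

# Pairwise distances between images in each representational space.
neural_dist = pdist(neural, metric="correlation")
model_dist = pdist(model, metric="correlation")

# If the model carves up object space the way IT does, the two sets of
# distances should be correlated.
rho, _ = spearmanr(neural_dist, model_dist)
print(f"representational similarity (Spearman rho): {rho:.2f}")
```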

The best-performing network, developed by researchers at New York University, classified objects as well as the macaque brain did.

More processing power

Two major factors account for the recent success of this type of neural network, Cadieu says. One is a significant leap in the availability of computational processing power. Researchers have been taking advantage of graphics processing units (GPUs), which are small chips designed for high performance in processing the huge amount of visual content needed for video games. “That is allowing people to push the envelope in terms of computation by buying these relatively inexpensive graphics cards,” Cadieu says.

The second factor is that researchers now have access to large datasets to feed the algorithms to “train” them. These datasets contain millions of images, and each one is annotated by humans with different levels of identification. For example, a photo of a dog would be labeled as animal, canine, domesticated dog, and the breed of dog.

At first, neural networks are not good at identifying these images, but as they see more and more images, and find out when they were wrong, they refine their calculations until they become much more accurate at identifying objects.

Cadieu says that researchers don’t know much about what exactly allows these networks to distinguish different objects.

“That’s a pro and a con,” he says. “It’s very good in that we don’t have to really know what the things are that distinguish those objects. But the big con is that it’s very hard to inspect those networks, to look inside and see what they really did. Now that people can see that these things are working well, they’ll work more to understand what’s happening inside of them.”

DiCarlo’s lab now plans to try to generate models that can mimic other aspects of visual processing, including tracking motion and recognizing three-dimensional forms. They also hope to create models that include the feedback projections seen in the human visual system. Current networks only model the “feedforward” projections from the retina to the IT cortex, but there are 10 times as many connections that go from IT cortex back to the rest of the system.

This work was supported by the National Eye Institute, the National Science Foundation, and the Defense Advanced Research Projects Agency.

New way to turn genes on

Using a gene-editing system originally developed to delete specific genes, MIT researchers have now shown that they can reliably turn on any gene of their choosing in living cells.

This new application for the CRISPR/Cas9 gene-editing system should allow scientists to more easily determine the function of individual genes, according to Feng Zhang, the W.M. Keck Career Development Professor in Biomedical Engineering in MIT’s Departments of Brain and Cognitive Sciences and Biological Engineering, and a member of the Broad Institute and MIT’s McGovern Institute for Brain Research.

This approach also enables rapid functional screens of the entire genome, allowing scientists to identify genes involved in particular diseases. In a study published in the Dec. 10 online edition of Nature, Zhang and colleagues identified several genes that help melanoma cells become resistant to a cancer drug.

Silvana Konermann, a graduate student in Zhang’s lab, and Mark Brigham, a McGovern Institute postdoc, are the paper’s lead authors.

A new function for CRISPR

The CRISPR system relies on cellular machinery that bacteria use to defend themselves from viral infection. Researchers have previously harnessed this cellular system to create gene-editing complexes that include a DNA-cutting enzyme called Cas9 bound to a short RNA guide strand that is programmed to bind to a specific genome sequence, telling Cas9 where to make its cut.

In the past two years, scientists have developed Cas9 as a tool for turning genes off or replacing them with a different version. In the new study, Zhang and colleagues engineered the Cas9 system to turn genes on, rather than knock them out. Scientists have tried to do this before using proteins that are individually engineered to target DNA at specific sites. However, these proteins are difficult to work with. “If you use the older generation of tools, getting the technology to do what you actually want is a project on its own,” Konermann says. “It takes a lot of time and is also quite expensive.”

There have also been attempts to use CRISPR to turn on genes by inactivating the part of the Cas9 enzyme that cuts DNA and linking Cas9 to pieces of proteins called activation domains. These domains recruit the cellular machinery necessary to begin copying RNA from DNA, a process known as transcription.

However, these efforts have been unable to consistently turn on gene transcription. Zhang and his colleagues, Osamu Nureki and Hiroshi Nishimasu at the University of Tokyo, decided to overhaul the CRISPR-Cas9 system based on an analysis they published earlier this year of the structure formed when Cas9 binds to the guide RNA and its target DNA. “Based on knowing its 3-D shape, we can think about how to rationally improve the system,” Zhang says.

In previous efforts, scientists had tried to attach the activation domains to either end of the Cas9 protein, with limited success. From their structural studies, the MIT team realized that two small loops of the RNA guide poke out from the Cas9 complex and could be better points of attachment because they allow the activation domains to have more flexibility in recruiting transcription machinery.

Using their revamped system, the researchers activated about a dozen genes that had proven difficult or impossible to turn on using the previous generation of Cas9 activators. Each gene showed at least a twofold boost in transcription, and for many genes, the researchers found increases in activation of several orders of magnitude.

Genome-scale activation screening

Once the researchers had shown that the system was effective at activating genes, they created a library of 70,290 guide RNAs targeting all of the more than 20,000 genes in the human genome.

They screened this library to identify genes that confer resistance to a melanoma drug called PLX-4720. Drugs of this type work well in patients whose melanoma cells have a mutation in the BRAF gene, but cancer cells that survive the treatment can grow into new tumors, allowing the cancer to recur.

To discover the genes that help cells become resistant, the researchers delivered CRISPR components to a large population of melanoma cells grown in the lab, with each cell receiving a different guide RNA targeting a different gene. After treating the cells with PLX-4720, they identified several genes that helped the cells to survive — some previously known to be involved in drug resistance, as well as several novel targets.
Studies like this could help researchers discover new cancer drugs that prevent tumors from becoming resistant.
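
A screen of this kind is typically read out by sequencing the guide RNAs in the surviving cells and asking which ones have become over-represented relative to an untreated control, since those guides point to genes whose activation confers resistance. The sketch below shows that bookkeeping on invented counts; it is not the study’s actual pipeline or data.

```python
import numpy as np

rng = np.random.default_rng(2)

n_guides = 1000
guide_ids = [f"guide_{i}" for i in range(n_guides)]

# Hypothetical read counts per guide before and after drug selection.
control = rng.poisson(200, n_guides).astype(float) + 1
treated = rng.poisson(200, n_guides).astype(float) + 1
treated[:5] *= 30          # pretend five guides were strongly enriched

# Normalize for sequencing depth, then compute log2 enrichment.
control /= control.sum()
treated /= treated.sum()
log2_fc = np.log2(treated / control)

top = np.argsort(log2_fc)[::-1][:5]
for i in top:
    print(guide_ids[i], round(log2_fc[i], 2))
```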

“You could start with a drug that targets the mutated BRAF along with combination therapy that targets genes that allow the cell to survive. If you target both of them at the same time, you could likely prevent the cells from developing resistance mechanisms that enable further growth despite drug treatment,” Konermann says.

Scientists have tried to do large-scale screens like this by delivering single genes carried by viruses, but that does not work with all genes.

“This new technique could allow you to sample a larger spectrum of genes that might be playing a role,” says Levi Garraway, an associate professor of medicine at Dana-Farber Cancer Institute who was not involved in the research. “This is really a technology development paper, but the tantalizing results from the drug resistance screen speak to the rich biological possibilities of this approach.”

Zhang’s lab also plans to use this technique to screen for genes that, when activated, could correct the effects of autism or neurodegenerative diseases such as Alzheimer’s. He also plans to make the necessary reagents available to academic labs that want to use them, through the Addgene repository.

The research was funded by the National Institute of Mental Health; the National Institute of Neurological Disorders and Stroke; the Keck, Searle Scholars, Klingenstein, Vallee, and Simons foundations; and Bob Metcalfe.

McGovern neuroscientists identify key role of language gene

Researchers from MIT and several European universities have shown that the human version of a gene called Foxp2 makes it easier to transform new experiences into routine procedures. When they engineered mice to express humanized Foxp2, the mice learned to run a maze much more quickly than normal mice.

The findings suggest that Foxp2 may help humans with a key component of learning language — transforming experiences, such as hearing the word “glass” when we are shown a glass of water, into a nearly automatic association of that word with objects that look and function like glasses, says Ann Graybiel, an MIT Institute Professor, member of MIT’s McGovern Institute for Brain Research, and a senior author of the study.

“This really is an important brick in the wall saying that the form of the gene that allowed us to speak may have something to do with a special kind of learning, which takes us from having to make conscious associations in order to act to a nearly automatic-pilot way of acting based on the cues around us,” Graybiel says.

Wolfgang Enard, a professor of anthropology and human genetics at Ludwig-Maximilians University in Germany, is also a senior author of the study, which appears in the Proceedings of the National Academy of Sciences this week. The paper’s lead authors are Christiane Schreiweis, a former visiting graduate student at MIT, and Ulrich Bornschein of the Max Planck Institute for Evolutionary Anthropology in Germany.

All animal species communicate with each other, but humans have a unique ability to generate and comprehend language. Foxp2 is one of several genes that scientists believe may have contributed to the development of these linguistic skills. The gene was first identified in a group of family members who had severe difficulties in speaking and understanding speech, and who were found to carry a mutated version of the Foxp2 gene.

In 2009, Svante Pääbo, director of the Max Planck Institute for Evolutionary Anthropology, and his team engineered mice to express the human form of the Foxp2 gene, which encodes a protein that differs from the mouse version by only two amino acids. His team found that these mice had longer dendrites — the slender extensions that neurons use to communicate with each other — in the striatum, a part of the brain implicated in habit formation. They were also better at forming new synapses, or connections between neurons.

Pääbo, who is also an author of the new PNAS paper, and Enard enlisted Graybiel, an expert in the striatum, to help study the behavioral effects of replacing Foxp2. They found that the mice with humanized Foxp2 were better at learning to run a T-shaped maze, in which the mice must decide whether to turn left or right at a T-shaped junction, based on the texture of the maze floor, to earn a food reward.

The first phase of this type of learning requires using declarative memory, or memory for events and places. Over time, these memory cues become embedded as habits and are encoded through procedural memory — the type of memory necessary for routine tasks, such as driving to work every day or hitting a tennis forehand after thousands of practice strokes.

Using another type of maze called a cross-maze, Schreiweis and her MIT colleagues were able to test the mice’s ability in each type of memory alone, as well as the interaction of the two types. They found that the mice with humanized Foxp2 performed the same as normal mice when just one type of memory was needed, but their performance was superior when the learning task required them to convert declarative memories into habitual routines. The key finding was therefore that the humanized Foxp2 gene makes it easier to turn mindful actions into behavioral routines.

The protein produced by Foxp2 is a transcription factor, meaning that it turns other genes on and off. In this study, the researchers found that Foxp2 appears to turn on genes involved in the regulation of synaptic connections between neurons. They also found enhanced dopamine activity in a part of the striatum that is involved in forming procedures. In addition, the neurons of some striatal regions could be turned off for longer periods in response to prolonged activation — a phenomenon known as long-term depression, which is necessary for learning new tasks and forming memories.

Together, these changes help to “tune” the brain differently to adapt it to speech and language acquisition, the researchers believe. They are now further investigating how Foxp2 may interact with other genes to produce its effects on learning and language.

This study “provides new ways to think about the evolution of Foxp2 function in the brain,” says Genevieve Konopka, an assistant professor of neuroscience at the University of Texas Southwestern Medical Center who was not involved in the research. “It suggests that human Foxp2 facilitates learning that has been conducive for the emergence of speech and language in humans. The observed differences in dopamine levels and long-term depression in a region-specific manner are also striking and begin to provide mechanistic details of how the molecular evolution of one gene might lead to alterations in behavior.”

The research was funded by the Nancy Lurie Marks Family Foundation, the Simons Foundation Autism Research Initiative, the National Institutes of Health, the Wellcome Trust, the Fondation pour la Recherche Médicale and the Max Planck Society.

Try, try again? Study says no

When it comes to learning languages, adults and children have different strengths. Adults excel at absorbing the vocabulary needed to navigate a grocery store or order food in a restaurant, but children have an uncanny ability to pick up on subtle nuances of language that often elude adults. Within months of living in a foreign country, a young child may speak a second language like a native speaker.

Brain structure plays an important role in this “sensitive period” for learning language, which is believed to end around adolescence. The young brain is equipped with neural circuits that can analyze sounds and build a coherent set of rules for constructing words and sentences out of those sounds. Once these language structures are established, it’s difficult to build another one for a new language.

In a new study, a team of neuroscientists and psychologists led by Amy Finn, a postdoc at MIT’s McGovern Institute for Brain Research, has found evidence for another factor that contributes to adults’ language difficulties: When learning certain elements of language, adults’ more highly developed cognitive skills actually get in the way. The researchers discovered that the harder adults tried to learn an artificial language, the worse they were at deciphering the language’s morphology — the structure and deployment of linguistic units such as root words, suffixes, and prefixes.

“We found that effort helps you in most situations, for things like figuring out what the units of language that you need to know are, and basic ordering of elements. But when trying to learn morphology, at least in this artificial language we created, it’s actually worse when you try,” Finn says.

Finn and colleagues from the University of California at Santa Barbara, Stanford University, and the University of British Columbia describe their findings in the July 21 issue of PLoS One. Carla Hudson Kam, an associate professor of linguistics at British Columbia, is the paper’s senior author.

Too much brainpower

Linguists have known for decades that children are skilled at absorbing certain tricky elements of language, such as irregular past participles (examples of which, in English, include “gone” and “been”) or complicated verb tenses like the subjunctive.

“Children will ultimately perform better than adults in terms of their command of the grammar and the structural components of language — some of the more idiosyncratic, difficult-to-articulate aspects of language that even most native speakers don’t have conscious awareness of,” Finn says.

In 1990, linguist Elissa Newport hypothesized that adults have trouble learning those nuances because they try to analyze too much information at once. Adults have a much more highly developed prefrontal cortex than children, and they tend to throw all of that brainpower at learning a second language. This high-powered processing may actually interfere with certain elements of learning language.

“It’s an idea that’s been around for a long time, but there hasn’t been any data that experimentally show that it’s true,” Finn says.

Finn and her colleagues designed an experiment to test whether exerting more effort would help or hinder success. First, they created nine nonsense words, each with two syllables. Each word fell into one of three categories (A, B, and C), defined by the order of consonant and vowel sounds.

Study subjects listened to the artificial language for about 10 minutes. One group of subjects was told not to overanalyze what they heard, but not to tune it out either. To help them not overthink the language, they were given the option of completing a puzzle or coloring while they listened. The other group was told to try to identify the words they were hearing.

Each group heard the same recording, which was a series of three-word sequences — first a word from category A, then one from category B, then category C — with no pauses between words. Previous studies have shown that adults, babies, and even monkeys can parse this kind of information into word units, a task known as word segmentation.
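
To make the design concrete, the sketch below builds a toy version of such a stream. The syllables are invented stand-ins rather than the stimuli Finn’s team used, but they follow the same template: nine two-syllable nonsense words in three categories, played back as A-B-C triplets with no pauses between words.

```python
import random

random.seed(3)

# Three invented words per category; the real stimuli differed.
words = {
    "A": ["bapu", "dilo", "kema"],
    "B": ["tugi", "rofa", "nipe"],
    "C": ["sobu", "lami", "veko"],
}

def make_stream(n_triplets=20):
    # Each triplet is one word from A, then B, then C, concatenated with
    # no silences, so listeners must segment the words themselves.
    stream = []
    for _ in range(n_triplets):
        stream.extend(random.choice(words[c]) for c in ("A", "B", "C"))
    return "".join(stream)

print(make_stream(5))
# A morphology test item would swap in an unheard word that still fits
# its category slot, e.g. a new "B"-type word in second position.
```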

Subjects from both groups were successful at word segmentation, although the group that tried harder performed a little better. Both groups also performed well in a task called word ordering, which required subjects to choose between a correct word sequence (ABC) and an incorrect sequence (such as ACB) of words they had previously heard.

The final test measured skill in identifying the language’s morphology. The researchers played a three-word sequence that included a word the subjects had not heard before, but which fit into one of the three categories. When asked to judge whether this new word was in the correct location, the subjects who had been asked to pay closer attention to the original word stream performed much worse than those who had listened more passively.

Turning off effort

The findings support a theory of language acquisition that suggests that some parts of language are learned through procedural memory, while others are learned through declarative memory. Under this theory, declarative memory, which stores knowledge and facts, would be more useful for learning vocabulary and certain rules of grammar. Procedural memory, which guides tasks we perform without conscious awareness of how we learned them, would be more useful for learning subtle rules related to language morphology.

“It’s likely to be the procedural memory system that’s really important for learning these difficult morphological aspects of language. In fact, when you use the declarative memory system, it doesn’t help you, it harms you,” Finn says.

Still unresolved is the question of whether adults can overcome this language-learning obstacle. Finn says she does not have a good answer yet but she is now testing the effects of “turning off” the adult prefrontal cortex using a technique called transcranial magnetic stimulation. Other interventions she plans to study include distracting the prefrontal cortex by forcing it to perform other tasks while language is heard, and treating subjects with drugs that impair activity in that brain region.

The research was funded by the National Institute of Child Health and Human Development and the National Science Foundation.

Noninvasive brain control

Optogenetics, a technology that allows scientists to control brain activity by shining light on neurons, relies on light-sensitive proteins that can suppress or stimulate electrical signals within cells. This technique requires a light source to be implanted in the brain, where it can reach the cells to be controlled.

MIT engineers have now developed the first light-sensitive molecule that enables neurons to be silenced noninvasively, using a light source outside the skull. This makes it possible to do long-term studies without an implanted light source. The protein, known as Jaws, also allows a larger volume of tissue to be influenced at once.

This noninvasive approach could pave the way to using optogenetics in human patients to treat epilepsy and other neurological disorders, the researchers say, although much more testing and development is needed. Led by Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT, the researchers described the protein in the June 29 issue of Nature Neuroscience.

Optogenetics, a technique developed over the past 15 years, has become a common laboratory tool for shutting off or stimulating specific types of neurons in the brain, allowing neuroscientists to learn much more about their functions.
The neurons to be studied must be genetically engineered to produce light-sensitive proteins known as opsins, which are channels or pumps that influence electrical activity by controlling the flow of ions in or out of cells. Researchers then insert a light source, such as an optical fiber, into the brain to control the selected neurons.

Such implants can be difficult to insert, however, and can be incompatible with many kinds of experiments, such as studies of development, during which the brain changes size, or of neurodegenerative disorders, during which the implant can interact with brain physiology. In addition, it is difficult to perform long-term studies of chronic diseases with these implants.

Mining nature’s diversity

To find a better alternative, Boyden, graduate student Amy Chuong, and colleagues turned to the natural world. Many microbes and other organisms use opsins to detect light and react to their environment. Most of the natural opsins now used for optogenetics respond best to blue or green light.

Boyden’s team had previously identified two light-sensitive chloride ion pumps that respond to red light, which can penetrate deeper into living tissue. However, these molecules, found in the archaea Haloarcula marismortui and Haloarcula vallismortis, did not induce a strong enough photocurrent — an electric current in response to light — to be useful in controlling neuron activity.

Chuong set out to improve the photocurrent by looking for relatives of these proteins and testing their electrical activity. She then engineered one of these relatives by making many different mutants. The result of this screen, Jaws, retained its red-light sensitivity but had a much stronger photocurrent — enough to shut down neural activity.

“This exemplifies how the genomic diversity of the natural world can yield powerful reagents that can be of use in biology and neuroscience,” says Boyden, who is a member of MIT’s Media Lab and the McGovern Institute for Brain Research.

Using this opsin, the researchers were able to shut down neuronal activity in the mouse brain with a light source outside the animal’s head. The suppression occurred as deep as 3 millimeters in the brain, and was just as effective as that of existing silencers that rely on other colors of light delivered via conventional invasive illumination.

A key advantage to this opsin is that it could enable optogenetic studies of animals with larger brains, says Garret Stuber, an assistant professor of psychiatry and cell biology and physiology at the University of North Carolina at Chapel Hill.
“In animals with larger brains, people have had difficulty getting behavior effects with optogenetics, and one possible reason is that not enough of the tissue is being inhibited,” he says. “This could potentially alleviate that.”

Restoring vision

Working with researchers at the Friedrich Miescher Institute for Biomedical Research in Switzerland, the MIT team also tested Jaws’s ability to restore the light sensitivity of retinal cells called cones. In people with a disease called retinitis pigmentosa, cones slowly atrophy, eventually causing blindness.

Friedrich Miescher Institute scientists Botond Roska and Volker Busskamp have previously shown that some vision can be restored in mice by engineering those cone cells to express light-sensitive proteins. In the new paper, Roska and Busskamp tested the Jaws protein in the mouse retina and found that it more closely resembled the eye’s natural opsins and offered a greater range of light sensitivity, making it potentially more useful for treating retinitis pigmentosa.

This type of noninvasive approach to optogenetics could also represent a step toward developing optogenetic treatments for diseases such as epilepsy, which could be controlled by shutting off misfiring neurons that cause seizures, Boyden says. “Since these molecules come from species other than humans, many studies must be done to evaluate their safety and efficacy in the context of treatment,” he says.

Boyden’s lab is working with many other research groups to further test the Jaws opsin for other applications. The team is also seeking new light-sensitive proteins and is working on high-throughput screening approaches that could speed up the development of such proteins.

The research at MIT was funded by Jerry and Marge Burnett, the Defense Advanced Research Projects Agency, the Human Frontiers Science Program, the IET A. F. Harvey Prize, the Janet and Sheldon Razin ’59 Fellowship of the MIT McGovern Institute, the New York Stem Cell Foundation-Robertson Investigator Award, the National Institutes of Health, the National Science Foundation, and the Wallace H. Coulter Foundation.

Controlling movement with light

For the first time, MIT neuroscientists have shown they can control muscle movement by applying optogenetics — a technique that allows scientists to control neurons’ electrical impulses with light — to the spinal cords of animals that are awake and alert.

Led by MIT Institute Professor Emilio Bizzi, the researchers studied mice in which a light-sensitive protein that promotes neural activity was inserted into a subset of spinal neurons. When the researchers shone blue light on the animals’ spinal cords, their hind legs were completely but reversibly immobilized. The findings, described in the June 25 issue of PLoS One, offer a new approach to studying the complex spinal circuits that coordinate movement and sensory processing, the researchers say.

In this study, Bizzi and Vittorio Caggiano, a postdoc at MIT’s McGovern Institute for Brain Research, used optogenetics to explore the function of inhibitory interneurons, which form circuits with many other neurons in the spinal cord. These circuits execute commands from the brain, with additional input from sensory information from the limbs.

Previously, neuroscientists have used electrical stimulation or pharmacological intervention to control neurons’ activity and try to tease out their function. Those approaches have revealed a great deal of information about spinal control, but they do not offer precise enough control to study specific subsets of neurons.

Optogenetics, on the other hand, allows scientists to control specific types of neurons by genetically programming them to express light-sensitive proteins. These proteins, called opsins, act as ion channels or pumps that regulate neurons’ electrical activity. Some opsins suppress activity when light shines on them, while others stimulate it.

“With optogenetics, you are attacking a system of cells that have certain characteristics similar to each other. It’s a big shift in terms of our ability to understand how the system works,” says Bizzi, who is a member of MIT’s McGovern Institute.

Muscle control

Inhibitory neurons in the spinal cord suppress muscle contractions, which is critical for maintaining balance and for coordinating movement. For example, when you raise an apple to your mouth, the biceps contract while the triceps relax. Inhibitory neurons are also thought to be involved in the state of muscle inhibition that occurs during the rapid eye movement (REM) stage of sleep.

To study the function of inhibitory neurons in more detail, the researchers used mice developed by Guoping Feng, the Poitras Professor of Neuroscience at MIT, in which all inhibitory spinal neurons were engineered to express an opsin called channelrhodopsin 2. This opsin stimulates neural activity when exposed to blue light. They then shone light at different points along the spine to observe the effects of neuron activation.

When inhibitory neurons in a small section of the thoracic spine were activated in freely moving mice, all hind-leg movement ceased. This suggests that inhibitory neurons in the thoracic spine relay the inhibition all the way to the end of the spine, Caggiano says. The researchers also found that activating inhibitory neurons had no effect on the transmission of sensory information from the limbs to the brain, or on normal reflexes.

“The spinal location where we found this complete suppression was completely new,” Caggiano says. “It has not been shown by any other scientists that there is this front-to-back suppression that affects only motor behavior without affecting sensory behavior.”

“It’s a compelling use of optogenetics that raises a lot of very interesting questions,” says Simon Giszter, a professor of neurobiology and anatomy at Drexel University who was not part of the research team. Among those questions is whether this mechanism behaves as a global “kill switch,” or if the inhibitory neurons form modules that allow for more selective suppression of movement patterns.

Now that they have demonstrated the usefulness of optogenetics for this type of study, the MIT team hopes to explore the roles of other types of spinal cord neurons. They also plan to investigate how input from the brain influences these spinal circuits.

“There’s huge interest in trying to extend these studies and dissect these circuits because we tackled only the inhibitory system in a very global way,” Caggiano says. “Further studies will highlight the contribution of single populations of neurons in the spinal cord for the control of limbs and control of movement.”

The research was funded by the Human Frontier Science Program and the National Science Foundation. Mriganka Sur, the Paul E. and Lilah Newton Professor of Neuroscience at MIT, is also an author of the paper.

When good people do bad things

When people get together in groups, unusual things can happen — both good and bad. Groups create important social institutions that an individual could not achieve alone, but there can be a darker side to such alliances: Belonging to a group makes people more likely to harm others outside the group.

“Although humans exhibit strong preferences for equity and moral prohibitions against harm in many contexts, people’s priorities change when there is an ‘us’ and a ‘them,’” says Rebecca Saxe, an associate professor of cognitive neuroscience at MIT. “A group of people will often engage in actions that are contrary to the private moral standards of each individual in that group, sweeping otherwise decent individuals into ‘mobs’ that commit looting, vandalism, even physical brutality.”

Several factors play into this transformation. When people are in a group, they feel more anonymous, and less likely to be caught doing something wrong. They may also feel a diminished sense of personal responsibility for collective actions.

Saxe and colleagues recently studied a third factor that cognitive scientists believe may be involved in this group dynamic: the hypothesis that when people are in groups, they “lose touch” with their own morals and beliefs, and become more likely to do things that they would normally believe are wrong.

In a study that recently went online in the journal NeuroImage, the researchers measured brain activity in a part of the brain involved in thinking about oneself. They found that in some people, this activity was reduced when the subjects participated in a competition as part of a group, compared with when they competed as individuals. Those people were more likely to harm their competitors than people who did not exhibit this decreased brain activity.

“This process alone does not account for intergroup conflict: Groups also promote anonymity, diminish personal responsibility, and encourage reframing harmful actions as ‘necessary for the greater good.’ Still, these results suggest that at least in some cases, explicitly reflecting on one’s own personal moral standards may help to attenuate the influence of ‘mob mentality,’” says Mina Cikara, a former MIT postdoc and lead author of the NeuroImage paper.

Group dynamics

Cikara, who is now an assistant professor at Carnegie Mellon University, started this research project after experiencing the consequences of a “mob mentality”: During a visit to Yankee Stadium, her husband was ceaselessly heckled by Yankees fans for wearing a Red Sox cap. “What I decided to do was take the hat from him, thinking I would be a lesser target by virtue of the fact that I was a woman,” Cikara says. “I was so wrong. I have never been called names like that in my entire life.”

The harassment, which continued throughout the trip back to Manhattan, provoked a strong reaction in Cikara, who isn’t even a Red Sox fan.

“It was a really amazing experience because what I realized was I had gone from being an individual to being seen as a member of ‘Red Sox Nation.’ And the way that people responded to me, and the way I felt myself responding back, had changed, by virtue of this visual cue — the baseball hat,” she says. “Once you start feeling attacked on behalf of your group, however arbitrary, it changes your psychology.”

Cikara, then a third-year graduate student at Princeton University, started to investigate the neural mechanisms behind the group dynamics that produce bad behavior. In the new study, done at MIT, Cikara, Saxe (who is also an associate member of MIT’s McGovern Institute for Brain Research), former Harvard University graduate student Anna Jenkins, and former MIT lab manager Nicholas Dufour focused on a part of the brain called the medial prefrontal cortex. When someone is reflecting on himself or herself, this part of the brain lights up in functional magnetic resonance imaging (fMRI) brain scans.

A couple of weeks before the study participants came in for the experiment, the researchers surveyed each of them about their social-media habits, as well as their moral beliefs and behavior. This allowed the researchers to create individualized statements for each subject that were true for that person — for example, “I have stolen food from shared refrigerators” or “I always apologize after bumping into someone.”

When the subjects arrived at the lab, their brains were scanned as they played a game once on their own and once as part of a team. The purpose of the game was to press a button if they saw a statement related to social media, such as “I have more than 600 Facebook friends.”

The subjects also saw their personalized moral statements mixed in with sentences about social media. Brain scans revealed that when subjects were playing for themselves, the medial prefrontal cortex lit up much more when they read moral statements about themselves than statements about others, consistent with previous findings. However, during the team competition, some people showed a much smaller difference in medial prefrontal cortex activation when they saw the moral statements about themselves compared to those about other people.

Those people also turned out to be much more likely to harm members of the competing group during a task performed after the game. Each subject was asked to select photos that would appear with the published study, from a set of four photos apiece of two teammates and two members of the opposing team. The subjects with suppressed medial prefrontal cortex activity chose the least flattering photos of the opposing team members, but not of their own teammates.

“This is a nice way of using neuroimaging to try to get insight into something that behaviorally has been really hard to explore,” says David Rand, an assistant professor of psychology at Yale University who was not involved in the research. “It’s been hard to get a direct handle on the extent to which people within a group are tapping into their own understanding of things versus the group’s understanding.”

Getting lost

The researchers also found that after the game, people with reduced medial prefrontal cortex activity had more difficulty remembering the moral statements they had seen during the game.

“If you need to encode something with regard to the self and that ability is somehow undermined when you’re competing with a group, then you should have poor memory associated with that reduction in medial prefrontal cortex signal, and that’s exactly what we see,” Cikara says.

Cikara hopes to follow up on these findings to investigate what makes some people more likely to become “lost” in a group than others. She is also interested in studying whether people are slower to recognize themselves or pick themselves out of a photo lineup after being absorbed in a group activity.

The research was funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the Air Force Office of Scientific Research, and the Packard Foundation.

Inside the adult ADHD brain

About 11 percent of school-age children in the United States have been diagnosed with attention deficit hyperactivity disorder (ADHD). While many of these children eventually “outgrow” the disorder, some carry their difficulties into adulthood: About 10 million American adults are currently diagnosed with ADHD.

In the first study to compare patterns of brain activity in adults who recovered from childhood ADHD and those who did not, MIT neuroscientists have discovered key differences in a brain communication network that is active when the brain is at wakeful rest and not focused on a particular task. The findings offer evidence of a biological basis for adult ADHD and should help to validate the criteria used to diagnose the disorder, according to the researchers.

Diagnoses of adult ADHD have risen dramatically in the past several years. The symptoms are similar to those of childhood ADHD: a general inability to focus, reflected in difficulty completing tasks, listening to instructions, or remembering details.

“The psychiatric guidelines for whether a person’s ADHD is persistent or remitted are based on lots of clinical studies and impressions. This new study suggests that there is a real biological boundary between those two sets of patients,” says MIT’s John Gabrieli, the Grover M. Hermann Professor of Health Sciences and Technology, professor of brain and cognitive sciences, and an author of the study, which appears in the June 10 issue of the journal Brain.

Shifting brain patterns

This study focused on 35 adults who were diagnosed with ADHD as children; 13 of them still have the disorder, while the rest have recovered. “This sample really gave us a unique opportunity to ask questions about whether or not the brain basis of ADHD is similar in the remitted-ADHD and persistent-ADHD cohorts,” says Aaron Mattfeld, a postdoc at MIT’s McGovern Institute for Brain Research and the paper’s lead author.

The researchers used a technique called resting-state functional magnetic resonance imaging (fMRI) to study what the brain is doing when a person is not engaged in any particular activity. These patterns reveal which parts of the brain communicate with each other during this type of wakeful rest.

“It’s a different way of using functional brain imaging to investigate brain networks,” says Susan Whitfield-Gabrieli, a research scientist at the McGovern Institute and the senior author of the paper. “Here we have subjects just lying in the scanner. This method reveals the intrinsic functional architecture of the human brain without invoking any specific task.”

In people without ADHD, when the mind is unfocused, there is a distinctive synchrony of activity in brain regions known as the default mode network. Previous studies have shown that in children and adults with ADHD, two major hubs of this network — the posterior cingulate cortex and the medial prefrontal cortex — no longer synchronize.
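
In practice, this kind of synchrony is usually quantified as the correlation between the resting-state fMRI time courses of the two hubs. The sketch below computes that measure on simulated signals; it is a generic illustration, not the analysis code used in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

n_timepoints = 200
shared = rng.standard_normal(n_timepoints)      # common slow fluctuation

# Simulated BOLD time series for two default-mode hubs. In a synchronized
# network both regions track the shared signal; in ADHD that coupling is
# reported to weaken.
pcc = shared + 0.5 * rng.standard_normal(n_timepoints)   # posterior cingulate
mpfc = shared + 0.5 * rng.standard_normal(n_timepoints)  # medial prefrontal

connectivity = np.corrcoef(pcc, mpfc)[0, 1]
print(f"PCC-mPFC functional connectivity: {connectivity:.2f}")
```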

In the new study, the MIT team showed for the first time that in adults who had been diagnosed with ADHD as children but no longer have it, this normal synchrony pattern is restored. “Their brains now look like those of people who never had ADHD,” Mattfeld says.

“This finding is quite intriguing,” says Francisco Xavier Castellanos, a professor of child and adolescent psychiatry at New York University who was not involved in the research. “If it can be confirmed, this pattern could become a target for potential modification to help patients learn to compensate for the disorder without changing their genetic makeup.”

Lingering problems

However, in another measure of brain synchrony, the researchers found much greater similarity between the two groups of ADHD patients.

In people without ADHD, when the default mode network is active, another network, called the task positive network, is suppressed. When the brain is performing tasks that require focus, the task positive network takes over and suppresses the default mode network. If this reciprocal relationship degrades, the ability to focus declines.

Both groups of adult ADHD patients, including those who had recovered, showed patterns of simultaneous activation of both networks. This is thought to be a sign of impairment in executive function — the management of cognitive tasks — that is separate from ADHD, but occurs in about half of ADHD patients. All of the ADHD patients in this study performed poorly on tests of executive function. “Once you have executive function problems, they seem to hang in there,” says Gabrieli, who is a member of the McGovern Institute.

The researchers now plan to investigate how ADHD medications influence the brain’s default mode network, in hopes that this might allow them to predict which drugs will work best for individual patients. Currently, about 60 percent of patients respond well to the first drug they receive.

“It’s unknown what’s different about the other 40 percent or so who don’t respond very much,” Gabrieli says. “We’re pretty excited about the possibility that some brain measurement would tell us which child or adult is most likely to benefit from a treatment.”

The research was funded by the Poitras Center for Affective Disorders Research at the McGovern Institute.

Illuminating neuron activity in 3-D

Researchers at MIT and the University of Vienna have created an imaging system that reveals neural activity throughout the brains of living animals. This technique, the first that can generate 3-D movies of entire brains at the millisecond timescale, could help scientists discover how neuronal networks process sensory information and generate behavior.

The team used the new system to simultaneously image the activity of every neuron in the worm Caenorhabditis elegans, as well as the entire brain of a zebrafish larva, offering a more complete picture of nervous system activity than has been previously possible.

“Looking at the activity of just one neuron in the brain doesn’t tell you how that information is being computed; for that, you need to know what upstream neurons are doing. And to understand what the activity of a given neuron means, you have to be able to see what downstream neurons are doing,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT and one of the leaders of the research team. “In short, if you want to understand how information is being integrated from sensation all the way to action, you have to see the entire brain.”

The new approach, described May 18 in Nature Methods, could also help neuroscientists learn more about the biological basis of brain disorders. “We don’t really know, for any brain disorder, the exact set of cells involved,” Boyden says. “The ability to survey activity throughout a nervous system may help pinpoint the cells or networks that are involved with a brain disorder, leading to new ideas for therapies.”

Boyden’s team developed the brain-mapping method with researchers in the lab of Alipasha Vaziri of the University of Vienna and the Research Institute of Molecular Pathology in Vienna. The paper’s lead authors are Young-Gyu Yoon, a graduate student at MIT, and Robert Prevedel, a postdoc at the University of Vienna.

High-speed 3-D imaging

Neurons encode information — sensory data, motor plans, emotional states, and thoughts — using electrical impulses called action potentials, which provoke calcium ions to stream into each cell as it fires. By engineering fluorescent proteins to glow when they bind calcium, scientists can visualize this electrical firing of neurons. However, until now there has been no way to image this neural activity over a large volume, in three dimensions, and at high speed.

Scanning the brain with a laser beam can produce 3-D images of neural activity, but it takes a long time to capture an image because each point must be scanned individually. The MIT team wanted to achieve similar 3-D imaging but accelerate the process so they could see neuronal firing, which takes only milliseconds, as it occurs.

The new method is based on a widely used technology known as light-field imaging, which creates 3-D images by measuring the angles of incoming rays of light. Ramesh Raskar, an associate professor of media arts and sciences at MIT and an author of this paper, has worked extensively on developing this type of 3-D imaging. Microscopes that perform light-field imaging have been developed previously by multiple groups. In the new paper, the MIT and Austrian researchers optimized the light-field microscope, and applied it, for the first time, to imaging neural activity.

With this kind of microscope, the light emitted by the sample being imaged is sent through an array of lenses that refracts the light in different directions. Each point of the sample generates about 400 different points of light, which can then be recombined using a computer algorithm to recreate the 3-D structure.

“If you have one light-emitting molecule in your sample, rather than just refocusing it into a single point on the camera the way regular microscopes do, these tiny lenses will project its light onto many points. From that, you can infer the three-dimensional position of where the molecule was,” says Boyden, who is a member of MIT’s Media Lab and McGovern Institute for Brain Research.
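
The reconstruction can be thought of as inverting a known forward model: every point in the volume spreads its light over many camera pixels, and an iterative algorithm redistributes the measured pixel values back into a 3-D estimate. The toy below uses a random projection matrix and a standard Richardson-Lucy-style update to show the idea; it is far simpler than the light-field deconvolution actually used in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

n_voxels, n_pixels = 50, 400

# Forward model: A[i, j] = how much light from voxel j reaches pixel i.
A = rng.random((n_pixels, n_voxels))
A /= A.sum(axis=0, keepdims=True)

# A sparse "volume" with two bright sources, and its camera measurement.
truth = np.zeros(n_voxels)
truth[[10, 37]] = 1.0
measured = A @ truth

# Richardson-Lucy-style multiplicative updates recover the volume.
estimate = np.ones(n_voxels)
for _ in range(200):
    ratio = measured / (A @ estimate + 1e-12)
    estimate *= (A.T @ ratio) / A.sum(axis=0)

print(np.argsort(estimate)[::-1][:2])   # indices of the recovered sources
```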

Prevedel built the microscope, and Yoon devised the computational strategies that reconstruct the 3-D images.

Aravinthan Samuel, a professor of physics at Harvard University, says this approach seems to be an “extremely promising” way to speed up 3-D imaging of living, moving animals, and to correlate their neuronal activity with their behavior. “What’s very impressive about it is that it is such an elegantly simple implementation,” says Samuel, who was not part of the research team. “I could imagine many labs adopting this.”

Neurons in action

The researchers used this technique to image neural activity in the worm C. elegans, the only organism for which the entire neural wiring diagram is known. This 1-millimeter worm has 302 neurons, each of which the researchers imaged as the worm performed natural behaviors, such as crawling. They also observed the neuronal response to sensory stimuli, such as smells.

The downside to light-field microscopy, Boyden says, is that the resolution is not as good as that of techniques that slowly scan a sample. The current resolution is high enough to see activity of individual neurons, but the researchers are now working on improving it so the microscope could also be used to image parts of neurons, such as the long dendrites that branch out from neurons’ main bodies. They also hope to speed up the computing process, which currently takes a few minutes to analyze one second of imaging data.

The researchers also plan to combine this technique with optogenetics, which enables neuronal firing to be controlled by shining light on cells engineered to express light-sensitive proteins. By stimulating a neuron with light and observing the results elsewhere in the brain, scientists could determine which neurons are participating in particular tasks.

Other co-authors at MIT include Nikita Pak, a PhD student in mechanical engineering, and Gordon Wetzstein, a research scientist at the Media Lab. The work at MIT was funded by the Allen Institute for Brain Science; the National Institutes of Health; the MIT Synthetic Intelligence Project; the IET Harvey Prize; the National Science Foundation (NSF); the New York Stem Cell Foundation-Robertson Award; Google; the NSF Center for Brains, Minds, and Machines at MIT; and Jeremy and Joyce Wertheimer.