2015 Sharp Lecture in Neural Circuits: Dr. Cornelia Bargmann
Tasting light
Human taste receptors are specialized to distinguish several distinct compounds: sugars taste sweet, salts taste salty, and acidic compounds taste sour. Now a new study from MIT finds that the worm Caenorhabditis elegans has taken its powers of detection a step further: The worm can taste hydrogen peroxide, triggering it to stop eating the potentially dangerous substance.
Being able to taste hydrogen peroxide allows the worm to detect light, which generates hydrogen peroxide and other harmful reactive oxygen compounds both within the worm and in its environment.
“This is potentially a brand-new mechanism of sensing light,” says Nikhil Bhatla, the lead author of the paper and a postdoc in MIT’s Department of Biology. “All of the mechanisms of light detection we know about involve a chromophore — a small molecule that absorbs a photon and changes shape or transfers electrons. This seems to be the first example of behavioral light-sensing that requires the generation of a chemical in the process of detecting the light.”
Bhatla and Robert Horvitz, the David H. Koch Professor of Biology, describe the new hydrogen peroxide taste receptors in the Jan. 29 online issue of the journal Neuron.
Though it is not yet known whether there is a human equivalent of this system, the researchers say their discovery lends support to the idea that there may be human taste receptors dedicated to flavors other than the five canonical ones — sweet, salty, bitter, sour, and savory. It also opens the possibility that humans might be able to sense light in ways that are fundamentally different from those known to act in vision.
“I think we have underestimated our biological abilities,” Bhatla says. “Aside from those five, there are other flavors, such as burnt. How do we taste something as burnt? Or what about spicy, or metallic, or smoky? There’s this whole new area that hasn’t really been explored.”
Beyond bitter and sweet
One of the major functions of the sense of taste is to determine whether something is safe, or advantageous, to eat. For humans and other animals, bitterness often serves as a warning of poison, while sweetness can help to identify foods that are rich in energy.
For worms, hydrogen peroxide can be harmful because it causes extensive cellular damage, harming proteins, DNA, and other molecules in the body. In fact, certain strains of bacteria produce hydrogen peroxide that can kill C. elegans after being eaten. Worms might also ingest hydrogen peroxide from the soil where they live.
Bhatla and Horvitz found that worms stop eating both when they taste hydrogen peroxide and when light shines on them — especially high-energy light, such as violet or ultraviolet. The authors found the exact same feeding response when worms were exposed to either hydrogen peroxide or light, which suggested to them that the same mechanism might be controlling responses to both stimuli.
Worms are known to be averse to light: Previous research by others has shown that they flee when light shines on them. Bhatla and Horvitz have now found that this escape response, like the feeding response to light, is likely caused by light’s generation of chemicals such as hydrogen peroxide.
The C. elegans worm has a very simple and thoroughly mapped nervous system consisting of 302 neurons, 20 of which are located in the pharynx, the feeding organ that ingests and grinds food. Bhatla found that one pair of pharyngeal neurons, known as the I2 neurons, controls the animal’s response to both light and hydrogen peroxide. A particular molecular receptor in these neurons, gustatory receptor 3 (GUR-3), and a molecularly similar receptor found in other neurons (LITE-1) are critical to the response. However, each receptor appears to function in a slightly different way.
GUR-3 detects hydrogen peroxide, whether it is found naturally in the environment or generated by light. There are many GUR-3 receptors in the I2 neuron, and through a mechanism that remains unknown, hydrogen peroxide stimulation of GUR-3 causes the pharynx to stop grinding. Another molecule called peroxiredoxin, an antioxidant, appears to help GUR-3 detect hydrogen peroxide.
While the GUR-3 receptor responds much more strongly to hydrogen peroxide than to light, the LITE-1 receptor is much more sensitive to light than to hydrogen peroxide. LITE-1 has previously been implicated in detecting light, but until now, it has been a mystery how a taste receptor could respond to light. The new study suggests that like GUR-3, LITE-1 indirectly senses light by detecting reactive oxygen compounds generated by light — including, but not limited to, hydrogen peroxide.
Kenneth Miller of the Oklahoma Medical Research Foundation published a paper in 2008 describing LITE-1 and hypothesizing that it might work by detecting a chemical product of light interaction. “This paper goes one step beyond that and identifies molecules that LITE-1 could be sensing to identify the presence of light,” says Miller, who was not part of the new study. “I thought it was a fascinating look at the complex gustatory sensory mechanism for molecules like hydrogen peroxide.”
Not found in humans
The molecular family of receptors that includes GUR-3 and LITE-1 is specific to invertebrates, and is not found in humans. However, peroxiredoxin is found in humans, particularly in the eye, so the researchers suspect that peroxiredoxin might play a role in detecting reactive oxygen species generated by light in the eye.
The researchers are now trying to figure out the exact mechanism of hydrogen peroxide detection: For example, how exactly do these gustatory receptors detect reactive oxygen compounds? The researchers are also working to identify the neural circuit diagram that defines how the I2 neurons interact with other neurons to control the worms’ feeding behavior. Such neural circuit diagrams should provide insight into how the brains of worms, and people, generate behavior.
The research was funded by the National Science Foundation, the National Institutes of Health, and the Howard Hughes Medical Institute.
MIT team enlarges brain samples, making them easier to image
Beginning with the invention of the first microscope in the late 1500s, scientists have been trying to peer into preserved cells and tissues with ever-greater magnification. The latest generation of so-called “super-resolution” microscopes can see inside cells with resolution better than 250 nanometers.
A team of researchers from MIT has now taken a novel approach to gaining such high-resolution images: Instead of making their microscopes more powerful, they have discovered a method that enlarges tissue samples by embedding them in a polymer that swells when water is added. This allows specimens to be physically magnified, and then imaged at a much higher resolution.
This technique, which uses inexpensive, commercially available chemicals and microscopes commonly found in research labs, should give many more scientists access to super-resolution imaging, the researchers say.
“Instead of acquiring a new microscope to take images with nanoscale resolution, you can take the images on a regular microscope. You physically make the sample bigger, rather than trying to magnify the rays of light that are emitted by the sample,” says Ed Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT.
Boyden is the senior author of a paper describing the new method in the Jan. 15 online edition of Science. Lead authors of the paper are graduate students Fei Chen and Paul Tillberg.
Physical magnification
Most microscopes work by using lenses to focus light emitted from a sample into a magnified image. However, this approach has a fundamental limit known as the diffraction limit, which means that it can’t be used to visualize objects much smaller than about half the wavelength of the light being used. For example, if you are using blue-green light with a wavelength of 500 nanometers, you can’t see anything smaller than 250 nanometers.
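To make the arithmetic concrete, here is a minimal sketch of that limit using the Abbe estimate, d = wavelength / (2 x numerical aperture); the aperture values are illustrative assumptions rather than figures from the study.

```python
# Abbe estimate of the diffraction limit: d = wavelength / (2 * NA).
def diffraction_limit_nm(wavelength_nm: float, numerical_aperture: float = 1.0) -> float:
    """Smallest resolvable feature size, in nanometers."""
    return wavelength_nm / (2.0 * numerical_aperture)

print(diffraction_limit_nm(500.0))        # 250.0 nm, the figure quoted above
print(diffraction_limit_nm(500.0, 1.4))   # ~179 nm with a high-NA oil objective
```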
“Unfortunately, in biology that’s right where things get interesting,” says Boyden, who is a member of MIT’s Media Lab and McGovern Institute for Brain Research. Protein complexes, molecules that transport payloads in and out of cells, and other cellular activities are all organized at the nanoscale.
Scientists have come up with some “really clever tricks” to overcome this limitation, Boyden says. However, these super-resolution techniques work best with small, thin samples, and take a long time to image large samples. “If you want to map the brain, or understand how cancer cells are organized in a metastasizing tumor, or how immune cells are configured in an autoimmune attack, you have to look at a large piece of tissue with nanoscale precision,” he says.
To achieve this, the MIT team focused its attention on the sample rather than the microscope. Their idea was to make specimens easier to image at high resolution by embedding them in an expandable polymer gel made of polyacrylate, a very absorbent material commonly found in diapers.
Before enlarging the tissue, the researchers first label the cell components or proteins that they want to examine, using an antibody that binds to the chosen targets. This antibody is linked to a fluorescent dye, as well as a chemical anchor that can attach the dye to the polyacrylate chain.
Once the tissue is labeled, the researchers add the precursor to the polyacrylate gel and heat it to form the gel. They then digest the proteins that hold the specimen together, allowing it to expand uniformly. The specimen is then washed in salt-free water to induce a 100-fold expansion in volume. Even though the proteins have been broken apart, the original location of each fluorescent label stays the same relative to the overall structure of the tissue because it is anchored to the polyacrylate gel.
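Since volume scales as the cube of length, a 100-fold volume expansion corresponds to a roughly 4.6-fold stretch in each linear dimension. The short sketch below works through that arithmetic; the 300-nanometer starting resolution is an assumed round number for illustration, not a value from the paper.

```python
# Volume scales as length cubed, so a 100-fold volume expansion corresponds
# to a 100 ** (1/3), roughly 4.64-fold, expansion in each linear dimension.
volume_expansion = 100.0
linear_expansion = volume_expansion ** (1.0 / 3.0)
print(f"linear expansion: {linear_expansion:.2f}x")   # ~4.64x

# An assumed ~300 nm diffraction-limited resolution in the swollen gel then
# maps back to ~300 / 4.64, about 65 nm, in the original tissue, in the same
# range as the ~70 nm figure reported below.
assumed_gel_resolution_nm = 300.0
print(f"effective resolution: {assumed_gel_resolution_nm / linear_expansion:.0f} nm")
```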
“What you’re left with is a three-dimensional, fluorescent cast of the original material. And the cast itself is swollen, unimpeded by the original biological structure,” Tillberg says.
The MIT team imaged this “cast” with commercially available confocal microscopes, commonly used for fluorescent imaging but usually limited to a resolution of hundreds of nanometers. With their enlarged samples, the researchers achieved resolution down to 70 nanometers. “The expansion microscopy process … should be compatible with many existing microscope designs and systems already in laboratories,” Chen adds.
Large tissue samples
Using this technique, the MIT team was able to image a section of brain tissue 500 by 200 by 100 microns with a standard confocal microscope. Imaging such large samples would not be feasible with other super-resolution techniques, which require minutes to image a tissue slice only 1 micron thick and are limited in their ability to image large samples by optical scattering and other aberrations.
“The exciting part is that this approach can acquire data at the same high speed per pixel as conventional microscopy, contrary to most other methods that beat the diffraction limit for microscopy, which can be 1,000 times slower per pixel,” says George Church, a professor of genetics at Harvard Medical School who was not part of the research team.
“The other methods currently have better resolution, but are harder to use, or slower,” Tillberg says. “The benefits of our method are the ease of use and, more importantly, compatibility with large volumes, which is challenging with existing technologies.”
The researchers envision that this technology could be very useful to scientists trying to image brain cells and map how they connect to each other across large regions.
“There are lots of biological questions where you have to understand a large structure,” Boyden says. “Especially for the brain, you have to be able to image a large volume of tissue, but also to see where all the nanoscale components are.”
While Boyden’s team is focused on the brain, other possible applications for this technique include studying tumor metastasis and angiogenesis (growth of blood vessels to nourish a tumor), or visualizing how immune cells attack specific organs during autoimmune disease.
The research was funded by the National Institutes of Health, the New York Stem Cell Foundation, Jeremy and Joyce Wertheimer, the National Science Foundation, and the Fannie and John Hertz Foundation.
Flythrough animation of the mouse brain
Flythrough of image data collected from mouse hippocampus, with neurons expressing Yellow Fluorescent Protein, showing both the large volume accessible with Expansion Microscopy (ExM) and the sub-diffraction-limit resolution needed to reveal synaptic structure. Animation by Sputnik Animation based on data from Ed Boyden Lab at MIT.
Translational Neuroscience Seminar Series
Seminar: Expansion Microscopy
In one aspect of vision, computers catch up to primate brain
For decades, neuroscientists have been trying to design computer networks that can mimic visual skills such as recognizing objects, which the human brain does very accurately and quickly.
Until now, no computer model has been able to match the primate brain at visual object recognition during a brief glance. However, a new study from MIT neuroscientists has found that one of the latest generation of these so-called “deep neural networks” matches the primate brain.
Because these networks are based on neuroscientists’ current understanding of how the brain performs object recognition, the success of the latest networks suggests that neuroscientists have a fairly accurate grasp of how object recognition works, says James DiCarlo, a professor of neuroscience and head of MIT’s Department of Brain and Cognitive Sciences and the senior author of a paper describing the study in the Dec. 11 issue of the journal PLoS Computational Biology.
“The fact that the models predict the neural responses and the distances of objects in neural population space shows that these models encapsulate our current best understanding as to what is going on in this previously mysterious portion of the brain,” says DiCarlo, who is also a member of MIT’s McGovern Institute for Brain Research.
This improved understanding of how the primate brain works could lead to better artificial intelligence and, someday, new ways to repair visual dysfunction, adds Charles Cadieu, a postdoc at the McGovern Institute and the paper’s lead author.
Other authors are graduate students Ha Hong and Diego Ardila, research scientist Daniel Yamins, former MIT graduate student Nicolas Pinto, former MIT undergraduate Ethan Solomon, and research affiliate Najib Majaj.
Inspired by the brain
Scientists began building neural networks in the 1970s in hopes of mimicking the brain’s ability to process visual information, recognize speech, and understand language.
For vision-based neural networks, scientists were inspired by the hierarchical representation of visual information in the brain. As visual input flows from the retina into primary visual cortex and then inferotemporal (IT) cortex, it is processed at each level and becomes more specific until objects can be identified.
To mimic this, neural network designers create several layers of computation in their models. Each level performs a mathematical operation, such as a linear dot product. At each level, the representations of the visual object become more and more complex, and unneeded information, such as an object’s location or movement, is cast aside.
“Each individual element is typically a very simple mathematical expression,” Cadieu says. “But when you combine thousands and millions of these things together, you get very complicated transformations from the raw signals into representations that are very good for object recognition.”
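As a toy illustration of one such element, the sketch below implements a single layer as a bank of dot products followed by a rectifying nonlinearity; the sizes and the choice of nonlinearity are assumptions for illustration, not details of the networks tested in the study.

```python
import numpy as np

# One element of a feedforward visual network: a bank of linear filters
# (dot products) followed by a simple rectifying nonlinearity.
rng = np.random.default_rng(0)
weights = rng.standard_normal((128, 4096))   # 128 filters over a 64x64 input
image = rng.standard_normal(4096)            # a flattened "image"

def layer(x, w):
    """Linear dot products, then rectification (negative responses set to 0)."""
    return np.maximum(w @ x, 0.0)

representation = layer(image, weights)
print(representation.shape)                  # (128,): one number per filter
```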
For this study, the researchers first measured the brain’s object recognition ability. Led by Hong and Majaj, they implanted arrays of electrodes in the IT cortex of macaques, as well as in area V4, a part of the visual system that feeds into the IT cortex. This allowed them to see the neural representation — the population of neurons that respond — for every object that the animals looked at.
The researchers could then compare this with representations created by the deep neural networks, which consist of a matrix of numbers produced by each computational element in the system. Each image produces a different array of numbers. The accuracy of the model is determined by whether it groups similar objects into similar clusters within the representation.
“Through each of these computational transformations, through each of these layers of networks, certain objects or images get closer together, while others get further apart,” Cadieu says.
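That comparison can be sketched as a simple distance computation over representation vectors; the vectors below are synthetic stand-ins, not data from the study.

```python
import numpy as np

# Pairwise distances between representation vectors: a good model puts two
# views of the same object close together and different objects far apart.
rng = np.random.default_rng(1)
dog_view_1 = rng.standard_normal(128)                      # synthetic vectors
dog_view_2 = dog_view_1 + 0.1 * rng.standard_normal(128)   # a nearby variant
car_view_1 = rng.standard_normal(128)

def distance(a, b):
    return float(np.linalg.norm(a - b))

print(distance(dog_view_1, dog_view_2))   # small: same object clusters
print(distance(dog_view_1, car_view_1))   # large: different objects separate
```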
The best-performing network, developed by researchers at New York University, classified objects as accurately as the macaque brain did.
More processing power
Two major factors account for the recent success of this type of neural network, Cadieu says. One is a significant leap in the availability of computational processing power. Researchers have been taking advantage of graphical processing units (GPUs), which are small chips designed for high performance in processing the huge amount of visual content needed for video games. “That is allowing people to push the envelope in terms of computation by buying these relatively inexpensive graphics cards,” Cadieu says.
The second factor is that researchers now have access to large datasets to feed the algorithms to “train” them. These datasets contain millions of images, and each one is annotated by humans with different levels of identification. For example, a photo of a dog would be labeled as animal, canine, domesticated dog, and the breed of dog.
At first, neural networks are not good at identifying these images, but as they see more and more images, and find out when they are wrong, they refine their calculations until they become much more accurate at identifying objects.
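This trial-and-error refinement can be sketched with the simplest trainable model; the toy logistic-regression loop below stands in for a deep network and runs on invented data.

```python
import numpy as np

# Toy supervised loop: predict, compare with the human label, nudge weights.
rng = np.random.default_rng(2)
x = rng.standard_normal((200, 16))                     # 200 "images", 16 features each
y = (x @ rng.standard_normal(16) > 0).astype(float)    # human-provided labels

w = np.zeros(16)
for _ in range(500):
    pred = 1.0 / (1.0 + np.exp(-(x @ w)))   # current guesses
    w -= 0.1 * x.T @ (pred - y) / len(y)    # refine where the guesses were wrong

accuracy = np.mean(((1.0 / (1.0 + np.exp(-(x @ w)))) > 0.5) == (y > 0.5))
print(f"accuracy after training: {accuracy:.2f}")
```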
Cadieu says that researchers don’t know much about what exactly allows these networks to distinguish different objects.
“That’s a pro and a con,” he says. “It’s very good in that we don’t have to really know what the things are that distinguish those objects. But the big con is that it’s very hard to inspect those networks, to look inside and see what they really did. Now that people can see that these things are working well, they’ll work more to understand what’s happening inside of them.”
DiCarlo’s lab now plans to try to generate models that can mimic other aspects of visual processing, including tracking motion and recognizing three-dimensional forms. They also hope to create models that include the feedback projections seen in the human visual system. Current networks only model the “feedforward” projections from the retina to the IT cortex, but there are 10 times as many connections that go from IT cortex back to the rest of the system.
This work was supported by the National Eye Institute, the National Science Foundation, and the Defense Advanced Research Projects Agency.
New way to turn genes on
Using a gene-editing system originally developed to delete specific genes, MIT researchers have now shown that they can reliably turn on any gene of their choosing in living cells.
This new application for the CRISPR/Cas9 gene-editing system should allow scientists to more easily determine the function of individual genes, according to Feng Zhang, the W.M. Keck Career Development Professor in Biomedical Engineering in MIT’s Departments of Brain and Cognitive Sciences and Biological Engineering, and a member of the Broad Institute and MIT’s McGovern Institute for Brain Research.
This approach also enables rapid functional screens of the entire genome, allowing scientists to identify genes involved in particular diseases. In a study published in the Dec. 10 online edition of Nature, Zhang and colleagues identified several genes that help melanoma cells become resistant to a cancer drug.
Silvana Konermann, a graduate student in Zhang’s lab, and Mark Brigham, a McGovern Institute postdoc, are the paper’s lead authors.
A new function for CRISPR
The CRISPR system relies on cellular machinery that bacteria use to defend themselves from viral infection. Researchers have previously harnessed this cellular system to create gene-editing complexes that include a DNA-cutting enzyme called Cas9 bound to a short RNA guide strand that is programmed to bind to a specific genome sequence, telling Cas9 where to make its cut.
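As a toy illustration of that targeting rule, the sketch below scans a made-up DNA string for a 20-nucleotide guide match that sits immediately upstream of the “NGG” motif (the PAM) required by the widely used S. pyogenes Cas9; both sequences are invented for illustration.

```python
import re

# A guide RNA programs Cas9 to a ~20-nucleotide DNA match that must sit just
# upstream of a PAM motif ("NGG" for the widely used S. pyogenes Cas9).
# Both sequences below are hypothetical.
GENOME = "ACGTTGACCTGAAGGTCAGTCCATTGGAATTCCGGA"
GUIDE = "TGACCTGAAGGTCAGTCCAT"   # 20-nt target sequence (hypothetical)

def find_cut_sites(genome: str, guide: str) -> list:
    """Indices where the guide matches and is immediately followed by NGG."""
    sites = []
    for m in re.finditer(guide, genome):
        pam = genome[m.end():m.end() + 3]
        if len(pam) == 3 and pam.endswith("GG"):
            sites.append(m.start())
    return sites

print(find_cut_sites(GENOME, GUIDE))   # [4]: one match with a valid PAM
```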
In the past two years, scientists have developed Cas9 as a tool for turning genes off or replacing them with a different version. In the new study, Zhang and colleagues engineered the Cas9 system to turn genes on, rather than knock them out. Scientists have tried to do this before using proteins that are individually engineered to target DNA at specific sites. However, these proteins are difficult to work with. “If you use the older generation of tools, getting the technology to do what you actually want is a project on its own,” Konermann says. “It takes a lot of time and is also quite expensive.”
There have also been attempts to use CRISPR to turn on genes by inactivating the part of the Cas9 enzyme that cuts DNA and linking Cas9 to pieces of proteins called activation domains. These domains recruit the cellular machinery necessary to begin copying RNA from DNA, a process known as transcription.
However, these efforts have been unable to consistently turn on gene transcription. Zhang and his colleagues, Osamu Nureki and Hiroshi Nishimasu at the University of Tokyo, decided to overhaul the CRISPR-Cas9 system based on an analysis they published earlier this year of the structure formed when Cas9 binds to the guide RNA and its target DNA. “Based on knowing its 3-D shape, we can think about how to rationally improve the system,” Zhang says.
In previous efforts, scientists had tried to attach the activation domains to either end of the Cas9 protein, with limited success. From their structural studies, the MIT team realized that two small loops of the RNA guide poke out from the Cas9 complex and could be better points of attachment because they allow the activation domains to have more flexibility in recruiting transcription machinery.
Using their revamped system, the researchers activated about a dozen genes that had proven difficult or impossible to turn on using the previous generation of Cas9 activators. Each gene showed at least a twofold boost in transcription, and for many genes, the researchers found multiple orders of magnitude increase in activation.
Genome-scale activation screening
Once the researchers had shown that the system was effective at activating genes, they created a library of 70,290 guide RNAs targeting all of the more than 20,000 genes in the human genome.
They screened this library to identify genes that confer resistance to a melanoma drug called PLX-4720. Drugs of this type work well in patients whose melanoma cells have a mutation in the BRAF gene, but cancer cells that survive the treatment can grow into new tumors, allowing the cancer to recur.
To discover the genes that help cells become resistant, the researchers delivered CRISPR components to a large population of melanoma cells grown in the lab, with each cell receiving a different guide RNA targeting a different gene. After treating the cells with PLX-4720, they identified several genes that helped the cells to survive — some previously known to be involved in drug resistance, as well as several novel targets.
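The readout of such a screen amounts to a counting problem: sequence the guide RNAs in surviving cells and ask which have become over-represented relative to the starting pool. The sketch below illustrates that calculation with hypothetical guide names and counts.

```python
# Compare guide abundance before and after drug selection; guides whose genes
# confer resistance become over-represented. Names and counts are hypothetical.
counts_before = {"guide_geneA": 1000, "guide_geneB": 1000, "guide_control": 1000}
counts_after = {"guide_geneA": 5200, "guide_geneB": 4100, "guide_control": 950}

def fold_enrichment(before, after):
    """Normalized fold change in each guide's share of the population."""
    n_before, n_after = sum(before.values()), sum(after.values())
    return {g: (after[g] / n_after) / (before[g] / n_before) for g in before}

for guide, fc in sorted(fold_enrichment(counts_before, counts_after).items(),
                        key=lambda item: -item[1]):
    print(f"{guide}: {fc:.2f}x enriched")
```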
Studies like this could help researchers discover new cancer drugs that prevent tumors from becoming resistant.
“You could start with a drug that targets the mutated BRAF along with combination therapy that targets genes that allow the cell to survive. If you target both of them at the same time, you could likely prevent the cells from developing resistance mechanisms that enable further growth despite drug treatment,” Konermann says.
Scientists have tried to do large-scale screens like this by delivering single genes carried by viruses, but that does not work with all genes.
“This new technique could allow you to sample a larger spectrum of genes that might be playing a role,” says Levi Garraway, an associate professor of medicine at Dana-Farber Cancer Institute who was not involved in the research. “This is really a technology development paper, but the tantalizing results from the drug resistance screen speak to the rich biological possibilities of this approach.”
Zhang’s lab also plans to use this technique to screen for genes that, when activated, could correct the effects of autism or neurodegenerative diseases such as Alzheimer’s. He also plans to make the necessary reagents available to academic labs that want to use them, through the Addgene repository.
The research was funded by the National Institute of Mental Health; the National Institute of Neurological Disorders and Stroke; the Keck, Searle Scholars, Klingenstein, Vallee, and Simons foundations; and Bob Metcalfe.
Live Long and Prosper in the New Year!
Season’s Greetings from your friends at the McGovern Institute for Brain Research at MIT.