How the brain recognizes objects

When the eyes are open, visual information flows from the retina through the optic nerve and into the brain, which assembles this raw information into objects and scenes.

Scientists have previously hypothesized that objects are distinguished in the inferior temporal (IT) cortex, which is near the end of this flow of information, also called the ventral stream. A new study from MIT neuroscientists offers evidence that this is indeed the case.

Using data from both humans and nonhuman primates, the researchers found that neuron firing patterns in the IT cortex correlate strongly with success in object-recognition tasks.

“While we knew from prior work that neuronal population activity in inferior temporal cortex was likely to underlie visual object recognition, we did not have a predictive map that could accurately link that neural activity to object perception and behavior. The results from this study demonstrate that a particular map from particular aspects of IT population activity to behavior is highly accurate over all types of objects that were tested,” says James DiCarlo, head of MIT’s Department of Brain and Cognitive Sciences, a member of the McGovern Institute for Brain Research, and senior author of the study, which appears in the Journal of Neuroscience.

The paper’s lead author is Najib Majaj, a former postdoc in DiCarlo’s lab who is now at New York University. Other authors are former MIT graduate student Ha Hong and former MIT undergraduate Ethan Solomon.

Distinguishing objects

Earlier stops along the ventral stream are believed to process basic visual elements such as brightness and orientation. More complex functions take place farther along the stream, with object recognition believed to occur in the IT cortex.

To investigate this theory, the researchers first asked human subjects to perform 64 object-recognition tasks. Some of these tasks were “trivially easy,” Majaj says, such as distinguishing an apple from a car. Others — such as discriminating between two very similar faces — were so difficult that the subjects were correct only about 50 percent of the time.

After measuring human performance on these tasks, the researchers then showed the same set of nearly 6,000 images to nonhuman primates as they recorded electrical activity in neurons of the inferior temporal cortex and another visual region known as V4.

Each of the 168 IT neurons and 128 V4 neurons fired in response to some objects but not others, creating a firing pattern that served as a distinctive signature for each object. By comparing these signatures, the researchers could analyze whether they correlated to humans’ ability to distinguish between two objects.
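
The logic of that comparison can be sketched in a few lines of code. The snippet below is a toy illustration, not the study's actual decoding analysis: the arrays it_rates and human_accuracy are invented stand-ins, each image's mean population response serves as its "signature," and distances between signatures for every image pair are correlated with behavioral accuracy.

```python
import numpy as np

# Hypothetical data: mean firing rates for each image across a recorded population
# (rows = images, columns = neurons), plus human accuracy for each image pair.
rng = np.random.default_rng(0)
n_images, n_it_neurons = 20, 168
it_rates = rng.random((n_images, n_it_neurons))                     # stand-in for IT responses
human_accuracy = rng.uniform(0.5, 1.0, size=(n_images, n_images))   # stand-in behavior

def signature_distance(rates, i, j):
    """Distance between two images' population firing-pattern 'signatures'."""
    return np.linalg.norm(rates[i] - rates[j])

# Compare every pair of images: if the neural signatures are far apart,
# the prediction is that humans discriminate that pair easily.
pairs = [(i, j) for i in range(n_images) for j in range(i + 1, n_images)]
neural_dist = np.array([signature_distance(it_rates, i, j) for i, j in pairs])
behavior = np.array([human_accuracy[i, j] for i, j in pairs])

# Correlation between neural discriminability and human performance,
# analogous in spirit to the consistency test described in the article.
r = np.corrcoef(neural_dist, behavior)[0, 1]
print(f"neural-behavioral correlation (toy data): {r:.2f}")
```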

The researchers found that the firing patterns of IT neurons, but not V4 neurons, perfectly predicted the human performance they had measured. That is, when humans had trouble distinguishing two objects, the neural signatures for those objects were so similar as to be indistinguishable, and for pairs where humans succeeded, the patterns were very different.

“On the easy stimuli, IT did as well as humans, and on the difficult stimuli, IT also failed,” Majaj says. “We had a nice correlation between behavior and neural responses.”

The findings support the hypothesis that patterns of neural activity in the IT cortex can encode object representations detailed enough to allow the brain to distinguish different objects, the researchers say.

Nikolaus Kriegeskorte, a principal investigator at the Medical Research Council Cognition and Brain Sciences Unit in Cambridge, U.K., agrees that the study offers “crucial evidence supporting the idea that inferior temporal cortex contains the neuronal representations underlying human visual object recognition.”

“This study is exemplary for its original and rigorous method of establishing links between brain representations and human behavioral performance,” adds Kriegeskorte, who was not part of the research team.

Model performance

The researchers also tested more than 10,000 other possible models for how the brain might encode object representations. These models varied based on location in the brain, the number of neurons required, and the time window for neural activity.

Some of these models, including some that relied on V4, were eliminated because they performed better than humans on some tasks and worse on others.

“We wanted the performance of the neurons to perfectly match the performance of the humans in terms of the pattern, so the easy tasks would be easy for the neural population and the hard tasks would be hard for the neural population,” Majaj says.

The research team now aims to gather even more data to ask if this model or similar models can predict the behavioral difficulty of object recognition on each and every visual image — an even higher bar than the one tested thus far. That might require additional factors to be included in the model that were not needed in this study, and thus could expose important gaps in scientists’ current understanding of neural representations of objects.

They also plan to expand the model so they can predict responses in IT based on input from earlier parts of the visual stream.

“We can start building a cascade of computational operations that take you from an image on the retina slowly through V1, V2, V4, until we’re able to predict the population in IT,” Majaj says.

How we make emotional decisions

Some decisions arouse far more anxiety than others. Among the most anxiety-provoking are those that involve options with both positive and negative elements, such as choosing to take a higher-paying job in a city far from family and friends, versus choosing to stay put with less pay.

MIT researchers have now identified a neural circuit that appears to underlie decision-making in this type of situation, which is known as approach-avoidance conflict. The findings could help researchers to discover new ways to treat psychiatric disorders that feature impaired decision-making, such as depression, schizophrenia, and borderline personality disorder.

“In order to create a treatment for these types of disorders, we need to understand how the decision-making process is working,” says Alexander Friedman, a research scientist at MIT’s McGovern Institute for Brain Research and the lead author of a paper describing the findings in the May 28 issue of Cell.

Friedman and colleagues also demonstrated the first step toward developing possible therapies for these disorders: By manipulating this circuit in rodents, they were able to transform a preference for lower-risk, lower-payoff choices to a preference for bigger payoffs despite their bigger costs.

The paper’s senior author is Ann Graybiel, an MIT Institute Professor and member of the McGovern Institute. Other authors are postdoc Daigo Homma, research scientists Leif Gibb and Ken-ichi Amemori, undergraduates Samuel Rubin and Adam Hood, and technical assistant Michael Riad.

Making hard choices

The new study grew out of an effort to figure out the role of striosomes — clusters of cells distributed through the striatum, a large brain region involved in coordinating movement and emotion and implicated in some human disorders. Graybiel discovered striosomes many years ago, but their function had remained mysterious, in part because they are so small and deep within the brain that it is difficult to image them with functional magnetic resonance imaging (fMRI).

Previous studies from Graybiel’s lab identified regions of the brain’s prefrontal cortex that project to striosomes. These regions have been implicated in processing emotions, so the researchers suspected that this circuit might also be related to emotion.

To test this idea, the researchers studied rats as they performed five different types of behavioral tasks, including an approach-avoidance scenario. In that situation, rats running a maze had to choose between one option that included strong chocolate, which they like, paired with bright light, which they don’t, and another option with dimmer light but weaker chocolate.

When humans are forced to make these kinds of cost-benefit decisions, they usually experience anxiety, which influences the choices they make. “This type of task is potentially very relevant to anxiety disorders,” Gibb says. “If we could learn more about this circuitry, maybe we could help people with those disorders.”

The researchers also tested rats in four other scenarios in which the choices were easier and less fraught with anxiety.

“By comparing performance in these five tasks, we could look at cost-benefit decision-making versus other types of decision-making, allowing us to reach the conclusion that cost-benefit decision-making is unique,” Friedman says.

Using optogenetics, which allowed them to turn cortical input to the striosomes on or off by shining light on the cortical cells, the researchers found that the circuit connecting the cortex to the striosomes plays a causal role in influencing decisions in the approach-avoidance task, but none at all in other types of decision-making.

When the researchers shut off input to the striosomes from the cortex, they found that the rats began choosing the high-risk, high-reward option as much as 20 percent more often than they had previously chosen it. If the researchers stimulated input to the striosomes, the rats began choosing the high-cost, high-reward option less often.

Paul Glimcher, a professor of physiology and neuroscience at New York University, describes the study as a “masterpiece” and says he is particularly impressed by the use of a new technology, optogenetics, to solve a longstanding mystery. The study also opens up the possibility of studying striosome function in other types of decision-making, he adds.

“This cracks the 20-year puzzle that [Graybiel] wrote — what do the striosomes do?” says Glimcher, who was not part of the research team. “In 10 years we will have a much more complete picture, of which this paper is the foundational stone. She has demonstrated that we can answer this question, and answered it in one area. A lot of labs will now take this up and resolve it in other areas.”

Emotional gatekeeper

The findings suggest that the striatum, and the striosomes in particular, may act as a gatekeeper that absorbs sensory and emotional information coming from the cortex and integrates it to produce a decision on how to react, the researchers say.

That gatekeeper circuit also appears to include a part of the midbrain called the substantia nigra, which has dopamine-containing cells that play an important role in motivation and movement. The researchers believe that when activated by input from the striosomes, these substantia nigra cells produce a long-term effect on an animal or human patient’s decision-making attitudes.

“We would so like to find a way to use these findings to relieve anxiety disorder, and other disorders in which mood and emotion are affected,” Graybiel says. “That kind of work has a real priority to it.”

In addition to pursuing possible treatments for anxiety disorders, the researchers are now trying to better understand the role of the dopamine-containing substantia nigra cells in this circuit, which plays a critical role in Parkinson’s disease and may also be involved in related disorders.

The research was funded by the National Institute of Mental Health, the CHDI Foundation, the Defense Advanced Research Projects Agency, the U.S. Army Research Office, the Bachmann-Strauss Dystonia and Parkinson Foundation, and the William N. and Bernice E. Bumpus Foundation.

In one aspect of vision, computers catch up to primate brain

For decades, neuroscientists have been trying to design computer networks that can mimic visual skills such as recognizing objects, which the human brain does very accurately and quickly.

Until now, no computer model has been able to match the primate brain at visual object recognition during a brief glance. However, a new study from MIT neuroscientists has found that one of the latest generation of these so-called “deep neural networks” matches the primate brain.

Because these networks are based on neuroscientists’ current understanding of how the brain performs object recognition, the success of the latest networks suggests that neuroscientists have a fairly accurate grasp of how object recognition works, says James DiCarlo, a professor of neuroscience and head of MIT’s Department of Brain and Cognitive Sciences and the senior author of a paper describing the study in the Dec. 11 issue of the journal PLoS Computational Biology.

“The fact that the models predict the neural responses and the distances of objects in neural population space shows that these models encapsulate our current best understanding as to what is going on in this previously mysterious portion of the brain,” says DiCarlo, who is also a member of MIT’s McGovern Institute for Brain Research.

This improved understanding of how the primate brain works could lead to better artificial intelligence and, someday, new ways to repair visual dysfunction, adds Charles Cadieu, a postdoc at the McGovern Institute and the paper’s lead author.

Other authors are graduate students Ha Hong and Diego Ardila, research scientist Daniel Yamins, former MIT graduate student Nicolas Pinto, former MIT undergraduate Ethan Solomon, and research affiliate Najib Majaj.

Inspired by the brain

Scientists began building neural networks in the 1970s in hopes of mimicking the brain’s ability to process visual information, recognize speech, and understand language.

For vision-based neural networks, scientists were inspired by the hierarchical representation of visual information in the brain. As visual input flows from the retina into primary visual cortex and then inferotemporal (IT) cortex, it is processed at each level and becomes more specific until objects can be identified.

To mimic this, neural network designers create several layers of computation in their models. Each level performs a mathematical operation, such as a linear dot product. At each level, the representations of the visual object become more and more complex, and unneeded information, such as an object’s location or movement, is cast aside.

“Each individual element is typically a very simple mathematical expression,” Cadieu says. “But when you combine thousands and millions of these things together, you get very complicated transformations from the raw signals into representations that are very good for object recognition.”
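
As a rough illustration of that layered composition, the sketch below stacks a few such stages in NumPy, with arbitrary layer sizes and random weights standing in for what a real network would learn; it is not the architecture used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(x, weights):
    """One stage of the hierarchy: a linear dot product followed by a
    simple nonlinearity (rectification), as in typical deep-network layers."""
    return np.maximum(0.0, weights @ x)

# A toy 'image' and three stacked layers: each is individually simple,
# but composing them yields an increasingly transformed representation.
x = rng.random(256)                        # stand-in for pixel input
w1 = rng.standard_normal((128, 256)) * 0.1
w2 = rng.standard_normal((64, 128)) * 0.1
w3 = rng.standard_normal((16, 64)) * 0.1

h1 = layer(x, w1)
h2 = layer(h1, w2)
features = layer(h2, w3)                   # compact representation used for recognition
print(features.shape)                      # (16,)
```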

For this study, the researchers first measured the brain’s object recognition ability. Led by Hong and Majaj, they implanted arrays of electrodes in the IT cortex as well as in area V4, a part of the visual system that feeds into the IT cortex. This allowed them to see the neural representation — the population of neurons that respond — for every object that the animals looked at.

The researchers could then compare this with representations created by the deep neural networks, which consist of a matrix of numbers produced by each computational element in the system. Each image produces a different array of numbers. The accuracy of the model is determined by whether it groups similar objects into similar clusters within the representation.

“Through each of these computational transformations, through each of these layers of networks, certain objects or images get closer together, while others get further apart,” Cadieu says.
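
One simple way to check whether a representation "groups similar objects into similar clusters" is to compare distances within and across object categories. The snippet below does this on made-up feature vectors; the category split and feature sizes are assumptions for illustration only, not the analysis reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical model output: one feature vector (an "array of numbers") per image,
# with the first 10 images from one category and the next 10 from another.
features = rng.random((20, 16))
labels = np.array([0] * 10 + [1] * 10)

# Pairwise distances between images in the model's representation.
diffs = features[:, None, :] - features[None, :, :]
dist = np.linalg.norm(diffs, axis=-1)

# A representation "groups similar objects together" if distances within a
# category are smaller, on average, than distances across categories.
same = labels[:, None] == labels[None, :]
off_diag = ~np.eye(len(labels), dtype=bool)
within = dist[same & off_diag].mean()
between = dist[~same].mean()
print(f"within-category: {within:.2f}, between-category: {between:.2f}")
```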

The best network was one developed by researchers at New York University; it classified objects as accurately as the macaque brain did.

More processing power

Two major factors account for the recent success of this type of neural network, Cadieu says. One is a significant leap in the availability of computational processing power. Researchers have been taking advantage of graphics processing units (GPUs), which are small chips designed for high performance in processing the huge amount of visual content needed for video games. “That is allowing people to push the envelope in terms of computation by buying these relatively inexpensive graphics cards,” Cadieu says.

The second factor is that researchers now have access to large datasets to feed the algorithms to “train” them. These datasets contain millions of images, and each one is annotated by humans with different levels of identification. For example, a photo of a dog would be labeled as animal, canine, domesticated dog, and the breed of dog.

At first, neural networks are not good at identifying these images, but as they see more and more images and find out when they are wrong, they refine their calculations until they become much more accurate at identifying objects.
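
That guess-and-correct cycle can be illustrated at miniature scale. The sketch below trains a single linear readout on toy data rather than a deep network on millions of images; the data, labels, and learning rate are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy labeled "images": two feature clusters standing in for two object labels.
x = np.vstack([rng.normal(0, 1, (100, 8)), rng.normal(2, 1, (100, 8))])
y = np.array([0] * 100 + [1] * 100)

w, b, lr = np.zeros(8), 0.0, 0.1

def predict(x, w, b):
    return (x @ w + b > 0).astype(int)

# Each pass: make a guess, find out which examples were wrong,
# and nudge the weights to reduce those errors.
for epoch in range(20):
    errors = y - predict(x, w, b)
    w += lr * errors @ x / len(x)
    b += lr * errors.mean()

print(f"training accuracy: {(predict(x, w, b) == y).mean():.2f}")
```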

Cadieu says that researchers don’t know much about what exactly allows these networks to distinguish different objects.

“That’s a pro and a con,” he says. “It’s very good in that we don’t have to really know what the things are that distinguish those objects. But the big con is that it’s very hard to inspect those networks, to look inside and see what they really did. Now that people can see that these things are working well, they’ll work more to understand what’s happening inside of them.”

DiCarlo’s lab now plans to try to generate models that can mimic other aspects of visual processing, including tracking motion and recognizing three-dimensional forms. They also hope to create models that include the feedback projections seen in the human visual system. Current networks only model the “feedforward” projections from the retina to the IT cortex, but there are 10 times as many connections that go from IT cortex back to the rest of the system.

This work was supported by the National Eye Institute, the National Science Foundation, and the Defense Advanced Research Projects Agency.

Fifteen MIT scientists receive NIH BRAIN Initiative grants

Today, the National Institutes of Health (NIH) announced its first round of BRAIN Initiative award recipients. Six teams and 15 researchers from the Massachusetts Institute of Technology were among the recipients.

Mriganka Sur, principal investigator at the Picower Institute for Learning and Memory and the Paul E. Newton Professor of Neuroscience in MIT’s Department of Brain and Cognitive Sciences (BCS), leads a team studying cortical circuits and information flow during memory-guided perceptual decisions. Co-principal investigators include Emery Brown, BCS professor of computational neuroscience and the Edward Hood Taplin Professor of Medical Engineering; Kwanghun Chung, Picower Institute principal investigator and assistant professor in the Department of Chemical Engineering and the Institute for Medical Engineering and Science (IMES); and Ian Wickersham, research scientist at the McGovern Institute for Brain Research and head of MIT’s Genetic Neuroengineering Group.

Elly Nedivi, Picower Institute principal investigator and professor in BCS and the Department of Biology, leads a team studying new methods for high-speed monitoring of sensory-driven synaptic activity across all inputs to single living neurons in the context of the intact cerebral cortex. Her co-principal investigator is Peter So, professor of mechanical and biological engineering, and director of the MIT Laser Biomedical Research Center.

Ian Wickersham will lead a team looking at novel technologies for nontoxic transsynaptic tracing. His co-principal investigators include Robert Desimone, director of the McGovern Institute and the Doris and Don Berkey Professor of Neuroscience in BCS; Li-Huei Tsai, director of the Picower Institute and the Picower Professor of Neuroscience in BCS; and Kay Tye, Picower Institute principal investigator and assistant professor of neuroscience in BCS.

Robert Desimone will lead a team studying vascular interfaces for brain imaging and stimulation. Co-principal investigators include Ed Boyden, associate professor at the MIT Media Lab, McGovern Institute, and departments of BCS and Biological Engineering; head of MIT’s Synthetic Neurobiology Group, and co-director of MIT’s Center for Neurobiological Engineering; and Elazer Edelman, the Thomas D. and Virginia W. Cabot Professor of Health Sciences and Technology in IMES and director of the Harvard-MIT Biomedical Engineering Center. Collaborators on this project include: Rodolfo Llinas (New York University), George Church (Harvard University), Jan Rabaey (University of California at Berkeley), Pablo Blinder (Tel Aviv University), Eric Leuthardt (Washington University/St. Louis), Michel Maharbiz (Berkeley), Jose Carmena (Berkeley), Elad Alon (Berkeley), Colin Derdeyn (Washington University in St. Louis), Lowell Wood (Bill and Melinda Gates Foundation), Xue Han (Boston University), and Adam Marblestone (MIT).

Ed Boyden will be co-principal investigator with Mark Bathe, associate professor of biological engineering, and Peng Yin of Harvard on a project to study ultra-multiplexed nanoscale in situ proteomics for understanding synapse types.

Alan Jasanoff, associate professor of biological engineering and director of the MIT Center for Neurobiological Engineering, will lead a team looking at calcium sensors for molecular fMRI. Stephen Lippard, the Arthur Amos Noyes Professor of Chemistry, is co-principal investigator.

In addition, Sur and Wickersham also received BRAIN Early Concept Grants for Exploratory Research (EAGER) from the National Science Foundation (NSF). Sur will focus on massive-scale multi-area single neuron recordings to reveal circuits underlying short-term memory. Wickersham, in collaboration with Li-Huei Tsai, Kay Tye, and Robert Desimone, will develop cell-type specific optogenetics in wild-type animals. Additional information about NSF support of the BRAIN initiative can be found at NSF.gov/brain.

The BRAIN Initiative, spearheaded by President Obama in April 2013, challenges the nation’s leading scientists to advance our understanding of the human mind and discover new ways to treat, prevent, and cure neurological disorders like Alzheimer’s, schizophrenia, autism, and traumatic brain injury. The scientific community is charged with accelerating the invention of cutting-edge technologies that can produce dynamic images of complex neural circuits and illuminate the interaction of lightning-fast brain cells. The new capabilities are expected to provide greater insights into how brain functionality is linked to behavior, learning, memory, and the underlying mechanisms of debilitating disease. BRAIN was launched with approximately $100 million in initial investments from the NIH, the National Science Foundation, and the Defense Advanced Research Projects Agency (DARPA).

BRAIN Initiative scientists are engaged in a challenging and transformative endeavor to explore how our minds instantaneously process, store, and retrieve vast quantities of information. Their discoveries will unlock many of the remaining mysteries inherent in the brain’s billions of neurons and trillions of connections, leading to a deeper understanding of the underlying causes of many neurological and psychiatric conditions. Their findings will enable scientists and doctors to develop the groundbreaking arsenal of tools and technologies required to more effectively treat those suffering from these devastating disorders.

Controlling movement with light

For the first time, MIT neuroscientists have shown they can control muscle movement by applying optogenetics — a technique that allows scientists to control neurons’ electrical impulses with light — to the spinal cords of animals that are awake and alert.

Led by MIT Institute Professor Emilio Bizzi, the researchers studied mice in which a light-sensitive protein that promotes neural activity was inserted into a subset of spinal neurons. When the researchers shone blue light on the animals’ spinal cords, their hind legs were completely but reversibly immobilized. The findings, described in the June 25 issue of PLoS One, offer a new approach to studying the complex spinal circuits that coordinate movement and sensory processing, the researchers say.

In this study, Bizzi and Vittorio Caggiano, a postdoc at MIT’s McGovern Institute for Brain Research, used optogenetics to explore the function of inhibitory interneurons, which form circuits with many other neurons in the spinal cord. These circuits execute commands from the brain, with additional input from sensory information from the limbs.

Previously, neuroscientists have used electrical stimulation or pharmacological intervention to control neurons’ activity and try to tease out their function. Those approaches have revealed a great deal of information about spinal control, but they do not offer precise enough control to study specific subsets of neurons.

Optogenetics, on the other hand, allows scientists to control specific types of neurons by genetically programming them to express light-sensitive proteins. These proteins, called opsins, act as ion channels or pumps that regulate neurons’ electrical activity. Some opsins suppress activity when light shines on them, while others stimulate it.

“With optogenetics, you are attacking a system of cells that have certain characteristics similar to each other. It’s a big shift in terms of our ability to understand how the system works,” says Bizzi, who is a member of MIT’s McGovern Institute.

Muscle control

Inhibitory neurons in the spinal cord suppress muscle contractions, which is critical for maintaining balance and for coordinating movement. For example, when you raise an apple to your mouth, the biceps contract while the triceps relax. Inhibitory neurons are also thought to be involved in the state of muscle inhibition that occurs during the rapid eye movement (REM) stage of sleep.

To study the function of inhibitory neurons in more detail, the researchers used mice developed by Guoping Feng, the Poitras Professor of Neuroscience at MIT, in which all inhibitory spinal neurons were engineered to express an opsin called channelrhodopsin 2. This opsin stimulates neural activity when exposed to blue light. They then shone light at different points along the spine to observe the effects of neuron activation.

When inhibitory neurons in a small section of the thoracic spine were activated in freely moving mice, all hind-leg movement ceased. This suggests that inhibitory neurons in the thoracic spine relay the inhibition all the way to the end of the spine, Caggiano says. The researchers also found that activating inhibitory neurons had no effect on the transmission of sensory information from the limbs to the brain, or on normal reflexes.

“The spinal location where we found this complete suppression was completely new,” Caggiano says. “It has not been shown by any other scientists that there is this front-to-back suppression that affects only motor behavior without affecting sensory behavior.”

“It’s a compelling use of optogenetics that raises a lot of very interesting questions,” says Simon Giszter, a professor of neurobiology and anatomy at Drexel University who was not part of the research team. Among those questions is whether this mechanism behaves as a global “kill switch,” or if the inhibitory neurons form modules that allow for more selective suppression of movement patterns.

Now that they have demonstrated the usefulness of optogenetics for this type of study, the MIT team hopes to explore the roles of other types of spinal cord neurons. They also plan to investigate how input from the brain influences these spinal circuits.

“There’s huge interest in trying to extend these studies and dissect these circuits because we tackled only the inhibitory system in a very global way,” Caggiano says. “Further studies will highlight the contribution of single populations of neurons in the spinal cord for the control of limbs and control of movement.”

The research was funded by the Human Frontier Science Program and the National Science Foundation. Mriganka Sur, the Paul E. and Lilah Newton Professor of Neuroscience at MIT, is also an author of the paper.

How the brain pays attention

Picking out a face in the crowd is a complicated task: Your brain has to retrieve the memory of the face you’re seeking, then hold it in place while scanning the crowd, paying special attention to finding a match.

A new study by MIT neuroscientists reveals how the brain achieves this type of focused attention on faces or other objects: A part of the prefrontal cortex known as the inferior frontal junction (IFJ) controls visual processing areas that are tuned to recognize a specific category of objects, the researchers report in the April 10 online edition of Science.

Scientists know much less about this type of attention, known as object-based attention, than spatial attention, which involves focusing on what’s happening in a particular location. However, the new findings suggest that these two types of attention have similar mechanisms involving related brain regions, says Robert Desimone, the Doris and Don Berkey Professor of Neuroscience, director of MIT’s McGovern Institute for Brain Research, and senior author of the paper.

“The interactions are surprisingly similar to those seen in spatial attention,” Desimone says. “It seems like it’s a parallel process involving different areas.”

In both cases, the prefrontal cortex — the control center for most cognitive functions — appears to take charge of the brain’s attention and control relevant parts of the visual cortex, which receives sensory input. For spatial attention, that involves regions of the visual cortex that map to a particular area within the visual field.

In the new study, the researchers found that IFJ coordinates with a brain region that processes faces, known as the fusiform face area (FFA), and a region that interprets information about places, known as the parahippocampal place area (PPA). The FFA and PPA were first identified in the human cortex by Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT.

The IFJ has previously been implicated in a cognitive ability known as working memory, which is what allows us to gather and coordinate information while performing a task — such as remembering and dialing a phone number, or doing a math problem.

For this study, the researchers used magnetoencephalography (MEG) to scan human subjects as they viewed a series of overlapping images of faces and houses. Unlike functional magnetic resonance imaging (fMRI), which is commonly used to measure brain activity, MEG can reveal the precise timing of neural activity, down to the millisecond. The researchers presented the overlapping streams at two different rhythms — two images per second and 1.5 images per second — allowing them to identify brain regions responding to those stimuli.

“We wanted to frequency-tag each stimulus with different rhythms. When you look at all of the brain activity, you can tell apart signals that are engaged in processing each stimulus,” says Daniel Baldauf, a postdoc at the McGovern Institute and the lead author of the paper.
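
The idea behind frequency tagging can be sketched with simulated data: a single sensor trace containing responses locked to the two stimulus rhythms separates cleanly in the frequency domain. The gains, noise level, and sampling rate below are arbitrary choices for illustration, not the study's parameters.

```python
import numpy as np

# Toy illustration of frequency tagging: a sensor signal containing
# responses to two stimulus streams presented at 2 Hz and 1.5 Hz.
fs, dur = 200.0, 20.0                      # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1.0 / fs)
rng = np.random.default_rng(4)

attended_gain, unattended_gain = 1.0, 0.4  # attention boosts the tagged response
signal = (attended_gain * np.sin(2 * np.pi * 2.0 * t)      # face stream @ 2 Hz
          + unattended_gain * np.sin(2 * np.pi * 1.5 * t)  # house stream @ 1.5 Hz
          + rng.normal(0, 0.5, t.size))                    # measurement noise

# Because the two streams are tagged at different rhythms, their
# contributions separate cleanly in the frequency domain.
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2
for f in (2.0, 1.5):
    print(f"power at {f} Hz: {power[np.argmin(np.abs(freqs - f))]:.0f}")
```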

Each subject was told to pay attention to either faces or houses; because the houses and faces were in the same spot, the brain could not use spatial information to distinguish them. When the subjects were told to look for faces, activity in the FFA and the IFJ became synchronized, suggesting that they were communicating with each other. When the subjects paid attention to houses, the IFJ synchronized instead with the PPA.

The researchers also found that the communication was initiated by the IFJ and the activity was staggered by 20 milliseconds — about the amount of time it would take for neurons to electrically convey information from the IFJ to either the FFA or PPA. The researchers believe that the IFJ holds onto the idea of the object that the brain is looking for and directs the correct part of the brain to look for it.

Further bolstering this idea, the researchers used an MRI-based method to measure the white matter that connects different brain regions and found that the IFJ is highly connected with both the FFA and PPA.

Members of Desimone’s lab are now studying how the brain shifts its focus between different types of sensory input, such as vision and hearing. They are also investigating whether it might be possible to train people to better focus their attention by controlling the brain interactions involved in this process.

“You have to identify the basic neural mechanisms and do basic research studies, which sometimes generate ideas for things that could be of practical benefit,” Desimone says. “It’s too early to say whether this training is even going to work at all, but it’s something that we’re actively pursuing.”

The research was funded by the National Institutes of Health and the National Science Foundation.

Optogenetic toolkit goes multicolor

Optogenetics is a technique that allows scientists to control neurons’ electrical activity with light by engineering them to express light-sensitive proteins. Within the past decade, it has become a very powerful tool for discovering the functions of different types of cells in the brain.

Most of these light-sensitive proteins, known as opsins, respond to light in the blue-green range. Now, a team led by MIT has discovered an opsin that is sensitive to red light, which allows researchers to independently control the activity of two populations of neurons at once, enabling much more complex studies of brain function.

“If you want to see how two different sets of cells interact, or how two populations of the same cell compete against each other, you need to be able to activate those populations independently,” says Ed Boyden, a member of the McGovern Institute for Brain Research at MIT and a senior author of the new study.

The new opsin is one of about 60 light-sensitive proteins found in a screen of 120 species of algae. The study, which appears in the Feb. 9 online edition of Nature Methods, also yielded the fastest opsin, enabling researchers to study neuron activity patterns with millisecond timescale precision.

Boyden and Gane Ka-Shu Wong, a professor of medicine and biological sciences at the University of Alberta, are the paper’s senior authors, and the lead author is MIT postdoc Nathan Klapoetke. Researchers from the Howard Hughes Medical Institute’s Janelia Farm Research Campus, the University of Pennsylvania, the University of Cologne, and the Beijing Genomics Institute also contributed to the study.

In living color

Opsins occur naturally in many algae and bacteria, which use the light-sensitive proteins to help them respond to their environment and generate energy.

To achieve optical control of neurons, scientists engineer brain cells to express the gene for an opsin, which transports ions across the cell’s membrane to alter its voltage. Depending on the opsin used, shining light on the cell either lowers the voltage and silences neuron firing, or boosts voltage and provokes the cell to generate an electrical impulse. This effect is nearly instantaneous and easily reversible.

Using this approach, researchers can selectively turn a population of cells on or off and observe what happens in the brain. However, until now, they could activate only one population at a time, because the only opsins that responded to red light also responded to blue light, which meant they could not be paired with other opsins to control two different cell populations.

To seek additional useful opsins, the MIT researchers worked with Wong’s team at the University of Alberta, which is sequencing the transcriptomes of 1,000 plants, including some algae. (The transcriptome is similar to the genome but includes only the genes that are expressed by a cell, not the entirety of its genetic material.)

Once the team obtained genetic sequences that appeared to code for opsins, Klapoetke tested their light-responsiveness in mammalian brain tissue, working with Martha Constantine-Paton, a professor of brain and cognitive sciences and of biology, a member of the McGovern Institute for Brain Research at MIT, and also an author of the paper. The red-light-sensitive opsin, which the researchers named Chrimson, can mediate neural activity in response to light with a 735-nanometer wavelength.

The researchers also discovered a blue-light-driven opsin that has two highly desirable traits: It operates at high speed, and it is sensitive to very dim light. This opsin, called Chronos, can be stimulated with levels of blue light that are too weak to activate Chrimson.

“You can use short pulses of dim blue light to drive the blue one, and you can use strong red light to drive Chrimson, and that allows you to do true two-color, zero-cross-talk activation in intact brain tissue,” says Boyden, who is a member of MIT’s Media Lab and an associate professor of biological engineering and brain and cognitive sciences at MIT.

Researchers had previously tried to modify naturally occurring opsins to make them respond faster and react to dimmer light, but trying to optimize one feature often made other features worse.

“It was apparent that when trying to engineer traits like color, light sensitivity, and kinetics, there are always tradeoffs,” Klapoetke says. “We’re very lucky that something natural actually was more than several times faster and also five or six times more light-sensitive than anything else.”

Selective control

These new opsins lend themselves to several types of studies that were not possible before, Boyden says. For one, scientists could not only manipulate activity of a cell population of interest, but also control upstream cells that influence the target population by secreting neurotransmitters.

Pairing Chrimson and Chronos could also allow scientists to study the functions of different types of cells in the same microcircuit within the brain. Such cells are usually located very close together, but with the new opsins they can be controlled independently with two different colors of light.

“I think the tools described in this excellent paper represent a major advance for both basic and translational neuroscience,” says Botond Roska, a senior group leader at the Friedrich Miescher Institute for Biomedical Research in Switzerland, who was not part of the research team. “Optogenetic tools that are shifted towards the infrared range, such as Chrimson described in this paper, are much better than the more blue-shifted variants since these are less toxic, activate less the pupillary reflex, and activate less the remaining photoreceptors of patients.”

Most optogenetic studies thus far have been done in mice, but Chrimson could be used for optogenetic studies of fruit flies, a commonly used experimental organism. Researchers have had trouble using blue-light-sensitive opsins in fruit flies because the light can get into the flies’ eyes and startle them, interfering with the behavior being studied.

Vivek Jayaraman, a research group leader at Janelia Farm and an author of the paper, was able to show that this startle response does not occur when red light is used to stimulate Chrimson in fruit flies.

Because red light is less damaging to tissue than blue light, Chrimson also holds potential for eventual therapeutic use in humans, Boyden says. Animal studies with other opsins have shown promise in helping to restore vision after the loss of photoreceptor cells in the retina.

The researchers are now trying to modify Chrimson to respond to light in the infrared range. They are also working on making both Chrimson and Chronos faster and more light sensitive.

MIT’s portion of the project was funded by the National Institutes of Health, the MIT Media Lab, the National Science Foundation, the Wallace H. Coulter Foundation, the Alfred P. Sloan Foundation, a NARSAD Young Investigator Grant, the Human Frontiers Science Program, an NYSCF Robertson Neuroscience Investigator Award, the IET A.F. Harvey Prize, Janet and Sheldon Razin ’59, and the Skolkovo Institute of Science and Technology.

Expanding our view of vision

Every time you open your eyes, visual information flows into your brain, which interprets what you’re seeing. Now, for the first time, MIT neuroscientists have noninvasively mapped this flow of information in the human brain with unique accuracy, using a novel brain-scanning technique.

This technique, which combines two existing technologies, allows researchers to identify precisely both the location and timing of human brain activity. Using this new approach, the MIT researchers scanned individuals’ brains as they looked at different images and were able to pinpoint, to the millisecond, when the brain recognizes and categorizes an object, and where these processes occur.

“This method gives you a visualization of ‘when’ and ‘where’ at the same time. It’s a window into processes happening at the millisecond and millimeter scale,” says Aude Oliva, a principal research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

Oliva is the senior author of a paper describing the findings in the Jan. 26 issue of Nature Neuroscience. Lead author of the paper is CSAIL postdoc Radoslaw Cichy. Dimitrios Pantazis, a research scientist at MIT’s McGovern Institute for Brain Research, is also an author of the paper.

When and where

Until now, scientists have been able to observe the location or timing of human brain activity at high resolution, but not both, because different imaging techniques are not easily combined. The most commonly used type of brain scan, functional magnetic resonance imaging (fMRI), measures changes in blood flow, revealing which parts of the brain are involved in a particular task. However, it works too slowly to keep up with the brain’s millisecond-by-millisecond dynamics.

Another imaging technique, known as magnetoencephalography (MEG), uses an array of hundreds of sensors encircling the head to measure magnetic fields produced by neuronal activity in the brain. These sensors offer a dynamic portrait of brain activity over time, down to the millisecond, but do not tell the precise location of the signals.

To combine the time and location information generated by these two scanners, the researchers used a computational technique called representational similarity analysis, which relies on the fact that two similar objects (such as two human faces) that provoke similar signals in fMRI will also produce similar signals in MEG. This method has been used before to link fMRI with recordings of neuronal electrical activity in monkeys, but the MIT researchers are the first to use it to link fMRI and MEG data from human subjects.
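
A minimal sketch of that linking step, using random numbers in place of real scans: build a representational dissimilarity matrix (RDM) from fMRI patterns for one region, build an MEG RDM at each time point, and correlate the two. The array sizes are loosely modeled on the study (92 images) but otherwise hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
n_images = 92

def rdm(responses):
    """Representational dissimilarity matrix: 1 - correlation between the
    response patterns evoked by every pair of images."""
    return 1.0 - np.corrcoef(responses)

# Hypothetical data: fMRI voxel patterns for one region, and MEG sensor
# patterns at each time point, for the same 92 images.
fmri_patterns = rng.random((n_images, 500))        # images x voxels
meg_patterns = rng.random((300, n_images, 306))    # time points x images x sensors

fmri_rdm = rdm(fmri_patterns)
upper = np.triu_indices(n_images, k=1)

# Link the two modalities: at each time point, ask how similar the MEG
# representational geometry is to that region's fMRI geometry.
similarity_over_time = np.array([
    np.corrcoef(rdm(meg_patterns[t])[upper], fmri_rdm[upper])[0, 1]
    for t in range(meg_patterns.shape[0])
])
print(similarity_over_time.shape)   # one fMRI-MEG correspondence value per time point
```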

In the study, the researchers scanned 16 human volunteers as they looked at a series of 92 images, including faces, animals, and natural and manmade objects. Each image was shown for half a second.

“We wanted to measure how visual information flows through the brain. It’s just pure automatic machinery that starts every time you open your eyes, and it’s incredibly fast,” Cichy says. “This is a very complex process, and we have not yet looked at higher cognitive processes that come later, such as recalling thoughts and memories when you are watching objects.”

Each subject underwent the test multiple times — twice in an fMRI scanner and twice in an MEG scanner — giving the researchers a huge set of data on the timing and location of brain activity. All of the scanning was done at the Athinoula A. Martinos Imaging Center at the McGovern Institute.

Millisecond by millisecond

By analyzing this data, the researchers produced a timeline of the brain’s object-recognition pathway that is very similar to results previously obtained by recording electrical signals in the visual cortex of monkeys, a technique that is extremely accurate but too invasive to use in humans.

About 50 milliseconds after subjects saw an image, visual information entered a part of the brain called the primary visual cortex, or V1, which recognizes basic elements of a shape, such as whether it is round or elongated. The information then flowed to the inferotemporal cortex, where the brain identified the object as early as 120 milliseconds. Within 160 milliseconds, all objects had been classified into categories such as plant or animal.

The MIT team’s strategy “provides a rich new source of evidence on this highly dynamic process,” says Nikolaus Kriegeskorte, a principal investigator in cognition and brain sciences at Cambridge University.

“The combination of MEG and fMRI in humans is no surrogate for invasive animal studies with techniques that simultaneously have high spatial and temporal precision, but Cichy et al. come closer to characterizing the dynamic emergence of representational geometries across stages of processing in humans than any previous work. The approach will be useful for future studies elucidating other perceptual and cognitive processes,” says Kriegeskorte, who was not part of the research team.

The MIT researchers are now using representational similarity analysis to study the accuracy of computer models of vision by comparing brain scan data with the models’ predictions of how vision works.

Using this approach, scientists should also be able to study how the human brain analyzes other types of information such as motor, verbal, or sensory signals, the researchers say. It could also shed light on processes that underlie conditions such as memory disorders or dyslexia, and could benefit patients suffering from paralysis or neurodegenerative diseases.

“This is the first time that MEG and fMRI have been connected in this way, giving us a unique perspective,” Pantazis says. “We now have the tools to precisely map brain function both in space and time, opening up tremendous possibilities to study the human brain.”

The research was funded by the National Eye Institute, the National Science Foundation, and a Feodor Lynen Research Fellowship from the Humboldt Foundation.

Brain balances learning new skills, retaining old skills

To learn new motor skills, the brain must be plastic: able to rapidly change the strengths of connections between neurons, forming new patterns that accomplish a particular task. However, if the brain were too plastic, previously learned skills would be lost too easily.

A new computational model developed by MIT neuroscientists explains how the brain maintains the balance between plasticity and stability, and how it can learn very similar tasks without interference between them.

The key, the researchers say, is that neurons are constantly changing their connections with other neurons. However, not all of the changes are functionally relevant — they simply allow the brain to explore many possible ways to execute a certain skill, such as a new tennis stroke.

“Your brain is always trying to find the configurations that balance everything so you can do two tasks, or three tasks, or however many you’re learning,” says Robert Ajemian, a research scientist in MIT’s McGovern Institute for Brain Research and lead author of a paper describing the findings in the Proceedings of the National Academy of Sciences the week of Dec. 9. “There are many ways to solve a task, and you’re exploring all the different ways.”

As the brain explores different solutions, neurons can become specialized for specific tasks, according to this theory.

Noisy circuits

As the brain learns a new motor skill, neurons form circuits that can produce the desired output — a command that will activate the body’s muscles to perform a task such as swinging a tennis racket. Perfection is usually not achieved on the first try, so feedback from each effort helps the brain to find better solutions.

This works well for learning one skill, but complications arise when the brain is trying to learn many different skills at once. Because the same distributed network controls related motor tasks, new modifications to existing patterns can interfere with previously learned skills.

“This is particularly tricky when you’re learning very similar things,” such as two different tennis strokes, says Institute Professor Emilio Bizzi, the paper’s senior author and a member of the McGovern Institute.

The Bizzi lab shows how the brain utilizes the operating characteristics of neurons to form sensorimotor memories in a way that differs profoundly from computer memory.

In a serial network such as a computer chip, this would be no problem — instructions for each task would be stored in a different location on the chip. However, the brain is not organized like a computer chip. Instead, it is massively parallel and highly connected — each neuron connects to, on average, about 10,000 other neurons.

That connectivity offers an advantage, however, because it allows the brain to test out many possible solutions to achieve combinations of tasks. The constant changes in these connections, which the researchers call hyperplasticity, are balanced by another inherent trait of neurons: they have a very low signal-to-noise ratio, meaning that they receive about as much useless information as useful input from their neighbors.

Most models of neural activity don’t include noise, but the MIT team says noise is a critical element of the brain’s learning ability. “Most people don’t want to deal with noise because it’s a nuisance,” Ajemian says. “We set out to try to determine if noise can be used in a beneficial way, and we found that it allows the brain to explore many solutions, but it can only be utilized if the network is hyperplastic.”
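
One way to picture the interplay of noise and hyperplasticity is a toy search in which every connection is perturbed on every step and feedback decides which perturbations persist. The sketch below is a generic stochastic hill-climb on two shared "tasks," not the model described in the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy problem: find weights w so that one shared set of connections
# produces the right output in two task contexts (two "skills").
x = rng.random((2, 10))          # two task contexts
targets = np.array([1.0, -1.0])  # desired output for each task

def error(w):
    return np.sum((x @ w - targets) ** 2)

w = np.zeros(10)
for step in range(5000):
    # Hyperplasticity: every connection is perturbed on every step (exploration noise).
    trial = w + rng.normal(0, 0.05, size=w.shape)
    # Feedback: keep the perturbation only if it does not hurt overall performance,
    # so both tasks stay satisfied even as the weights keep drifting.
    if error(trial) <= error(w):
        w = trial

print(f"final combined error: {error(w):.4f}")
```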

This model helps to explain how the brain can learn new things without unlearning previously acquired skills, says Ferdinando Mussa-Ivaldi, a professor of physiology at Northwestern University.

“What the paper shows is that, counterintuitively, if you have neural networks and they have a high level of random noise, that actually helps instead of hindering the stability problem,” says Mussa-Ivaldi, who was not part of the research team.

Without noise, the brain’s hyperplasticity would overwrite existing memories too easily. Conversely, low plasticity would not allow any new skills to be learned, because the tiny changes in connectivity would be drowned out by all of the inherent noise.

The model is supported by anatomical evidence showing that neurons exhibit a great deal of plasticity even when learning is not taking place, as measured by the growth and formation of connections of dendrites — the tiny extensions that neurons use to communicate with each other.

Like riding a bike

The constantly changing connections explain why skills can be forgotten unless they are practiced often, especially if they overlap with other routinely performed tasks.

“That’s why an expert tennis player has to warm up for an hour before a match,” Ajemian says. The warm-up is not for the muscles; instead, players need to recalibrate the neural networks that control the different tennis strokes stored in the brain’s motor cortex.

However, skills such as riding a bicycle, which is not very similar to other common skills, are retained more easily. “Once you’ve learned something, if it doesn’t overlap or intersect with other skills, you will forget it but so slowly that it’s essentially permanent,” Ajemian says.

The researchers are now investigating whether this type of model could also explain how the brain forms memories of events, as well as motor skills.

The research was funded by the National Science Foundation.

Are we there yet?

“Are we there yet?”

As anyone who has traveled with young children knows, maintaining focus on distant goals can be a challenge. A new study from MIT suggests how the brain achieves this task, and indicates that the neurotransmitter dopamine may signal the value of long-term rewards. The findings may also explain why patients with Parkinson’s disease — in which dopamine signaling is impaired — often have difficulty in sustaining motivation to finish tasks.

The work is described this week in the journal Nature.

Previous studies have linked dopamine to rewards, and have shown that dopamine neurons show brief bursts of activity when animals receive an unexpected reward. These dopamine signals are believed to be important for reinforcement learning, the process by which an animal learns to perform actions that lead to reward.

Taking the long view

In most studies, that reward has been delivered within a few seconds. In real life, though, gratification is not always immediate: Animals must often travel in search of food, and must maintain motivation for a distant goal while also responding to more immediate cues. The same is true for humans: A driver on a long road trip must remain focused on reaching a final destination while also reacting to traffic, stopping for snacks, and entertaining children in the back seat.

The MIT team, led by Institute Professor Ann Graybiel — who is also an investigator at MIT’s McGovern Institute for Brain Research — decided to study how dopamine changes during a maze task approximating work for delayed gratification. The researchers trained rats to navigate a maze to reach a reward. During each trial a rat would hear a tone instructing it to turn either right or left at an intersection to find a chocolate milk reward.

Rather than simply measuring the activity of dopamine-containing neurons, the MIT researchers wanted to measure how much dopamine was released in the striatum, a brain structure known to be important in reinforcement learning. They teamed up with Paul Phillips of the University of Washington, who has developed a technology called fast-scan cyclic voltammetry (FSCV) in which tiny, implanted, carbon-fiber electrodes allow continuous measurements of dopamine concentration based on its electrochemical fingerprint.

“We adapted the FSCV method so that we could measure dopamine at up to four different sites in the brain simultaneously, as animals moved freely through the maze,” explains first author Mark Howe, a former graduate student with Graybiel who is now a postdoc in the Department of Neurobiology at Northwestern University. “Each probe measures the concentration of extracellular dopamine within a tiny volume of brain tissue, and probably reflects the activity of thousands of nerve terminals.”

Gradual increase in dopamine

From previous work, the researchers expected that they might see pulses of dopamine released at different times in the trial, “but in fact we found something much more surprising,” Graybiel says: The level of dopamine increased steadily throughout each trial, peaking as the animal approached its goal — as if in anticipation of a reward.

The rats’ behavior varied from trial to trial — some runs were faster than others, and sometimes the animals would stop briefly — but the dopamine signal did not vary with running speed or trial duration. Nor did it depend on the probability of getting a reward, something that had been suggested by previous studies.

“Instead, the dopamine signal seems to reflect how far away the rat is from its goal,” Graybiel explains. “The closer it gets, the stronger the signal becomes.” The researchers also found that the size of the signal was related to the size of the expected reward: When rats were trained to anticipate a larger gulp of chocolate milk, the dopamine signal rose more steeply to a higher final concentration.

In some trials the T-shaped maze was extended to a more complex shape, requiring animals to run further and to make extra turns before reaching a reward. During these trials, the dopamine signal ramped up more gradually, eventually reaching the same level as in the shorter maze. “It’s as if the animal were adjusting its expectations, knowing that it had further to go,” Graybiel says.
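
Those observations are consistent with a signal that scales with the expected reward and with the animal's proximity to the goal. The toy function below encodes just that relationship; it is an illustration of the qualitative pattern, not the authors' quantitative model.

```python
import numpy as np

def dopamine_ramp(total_distance, reward_size, step=0.01):
    """Toy ramping signal: proportional to the expected reward and to the
    animal's proximity to the goal, independent of running speed."""
    n = int(round(total_distance / step)) + 1
    position = np.linspace(0.0, total_distance, n)
    proximity = position / total_distance            # 0 at the start, 1 at the goal
    return reward_size * proximity

short_maze = dopamine_ramp(total_distance=1.0, reward_size=1.0)
long_maze = dopamine_ramp(total_distance=2.0, reward_size=1.0)
big_reward = dopamine_ramp(total_distance=1.0, reward_size=2.0)

# Longer maze: the signal climbs more gradually per unit of distance,
# yet ends at the same level as in the short maze.
print(short_maze[-1], long_maze[-1])      # 1.0 1.0
# Larger expected reward: the ramp rises more steeply to a higher final level.
print(big_reward[-1])                     # 2.0
```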

The traces represent brain activity in rats as they navigate through different mazes to receive a chocolate milk reward.

An ‘internal guidance system’

“This means that dopamine levels could be used to help an animal make choices on the way to the goal and to estimate the distance to the goal,” says Terrence Sejnowski of the Salk Institute, a computational neuroscientist who is familiar with the findings but who was not involved with the study. “This ‘internal guidance system’ could also be useful for humans, who also have to make choices along the way to what may be a distant goal.”

One question that Graybiel hopes to examine in future research is how the signal arises within the brain. Rats and other animals form cognitive maps of their spatial environment, with so-called “place cells” that are active when the animal is in a specific location. “As our rats run the maze repeatedly,” she says, “we suspect they learn to associate each point in the maze with its distance from the reward that they experienced on previous runs.”

As for the relevance of this research to humans, Graybiel says, “I’d be shocked if something similar were not happening in our own brains.” It’s known that Parkinson’s patients, in whom dopamine signaling is impaired, often appear to be apathetic, and have difficulty in sustaining motivation to complete a long task. “Maybe that’s because they can’t produce this slow ramping dopamine signal,” Graybiel says.

Patrick Tierney at MIT and Stefan Sandberg at the University of Washington also contributed to the study, which was funded by the National Institutes of Health, the National Parkinson Foundation, the CHDI Foundation, the Sydney family and Mark Gorenberg.