To locate objects, the brain relies on memory

Imagine you are looking for your wallet on a cluttered desk. As you scan the area, you hold in your mind a mental picture of what your wallet looks like.

MIT neuroscientists have now identified a brain region that stores this type of visual representation during a search. The researchers also found that this region sends signals to the parts of the brain that control eye movements, telling individuals where to look next.

This region, known as the ventral pre-arcuate (VPA), is critical for what the researchers call “feature attention,” which allows the brain to seek objects based on their specific properties. Most previous studies of how the brain pays attention have investigated a different type of attention known as spatial attention — that is, what happens when the brain focuses on a certain location.

“The way that people go about their lives most of the time, they don’t know where things are in advance. They’re paying attention to things based on their features,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research. “In the morning you’re trying to find your car keys so you can go to work. How do you do that? You don’t look at every pixel in your house. You have to use your knowledge of what your car keys look like.”

Desimone, also the Doris and Don Berkey Professor in MIT’s Department of Brain and Cognitive Sciences, is the senior author of a paper describing the findings in the Oct. 29 online edition of Neuron. The paper’s lead author is Narcisse Bichot, a research scientist at the McGovern Institute. Other authors are Matthew Heard, a former research technician, and Ellen DeGennaro, a graduate student in the Harvard-MIT Division of Health Sciences and Technology.

Visual targets

The researchers focused on the VPA in part because of its extensive connections with the brain’s frontal eye fields, which control eye movements. Located in the prefrontal cortex, the VPA has previously been linked with working memory — a cognitive ability that helps us to gather and coordinate information while performing tasks such as solving a math problem or participating in a conversation.

“There have been a lot of studies showing that this region of the cortex is heavily involved in working memory,” Bichot says. “If you have to remember something, cells in these areas are involved in holding the memory of that object for the purpose of identifying it later.”

In the new study, the researchers found that the VPA also holds what they call an “attentional template” — that is, a memory of the item being sought.

The researchers first showed monkeys a target object, such as a human face, a banana, or a butterfly. After a delay, they showed an array of objects that included the target. When the animal fixed its gaze on the target object, it received a reward. “The animals can look around as long as they want until they find what they’re looking for,” Bichot says.

As the animals performed the task, the researchers recorded electrical activity from neurons in the VPA. Each object produced a distinctive pattern of neural activity, and the neurons encoding a representation of the target object stayed active throughout the search; when a match finally appeared, those neurons fired even more strongly.

“When the target object finally enters their receptive fields, they give enhanced responses,” Desimone says. “That’s the signal that the thing they’re looking for is actually there.”

About 20 to 30 milliseconds after the VPA cells respond to the target object, they send a signal to the frontal eye fields, which direct the eyes to lock onto the target.

When the researchers blocked VPA activity, they found that although the animals could still move their eyes around in search of the target object, they could not find it. “Presumably it’s because they’ve lost this mechanism for telling them where the likely target is,” Desimone says.

Focused attention

The researchers believe the VPA may be the equivalent in nonhuman primates of a human brain region called the inferior frontal junction (IFJ). Last year Desimone and postdoc Daniel Baldauf found that the IFJ holds onto the idea of a target object — in that study, either faces or houses — and then directs the correct part of the brain to look for the target.

The researchers are now studying how the VPA interacts with a nearby region called the VPS, which appears to be more important for tasks in which attention must be switched quickly from one object to another. They are also performing additional studies of human attention, in hopes of learning more about disorders such as attention deficit hyperactivity disorder (ADHD).

“There’s really an opportunity there to understand something important about the role of the prefrontal cortex in both normal behavior and in brain disorders,” Desimone says.

How the brain keeps time

Keeping track of time is critical for many tasks, such as playing the piano, swinging a tennis racket, or holding a conversation. Neuroscientists at MIT and Columbia University have now figured out how neurons in one part of the brain measure time intervals and accurately reproduce them.

The researchers found that the lateral intraparietal cortex (LIP), which plays a role in sensorimotor function, represents elapsed time as animals measure and then reproduce a time interval. They also demonstrated how the firing patterns of populations of LIP neurons could coordinate the sensory and motor aspects of timing.

LIP is likely just one node in a circuit that measures time, says Mehrdad Jazayeri, the lead author of a paper describing the work in the Oct. 8 issue of Current Biology.

“I would not conclude that the parietal cortex is the timer,” says Jazayeri, an assistant professor of brain and cognitive sciences at MIT and a member of the McGovern Institute for Brain Research. “What we are doing is discovering computational principles that explain how neurons’ firing rates evolve with time, and how that relates to the animals’ behavior in single trials. We can explain mathematically what’s going on.”

The paper’s senior author is Michael Shadlen, a professor of neuroscience and member of the Mortimer B. Zuckerman Mind Brain Behavior Institute at Columbia University.

As time goes by

Jazayeri, who joined the MIT faculty in 2013, began studying timing in the brain several years ago while a postdoc at the University of Washington. He began by testing humans’ ability to measure and reproduce time using a task called “ready, set, go.” In this experiment, the subject measures the time between two flashes (“ready” and “set”) and then presses a button (“go”) at the appropriate time — that is, after the same amount of time that separated the “ready” and “set.”

From these studies, he discovered that people do not simply measure an interval and then reproduce it. Rather, after measuring an interval they combine that measurement, which is imprecise, with their prior knowledge of what the interval could have been. This prior knowledge, which builds up as they repeat the task many times, allows people to reproduce the interval more accurately.

“When people reproduce time, they don’t seem to use a timer,” Jazayeri says. “It’s an active act of probabilistic inference that goes on.”
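In code, that inference step looks roughly like the sketch below: a noisy measurement is combined with a prior over possible intervals, and the reproduction is the mean of the resulting posterior. The prior range and noise level are assumptions chosen for illustration, not the study’s fitted values.

```python
import numpy as np

# Minimal sketch of interval reproduction as probabilistic inference.
# All numbers (prior support, Weber fraction) are illustrative assumptions.
rng = np.random.default_rng(0)
support = np.linspace(0.5, 1.0, 200)               # candidate intervals (s)
prior = np.full(support.size, 1.0 / support.size)  # uniform prior
weber = 0.15                                       # scalar timing noise

def reproduce(true_t):
    m = rng.normal(true_t, weber * true_t)         # noisy measurement
    sigma = weber * support
    likelihood = np.exp(-0.5 * ((m - support) / sigma) ** 2) / sigma
    posterior = likelihood * prior
    posterior /= posterior.sum()
    return float((posterior * support).sum())      # posterior-mean estimate

# Short intervals come out slightly long and long ones slightly short:
# estimates regress toward the middle of the prior, as in the human data.
for t in (0.55, 0.75, 0.95):
    est = np.mean([reproduce(t) for _ in range(2000)])
    print(f"true {t:.2f} s -> mean reproduction {est:.3f} s")
```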

To find out what happens in the brain during this process, Jazayeri recorded neuronal activity in the LIP of monkeys trained to perform the same task. In these recordings, he found distinctive patterns in the measurement phase (the interval between “ready” and “set”), and the production phase (the interval between “set” and “go”).

During the measurement phase, neural activity increases, but not linearly. Instead, activity climbs steeply at first, and its slope gradually flattens as time goes by, until the “set” signal is given. This is key because the slope at the end of the measurement interval predicts the slope of activity in the production phase.

When the interval is short, the slope during the second phase is steep. This allows the activity to increase quickly so that the animal can produce a short interval. When the interval is longer, the slope is gentler and it takes longer to reach the time of response.

“As time goes by during the measurement, the animal knows that the interval that it has to produce is longer and therefore requires a shallower slope,” Jazayeri says.

Using this data, the researchers could correctly predict, based on the slope at the end of the measurement phase, when the animal would produce the “go” signal.
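A toy ramp-to-threshold model, sketched below, is one way to picture this slope result. The linear form and the firing-rate numbers are assumptions for illustration, not the authors’ fitted model: production-phase activity climbs from a baseline to a fixed threshold, so a longer measured interval demands a shallower slope.

```python
# Toy ramp-to-threshold account of the production phase.
# Baseline and threshold rates are illustrative assumptions.
baseline, threshold = 10.0, 40.0            # firing rates (spikes/s)

def production_slope(measured_interval):
    # slope that reaches threshold exactly when the interval elapses
    return (threshold - baseline) / measured_interval

def go_time(slope):
    # invert the ramp: "go" occurs when activity crosses threshold
    return (threshold - baseline) / slope

for interval in (0.6, 0.8, 1.0):            # measured intervals (s)
    s = production_slope(interval)
    print(f"{interval:.1f} s measured -> slope {s:.1f} -> go at {go_time(s):.2f} s")
```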

“Previous research has shown that some neurons exhibit a ramping up of their firing rate that culminates with the onset of a timed motor response. This research is exciting because it provides the first hint as to what may control the slope of this ‘neural ramping,’ specifically that the slope of the ramp may be determined by the firing rate at the beginning of the timed interval,” says Dean Buonomano, a professor of behavioral neuroscience at the University of California at Los Angeles who was not involved in the research.

“A highly distributed problem”

All cognitive and motor functions rely on time to some extent. While LIP represents time during interval reproduction, Jazayeri believes that tracking time occurs throughout brain circuits that connect subcortical structures such as the thalamus, basal ganglia, and cerebellum to the cortex.

“Timing is going to be a highly distributed problem for the brain. There’s not going to be one place in the brain that does timing,” he says.

His lab is now pursuing several questions raised by this study. In one follow-up, the researchers are investigating how animals’ behavior and brain activity change based on their expectations for how long the first interval will last.

In another experiment, they are training animals to reproduce an interval that they get to measure twice. Preliminary results suggest that during the second interval, the animals refine the measurement they took during the first interval, allowing them to perform better than when they make just one measurement.

How the brain recognizes objects

When the eyes are open, visual information flows from the retina through the optic nerve and into the brain, which assembles this raw information into objects and scenes.

Scientists have previously hypothesized that objects are distinguished in the inferior temporal (IT) cortex, which is near the end of this flow of information, also called the ventral stream. A new study from MIT neuroscientists offers evidence that this is indeed the case.

Using data from both humans and nonhuman primates, the researchers found that neuron firing patterns in the IT cortex correlate strongly with success in object-recognition tasks.

“While we knew from prior work that neuronal population activity in inferior temporal cortex was likely to underlie visual object recognition, we did not have a predictive map that could accurately link that neural activity to object perception and behavior. The results from this study demonstrate that a particular map from particular aspects of IT population activity to behavior is highly accurate over all types of objects that were tested,” says James DiCarlo, head of MIT’s Department of Brain and Cognitive Sciences, a member of the McGovern Institute for Brain Research, and senior author of the study, which appears in the Journal of Neuroscience.

The paper’s lead author is Najib Majaj, a former postdoc in DiCarlo’s lab who is now at New York University. Other authors are former MIT graduate student Ha Hong and former MIT undergraduate Ethan Solomon.

Distinguishing objects

Earlier stops along the ventral stream are believed to process basic visual elements such as brightness and orientation. More complex functions take place farther along the stream, with object recognition believed to occur in the IT cortex.

To investigate this theory, the researchers first asked human subjects to perform 64 object-recognition tasks. Some of these tasks were “trivially easy,” Majaj says, such as distinguishing an apple from a car. Others — such as discriminating between two very similar faces — were so difficult that the subjects were correct only about 50 percent of the time.

After measuring human performance on these tasks, the researchers then showed the same set of nearly 6,000 images to nonhuman primates as they recorded electrical activity in neurons of the inferior temporal cortex and another visual region known as V4.

Each of the 168 IT neurons and 128 V4 neurons fired in response to some objects but not others, creating a firing pattern that served as a distinctive signature for each object. By comparing these signatures, the researchers could analyze whether they correlated to humans’ ability to distinguish between two objects.

The researchers found that the firing patterns of IT neurons, but not V4 neurons, perfectly predicted the human performance they had measured. That is, when humans had trouble distinguishing two objects, the neural signatures for those objects were so similar as to be indistinguishable, and for pairs where humans succeeded, the patterns were clearly distinct.

“On the easy stimuli, IT did as well as humans, and on the difficult stimuli, IT also failed,” Majaj says. “We had a nice correlation between behavior and neural responses.”
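The logic of that comparison can be sketched in a few lines. Everything below is a stand-in (random arrays instead of the recorded responses and measured accuracies), but it shows the recipe: score each object pair by the distance between its population signatures, then correlate that score with human accuracy on the pair.

```python
import numpy as np
from itertools import combinations

# Stand-in data: one population "signature" per object.
rng = np.random.default_rng(1)
n_objects, n_neurons = 8, 168
signatures = rng.normal(size=(n_objects, n_neurons))   # placeholder IT responses
pairs = list(combinations(range(n_objects), 2))
human_acc = rng.uniform(0.5, 1.0, size=len(pairs))     # placeholder behavior

# Neural separability of each pair: distance between its two signatures.
neural_sep = np.array([np.linalg.norm(signatures[i] - signatures[j])
                       for i, j in pairs])

# With real recordings, the paper's finding predicts a strong positive
# correlation: well-separated signatures go with easy discriminations.
r = np.corrcoef(neural_sep, human_acc)[0, 1]
print(f"signature separation vs. human accuracy: r = {r:.2f}")
```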

The findings support the hypothesis that patterns of neural activity in the IT cortex can encode object representations detailed enough to allow the brain to distinguish different objects, the researchers say.

Nikolaus Kriegeskorte, a principal investigator at the Medical Research Council Cognition and Brain Sciences Unit in Cambridge, U.K., agrees that the study offers “crucial evidence supporting the idea that inferior temporal cortex contains the neuronal representations underlying human visual object recognition.”

“This study is exemplary for its original and rigorous method of establishing links between brain representations and human behavioral performance,” adds Kriegeskorte, who was not part of the research team.

Model performance

The researchers also tested more than 10,000 other possible models for how the brain might encode object representations. These models varied based on location in the brain, the number of neurons required, and the time window for neural activity.

Some of these models, including some that relied on V4, were eliminated because they performed better than humans on some tasks and worse on others.

“We wanted the performance of the neurons to perfectly match the performance of the humans in terms of the pattern, so the easy tasks would be easy for the neural population and the hard tasks would be hard for the neural population,” Majaj says.

The research team now aims to gather even more data to ask if this model or similar models can predict the behavioral difficulty of object recognition on each and every visual image — an even higher bar than the one tested thus far. That might require additional factors to be included in the model that were not needed in this study, and thus could expose important gaps in scientists’ current understanding of neural representations of objects.

They also plan to expand the model so they can predict responses in IT based on input from earlier parts of the visual stream.

“We can start building a cascade of computational operations that take you from an image on the retina slowly through V1, V2, V4, until we’re able to predict the population in IT,” Majaj says.

How we make emotional decisions

Some decisions arouse far more anxiety than others. Among the most anxiety-provoking are those that involve options with both positive and negative elements, such as choosing to take a higher-paying job in a city far from family and friends, versus staying put with less pay.

MIT researchers have now identified a neural circuit that appears to underlie decision-making in this type of situation, which is known as approach-avoidance conflict. The findings could help researchers to discover new ways to treat psychiatric disorders that feature impaired decision-making, such as depression, schizophrenia, and borderline personality disorder.

“In order to create a treatment for these types of disorders, we need to understand how the decision-making process is working,” says Alexander Friedman, a research scientist at MIT’s McGovern Institute for Brain Research and the lead author of a paper describing the findings in the May 28 issue of Cell.

Friedman and colleagues also demonstrated the first step toward developing possible therapies for these disorders: By manipulating this circuit in rodents, they were able to transform a preference for lower-risk, lower-payoff choices to a preference for bigger payoffs despite their bigger costs.

The paper’s senior author is Ann Graybiel, an MIT Institute Professor and member of the McGovern Institute. Other authors are postdoc Daigo Homma, research scientists Leif Gibb and Ken-ichi Amemori, undergraduates Samuel Rubin and Adam Hood, and technical assistant Michael Riad.

Making hard choices

The new study grew out of an effort to figure out the role of striosomes — clusters of cells distributed through the striatum, a large brain region involved in coordinating movement and emotion and implicated in some human disorders. Graybiel discovered striosomes many years ago, but their function had remained mysterious, in part because they are so small and so deep within the brain that it is difficult to image them with functional magnetic resonance imaging (fMRI).

Previous studies from Graybiel’s lab identified regions of the brain’s prefrontal cortex that project to striosomes. These regions have been implicated in processing emotions, so the researchers suspected that this circuit might also be related to emotion.

To test this idea, the researchers studied rats as they performed five different types of behavioral tasks, including an approach-avoidance scenario. In that situation, rats running a maze had to choose between one option that included strong chocolate, which they like, and bright light, which they don’t, and another option with dimmer light but weaker chocolate.

When humans are forced to make these kinds of cost-benefit decisions, they usually experience anxiety, which influences the choices they make. “This type of task is potentially very relevant to anxiety disorders,” Gibb says. “If we could learn more about this circuitry, maybe we could help people with those disorders.”

The researchers also tested rats in four other scenarios in which the choices were easier and less fraught with anxiety.

“By comparing performance in these five tasks, we could look at cost-benefit decision-making versus other types of decision-making, allowing us to reach the conclusion that cost-benefit decision-making is unique,” Friedman says.

Using optogenetics, which allowed them to turn cortical input to the striosomes on or off by shining light on the cortical cells, the researchers found that the circuit connecting the cortex to the striosomes plays a causal role in decisions in the approach-avoidance task, but no role in the other types of decision-making.

When the researchers shut off input to the striosomes from the cortex, they found that the rats began choosing the high-risk, high-reward option as much as 20 percent more often than they had previously chosen it. If the researchers stimulated input to the striosomes, the rats began choosing the high-cost, high-reward option less often.

Paul Glimcher, a professor of physiology and neuroscience at New York University, describes the study as a “masterpiece” and says he is particularly impressed by the use of a new technology, optogenetics, to solve a longstanding mystery. The study also opens up the possibility of studying striosome function in other types of decision-making, he adds.

“This cracks the 20-year puzzle that [Graybiel] wrote — what do the striosomes do?” says Glimcher, who was not part of the research team. “In 10 years we will have a much more complete picture, of which this paper is the foundational stone. She has demonstrated that we can answer this question, and answered it in one area. A lot of labs will now take this up and resolve it in other areas.”

Emotional gatekeeper

The findings suggest that the striatum, and the striosomes in particular, may act as a gatekeeper that absorbs sensory and emotional information coming from the cortex and integrates it to produce a decision on how to react, the researchers say.

That gatekeeper circuit also appears to include a part of the midbrain called the substantia nigra, which has dopamine-containing cells that play an important role in motivation and movement. The researchers believe that when activated by input from the striosomes, these substantia nigra cells produce a long-term effect on an animal or human patient’s decision-making attitudes.

“We would so like to find a way to use these findings to relieve anxiety disorder, and other disorders in which mood and emotion are affected,” Graybiel says. “That kind of work has a real priority to it.”

In addition to pursuing possible treatments for anxiety disorders, the researchers are now trying to better understand the role of the dopamine-containing substantia nigra cells in this circuit, which plays a critical role in Parkinson’s disease and may also be involved in related disorders.

The research was funded by the National Institute of Mental Health, the CHDI Foundation, the Defense Advanced Research Projects Agency, the U.S. Army Research Office, the Bachmann-Strauss Dystonia and Parkinson Foundation, and the William N. and Bernice E. Bumpus Foundation.

In one aspect of vision, computers catch up to primate brain

For decades, neuroscientists have been trying to design computer networks that can mimic visual skills such as recognizing objects, which the human brain does very accurately and quickly.

Until now, no computer model has been able to match the primate brain at visual object recognition during a brief glance. However, a new study from MIT neuroscientists has found that one of the latest generation of these so-called “deep neural networks” matches the primate brain.

Because these networks are based on neuroscientists’ current understanding of how the brain performs object recognition, the success of the latest networks suggests that neuroscientists have a fairly accurate grasp of how object recognition works, says James DiCarlo, a professor of neuroscience and head of MIT’s Department of Brain and Cognitive Sciences and the senior author of a paper describing the study in the Dec. 11 issue of the journal PLoS Computational Biology.

“The fact that the models predict the neural responses and the distances of objects in neural population space shows that these models encapsulate our current best understanding as to what is going on in this previously mysterious portion of the brain,” says DiCarlo, who is also a member of MIT’s McGovern Institute for Brain Research.

This improved understanding of how the primate brain works could lead to better artificial intelligence and, someday, new ways to repair visual dysfunction, adds Charles Cadieu, a postdoc at the McGovern Institute and the paper’s lead author.

Other authors are graduate students Ha Hong and Diego Ardila, research scientist Daniel Yamins, former MIT graduate student Nicolas Pinto, former MIT undergraduate Ethan Solomon, and research affiliate Najib Majaj.

Inspired by the brain

Scientists began building neural networks in the 1970s in hopes of mimicking the brain’s ability to process visual information, recognize speech, and understand language.

For vision-based neural networks, scientists were inspired by the hierarchical representation of visual information in the brain. As visual input flows from the retina into primary visual cortex and then inferotemporal (IT) cortex, it is processed at each level and becomes more specific until objects can be identified.

To mimic this, neural network designers create several layers of computation in their models. Each level performs a mathematical operation, such as a linear dot product. At each level, the representations of the visual object become more and more complex, and unneeded information, such as an object’s location or movement, is cast aside.

“Each individual element is typically a very simple mathematical expression,” Cadieu says. “But when you combine thousands and millions of these things together, you get very complicated transformations from the raw signals into representations that are very good for object recognition.”
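A bare-bones version of that layered computation is sketched below. It is a generic feedforward stack with made-up layer sizes and random weights, not one of the networks the study tested, but it shows how repeated dot products and simple nonlinearities compound into a complex transformation.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    # one computational level: linear dot product plus rectification
    return np.maximum(0.0, w @ x)

x = rng.normal(size=512)                         # stand-in input features
weights = [0.05 * rng.normal(size=(256, 512)),   # layer sizes are arbitrary
           0.05 * rng.normal(size=(128, 256)),
           0.05 * rng.normal(size=(64, 128))]

for w in weights:                                # each pass: more abstract features
    x = layer(x, w)

print(x.shape)                                   # (64,) vector used for recognition
```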

For this study, the researchers first measured the brain’s object recognition ability. Led by Hong and Majaj, they implanted arrays of electrodes in the IT cortex as well as in area V4, a part of the visual system that feeds into the IT cortex. This allowed them to see the neural representation — the population of neurons that respond — for every object that the animals looked at.

The researchers could then compare this with representations created by the deep neural networks, which consist of a matrix of numbers produced by each computational element in the system. Each image produces a different array of numbers. The accuracy of the model is determined by whether it groups similar objects into similar clusters within the representation.

“Through each of these computational transformations, through each of these layers of networks, certain objects or images get closer together, while others get further apart,” Cadieu says.
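One common way to score such a model, sketched below with placeholder data, is to compute pairwise distances between images in the model’s representation and in the neural population, then compare the two geometries; a model that groups images the way IT does scores high. This is an illustrative recipe, not necessarily the exact metric the authors used.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Placeholder data: one row per image in each representation.
rng = np.random.default_rng(2)
n_images = 50
model_feats = rng.normal(size=(n_images, 128))   # model's final-layer output
neural_resp = rng.normal(size=(n_images, 168))   # stand-in IT responses

# Pairwise-distance "geometry" of each representation.
model_rdm = pdist(model_feats, metric="correlation")
neural_rdm = pdist(neural_resp, metric="correlation")

# High rank correlation: the model clusters images the way IT does.
rho, _ = spearmanr(model_rdm, neural_rdm)
print(f"model-vs-IT representational agreement: rho = {rho:.2f}")
```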

The best network was one developed by researchers at New York University; it classified objects as well as the macaque brain did.

More processing power

Two major factors account for the recent success of this type of neural network, Cadieu says. One is a significant leap in the availability of computational processing power. Researchers have been taking advantage of graphical processing units (GPUs), which are small chips designed for high performance in processing the huge amount of visual content needed for video games. “That is allowing people to push the envelope in terms of computation by buying these relatively inexpensive graphics cards,” Cadieu says.

The second factor is that researchers now have access to large datasets to feed the algorithms to “train” them. These datasets contain millions of images, and each one is annotated by humans with different levels of identification. For example, a photo of a dog would be labeled as animal, canine, domesticated dog, and the breed of dog.

At first, neural networks are not good at identifying these images, but as they see more and more images, and find out when they were wrong, they refine their calculations until they become much more accurate at identifying objects.

Cadieu says that researchers don’t know much about what exactly allows these networks to distinguish different objects.

“That’s a pro and a con,” he says. “It’s very good in that we don’t have to really know what the things are that distinguish those objects. But the big con is that it’s very hard to inspect those networks, to look inside and see what they really did. Now that people can see that these things are working well, they’ll work more to understand what’s happening inside of them.”

DiCarlo’s lab now plans to try to generate models that can mimic other aspects of visual processing, including tracking motion and recognizing three-dimensional forms. They also hope to create models that include the feedback projections seen in the human visual system. Current networks only model the “feedforward” projections from the retina to the IT cortex, but there are 10 times as many connections that go from IT cortex back to the rest of the system.

This work was supported by the National Eye Institute, the National Science Foundation, and the Defense Advanced Research Projects Agency.

Fifteen MIT scientists receive NIH BRAIN Initiative grants

Today, the National Institutes of Health (NIH) announced its first round of BRAIN Initiative awards. Six teams including 15 researchers from the Massachusetts Institute of Technology were among the recipients.

Mriganka Sur, principal investigator at the Picower Institute for Learning and Memory and the Paul E. Newton Professor of Neuroscience in MIT’s Department of Brain and Cognitive Sciences (BCS), leads a team studying cortical circuits and information flow during memory-guided perceptual decisions. Co-principal investigators include Emery Brown, BCS professor of computational neuroscience and the Edward Hood Taplin Professor of Medical Engineering; Kwanghun Chung, Picower Institute principal investigator and assistant professor in the Department of Chemical Engineering and the Institute for Medical Engineering and Science (IMES); and Ian Wickersham, research scientist at the McGovern Institute for Brain Research and head of MIT’s Genetic Neuroengineering Group.

Elly Nedivi, Picower Institute principal investigator and professor in BCS and the Department of Biology, leads a team studying new methods for high-speed monitoring of sensory-driven synaptic activity across all inputs to single living neurons in the context of the intact cerebral cortex. Her co-principal investigator is Peter So, professor of mechanical and biological engineering, and director of the MIT Laser Biomedical Research Center.

Ian Wickersham will lead a team looking at novel technologies for nontoxic transsynaptic tracing. His co-principal investigators include Robert Desimone, director of the McGovern Institute and the Doris and Don Berkey Professor of Neuroscience in BCS; Li-Huei Tsai, director of the Picower Institute and the Picower Professor of Neuroscience in BCS; and Kay Tye, Picower Institute principal investigator and assistant professor of neuroscience in BCS.

Robert Desimone will lead a team studying vascular interfaces for brain imaging and stimulation. Co-principal investigators include Ed Boyden, associate professor at the MIT Media Lab, the McGovern Institute, and the departments of BCS and Biological Engineering, head of MIT’s Synthetic Neurobiology Group, and co-director of MIT’s Center for Neurobiological Engineering; and Elazer Edelman, the Thomas D. and Virginia W. Cabot Professor of Health Sciences and Technology in IMES and director of the Harvard-MIT Biomedical Engineering Center. Collaborators on this project include: Rodolfo Llinas (New York University), George Church (Harvard University), Jan Rabaey (University of California at Berkeley), Pablo Blinder (Tel Aviv University), Eric Leuthardt (Washington University in St. Louis), Michel Maharbiz (Berkeley), Jose Carmena (Berkeley), Elad Alon (Berkeley), Colin Derdeyn (Washington University in St. Louis), Lowell Wood (Bill and Melinda Gates Foundation), Xue Han (Boston University), and Adam Marblestone (MIT).

Ed Boyden will be co-principal investigator with Mark Bathe, associate professor of biological engineering, and Peng Yin of Harvard on a project to study ultra-multiplexed nanoscale in situ proteomics for understanding synapse types.

Alan Jasanoff, associate professor of biological engineering and director of the MIT Center for Neurobiological Engineering, will lead a team looking at calcium sensors for molecular fMRI. Stephen Lippard, the Arthur Amos Noyes Professor of Chemistry, is co-principal investigator.

In addition, Sur and Wickersham also received BRAIN Early Concept Grants for Exploratory Research (EAGER) from the National Science Foundation (NSF). Sur will focus on massive-scale multi-area single neuron recordings to reveal circuits underlying short-term memory. Wickersham, in collaboration with Li-Huei Tsai, Kay Tye, and Robert Desimone, will develop cell-type specific optogenetics in wild-type animals. Additional information about NSF support of the BRAIN initiative can be found at NSF.gov/brain.

The BRAIN Initiative, spearheaded by President Obama in April 2013, challenges the nation’s leading scientists to advance our understanding of the human mind and discover new ways to treat, prevent, and cure neurological disorders like Alzheimer’s, schizophrenia, autism, and traumatic brain injury. The scientific community is charged with accelerating the invention of cutting-edge technologies that can produce dynamic images of complex neural circuits and illuminate the interaction of lightning-fast brain cells. The new capabilities are expected to provide greater insights into how brain functionality is linked to behavior, learning, memory, and the underlying mechanisms of debilitating disease. BRAIN was launched with approximately $100 million in initial investments from the NIH, the National Science Foundation, and the Defense Advanced Research Projects Agency (DARPA).

BRAIN Initiative scientists are engaged in a challenging and transformative endeavor to explore how our minds instantaneously process, store, and retrieve vast quantities of information. Their discoveries will unlock many of the remaining mysteries inherent in the brain’s billions of neurons and trillions of connections, leading to a deeper understanding of the underlying causes of many neurological and psychiatric conditions. Their findings will enable scientists and doctors to develop the groundbreaking arsenal of tools and technologies required to more effectively treat those suffering from these devastating disorders.

Controlling movement with light

For the first time, MIT neuroscientists have shown they can control muscle movement by applying optogenetics — a technique that allows scientists to control neurons’ electrical impulses with light — to the spinal cords of animals that are awake and alert.

Led by MIT Institute Professor Emilio Bizzi, the researchers studied mice in which a light-sensitive protein that promotes neural activity was inserted into a subset of spinal neurons. When the researchers shone blue light on the animals’ spinal cords, their hind legs were completely but reversibly immobilized. The findings, described in the June 25 issue of PLoS One, offer a new approach to studying the complex spinal circuits that coordinate movement and sensory processing, the researchers say.

In this study, Bizzi and Vittorio Caggiano, a postdoc at MIT’s McGovern Institute for Brain Research, used optogenetics to explore the function of inhibitory interneurons, which form circuits with many other neurons in the spinal cord. These circuits execute commands from the brain, with additional input from sensory information from the limbs.

Previously, neuroscientists have used electrical stimulation or pharmacological intervention to control neurons’ activity and try to tease out their function. Those approaches have revealed a great deal of information about spinal control, but they do not offer precise enough control to study specific subsets of neurons.

Optogenetics, on the other hand, allows scientists to control specific types of neurons by genetically programming them to express light-sensitive proteins. These proteins, called opsins, act as ion channels or pumps that regulate neurons’ electrical activity. Some opsins suppress activity when light shines on them, while others stimulate it.

“With optogenetics, you are attacking a system of cells that have certain characteristics similar to each other. It’s a big shift in terms of our ability to understand how the system works,” says Bizzi, who is a member of MIT’s McGovern Institute.

Muscle control

Inhibitory neurons in the spinal cord suppress muscle contractions, which is critical for maintaining balance and for coordinating movement. For example, when you raise an apple to your mouth, the biceps contract while the triceps relax. Inhibitory neurons are also thought to be involved in the state of muscle inhibition that occurs during the rapid eye movement (REM) stage of sleep.

To study the function of inhibitory neurons in more detail, the researchers used mice developed by Guoping Feng, the Poitras Professor of Neuroscience at MIT, in which all inhibitory spinal neurons were engineered to express an opsin called channelrhodopsin 2. This opsin stimulates neural activity when exposed to blue light. They then shone light at different points along the spine to observe the effects of neuron activation.

When inhibitory neurons in a small section of the thoracic spine were activated in freely moving mice, all hind-leg movement ceased. This suggests that inhibitory neurons in the thoracic spine relay the inhibition all the way to the end of the spine, Caggiano says. The researchers also found that activating inhibitory neurons had no effect on the transmission of sensory information from the limbs to the brain, or on normal reflexes.

“The spinal location where we found this complete suppression was completely new,” Caggiano says. “It has not been shown by any other scientists that there is this front-to-back suppression that affects only motor behavior without affecting sensory behavior.”

“It’s a compelling use of optogenetics that raises a lot of very interesting questions,” says Simon Giszter, a professor of neurobiology and anatomy at Drexel University who was not part of the research team. Among those questions is whether this mechanism behaves as a global “kill switch,” or if the inhibitory neurons form modules that allow for more selective suppression of movement patterns.

Now that they have demonstrated the usefulness of optogenetics for this type of study, the MIT team hopes to explore the roles of other types of spinal cord neurons. They also plan to investigate how input from the brain influences these spinal circuits.

“There’s huge interest in trying to extend these studies and dissect these circuits because we tackled only the inhibitory system in a very global way,” Caggiano says. “Further studies will highlight the contribution of single populations of neurons in the spinal cord for the control of limbs and control of movement.”

The research was funded by the Human Frontier Science Program and the National Science Foundation. Mriganka Sur, the Paul E. Newton Professor of Neuroscience at MIT, is also an author of the paper.

How the brain pays attention

Picking out a face in the crowd is a complicated task: Your brain has to retrieve the memory of the face you’re seeking, then hold it in place while scanning the crowd, paying special attention to finding a match.

A new study by MIT neuroscientists reveals how the brain achieves this type of focused attention on faces or other objects: A part of the prefrontal cortex known as the inferior frontal junction (IFJ) controls visual processing areas that are tuned to recognize a specific category of objects, the researchers report in the April 10 online edition of Science.

Scientists know much less about this type of attention, known as object-based attention, than spatial attention, which involves focusing on what’s happening in a particular location. However, the new findings suggest that these two types of attention have similar mechanisms involving related brain regions, says Robert Desimone, the Doris and Don Berkey Professor of Neuroscience, director of MIT’s McGovern Institute for Brain Research, and senior author of the paper.

“The interactions are surprisingly similar to those seen in spatial attention,” Desimone says. “It seems like it’s a parallel process involving different areas.”

In both cases, the prefrontal cortex — the control center for most cognitive functions — appears to take charge of the brain’s attention and control relevant parts of the visual cortex, which receives sensory input. For spatial attention, that involves regions of the visual cortex that map to a particular area within the visual field.

In the new study, the researchers found that IFJ coordinates with a brain region that processes faces, known as the fusiform face area (FFA), and a region that interprets information about places, known as the parahippocampal place area (PPA). The FFA and PPA were first identified in the human cortex by Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT.

The IFJ has previously been implicated in a cognitive ability known as working memory, which is what allows us to gather and coordinate information while performing a task — such as remembering and dialing a phone number, or doing a math problem.

For this study, the researchers used magnetoencephalography (MEG) to scan human subjects as they viewed a series of overlapping images of faces and houses. Unlike functional magnetic resonance imaging (fMRI), which is commonly used to measure brain activity, MEG can reveal the precise timing of neural activity, down to the millisecond. The researchers presented the overlapping streams at two different rhythms — two images per second and 1.5 images per second — allowing them to identify brain regions responding to those stimuli.

“We wanted to frequency-tag each stimulus with different rhythms. When you look at all of the brain activity, you can tell apart signals that are engaged in processing each stimulus,” says Daniel Baldauf, a postdoc at the McGovern Institute and the lead author of the paper.
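The idea behind frequency tagging can be shown with toy data, as in the sketch below: a response locked to one stream produces a spectral peak at that stream’s tag rate, so power at each tag frequency indexes processing of the corresponding stimulus. The simulated trace and the simple spectrum are illustrative, not the study’s actual pipeline.

```python
import numpy as np

fs, dur = 1000, 10.0                      # sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
f_faces, f_houses = 2.0, 1.5              # the two tag rates from the study

# Toy sensor trace: locked mostly to the face stream, plus noise.
rng = np.random.default_rng(3)
trace = (1.0 * np.sin(2 * np.pi * f_faces * t)
         + 0.3 * np.sin(2 * np.pi * f_houses * t)
         + rng.normal(scale=1.0, size=t.size))

# Amplitude spectrum; read out power at each tag frequency.
spectrum = np.abs(np.fft.rfft(trace)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for name, f in (("faces", f_faces), ("houses", f_houses)):
    amp = spectrum[np.argmin(np.abs(freqs - f))]
    print(f"{name} tag at {f} Hz: amplitude {amp:.3f}")
```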

Each subject was told to pay attention to either faces or houses; because the houses and faces were in the same spot, the brain could not use spatial information to distinguish them. When the subjects were told to look for faces, activity in the FFA and the IFJ became synchronized, suggesting that they were communicating with each other. When the subjects paid attention to houses, the IFJ synchronized instead with the PPA.

The researchers also found that the communication was initiated by the IFJ and the activity was staggered by 20 milliseconds — about the amount of time it would take for neurons to electrically convey information from the IFJ to either the FFA or PPA. The researchers believe that the IFJ holds onto the idea of the object that the brain is looking for and directs the correct part of the brain to look for it.

Further bolstering this idea, the researchers used an MRI-based method to measure the white matter that connects different brain regions and found that the IFJ is highly connected with both the FFA and PPA.

Members of Desimone’s lab are now studying how the brain shifts its focus between different types of sensory input, such as vision and hearing. They are also investigating whether it might be possible to train people to better focus their attention by controlling the brain interactions involved in this process.

“You have to identify the basic neural mechanisms and do basic research studies, which sometimes generate ideas for things that could be of practical benefit,” Desimone says. “It’s too early to say whether this training is even going to work at all, but it’s something that we’re actively pursuing.”

The research was funded by the National Institutes of Health and the National Science Foundation.

Optogenetic toolkit goes multicolor

Optogenetics is a technique that allows scientists to control neurons’ electrical activity with light by engineering them to express light-sensitive proteins. Within the past decade, it has become a very powerful tool for discovering the functions of different types of cells in the brain.

Most of these light-sensitive proteins, known as opsins, respond to light in the blue-green range. Now, a team led by MIT researchers has discovered an opsin that is sensitive to red light, which allows researchers to independently control the activity of two populations of neurons at once, enabling much more complex studies of brain function.

“If you want to see how two different sets of cells interact, or how two populations of the same cell compete against each other, you need to be able to activate those populations independently,” says Ed Boyden, a member of the McGovern Institute for Brain Research at MIT and a senior author of the new study.

The new opsin is one of about 60 light-sensitive proteins found in a screen of 120 species of algae. The study, which appears in the Feb. 9 online edition of Nature Methods, also yielded the fastest opsin, enabling researchers to study neuron activity patterns with millisecond timescale precision.

Boyden and Gane Ka-Shu Wong, a professor of medicine and biological sciences at the University of Alberta, are the paper’s senior authors, and the lead author is MIT postdoc Nathan Klapoetke. Researchers from the Howard Hughes Medical Institute’s Janelia Farm Research Campus, the University of Pennsylvania, the University of Cologne, and the Beijing Genomics Institute also contributed to the study.

In living color

Opsins occur naturally in many algae and bacteria, which use the light-sensitive proteins to help them respond to their environment and generate energy.

To achieve optical control of neurons, scientists engineer brain cells to express the gene for an opsin, which transports ions across the cell’s membrane to alter its voltage. Depending on the opsin used, shining light on the cell either lowers the voltage and silences neuron firing, or boosts voltage and provokes the cell to generate an electrical impulse. This effect is nearly instantaneous and easily reversible.

Using this approach, researchers can selectively turn a population of cells on or off and observe what happens in the brain. However, until now, they could activate only one population at a time, because the only opsins that responded to red light also responded to blue light, so they couldn’t be paired with other opsins to control two different cell populations.

To seek additional useful opsins, the MIT researchers worked with Wong’s team at the University of Alberta, which is sequencing the transcriptomes of 1,000 plants, including some algae. (The transcriptome is similar to the genome but includes only the genes that are expressed by a cell, not the entirety of its genetic material.)

Once the team obtained genetic sequences that appeared to code for opsins, Klapoetke tested their light-responsiveness in mammalian brain tissue, working with Martha Constantine-Paton, a professor of brain and cognitive sciences and of biology, a member of the McGovern Institute for Brain Research at MIT, and also an author of the paper. The red-light-sensitive opsin, which the researchers named Chrimson, can mediate neural activity in response to light with a 735-nanometer wavelength.

The researchers also discovered a blue-light-driven opsin that has two highly desirable traits: It operates at high speed, and it is sensitive to very dim light. This opsin, called Chronos, can be stimulated with levels of blue light that are too weak to activate Chrimson.

“You can use short pulses of dim blue light to drive the blue one, and you can use strong red light to drive Chrimson, and that allows you to do true two-color, zero-cross-talk activation in intact brain tissue,” says Boyden, who is a member of MIT’s Media Lab and an associate professor of biological engineering and brain and cognitive sciences at MIT.

Researchers had previously tried to modify naturally occurring opsins to make them respond faster and react to dimmer light, but trying to optimize one feature often made other features worse.

“It was apparent that when trying to engineer traits like color, light sensitivity, and kinetics, there are always tradeoffs,” Klapoetke says. “We’re very lucky that something natural actually was more than several times faster and also five or six times more light-sensitive than anything else.”

Selective control

These new opsins lend themselves to several types of studies that were not possible before, Boyden says. For one, scientists could not only manipulate activity of a cell population of interest, but also control upstream cells that influence the target population by secreting neurotransmitters.

Pairing Chrimson and Chronos could also allow scientists to study the functions of different types of cells in the same microcircuit within the brain. Such cells are usually located very close together, but with the new opsins they can be controlled independently with two different colors of light.

“I think the tools described in this excellent paper represent a major advance for both basic and translational neuroscience,” says Botond Roska, a senior group leader at the Friedrich Miescher Institute for Biomedical Research in Switzerland, who was not part of the research team. “Optogenetic tools that are shifted towards the infrared range, such as Chrimson described in this paper, are much better than the more blue-shifted variants since these are less toxic, activate less the pupillary reflex, and activate less the remaining photoreceptors of patients.”

Most optogenetic studies thus far have been done in mice, but Chrimson could be used for optogenetic studies of fruit flies, a commonly used experimental organism. Researchers have had trouble using blue-light-sensitive opsins in fruit flies because the light can get into the flies’ eyes and startle them, interfering with the behavior being studied.

Vivek Jayaraman, a research group leader at Janelia Farm and an author of the paper, was able to show that this startle response does not occur when red light is used to stimulate Chrimson in fruit flies.

Because red light is less damaging to tissue than blue light, Chrimson also holds potential for eventual therapeutic use in humans, Boyden says. Animal studies with other opsins have shown promise in helping to restore vision after the loss of photoreceptor cells in the retina.

The researchers are now trying to modify Chrimson to respond to light in the infrared range. They are also working on making both Chrimson and Chronos faster and more light sensitive.

MIT’s portion of the project was funded by the National Institutes of Health, the MIT Media Lab, the National Science Foundation, the Wallace H. Coulter Foundation, the Alfred P. Sloan Foundation, a NARSAD Young Investigator Grant, the Human Frontiers Science Program, an NYSCF Robertson Neuroscience Investigator Award, the IET A.F. Harvey Prize, Janet and Sheldon Razin ’59, and the Skolkovo Institute of Science and Technology.

Expanding our view of vision

Every time you open your eyes, visual information flows into your brain, which interprets what you’re seeing. Now, for the first time, MIT neuroscientists have noninvasively mapped this flow of information in the human brain with unique accuracy, using a novel brain-scanning technique.

This technique, which combines two existing technologies, allows researchers to identify precisely both the location and timing of human brain activity. Using this new approach, the MIT researchers scanned individuals’ brains as they looked at different images and were able to pinpoint, to the millisecond, when the brain recognizes and categorizes an object, and where these processes occur.

“This method gives you a visualization of ‘when’ and ‘where’ at the same time. It’s a window into processes happening at the millisecond and millimeter scale,” says Aude Oliva, a principal research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

Oliva is the senior author of a paper describing the findings in the Jan. 26 issue of Nature Neuroscience. Lead author of the paper is CSAIL postdoc Radoslaw Cichy. Dimitrios Pantazis, a research scientist at MIT’s McGovern Institute for Brain Research, is also an author of the paper.

When and where

Until now, scientists have been able to observe the location or timing of human brain activity at high resolution, but not both, because different imaging techniques are not easily combined. The most commonly used type of brain scan, functional magnetic resonance imaging (fMRI), measures changes in blood flow, revealing which parts of the brain are involved in a particular task. However, it works too slowly to keep up with the brain’s millisecond-by-millisecond dynamics.

Another imaging technique, known as magnetoencephalography (MEG), uses an array of hundreds of sensors encircling the head to measure magnetic fields produced by neuronal activity in the brain. These sensors offer a dynamic portrait of brain activity over time, down to the millisecond, but cannot pinpoint the precise locations of those signals.

To combine the time and location information generated by these two scanners, the researchers used a computational technique called representational similarity analysis, which relies on the fact that two similar objects (such as two human faces) that provoke similar signals in fMRI will also produce similar signals in MEG. This method has been used before to link fMRI with recordings of neuronal electrical activity in monkeys, but the MIT researchers are the first to use it to link fMRI and MEG data from human subjects.
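The fusion step can be sketched with placeholder data, as below: one dissimilarity matrix over the images for every MEG time point, one for each fMRI region, and a rank correlation wherever the two geometries are compared. The 92-image dimension matches the study, but the random data and the single-region comparison are illustrative only.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_images, n_times = 92, 120
n_pairs = n_images * (n_images - 1) // 2   # entries in one dissimilarity matrix

meg_rdms = rng.random(size=(n_times, n_pairs))   # one RDM per MEG time bin
region_rdm = rng.random(size=n_pairs)            # fMRI RDM for one brain region

# Where the MEG geometry matches this region's geometry, "when" meets "where".
match = np.array([spearmanr(meg_rdms[i], region_rdm)[0] for i in range(n_times)])
print("time bin of best match with this region:", int(match.argmax()))
```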

In the study, the researchers scanned 16 human volunteers as they looked at a series of 92 images, including faces, animals, and natural and manmade objects. Each image was shown for half a second.

“We wanted to measure how visual information flows through the brain. It’s just pure automatic machinery that starts every time you open your eyes, and it’s incredibly fast,” Cichy says. “This is a very complex process, and we have not yet looked at higher cognitive processes that come later, such as recalling thoughts and memories when you are watching objects.”

Each subject underwent the test multiple times — twice in an fMRI scanner and twice in an MEG scanner — giving the researchers a huge set of data on the timing and location of brain activity. All of the scanning was done at the Athinoula A. Martinos Imaging Center at the McGovern Institute.

Millisecond by millisecond

By analyzing this data, the researchers produced a timeline of the brain’s object-recognition pathway that is very similar to results previously obtained by recording electrical signals in the visual cortex of monkeys, a technique that is extremely accurate but too invasive to use in humans.

About 50 milliseconds after subjects saw an image, visual information entered a part of the brain called the primary visual cortex, or V1, which recognizes basic elements of a shape, such as whether it is round or elongated. The information then flowed to the inferotemporal cortex, where the brain identified the object as early as 120 milliseconds. Within 160 milliseconds, all objects had been classified into categories such as plant or animal.

The MIT team’s strategy “provides a rich new source of evidence on this highly dynamic process,” says Nikolaus Kriegeskorte, a principal investigator in cognition and brain sciences at Cambridge University.

“The combination of MEG and fMRI in humans is no surrogate for invasive animal studies with techniques that simultaneously have high spatial and temporal precision, but Cichy et al. come closer to characterizing the dynamic emergence of representational geometries across stages of processing in humans than any previous work. The approach will be useful for future studies elucidating other perceptual and cognitive processes,” says Kriegeskorte, who was not part of the research team.

The MIT researchers are now using representational similarity analysis to study the accuracy of computer models of vision by comparing brain scan data with the models’ predictions of how vision works.

Using this approach, scientists should also be able to study how the human brain analyzes other types of information such as motor, verbal, or sensory signals, the researchers say. It could also shed light on processes that underlie conditions such as memory disorders or dyslexia, and could benefit patients suffering from paralysis or neurodegenerative diseases.

“This is the first time that MEG and fMRI have been connected in this way, giving us a unique perspective,” Pantazis says. “We now have the tools to precisely map brain function both in space and time, opening up tremendous possibilities to study the human brain.”

The research was funded by the National Eye Institute, the National Science Foundation, and a Feodor Lynen Research Fellowship from the Humboldt Foundation.