How maternal inflammation might lead to autism-like behavior

In 2010, a large study in Denmark found that women who suffered an infection severe enough to require hospitalization while pregnant were much more likely to have a child with autism (even though the overall risk of delivering a child with autism remained low).

Now research from MIT, the University of Massachusetts Medical School, the University of Colorado, and New York University Langone Medical Center reveals a possible mechanism for how this occurs. In a study of mice, the researchers found that immune cells activated in the mother during severe inflammation produce an immune effector molecule called IL-17 that appears to interfere with brain development.

The researchers also found that blocking this signal could restore normal behavior and brain structure.

“In the mice, we could treat the mother with antibodies that block IL-17 after inflammation had set in, and that could ameliorate some of the behavioral symptoms that were observed in the offspring. However, we don’t know yet how much of that could be translated into humans,” says Gloria Choi, an assistant professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the lead author of the study, which appears in the Jan. 28 online edition of Science.

Finding the link

In the 2010 study, which included all children born in Denmark between 1980 and 2005, severe infections (requiring hospitalization) that correlated with autism risk included influenza, viral gastroenteritis, and urinary tract infections. Severe viral infections during the first trimester were linked with a threefold increase in autism risk, and serious bacterial infections during the second trimester were linked with a 1.5-fold increase.

Choi and her husband, Jun Huh, were graduate students at Caltech when they first heard about this study during a lecture by Caltech professor emeritus Paul Patterson, who had discovered that an immune signaling molecule called IL-6 plays a role in the link between infection and autism-like behaviors in rodents.

Huh, now an assistant professor at the University of Massachusetts Medical School and one of the paper’s senior authors, was studying immune cells called Th17 cells, which are well known for contributing to autoimmune disorders such as multiple sclerosis, inflammatory bowel disease, and rheumatoid arthritis. He knew that Th17 cells are activated by IL-6, so he wondered whether these cells might also be involved in animal models of autism associated with maternal infection.

“We wanted to find the link,” Choi says. “How do you go all the way from the immune system in the mother to the child’s brain?”

Choi and Huh launched the study as postdocs at Columbia University and New York University School of Medicine, respectively. Working with Dan Littman, a professor of molecular immunology at NYU and one of the paper’s senior authors, they began by injecting pregnant mice with a synthetic analog of double-stranded RNA, which activates the immune system in a similar way to viruses.

Confirming the results of previous studies in mice, the researchers found behavioral abnormalities in the offspring of the infected mothers, including deficits in sociability, repetitive behaviors, and abnormal communication. They then disabled Th17 cells in the mothers before inducing inflammation and found that the offspring mice did not show those behavioral abnormalities. The abnormalities also disappeared when the researchers gave the infected mothers an antibody that blocks IL-17, which is produced by Th17 cells.

The researchers next asked how IL-17 might affect the developing fetus. They found that brain cells in the fetuses of mothers experiencing inflammation express receptors for IL-17, and they believe that exposure to the molecule provokes those cells to produce even more IL-17 receptors, amplifying its effects.

In the developing mice, the researchers found irregularities in the normally well-defined layers of cells in the brain’s cortex, where most cognition and sensory processing take place. These patches of irregular structure appeared in approximately the same cortical regions in all of the affected offspring, but they did not occur when the mothers’ Th17 cells were blocked.

Disorganized cortical layers have also been found in studies of human patients with autism.

Preventing autism

The researchers are now investigating whether and how these cortical patches produce the behavioral abnormalities seen in the offspring.

“We’ve shown correlation between these cortical patches and behavioral abnormalities, but we don’t know whether the cortical patches actually are responsible for the behavioral abnormalities,” Choi says. “And if it is responsible, what is being dysregulated within this patch to produce this behavior?”

The researchers hope their work may lead to a way to reduce the chances of autism developing in the children of women who experience severe infections during pregnancy. They also plan to investigate whether genetic makeup influences mice’s susceptibility to maternal inflammation, because autism is known to have a very strong genetic component.

Charles Hoeffer, a professor of integrative physiology at the University of Colorado, is a senior author of the paper, and other authors include MIT postdoc Yeong Yim, NYU graduate student Helen Wong, UMass Medical School visiting scholars Sangdoo Kim and Hyunju Kim, and NYU postdoc Sangwon Kim.

Study finds altered brain chemistry in people with autism

MIT and Harvard University neuroscientists have found a link between a behavioral symptom of autism and reduced activity of a neurotransmitter whose job is to dampen neuron excitation. The findings suggest that drugs that boost the action of this neurotransmitter, known as GABA, may improve some of the symptoms of autism, the researchers say.

Brain activity is controlled by a constant interplay of inhibition and excitation, which is mediated by different neurotransmitters. GABA is one of the most important inhibitory neurotransmitters, and studies of animals with autism-like symptoms have found reduced GABA activity in the brain. However, until now, there has been no direct evidence for such a link in humans.

“This is the first connection in humans between a neurotransmitter in the brain and an autistic behavioral symptom,” says Caroline Robertson, a postdoc at MIT’s McGovern Institute for Brain Research and a junior fellow of the Harvard Society of Fellows. “It’s possible that increasing GABA would help to ameliorate some of the symptoms of autism, but more work needs to be done.”

Robertson is the lead author of the study, which appears in the Dec. 17 online edition of Current Biology. The paper’s senior author is Nancy Kanwisher, the Walter A. Rosenblith Professor of Brain and Cognitive Sciences and a member of the McGovern Institute. Eva-Maria Ratai, an assistant professor of radiology at Massachusetts General Hospital, also contributed to the research.

Too little inhibition

Many symptoms of autism arise from hypersensitivity to sensory input. For example, children with autism are often very sensitive to things that wouldn’t bother other children as much, such as someone talking elsewhere in the room, or a scratchy sweater. Scientists have speculated that reduced brain inhibition might underlie this hypersensitivity by making it harder to tune out distracting sensations.

In this study, the researchers explored a visual task known as binocular rivalry, which requires brain inhibition and has been shown to be more difficult for people with autism. During the task, researchers show each participant two different images, one to each eye. To see the images, the brain must switch back and forth between input from the right and left eyes.

For the participant, it looks as though the two images are fading in and out, as input from each eye takes its turn inhibiting the input coming in from the other eye.

“Everybody has a different rate at which the brain naturally oscillates between these two images, and that rate is thought to map onto the strength of the inhibitory circuitry between these two populations of cells,” Robertson says.

She found that nonautistic adults switched back and forth between the images nine times per minute, on average, and one of the images fully suppressed the other about 70 percent of the time. However, autistic adults switched back and forth only half as often as nonautistic subjects, and one of the images fully suppressed the other only about 50 percent of the time.
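
To make these measures concrete, here is a minimal sketch of how a switch rate and a suppression index could be computed from a participant’s perceptual reports. The data format, sampling step, and function names are illustrative assumptions, not details from the study.

```python
# Sketch: compute rivalry metrics from a time series of perceptual reports.
# The data format and names are illustrative assumptions, not the study's.
import numpy as np

def rivalry_metrics(reports, dt=0.1):
    """reports: sequence of 'left', 'right', or 'mixed' labels, one per
    dt-second sample, indicating which image the participant saw."""
    reports = np.asarray(reports)
    # A switch is a transition from one fully dominant percept to the other.
    dominant = reports[reports != "mixed"]
    switches = int(np.sum(dominant[1:] != dominant[:-1]))
    minutes = len(reports) * dt / 60.0
    switch_rate = switches / minutes        # switches per minute
    # Suppression index: fraction of time one image fully suppressed the other.
    suppression = float(np.mean(reports != "mixed"))
    return switch_rate, suppression

# On numbers like those reported here, nonautistic adults would average about
# 9 switches per minute with full suppression ~70 percent of the time.
```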

Performance on this task was also linked to patients’ scores on a clinical evaluation of communication and social interaction used to diagnose autism: Worse symptoms correlated with weaker inhibition during the visual task.

The researchers then measured GABA activity using a technique known as magnetic resonance spectroscopy, as autistic and typical subjects performed the binocular rivalry task. In nonautistic participants, higher levels of GABA correlated with a better ability to suppress the nondominant image. But in autistic subjects, there was no relationship between performance and GABA levels. This suggests that GABA is present in the brain but is not performing its usual function in autistic individuals, Robertson says.

“GABA is not reduced in the autistic brain, but the action of this inhibitory pathway is reduced,” she says. “The next step is figuring out which part of the pathway is disrupted.”

“This is a really great piece of work,” says Richard Edden, an associate professor of radiology at the Johns Hopkins University School of Medicine. “The role of inhibitory dysfunction in autism is strongly debated, with different camps arguing for elevated and reduced inhibition. This kind of study, which seeks to relate measures of inhibition directly to quantitative measures of function, is what we really need to tease things out.”

Early diagnosis

In addition to offering a possible new drug target, the new finding may also help researchers develop better diagnostic tools for autism, which is now diagnosed by evaluating children’s social interactions. To that end, Robertson is investigating the possibility of using EEG scans to measure brain responses during the binocular rivalry task.

“If autism does trace back on some level to circuitry differences that affect the visual cortex, you can measure those things in a kid who’s even nonverbal, as long as he can see,” she says. “We’d like it to move toward being useful for early diagnostic screenings.”

Singing in the brain

Male zebra finches, small songbirds native to central Australia, learn their songs by copying what they hear from their fathers. These songs, often used as mating calls, develop early in life as juvenile birds experiment with mimicking the sounds they hear.

MIT neuroscientists have now uncovered the brain activity that supports this learning process. Sequences of neural activity that encode the birds’ first song syllable are duplicated and altered slightly, allowing the birds to produce several variations on the original syllable. Eventually these syllables are strung together into the bird’s signature song, which remains constant for life.

“The advantage here is that in order to learn new syllables, you don’t have to learn them from scratch. You can reuse what you’ve learned and modify it slightly. We think it’s an efficient way to learn various types of syllables,” says Tatsuo Okubo, a former MIT graduate student and lead author of the study, which appears in the Nov. 30 online edition of Nature.

Okubo and his colleagues believe that this type of neural sequence duplication may also underlie other types of motor learning. For example, the sequence used to swing a tennis racket might be repurposed for a similar motion such as playing Ping-Pong. “This seems like a way that sequences might be learned and reused for anything that involves timing,” says Emily Mackevicius, an MIT graduate student who is also an author of the paper.

The paper’s senior author is Michale Fee, a professor of brain and cognitive sciences at MIT and a member of the McGovern Institute for Brain Research.

Bursting into song

Previous studies from Fee’s lab have found that a part of the brain’s cortex known as the HVC is critical for song production.

Typically, each song lasts for about one second and consists of multiple syllables. Fee’s lab has found that in adult birds, individual HVC neurons show a very brief burst of activity — about 10 milliseconds or less — at one moment during the song. Different sets of neurons are active at different times, and collectively the song is represented by this sequence of bursts.

In the new Nature study, the researchers wanted to figure out how those neural patterns develop in newly hatched zebra finches. To do that, they recorded electrical activity in HVC neurons for up to three months after the birds hatched.

When zebra finches begin to sing, about 30 days after hatching, they produce only nonsense syllables known as subsong, similar to the babble of human babies. At first, the duration of these syllables is highly variable, but after a week or so they turn into more consistent sounds called protosyllables, which last about 100 milliseconds. Each bird learns one protosyllable that forms a scaffold for subsequent syllables.

The researchers found that within the HVC, neurons fire in a sequence of short bursts corresponding to the first protosyllable that each bird learns. Most of the neurons in the HVC participate in this original sequence, but as time goes by, some of these neurons are extracted from the original sequence and produce a new, very similar sequence. This chain of neural sequences can be repurposed to produce different syllables.

“From that short sequence it splits into new sequences for the next new syllables,” Mackevicius says. “It starts with that short chain that has a lot of redundancy in it, and splits off some neurons for syllable A and some neurons for syllable B.”

This splitting of neural sequences happens repeatedly until the birds can produce between three and seven different syllables, the researchers found. This entire process takes about two months, at which point each bird has settled on its final song.

Evolution by duplication

The researchers note that this process is similar to what is believed to drive the production of new genes and traits during evolution.

“If you duplicate a gene, then you could have separate mutations in both copies of the gene and they could eventually do different functions,” Okubo says. “It’s similar with motor programs. You can duplicate the sequence and then independently modify the two daughter motor programs so that they can now each do slightly different things.”

Mackevicius is now studying how input from sound-processing parts of the brain to the HVC contributes to the formation of these neural sequences.

To locate objects, brain relies on memory

Imagine you are looking for your wallet on a cluttered desk. As you scan the area, you hold in your mind a mental picture of what your wallet looks like.

MIT neuroscientists have now identified a brain region that stores this type of visual representation during a search. The researchers also found that this region sends signals to the parts of the brain that control eye movements, telling individuals where to look next.

This region, known as the ventral pre-arcuate (VPA), is critical for what the researchers call “feature attention,” which allows the brain to seek objects based on their specific properties. Most previous studies of how the brain pays attention have investigated a different type of attention known as spatial attention — that is, what happens when the brain focuses on a certain location.

“The way that people go about their lives most of the time, they don’t know where things are in advance. They’re paying attention to things based on their features,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research. “In the morning you’re trying to find your car keys so you can go to work. How do you do that? You don’t look at every pixel in your house. You have to use your knowledge of what your car keys look like.”

Desimone, also the Doris and Don Berkey Professor in MIT’s Department of Brain and Cognitive Sciences, is the senior author of a paper describing the findings in the Oct. 29 online edition of Neuron. The paper’s lead author is Narcisse Bichot, a research scientist at the McGovern Institute. Other authors are Matthew Heard, a former research technician, and Ellen DeGennaro, a graduate student in the Harvard-MIT Division of Health Sciences and Technology.

Visual targets

The researchers focused on the VPA in part because of its extensive connections with the brain’s frontal eye fields, which control eye movements. Located in the prefrontal cortex, the VPA has previously been linked with working memory — a cognitive ability that helps us to gather and coordinate information while performing tasks such as solving a math problem or participating in a conversation.

“There have been a lot of studies showing that this region of the cortex is heavily involved in working memory,” Bichot says. “If you have to remember something, cells in these areas are involved in holding the memory of that object for the purpose of identifying it later.”

In the new study, the researchers found that the VPA also holds what they call an “attentional template” — that is, a memory of the item being sought.

In this study, the researchers first showed monkeys a target object, such as a human face, a banana, or a butterfly. After a delay, they showed an array of objects that included the target. When the animal fixed its gaze on the target object, it received a reward. “The animals can look around as long as they want until they find what they’re looking for,” Bichot says.

As the animals performed the task, the researchers recorded electrical activity from neurons in the VPA. Each object produced a distinctive pattern of neural activity, and the neurons that encoded a representation of the target object stayed active until a match was found, prompting the neurons to fire even more.

“When the target object finally enters their receptive fields, they give enhanced responses,” Desimone says. “That’s the signal that the thing they’re looking for is actually there.”

About 20 to 30 milliseconds after the VPA cells respond to the target object, they send a signal to the frontal eye fields, which direct the eyes to lock onto the target.

When the researchers blocked VPA activity, they found that although the animals could still move their eyes around in search of the target object, they could not find it. “Presumably it’s because they’ve lost this mechanism for telling them where the likely target is,” Desimone says.

Focused attention

The researchers believe the VPA may be the equivalent in nonhuman primates of a human brain region called the inferior frontal junction (IFJ). Last year Desimone and postdoc Daniel Baldauf found that the IFJ holds onto the idea of a target object — in that study, either faces or houses — and then directs the correct part of the brain to look for the target.

The researchers are now studying how the VPA interacts with a nearby region called the VPS, which appears to be more important for tasks in which attention must be switched quickly from one object to another. They are also performing additional studies of human attention, in hopes of learning more about disorders such as attention deficit hyperactivity disorder (ADHD).

“There’s really an opportunity there to understand something important about the role of the prefrontal cortex in both normal behavior and in brain disorders,” Desimone says.

How the brain keeps time

Keeping track of time is critical for many tasks, such as playing the piano, swinging a tennis racket, or holding a conversation. Neuroscientists at MIT and Columbia University have now figured out how neurons in one part of the brain measure time intervals and accurately reproduce them.

The researchers found that the lateral intraparietal cortex (LIP), which plays a role in sensorimotor function, represents elapsed time as animals measure and then reproduce a time interval. They also demonstrated how the firing patterns of a population of LIP neurons could coordinate the sensory and motor aspects of timing.

LIP is likely just one node in a circuit that measures time, says Mehrdad Jazayeri, the lead author of a paper describing the work in the Oct. 8 issue of Current Biology.

“I would not conclude that the parietal cortex is the timer,” says Jazayeri, an assistant professor of brain and cognitive sciences at MIT and a member of the McGovern Institute for Brain Research. “What we are doing is discovering computational principles that explain how neurons’ firing rates evolve with time, and how that relates to the animals’ behavior in single trials. We can explain mathematically what’s going on.”

The paper’s senior author is Michael Shadlen, a professor of neuroscience and member of the Mortimer B. Zuckerman Mind Brain Behavior Institute at Columbia University.

As time goes by

Jazayeri, who joined the MIT faculty in 2013, began studying timing in the brain several years ago while a postdoc at the University of Washington. He began by testing humans’ ability to measure and reproduce time using a task called “ready, set, go.” In this experiment, the subject measures the time between two flashes (“ready” and “set”) and then presses a button (“go”) at the appropriate time — that is, after the same amount of time that separated the “ready” and “set.”

From these studies, he discovered that people do not simply measure an interval and then reproduce it. Rather, after measuring an interval they combine that measurement, which is imprecise, with their prior knowledge of what the interval could have been. This prior knowledge, which builds up as they repeat the task many times, allows people to reproduce the interval more accurately.
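
The combination Jazayeri describes can be written as a small Bayesian estimator: the reproduced interval is the posterior mean given a noisy measurement and the distribution of intervals experienced in the task. The Weber-like noise model and all numbers below are illustrative assumptions, not the paper’s fitted model.

```python
# Sketch of the inference idea: combine a noisy measurement of an interval
# with prior knowledge of which intervals occur in the task. The Weber-like
# noise model and all numbers are illustrative assumptions.
import numpy as np

def bayes_estimate(t_measured, prior_intervals, weber=0.1):
    """Posterior-mean estimate of an interval.
    t_measured: the (noisy) measured duration in seconds
    prior_intervals: grid of intervals experienced in the task (the prior)
    weber: measurement noise scales with interval length (Weber's law)"""
    sigma = weber * prior_intervals
    # Likelihood of the measurement under each candidate true interval.
    likelihood = np.exp(-0.5 * ((t_measured - prior_intervals) / sigma) ** 2) / sigma
    posterior = likelihood / likelihood.sum()   # uniform prior over the grid
    return float(np.sum(posterior * prior_intervals))

intervals = np.linspace(0.6, 1.0, 200)          # intervals used in the task
print(bayes_estimate(0.65, intervals))          # pulled above 0.65, toward the
                                                # middle of the prior range
```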

“When people reproduce time, they don’t seem to use a timer,” Jazayeri says. “It’s an active act of probabilistic inference that goes on.”

To find out what happens in the brain during this process, Jazayeri recorded neuronal activity in the LIP of monkeys trained to perform the same task. In these recordings, he found distinctive patterns in the measurement phase (the interval between “ready” and “set”), and the production phase (the interval between “set” and “go”).

During the measurement phase, neural activity increases, but not linearly: it rises steeply at first and then gradually flattens out until the “set” signal is given. This is key, because the slope at the end of the measurement interval predicts the slope of activity in the production phase.

When the interval is short, the slope during the second phase is steep. This allows the activity to increase quickly so that the animal can produce a short interval. When the interval is longer, the slope is gentler and it takes longer to reach the time of response.

“As time goes by during the measurement, the animal knows that the interval that it has to produce is longer and therefore requires a shallower slope,” Jazayeri says.

Using this data, the researchers could correctly predict, based on the slope at the end of the measurement phase, when the animal would produce the “go” signal.
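
One way to make this slope account concrete is a toy ramp-to-threshold model: activity climbs linearly to a fixed bound, so the production slope must be inversely proportional to the interval being reproduced, and reading out the slope predicts the “go” time. The bound and the noise-free linear form are simplifying assumptions for illustration.

```python
# Toy ramp-to-threshold version of the slope account described above:
# activity must climb from zero to a fixed bound exactly when the interval
# ends, so slope and interval are inversely related. The bound and the
# noise-free linear ramp are simplifying assumptions for illustration.

THRESHOLD = 1.0  # arbitrary units

def production_slope(t_estimate):
    """Slope needed to hit the bound at the estimated interval (s)."""
    return THRESHOLD / t_estimate

def predicted_go_time(slope):
    """Invert the ramp: given the slope at the end of the measurement
    phase, predict when activity reaches the bound."""
    return THRESHOLD / slope

print(predicted_go_time(production_slope(0.8)))   # -> 0.8 seconds
```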

“Previous research has shown that some neurons exhibit a ramping up of their firing rate that culminates with the onset of a timed motor response. This research is exciting because it provides the first hint as to what may control the slope of this ‘neural ramping,’ specifically that the slope of the ramp may be determined by the firing rate at the beginning of the timed interval,” says Dean Buonomano, a professor of behavioral neuroscience at the University of California at Los Angeles who was not involved in the research.

“A highly distributed problem”

All cognitive and motor functions rely on time to some extent. While LIP represents time during interval reproduction, Jazayeri believes that tracking time occurs throughout brain circuits that connect subcortical structures such as the thalamus, basal ganglia, and cerebellum to the cortex.

“Timing is going to be a highly distributed problem for the brain. There’s not going to be one place in the brain that does timing,” he says.

His lab is now pursuing several questions raised by this study. In one follow-up, the researchers are investigating how animals’ behavior and brain activity change based on their expectations for how long the first interval will last.

In another experiment, they are training animals to reproduce an interval that they get to measure twice. Preliminary results suggest that during the second interval, the animals refine the measurement they took during the first interval, allowing them to perform better than when they make just one measurement.

How the brain recognizes objects

When the eyes are open, visual information flows from the retina through the optic nerve and into the brain, which assembles this raw information into objects and scenes.

Scientists have previously hypothesized that objects are distinguished in the inferior temporal (IT) cortex, which is near the end of this flow of information, also called the ventral stream. A new study from MIT neuroscientists offers evidence that this is indeed the case.

Using data from both humans and nonhuman primates, the researchers found that neuron firing patterns in the IT cortex correlate strongly with success in object-recognition tasks.

“While we knew from prior work that neuronal population activity in inferior temporal cortex was likely to underlie visual object recognition, we did not have a predictive map that could accurately link that neural activity to object perception and behavior. The results from this study demonstrate that a particular map from particular aspects of IT population activity to behavior is highly accurate over all types of objects that were tested,” says James DiCarlo, head of MIT’s Department of Brain and Cognitive Sciences, a member of the McGovern Institute for Brain Research, and senior author of the study, which appears in the Journal of Neuroscience.

The paper’s lead author is Najib Majaj, a former postdoc in DiCarlo’s lab who is now at New York University. Other authors are former MIT graduate student Ha Hong and former MIT undergraduate Ethan Solomon.

Distinguishing objects

Earlier stops along the ventral stream are believed to process basic visual elements such as brightness and orientation. More complex functions take place farther along the stream, with object recognition believed to occur in the IT cortex.

To investigate this theory, the researchers first asked human subjects to perform 64 object-recognition tasks. Some of these tasks were “trivially easy,” Majaj says, such as distinguishing an apple from a car. Others — such as discriminating between two very similar faces — were so difficult that the subjects were correct only about 50 percent of the time.

After measuring human performance on these tasks, the researchers then showed the same set of nearly 6,000 images to nonhuman primates as they recorded electrical activity in neurons of the inferior temporal cortex and another visual region known as V4.

Each of the 168 IT neurons and 128 V4 neurons fired in response to some objects but not others, creating a firing pattern that served as a distinctive signature for each object. By comparing these signatures, the researchers could analyze whether they correlated to humans’ ability to distinguish between two objects.

The researchers found that the firing patterns of IT neurons, but not V4 neurons, perfectly predicted the human performances they had seen. That is, when humans had trouble distinguishing two objects, the neural signatures for those objects were so similar as to be indistinguishable, and for pairs where humans succeeded, the patterns were very different.

“On the easy stimuli, IT did as well as humans, and on the difficult stimuli, IT also failed,” Majaj says. “We had a nice correlation between behavior and neural responses.”
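
A sketch of this kind of neural-behavioral comparison appears below: estimate, for every pair of objects, how well the recorded population separates them, then correlate that discriminability with human accuracy on the same pairs. The linear decoder and the data shapes are assumptions for illustration, not the paper’s exact method.

```python
# Sketch of linking neural signatures to behavior: for each object pair,
# measure how well the recorded population separates the two objects, then
# correlate that with human accuracy on the same pairs. The linear decoder
# and data shapes are illustrative assumptions, not the paper's exact method.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def pair_discriminability(responses, labels, obj_a, obj_b):
    """responses: (n_trials, n_neurons) firing rates; labels: object per trial."""
    mask = np.isin(labels, [obj_a, obj_b])
    X, y = responses[mask], labels[mask]
    # Cross-validated accuracy of a linear readout of the population.
    return cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

def neural_behavior_correlation(responses, labels, human_accuracy):
    """human_accuracy: dict mapping (obj_a, obj_b) -> human percent correct."""
    neural, human = [], []
    for a, b in combinations(np.unique(labels), 2):
        neural.append(pair_discriminability(responses, labels, a, b))
        human.append(human_accuracy[(a, b)])
    return np.corrcoef(neural, human)[0, 1]   # high for IT, lower for V4
```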

The findings support the hypothesis that patterns of neural activity in the IT cortex can encode object representations detailed enough to allow the brain to distinguish different objects, the researchers say.

Nikolaus Kriegeskorte, a principal investigator at the Medical Research Council Cognition and Brain Sciences Unit in Cambridge, U.K., agrees that the study offers “crucial evidence supporting the idea that inferior temporal cortex contains the neuronal representations underlying human visual object recognition.”

“This study is exemplary for its original and rigorous method of establishing links between brain representations and human behavioral performance,” adds Kriegeskorte, who was not part of the research team.

Model performance

The researchers also tested more than 10,000 other possible models for how the brain might encode object representations. These models varied based on location in the brain, the number of neurons required, and the time window for neural activity.

Some of these models, including some that relied on V4, were eliminated because they performed better than humans on some tasks and worse on others.

“We wanted the performance of the neurons to perfectly match the performance of the humans in terms of the pattern, so the easy tasks would be easy for the neural population and the hard tasks would be hard for the neural population,” Majaj says.

The research team now aims to gather even more data to ask if this model or similar models can predict the behavioral difficulty of object recognition on each and every visual image — an even higher bar than the one tested thus far. That might require additional factors to be included in the model that were not needed in this study, and thus could expose important gaps in scientists’ current understanding of neural representations of objects.

They also plan to expand the model so they can predict responses in IT based on input from earlier parts of the visual stream.

“We can start building a cascade of computational operations that take you from an image on the retina slowly through V1, V2, V4, until we’re able to predict the population in IT,” Majaj says.

How we make emotional decisions

Some decisions arouse far more anxiety than others. Among the most anxiety-provoking are those that involve options with both positive and negative elements, such as choosing to take a higher-paying job in a city far from family and friends, versus staying put with less pay.

MIT researchers have now identified a neural circuit that appears to underlie decision-making in this type of situation, which is known as approach-avoidance conflict. The findings could help researchers to discover new ways to treat psychiatric disorders that feature impaired decision-making, such as depression, schizophrenia, and borderline personality disorder.

“In order to create a treatment for these types of disorders, we need to understand how the decision-making process is working,” says Alexander Friedman, a research scientist at MIT’s McGovern Institute for Brain Research and the lead author of a paper describing the findings in the May 28 issue of Cell.

Friedman and colleagues also demonstrated the first step toward developing possible therapies for these disorders: By manipulating this circuit in rodents, they were able to transform a preference for lower-risk, lower-payoff choices to a preference for bigger payoffs despite their bigger costs.

The paper’s senior author is Ann Graybiel, an MIT Institute Professor and member of the McGovern Institute. Other authors are postdoc Daigo Homma, research scientists Leif Gibb and Ken-ichi Amemori, undergraduates Samuel Rubin and Adam Hood, and technical assistant Michael Riad.

Making hard choices

The new study grew out of an effort to figure out the role of striosomes — clusters of cells distributed throughout the striatum, a large brain region involved in coordinating movement and emotion and implicated in some human disorders. Graybiel discovered striosomes many years ago, but their function had remained mysterious, in part because they are so small and so deep within the brain that it is difficult to image them with functional magnetic resonance imaging (fMRI).

Previous studies from Graybiel’s lab identified regions of the brain’s prefrontal cortex that project to striosomes. These regions have been implicated in processing emotions, so the researchers suspected that this circuit might also be related to emotion.

To test this idea, the researchers studied rats as they performed five different types of behavioral tasks, including an approach-avoidance scenario. In that situation, rats running a maze had to choose between one option that included strong chocolate, which they like, and bright light, which they don’t, and another option with dimmer light but weaker chocolate.

When humans are forced to make these kinds of cost-benefit decisions, they usually experience anxiety, which influences the choices they make. “This type of task is potentially very relevant to anxiety disorders,” Gibb says. “If we could learn more about this circuitry, maybe we could help people with those disorders.”

The researchers also tested rats in four other scenarios in which the choices were easier and less fraught with anxiety.

“By comparing performance in these five tasks, we could look at cost-benefit decision-making versus other types of decision-making, allowing us to reach the conclusion that cost-benefit decision-making is unique,” Friedman says.

Using optogenetics, which allowed them to turn cortical input to the striosomes on or off by shining light on the cortical cells, the researchers found that the circuit connecting the cortex to the striosomes plays a causal role in influencing decisions in the approach-avoidance task, but none at all in other types of decision-making.

When the researchers shut off input to the striosomes from the cortex, they found that the rats began choosing the high-risk, high-reward option as much as 20 percent more often than they had previously chosen it. If the researchers stimulated input to the striosomes, the rats began choosing the high-cost, high-reward option less often.

Paul Glimcher, a professor of physiology and neuroscience at New York University, describes the study as a “masterpiece” and says he is particularly impressed by the use of a new technology, optogenetics, to solve a longstanding mystery. The study also opens up the possibility of studying striosome function in other types of decision-making, he adds.

“This cracks the 20-year puzzle that [Graybiel] wrote — what do the striosomes do?” says Glimcher, who was not part of the research team. “In 10 years we will have a much more complete picture, of which this paper is the foundational stone. She has demonstrated that we can answer this question, and answered it in one area. A lot of labs will now take this up and resolve it in other areas.”

Emotional gatekeeper

The findings suggest that the striatum, and the striosomes in particular, may act as a gatekeeper that absorbs sensory and emotional information coming from the cortex and integrates it to produce a decision on how to react, the researchers say.

That gatekeeper circuit also appears to include a part of the midbrain called the substantia nigra, which has dopamine-containing cells that play an important role in motivation and movement. The researchers believe that when activated by input from the striosomes, these substantia nigra cells produce a long-term effect on an animal or human patient’s decision-making attitudes.

“We would so like to find a way to use these findings to relieve anxiety disorder, and other disorders in which mood and emotion are affected,” Graybiel says. “That kind of work has a real priority to it.”

In addition to pursuing possible treatments for anxiety disorders, the researchers are now trying to better understand the role of the dopamine-containing substantia nigra cells in this circuit, which plays a critical role in Parkinson’s disease and may also be involved in related disorders.

The research was funded by the National Institute of Mental Health, the CHDI Foundation, the Defense Advanced Research Projects Agency, the U.S. Army Research Office, the Bachmann-Strauss Dystonia and Parkinson Foundation, and the William N. and Bernice E. Bumpus Foundation.

In one aspect of vision, computers catch up to primate brain

For decades, neuroscientists have been trying to design computer networks that can mimic visual skills such as recognizing objects, which the human brain does very accurately and quickly.

Until now, no computer model has been able to match the primate brain at visual object recognition during a brief glance. However, a new study from MIT neuroscientists has found that one of the latest generation of these so-called “deep neural networks” matches the primate brain.

Because these networks are based on neuroscientists’ current understanding of how the brain performs object recognition, the success of the latest networks suggests that neuroscientists have a fairly accurate grasp of how object recognition works, says James DiCarlo, a professor of neuroscience and head of MIT’s Department of Brain and Cognitive Sciences and the senior author of a paper describing the study in the Dec. 11 issue of the journal PLoS Computational Biology.

“The fact that the models predict the neural responses and the distances of objects in neural population space shows that these models encapsulate our current best understanding as to what is going on in this previously mysterious portion of the brain,” says DiCarlo, who is also a member of MIT’s McGovern Institute for Brain Research.

This improved understanding of how the primate brain works could lead to better artificial intelligence and, someday, new ways to repair visual dysfunction, adds Charles Cadieu, a postdoc at the McGovern Institute and the paper’s lead author.

Other authors are graduate students Ha Hong and Diego Ardila, research scientist Daniel Yamins, former MIT graduate student Nicolas Pinto, former MIT undergraduate Ethan Solomon, and research affiliate Najib Majaj.

Inspired by the brain

Scientists began building neural networks in the 1970s in hopes of mimicking the brain’s ability to process visual information, recognize speech, and understand language.

For vision-based neural networks, scientists were inspired by the hierarchical representation of visual information in the brain. As visual input flows from the retina into primary visual cortex and then inferotemporal (IT) cortex, it is processed at each level and becomes more specific until objects can be identified.

To mimic this, neural network designers create several layers of computation in their models. Each level performs a mathematical operation, such as a dot product. At each level, the representations of the visual object become more and more complex, and unneeded information, such as an object’s location or movement, is cast aside.

“Each individual element is typically a very simple mathematical expression,” Cadieu says. “But when you combine thousands and millions of these things together, you get very complicated transformations from the raw signals into representations that are very good for object recognition.”
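
A minimal sketch of that layered computation, assuming random weights in place of a trained model: each stage is a dot product with a weight matrix followed by a simple nonlinearity, and stacking stages re-encodes the image step by step.

```python
# Minimal sketch of the layered computation described above: each stage is a
# dot product with a weight matrix followed by a simple nonlinearity, and the
# stages are stacked. Random weights stand in for a trained network.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights):
    return np.maximum(0.0, weights @ x)     # linear step + rectification

def deep_network(image, layer_sizes):
    x = image.ravel()                       # raw pixel vector
    for n_out in layer_sizes:
        w = rng.normal(0.0, 1.0 / np.sqrt(x.size), size=(n_out, x.size))
        x = layer(x, w)                     # each stage re-encodes the last
    return x                                # final "IT-like" feature vector

features = deep_network(rng.random((32, 32)), layer_sizes=[256, 128, 64])
print(features.shape)                       # (64,)
```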

For this study, the researchers first measured the brain’s object recognition ability. Led by Hong and Majaj, they implanted arrays of electrodes in the IT cortex as well as in area V4, a part of the visual system that feeds into the IT cortex. This allowed them to see the neural representation — the population of neurons that respond — for every object that the animals looked at.

The researchers could then compare this with representations created by the deep neural networks, which consist of a matrix of numbers produced by each computational element in the system. Each image produces a different array of numbers. The accuracy of the model is determined by whether it groups similar objects into similar clusters within the representation.

“Through each of these computational transformations, through each of these layers of networks, certain objects or images get closer together, while others get further apart,” Cadieu says.
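
One standard way to score that kind of agreement, sketched below under the assumption that both systems saw the same image set, is to build a matrix of pairwise distances between responses in each system and correlate the two matrices. This representational-similarity style of analysis is an illustration, not necessarily the paper’s exact scoring method.

```python
# Sketch of comparing representations: compute pairwise distances between the
# responses each system gives to the same images, then correlate the two sets
# of distances. This representational-similarity style of scoring is an
# illustration, not necessarily the paper's exact method.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def representation_agreement(model_features, neural_responses):
    """Both inputs: (n_images, n_units) responses to the same image set."""
    model_d = pdist(model_features, metric="correlation")    # image-pair distances
    neural_d = pdist(neural_responses, metric="correlation")
    rho, _ = spearmanr(model_d, neural_d)
    return rho   # higher = model groups images more like the brain does
```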

The best-performing network, developed by researchers at New York University, classified objects as well as the macaque brain did.

More processing power

Two major factors account for the recent success of this type of neural network, Cadieu says. One is a significant leap in the availability of computational processing power. Researchers have been taking advantage of graphical processing units (GPUs), which are small chips designed for high performance in processing the huge amount of visual content needed for video games. “That is allowing people to push the envelope in terms of computation by buying these relatively inexpensive graphics cards,” Cadieu says.

The second factor is that researchers now have access to large datasets to feed the algorithms to “train” them. These datasets contain millions of images, and each one is annotated by humans with different levels of identification. For example, a photo of a dog would be labeled as animal, canine, domesticated dog, and the breed of dog.

At first, neural networks are not good at identifying these images, but as they see more and more images, and find out when they were wrong, they refine their calculations until they become much more accurate at identifying objects.
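
That error-driven refinement can be illustrated with a toy update loop: show labeled examples, measure the error, and nudge the weights to reduce it. A single linear layer with squared-error updates and a synthetic labeling rule stands in for a full deep network trained on annotated photos.

```python
# Toy version of the training process described above: present labeled
# examples, measure the error, and nudge the weights to shrink it. A single
# linear layer with a synthetic labeling rule stands in for a full network.
import numpy as np

rng = np.random.default_rng(1)
n_features, n_classes = 100, 5
weights = np.zeros((n_classes, n_features))

def train_step(x, label, lr=0.01):
    global weights
    scores = weights @ x
    target = np.eye(n_classes)[label]                 # one-hot "annotation"
    error = scores - target
    weights -= lr * np.outer(error, x)                # squared-error gradient step
    return float((error ** 2).sum())

for step in range(2000):
    x = rng.random(n_features)
    label = int(np.argmax(x[:n_classes]))             # synthetic labeling rule
    loss = train_step(x, label)
# The squared error falls as more labeled examples are seen.
```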

Cadieu says that researchers don’t know much about what exactly allows these networks to distinguish different objects.

“That’s a pro and a con,” he says. “It’s very good in that we don’t have to really know what the things are that distinguish those objects. But the big con is that it’s very hard to inspect those networks, to look inside and see what they really did. Now that people can see that these things are working well, they’ll work more to understand what’s happening inside of them.”

DiCarlo’s lab now plans to try to generate models that can mimic other aspects of visual processing, including tracking motion and recognizing three-dimensional forms. They also hope to create models that include the feedback projections seen in the human visual system. Current networks only model the “feedforward” projections from the retina to the IT cortex, but there are 10 times as many connections that go from IT cortex back to the rest of the system.

This work was supported by the National Eye Institute, the National Science Foundation, and the Defense Advanced Research Projects Agency.

Fifteen MIT scientists receive NIH BRAIN Initiative grants

Today, the National Institutes of Health (NIH) announced its first round of BRAIN Initiative award recipients. Six teams and 15 researchers from the Massachusetts Institute of Technology were among the recipients.

Mriganka Sur, principal investigator at the Picower Institute for Learning and Memory and the Paul E. Newton Professor of Neuroscience in MIT’s Department of Brain and Cognitive Sciences (BCS), leads a team studying cortical circuits and information flow during memory-guided perceptual decisions. Co-principal investigators include Emery Brown, BCS professor of computational neuroscience and the Edward Hood Taplin Professor of Medical Engineering; Kwanghun Chung, Picower Institute principal investigator and assistant professor in the Department of Chemical Engineering and the Institute for Medical Engineering and Science (IMES); and Ian Wickersham, research scientist at the McGovern Institute for Brain Research and head of MIT’s Genetic Neuroengineering Group.

Elly Nedivi, Picower Institute principal investigator and professor in BCS and the Department of Biology, leads a team studying new methods for high-speed monitoring of sensory-driven synaptic activity across all inputs to single living neurons in the context of the intact cerebral cortex. Her co-principal investigator is Peter So, professor of mechanical and biological engineering, and director of the MIT Laser Biomedical Research Center.

Ian Wickersham will lead a team looking at novel technologies for nontoxic transsynaptic tracing. His co-principal investigators include Robert Desimone, director of the McGovern Institute and the Doris and Don Berkey Professor of Neuroscience in BCS; Li-Huei Tsai, director of the Picower Institute and the Picower Professor of Neuroscience in BCS; and Kay Tye, Picower Institute principal investigator and assistant professor of neuroscience in BCS.

Robert Desimone will lead a team studying vascular interfaces for brain imaging and stimulation. Co-principal investigators include Ed Boyden, associate professor at the MIT Media Lab, the McGovern Institute, and the departments of BCS and Biological Engineering, head of MIT’s Synthetic Neurobiology Group, and co-director of MIT’s Center for Neurobiological Engineering; and Elazer Edelman, the Thomas D. and Virginia W. Cabot Professor of Health Sciences and Technology in IMES and director of the Harvard-MIT Biomedical Engineering Center. Collaborators on this project include Rodolfo Llinas (New York University), George Church (Harvard University), Jan Rabaey (University of California at Berkeley), Pablo Blinder (Tel Aviv University), Eric Leuthardt (Washington University in St. Louis), Michel Maharbiz (UC Berkeley), Jose Carmena (UC Berkeley), Elad Alon (UC Berkeley), Colin Derdeyn (Washington University in St. Louis), Lowell Wood (Bill and Melinda Gates Foundation), Xue Han (Boston University), and Adam Marblestone (MIT).

Ed Boyden will be co-principal investigator with Mark Bathe, associate professor of biological engineering, and Peng Yin of Harvard on a project to study ultra-multiplexed nanoscale in situ proteomics for understanding synapse types.

Alan Jasanoff, associate professor of biological engineering and director of the MIT Center for Neurobiological Engineering, will lead a team looking at calcium sensors for molecular fMRI. Stephen Lippard, the Arthur Amos Noyes Professor of Chemistry, is co-principal investigator.

In addition, Sur and Wickersham also received BRAIN Early Concept Grants for Exploratory Research (EAGER) from the National Science Foundation (NSF). Sur will focus on massive-scale multi-area single neuron recordings to reveal circuits underlying short-term memory. Wickersham, in collaboration with Li-Huei Tsai, Kay Tye, and Robert Desimone, will develop cell-type specific optogenetics in wild-type animals. Additional information about NSF support of the BRAIN initiative can be found at NSF.gov/brain.

The BRAIN Initiative, spearheaded by President Obama in April 2013, challenges the nation’s leading scientists to advance our understanding of the human mind and discover new ways to treat, prevent, and cure neurological disorders like Alzheimer’s, schizophrenia, autism, and traumatic brain injury. The scientific community is charged with accelerating the invention of cutting-edge technologies that can produce dynamic images of complex neural circuits and illuminate the interaction of lightning-fast brain cells. These new capabilities are expected to provide greater insights into how brain function is linked to behavior, learning, memory, and the underlying mechanisms of debilitating disease. BRAIN was launched with approximately $100 million in initial investments from the NIH, the National Science Foundation, and the Defense Advanced Research Projects Agency (DARPA).

BRAIN Initiative scientists are engaged in a challenging and transformative endeavor to explore how our minds instantaneously process, store, and retrieve vast quantities of information. Their discoveries will unlock many of the remaining mysteries inherent in the brain’s billions of neurons and trillions of connections, leading to a deeper understanding of the underlying causes of many neurological and psychiatric conditions. Their findings will enable scientists and doctors to develop the groundbreaking arsenal of tools and technologies required to more effectively treat those suffering from these devastating disorders.

Controlling movement with light

For the first time, MIT neuroscientists have shown they can control muscle movement by applying optogenetics — a technique that allows scientists to control neurons’ electrical impulses with light — to the spinal cords of animals that are awake and alert.

Led by MIT Institute Professor Emilio Bizzi, the researchers studied mice in which a light-sensitive protein that promotes neural activity was inserted into a subset of spinal neurons. When the researchers shone blue light on the animals’ spinal cords, their hind legs were completely but reversibly immobilized. The findings, described in the June 25 issue of PLoS One, offer a new approach to studying the complex spinal circuits that coordinate movement and sensory processing, the researchers say.

In this study, Bizzi and Vittorio Caggiano, a postdoc at MIT’s McGovern Institute for Brain Research, used optogenetics to explore the function of inhibitory interneurons, which form circuits with many other neurons in the spinal cord. These circuits execute commands from the brain, with additional input from sensory information from the limbs.

Previously, neuroscientists have used electrical stimulation or pharmacological intervention to control neurons’ activity and try to tease out their function. Those approaches have revealed a great deal of information about spinal control, but they do not offer precise enough control to study specific subsets of neurons.

Optogenetics, on the other hand, allows scientists to control specific types of neurons by genetically programming them to express light-sensitive proteins. These proteins, called opsins, act as ion channels or pumps that regulate neurons’ electrical activity. Some opsins suppress activity when light shines on them, while others stimulate it.

“With optogenetics, you are attacking a system of cells that have certain characteristics similar to each other. It’s a big shift in terms of our ability to understand how the system works,” says Bizzi, who is a member of MIT’s McGovern Institute.

Muscle control

Inhibitory neurons in the spinal cord suppress muscle contractions, which is critical for maintaining balance and for coordinating movement. For example, when you raise an apple to your mouth, the biceps contract while the triceps relax. Inhibitory neurons are also thought to be involved in the state of muscle inhibition that occurs during the rapid eye movement (REM) stage of sleep.

To study the function of inhibitory neurons in more detail, the researchers used mice developed by Guoping Feng, the Poitras Professor of Neuroscience at MIT, in which all inhibitory spinal neurons were engineered to express an opsin called channelrhodopsin 2. This opsin stimulates neural activity when exposed to blue light. They then shone light at different points along the spine to observe the effects of neuron activation.

When inhibitory neurons in a small section of the thoracic spine were activated in freely moving mice, all hind-leg movement ceased. This suggests that inhibitory neurons in the thoracic spine relay the inhibition all the way to the end of the spine, Caggiano says. The researchers also found that activating inhibitory neurons had no effect on the transmission of sensory information from the limbs to the brain, or on normal reflexes.

“The spinal location where we found this complete suppression was completely new,” Caggiano says. “It has not been shown by any other scientists that there is this front-to-back suppression that affects only motor behavior without affecting sensory behavior.”

“It’s a compelling use of optogenetics that raises a lot of very interesting questions,” says Simon Giszter, a professor of neurobiology and anatomy at Drexel University who was not part of the research team. Among those questions is whether this mechanism behaves as a global “kill switch,” or if the inhibitory neurons form modules that allow for more selective suppression of movement patterns.

Now that they have demonstrated the usefulness of optogenetics for this type of study, the MIT team hopes to explore the roles of other types of spinal cord neurons. They also plan to investigate how input from the brain influences these spinal circuits.

“There’s huge interest in trying to extend these studies and dissect these circuits because we tackled only the inhibitory system in a very global way,” Caggiano says. “Further studies will highlight the contribution of single populations of neurons in the spinal cord for the control of limbs and control of movement.”

The research was funded by the Human Frontier Science Program and the National Science Foundation. Mriganka Sur, the Paul E. Newton Professor of Neuroscience at MIT, is also an author of the paper.