Ed Boyden wins 2016 Breakthrough Prize in Life Sciences

MIT researchers took home several awards last night at the 2016 Breakthrough Prize ceremony at NASA’s Ames Research Center in Mountain View, California.

Edward Boyden, an associate professor of media arts and sciences, biological engineering, and brain and cognitive sciences, was one of five scientists honored with the Breakthrough Prize in Life Sciences, given for “transformative advances toward understanding living systems and extending human life.” The award carries a $3 million prize.

MIT physicists also contributed to a project that won the Breakthrough Prize in Fundamental Physics. That prize went to five experiments investigating the oscillation of subatomic particles known as neutrinos. More than 1,300 contributing physicists will share in the recognition for their work, according to the award announcement. Those physicists include MIT associate professor of physics Joseph Formaggio and his team, as well as MIT assistant professor of physics Lindley Winslow.

Larry Guth, an MIT professor of mathematics, was honored with the New Horizons in Mathematics Prize, and Liang Fu, an assistant professor of physics, was honored with the New Horizons in Physics Prize. Both awards recognize promising junior researchers who have already produced important work in their fields.

“By challenging conventional thinking and expanding knowledge over the long term, scientists can solve the biggest problems of our time,” said Mark Zuckerberg, chairman and CEO of Facebook, and one of the prizes’ founders. “The Breakthrough Prize honors achievements in science and math so we can encourage more pioneering research and celebrate scientists as the heroes they truly are.”

Optogenetics

Boyden was honored for the development and implementation of optogenetics, a technique in which scientists can control neurons by shining light on them. Karl Deisseroth, a Stanford University professor who worked with Boyden to pioneer the technique, was also honored with one of the life sciences prizes.

Optogenetics relies on light-sensitive proteins, originally isolated from bacteria and algae. About 10 years ago, Boyden and Deisseroth began engineering neurons to express these proteins, allowing researchers to selectively stimulate or silence the neurons with pulses of light. More recently, Boyden has developed additional proteins that are even more sensitive to light and can respond to different colors.

Scientists around the world have used optogenetics to reveal the brain circuitry underlying normal neural function as well as neurological disorders such as autism, obsessive-compulsive disorder, and depression.

Boyden is a member of the MIT Media Lab and MIT’s McGovern Institute for Brain Research.

Neutrino oscillations

The Breakthrough Prize in Fundamental Physics was awarded to five research projects investigating the nature of neutrinos: Daya Bay (China); KamLAND (Japan); K2K/T2K (Japan); Sudbury Neutrino Observatory (Canada); and Super-Kamiokande (Japan). Researchers with these experiments were recognized “for the fundamental discovery of neutrino oscillations, revealing a new frontier beyond, and possibly far beyond, the standard model of particle physics.”

Formaggio and his team at MIT have been collaborating on the Sudbury Neutrino Observatory (SNO) project since 2005. Research at the observatory, 2 kilometers underground in a mine near Sudbury, Ontario, demonstrated that neutrinos change their type — or “flavor” — on their way to Earth from the sun.

Winslow has been a collaborator on KamLAND, located in a mine in Japan, since 2001. Using antineutrinos from nuclear reactors, this experiment demonstrated that the change in flavor was energy-dependent. The combination of these results solved the solar neutrino puzzle and proved that neutrinos have mass.
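
For reference, the textbook two-flavor oscillation formula ties these observations together (a standard result, not specific to either experiment): the probability that a neutrino created with flavor $\alpha$ is detected as flavor $\beta$ after traveling a distance $L$ with energy $E$ is

$$P(\nu_\alpha \to \nu_\beta) = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right),$$

where $\theta$ is the mixing angle and $\Delta m^2$ is the difference of the squared neutrino masses (in natural units). Oscillation requires $\Delta m^2 \neq 0$, which is why observing flavor change proves that neutrinos have mass, and the dependence on $L/E$ is exactly the energy dependence KamLAND measured.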

The MIT SNO group has participated heavily in the analysis of neutrino data, particularly during that experiment’s final measurement phase. The MIT KamLAND group is involved with the next phase, KamLAND-Zen, which is searching for a rare nuclear process (neutrinoless double-beta decay) that, if observed, would show that neutrinos are their own antiparticles.

Reaching new horizons

Guth, who will receive a $100,000 prize, was honored for his “ingenious and surprising solutions to longstanding open problems in symplectic geometry, Riemannian geometry, harmonic analysis, and combinatorial geometry.”

Guth’s work at MIT focuses on combinatorics, or the study of discrete structures, and how sets of lines intersect each other in space. He also works in the area of harmonic analysis, studying how sound waves interact with each other.

Guth’s father, MIT physicist Alan Guth, won the inaugural Breakthrough Prize in Fundamental Physics in 2012.

Fu will share a New Horizons in Physics Prize with two other researchers: B. Andrei Bernevig of Princeton University and Xiao-Liang Qi of Stanford University. The physicists were honored for their “outstanding contributions to condensed matter physics, especially involving the use of topology to understand new states of matter.”

Fu works on theories of topological insulators — a new class of materials whose surfaces can freely conduct electrons even though their interiors are electrical insulators — and topological superconductors. Such materials may provide insight into quantum physics and have possible applications in creating transistors based on the spin of particles rather than their charge.

Yesterday’s prize ceremony was hosted by producer/actor/director Seth MacFarlane; awards were presented by the prize sponsors and by celebrities including actors Russell Crowe, Hilary Swank, and Lily Collins. The Breakthrough Prizes were founded by Sergey Brin and Anne Wojcicki, Jack Ma and Cathy Zhang, Yuri and Julia Milner, and Mark Zuckerberg and Priscilla Chan.

“Breakthrough Prize laureates are making fundamental discoveries about the universe, life, and the mind,” Yuri Milner said. “These fields of investigation are advancing at an exponential pace, yet the biggest questions remain to be answered.”

Engineers design magnetic cell sensors

MIT engineers have designed magnetic protein nanoparticles that can be used to track cells or to monitor interactions within cells. The particles, described today in Nature Communications, are an enhanced version of a naturally occurring, weakly magnetic protein called ferritin.

“Ferritin, which is as close as biology has given us to a naturally magnetic protein nanoparticle, is really not that magnetic. That’s what this paper is addressing,” says Alan Jasanoff, an MIT professor of biological engineering and the paper’s senior author. “We used the tools of protein engineering to try to boost the magnetic characteristics of this protein.”

The new “hypermagnetic” protein nanoparticles can be produced within cells, allowing the cells to be imaged or sorted using magnetic techniques. This eliminates the need to tag cells with synthetic particles and allows the particles to sense other molecules inside cells.

The paper’s lead author is former MIT graduate student Yuri Matsumoto. Other authors are graduate student Ritchie Chen and Polina Anikeeva, an assistant professor of materials science and engineering.

Magnetic pull

Previous research has yielded synthetic magnetic particles for imaging or tracking cells, but it can be difficult to deliver these particles into the target cells.

In the new study, Jasanoff and colleagues set out to create magnetic particles that are genetically encoded. With this approach, the researchers deliver a gene for a magnetic protein into the target cells, prompting them to start producing the protein on their own.

“Rather than actually making a nanoparticle in the lab and attaching it to cells or injecting it into cells, all we have to do is introduce a gene that encodes this protein,” says Jasanoff, who is also an associate member of MIT’s McGovern Institute for Brain Research.

As a starting point, the researchers used ferritin, which carries a supply of iron atoms that every cell needs as components of metabolic enzymes. In hopes of creating a more magnetic version of ferritin, the researchers created about 10 million variants and tested them in yeast cells.

After repeated rounds of screening, the researchers used one of the most promising candidates to create a magnetic sensor consisting of enhanced ferritin modified with a protein tag that binds with another protein called streptavidin. This allowed them to detect whether streptavidin was present in yeast cells; however, this approach could also be tailored to target other interactions.
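
The enrichment logic behind such a screen can be sketched in a few lines of Python (a toy illustration only; the article does not describe the actual selection protocol, so the library size, the lognormal distribution of magnetism, and the top-1-percent cutoff below are all assumptions):

```python
import numpy as np

# Toy sketch of iterative magnetic enrichment: each round keeps only the
# most magnetic fraction of the variant library, so repeated rounds rapidly
# concentrate the rare, strongly magnetic variants. All numbers here are
# hypothetical stand-ins, not measurements from the study.

rng = np.random.default_rng(1)
library = rng.lognormal(mean=0.0, sigma=1.0, size=10_000_000)  # ~10M variants

for round_num in range(1, 4):
    cutoff = np.quantile(library, 0.99)   # retain the top 1% each round
    library = library[library >= cutoff]
    print(f"round {round_num}: {library.size} variants remain, "
          f"mean magnetism {library.mean():.2f}")
```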

The mutated protein appears to successfully overcome one of the key shortcomings of natural ferritin, which is that it is difficult to load with iron, says Alan Koretsky, a senior investigator at the National Institute of Neurological Disorders and Stroke.

“To be able to make more magnetic indicators for MRI would be fabulous, and this is an important step toward making that type of indicator more robust,” says Koretsky, who was not part of the research team.

Sensing cell signals

Because the engineered ferritins are genetically encoded, cells can be programmed to manufacture them only under certain circumstances, such as when the cell receives some kind of external signal, when it divides, or when it differentiates into another type of cell. Researchers could track this activity using magnetic resonance imaging (MRI), potentially allowing them to observe communication between neurons, activation of immune cells, or stem cell differentiation, among other phenomena.

Such sensors could also be used to monitor the effectiveness of stem cell therapies, Jasanoff says.

“As stem cell therapies are developed, it’s going to be necessary to have noninvasive tools that enable you to measure them,” he says. Without this kind of monitoring, it would be difficult to determine what effect the treatment is having, or why it might not be working.

The researchers are now working on adapting the magnetic sensors to work in mammalian cells. They are also trying to make the engineered ferritin even more strongly magnetic.

To locate objects, brain relies on memory

Imagine you are looking for your wallet on a cluttered desk. As you scan the area, you hold in your mind a mental picture of what your wallet looks like.

MIT neuroscientists have now identified a brain region that stores this type of visual representation during a search. The researchers also found that this region sends signals to the parts of the brain that control eye movements, telling individuals where to look next.

This region, known as the ventral pre-arcuate (VPA), is critical for what the researchers call “feature attention,” which allows the brain to seek objects based on their specific properties. Most previous studies of how the brain pays attention have investigated a different type of attention known as spatial attention — that is, what happens when the brain focuses on a certain location.

“The way that people go about their lives most of the time, they don’t know where things are in advance. They’re paying attention to things based on their features,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research. “In the morning you’re trying to find your car keys so you can go to work. How do you do that? You don’t look at every pixel in your house. You have to use your knowledge of what your car keys look like.”

Desimone, also the Doris and Don Berkey Professor in MIT’s Department of Brain and Cognitive Sciences, is the senior author of a paper describing the findings in the Oct. 29 online edition of Neuron. The paper’s lead author is Narcisse Bichot, a research scientist at the McGovern Institute. Other authors are Matthew Heard, a former research technician, and Ellen DeGennaro, a graduate student in the Harvard-MIT Division of Health Sciences and Technology.

Visual targets

The researchers focused on the VPA in part because of its extensive connections with the brain’s frontal eye fields, which control eye movements. Located in the prefrontal cortex, the VPA has previously been linked with working memory — a cognitive ability that helps us to gather and coordinate information while performing tasks such as solving a math problem or participating in a conversation.

“There have been a lot of studies showing that this region of the cortex is heavily involved in working memory,” Bichot says. “If you have to remember something, cells in these areas are involved in holding the memory of that object for the purpose of identifying it later.”

In the new study, the researchers found that the VPA also holds what they call an “attentional template” — that is, a memory of the item being sought.

In this study, the researchers first showed monkeys a target object, such as a human face, a banana, or a butterfly. After a delay, they showed an array of objects that included the target. When the animal fixed its gaze on the target object, it received a reward. “The animals can look around as long as they want until they find what they’re looking for,” Bichot says.

As the animals performed the task, the researchers recorded electrical activity from neurons in the VPA. Each object produced a distinctive pattern of neural activity, and the neurons encoding a representation of the target object stayed active throughout the search, firing even more strongly once a match was found.

“When the target object finally enters their receptive fields, they give enhanced responses,” Desimone says. “That’s the signal that the thing they’re looking for is actually there.”

About 20 to 30 milliseconds after the VPA cells respond to the target object, they send a signal to the frontal eye fields, which direct the eyes to lock onto the target.

When the researchers blocked VPA activity, they found that although the animals could still move their eyes around in search of the target object, they could not find it. “Presumably it’s because they’ve lost this mechanism for telling them where the likely target is,” Desimone says.

Focused attention

The researchers believe the VPA may be the equivalent in nonhuman primates of a human brain region called the inferior frontal junction (IFJ). Last year Desimone and postdoc Daniel Baldauf found that the IFJ holds onto the idea of a target object — in that study, either faces or houses — and then directs the correct part of the brain to look for the target.

The researchers are now studying how the VPA interacts with a nearby region called the VPS, which appears to be more important for tasks in which attention must be switched quickly from one object to another. They are also performing additional studies of human attention, in hopes of learning more about attention deficit hyperactivity disorder (ADHD) and other attention disorders.

“There’s really an opportunity there to understand something important about the role of the prefrontal cortex in both normal behavior and in brain disorders,” Desimone says.

How the brain keeps time

Keeping track of time is critical for many tasks, such as playing the piano, swinging a tennis racket, or holding a conversation. Neuroscientists at MIT and Columbia University have now figured out how neurons in one part of the brain measure time intervals and accurately reproduce them.

The researchers found that the lateral intraparietal cortex (LIP), which plays a role in sensorimotor function, represents elapsed time as animals measure and then reproduce a time interval. They also demonstrated how the firing patterns of a population of neurons in the LIP could coordinate the sensory and motor aspects of timing.

LIP is likely just one node in a circuit that measures time, says Mehrdad Jazayeri, the lead author of a paper describing the work in the Oct. 8 issue of Current Biology.

“I would not conclude that the parietal cortex is the timer,” says Jazayeri, an assistant professor of brain and cognitive sciences at MIT and a member of the McGovern Institute for Brain Research. “What we are doing is discovering computational principles that explain how neurons’ firing rates evolve with time, and how that relates to the animals’ behavior in single trials. We can explain mathematically what’s going on.”

The paper’s senior author is Michael Shadlen, a professor of neuroscience and member of the Mortimer B. Zuckerman Mind Brain Behavior Institute at Columbia University.

As time goes by

Jazayeri, who joined the MIT faculty in 2013, began studying timing in the brain several years ago while a postdoc at the University of Washington. He began by testing humans’ ability to measure and reproduce time using a task called “ready, set, go.” In this experiment, the subject measures the time between two flashes (“ready” and “set”) and then presses a button (“go”) at the appropriate time — that is, after the same amount of time that separated the “ready” and “set.”

From these studies, he discovered that people do not simply measure an interval and then reproduce it. Rather, after measuring an interval they combine that measurement, which is imprecise, with their prior knowledge of what the interval could have been. This prior knowledge, which builds up as they repeat the task many times, allows people to reproduce the interval more accurately.

“When people reproduce time, they don’t seem to use a timer,” Jazayeri says. “It’s an active act of probabilistic inference that goes on.”
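
That account can be made concrete with a minimal sketch, assuming Gaussian measurement noise and a Gaussian prior over intervals (our simplification; all numbers below are hypothetical):

```python
# Bayes-least-squares reproduction of a time interval: the posterior mean is
# a precision-weighted average of the noisy measurement and the prior mean.
# Because the weight on the measurement is less than 1, estimates are biased
# toward the prior -- the signature of probabilistic inference.

PRIOR_MEAN, PRIOR_SD = 0.8, 0.15   # seconds; hypothetical prior over intervals
NOISE_SD = 0.10                    # seconds; hypothetical measurement noise

def reproduce(measured: float) -> float:
    w = PRIOR_SD**2 / (PRIOR_SD**2 + NOISE_SD**2)  # weight on the measurement
    return w * measured + (1 - w) * PRIOR_MEAN

for t in (0.6, 0.8, 1.0):
    print(f"measured {t:.2f} s -> reproduced {reproduce(t):.2f} s")
```

In this sketch a 0.6-second interval is reproduced at about 0.66 seconds and a 1.0-second interval at about 0.94 seconds: short intervals come out slightly long and long intervals slightly short, exactly the pull toward prior knowledge described above.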

To find out what happens in the brain during this process, Jazayeri recorded neuronal activity in the LIP of monkeys trained to perform the same task. In these recordings, he found distinctive patterns in the measurement phase (the interval between “ready” and “set”), and the production phase (the interval between “set” and “go”).

During the measurement phase, neural activity increases, but not linearly. Instead, activity rises steeply at first and gradually flattens as time goes by, until the “set” signal is given. This matters because the slope at the end of the measurement interval predicts the slope of activity in the production phase.

When the interval is short, the slope during the second phase is steep. This allows the activity to increase quickly so that the animal can produce a short interval. When the interval is longer, the slope is gentler and it takes longer to reach the time of response.

“As time goes by during the measurement, the animal knows that the interval that it has to produce is longer and therefore requires a shallower slope,” Jazayeri says.

Using this data, the researchers could correctly predict, based on the slope at the end of the measurement phase, when the animal would produce the “go” signal.
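
One way to see why the terminal slope carries this predictive power is a toy ramp-to-threshold model (our illustration, not the authors' analysis): if production-phase activity climbs linearly to a fixed threshold at a rate inversely proportional to the measured interval, the time to reach threshold reproduces that interval.

```python
# Toy ramp-to-threshold model: the "go" response fires when activity hits a
# fixed threshold, so a shallower slope (set by a longer measured interval)
# takes proportionally longer to get there. Units are arbitrary.

THRESHOLD = 1.0

def go_time(measured_interval: float, dt: float = 0.001) -> float:
    slope = THRESHOLD / measured_interval   # shallower for longer intervals
    rate, t = 0.0, 0.0
    while rate < THRESHOLD:
        rate += slope * dt
        t += dt
    return t

for interval in (0.5, 0.8, 1.2):
    print(f"measured {interval:.1f} s -> 'go' at {go_time(interval):.2f} s")
```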

“Previous research has shown that some neurons exhibit a ramping up of their firing rate that culminates with the onset of a timed motor response. This research is exciting because it provides the first hint as to what may control the slope of this ‘neural ramping,’ specifically that the slope of the ramp may be determined by the firing rate at the beginning of the timed interval,” says Dean Buonomano, a professor of behavioral neuroscience at the University of California at Los Angeles who was not involved in the research.

“A highly distributed problem”

All cognitive and motor functions rely on time to some extent. While LIP represents time during interval reproduction, Jazayeri believes that tracking time occurs throughout brain circuits that connect subcortical structures such as the thalamus, basal ganglia, and cerebellum to the cortex.

“Timing is going to be a highly distributed problem for the brain. There’s not going to be one place in the brain that does timing,” he says.

His lab is now pursuing several questions raised by this study. In one follow-up, the researchers are investigating how animals’ behavior and brain activity change based on their expectations for how long the first interval will last.

In another experiment, they are training animals to reproduce an interval that they get to measure twice. Preliminary results suggest that during the second interval, the animals refine the measurement they took during the first interval, allowing them to perform better than when they make just one measurement.

How the brain recognizes objects

When the eyes are open, visual information flows from the retina through the optic nerve and into the brain, which assembles this raw information into objects and scenes.

Scientists have previously hypothesized that objects are distinguished in the inferior temporal (IT) cortex, which is near the end of this flow of information, also called the ventral stream. A new study from MIT neuroscientists offers evidence that this is indeed the case.

Using data from both humans and nonhuman primates, the researchers found that neuron firing patterns in the IT cortex correlate strongly with success in object-recognition tasks.

“While we knew from prior work that neuronal population activity in inferior temporal cortex was likely to underlie visual object recognition, we did not have a predictive map that could accurately link that neural activity to object perception and behavior. The results from this study demonstrate that a particular map from particular aspects of IT population activity to behavior is highly accurate over all types of objects that were tested,” says James DiCarlo, head of MIT’s Department of Brain and Cognitive Sciences, a member of the McGovern Institute for Brain Research, and senior author of the study, which appears in the Journal of Neuroscience.

The paper’s lead author is Najib Majaj, a former postdoc in DiCarlo’s lab who is now at New York University. Other authors are former MIT graduate student Ha Hong and former MIT undergraduate Ethan Solomon.

Distinguishing objects

Earlier stops along the ventral stream are believed to process basic visual elements such as brightness and orientation. More complex functions take place farther along the stream, with object recognition believed to occur in the IT cortex.

To investigate this theory, the researchers first asked human subjects to perform 64 object-recognition tasks. Some of these tasks were “trivially easy,” Majaj says, such as distinguishing an apple from a car. Others — such as discriminating between two very similar faces — were so difficult that the subjects were correct only about 50 percent of the time.

After measuring human performance on these tasks, the researchers then showed the same set of nearly 6,000 images to nonhuman primates as they recorded electrical activity in neurons of the inferior temporal cortex and another visual region known as V4.

Each of the 168 IT neurons and 128 V4 neurons fired in response to some objects but not others, creating a firing pattern that served as a distinctive signature for each object. By comparing these signatures, the researchers could analyze whether they correlated to humans’ ability to distinguish between two objects.
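
As a rough illustration of the idea (not the study's actual decoding pipeline), one can summarize each object by a vector of trial-averaged firing rates across the population and treat the distance between two vectors as a stand-in for how discriminable the pair is; the object names and numbers below are hypothetical:

```python
import numpy as np

# Toy "neural signature" comparison: similar objects yield nearby population
# vectors (hard to tell apart), dissimilar objects yield distant ones (easy).

rng = np.random.default_rng(0)
N_NEURONS = 168  # matches the number of IT sites recorded in the study

apple, car, face_a = (rng.normal(size=N_NEURONS) for _ in range(3))
face_b = face_a + 0.1 * rng.normal(size=N_NEURONS)  # a very similar face pair

def separability(a: np.ndarray, b: np.ndarray) -> float:
    """Per-neuron RMS difference between two population signatures."""
    return float(np.linalg.norm(a - b) / np.sqrt(a.size))

print("apple vs car:", round(separability(apple, car), 2))      # easy pair
print("face A vs B :", round(separability(face_a, face_b), 2))  # hard pair
```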

The researchers found that the firing patterns of IT neurons, but not V4 neurons, accurately predicted the human performance they had measured. That is, when humans had trouble distinguishing two objects, the neural signatures for those objects were so similar as to be nearly indistinguishable, while for pairs that humans discriminated easily, the patterns were very different.

“On the easy stimuli, IT did as well as humans, and on the difficult stimuli, IT also failed,” Majaj says. “We had a nice correlation between behavior and neural responses.”

The findings support the hypothesis that patterns of neural activity in the IT cortex can encode object representations detailed enough to allow the brain to distinguish different objects, the researchers say.

Nikolaus Kriegeskorte, a principal investigator at the Medical Research Council Cognition and Brain Sciences Unit in Cambridge, U.K., agrees that the study offers “crucial evidence supporting the idea that inferior temporal cortex contains the neuronal representations underlying human visual object recognition.”

“This study is exemplary for its original and rigorous method of establishing links between brain representations and human behavioral performance,” adds Kriegeskorte, who was not part of the research team.

Model performance

The researchers also tested more than 10,000 other possible models for how the brain might encode object representations. These models varied based on location in the brain, the number of neurons required, and the time window for neural activity.

Some of these models, including some that relied on V4, were eliminated because they performed better than humans on some tasks and worse on others.

“We wanted the performance of the neurons to perfectly match the performance of the humans in terms of the pattern, so the easy tasks would be easy for the neural population and the hard tasks would be hard for the neural population,” Majaj says.

The research team now aims to gather even more data to ask if this model or similar models can predict the behavioral difficulty of object recognition on each and every visual image — an even higher bar than the one tested thus far. That might require additional factors to be included in the model that were not needed in this study, and thus could expose important gaps in scientists’ current understanding of neural representations of objects.

They also plan to expand the model so they can predict responses in IT based on input from earlier parts of the visual stream.

“We can start building a cascade of computational operations that take you from an image on the retina slowly through V1, V2, V4, until we’re able to predict the population in IT,” Majaj says.

Possible new weapon against PTSD

About 8 million Americans suffer from nightmares and flashbacks to a traumatic event. This condition, known as post-traumatic stress disorder (PTSD), is particularly common among soldiers who have been in combat, though it can also be triggered by physical attack or natural disaster.

Studies have shown that trauma victims are more likely to develop PTSD if they have previously experienced chronic stress, and a new study from MIT may explain why. The researchers found that animals that underwent chronic stress prior to a traumatic experience engaged a distinctive brain pathway that encodes traumatic memories more strongly than in unstressed animals.

Blocking this type of memory formation may offer a new way to prevent PTSD, says Ki Goosens, the senior author of the study, which appears in the journal Biological Psychiatry.

“The idea is not to make people amnesic but to reduce the impact of the trauma in the brain by making the traumatic memory more like a ‘normal,’ unintrusive memory,” says Goosens, an assistant professor of neuroscience and investigator in MIT’s McGovern Institute for Brain Research.

The paper’s lead author is former MIT postdoc Michael Baratta.

Strong memories

Goosens’ lab has sought for several years to find out why chronic stress is so strongly linked with PTSD. “It’s a very potent risk factor, so it must have a profound change on the underlying biology of the brain,” she says.

To investigate this, the researchers focused on the amygdala, an almond-sized brain structure whose functions include encoding fearful memories. They found that in animals that developed PTSD symptoms following chronic stress and a traumatic event, serotonin promotes the process of memory consolidation. When the researchers blocked amygdala cells’ interactions with serotonin after trauma, the stressed animals did not develop PTSD symptoms. Blocking serotonin in unstressed animals after trauma had no effect.

“That was really surprising to us,” Baratta says. “It seems like stress is enabling a serotonergic memory consolidation process that is not present in an unstressed animal.”

Memory consolidation is the process by which short-term memories are converted into long-term memories and stored in the brain. Some memories are consolidated more strongly than others. For example, “flashbulb” memories, formed in response to a highly emotional experience, are usually much more vivid and easier to recall than typical memories.

Goosens and colleagues further discovered that chronic stress causes cells in the amygdala to express many more 5-HT2C receptors, which bind to serotonin. Then, when a traumatic experience occurs, this heightened sensitivity to serotonin causes the memory to be encoded more strongly, which Goosens believes contributes to the strong flashbacks that often occur in patients with PTSD.

“It’s strengthening the consolidation process so the memory that’s generated from a traumatic or fearful event is stronger than it would be if you don’t have this serotonergic consolidation engaged,” Baratta says.

“This study is a very nice dissection of the mechanism by which chronic stress seems to activate new pathways not seen in unstressed animals,” says Mireya Nadal-Vicens, medical director of the Center for Anxiety and Traumatic Stress Disorders at Massachusetts General Hospital, who was not part of the research team.

Drug intervention

This memory consolidation process can take hours to days to complete, but once a memory is consolidated, it is very difficult to erase. However, the findings suggest that it may be possible to either prevent traumatic memories from forming so strongly in the first place, or to weaken them after consolidation, using drugs that interfere with serotonin.

“The consolidation process gives us a window in which we can possibly intervene and prevent the development of PTSD. If you give a drug or intervention that can block fear memory consolidation, that’s a great way to think about treating PTSD,” Goosens says. “Such an intervention won’t cause people to forget the experience of the trauma, but they might not have the intrusive memory that is ultimately going to cause them to have nightmares or be afraid of things that are similar to the traumatic experience.”

A drug called agomelatine, which blocks this type of serotonin receptor, is already approved in Europe as an antidepressant.

Such a drug might also be useful to treat patients who already suffer from PTSD. These patients’ traumatic memories are already consolidated, but some research has shown that when memories are recalled, there is a window of time during which they can be altered and reconsolidated. It may be possible to weaken these memories by using serotonin-blocking drugs to interfere with the reconsolidation process, says Goosens, who plans to begin testing that possibility in animals.

The findings also suggest that the antidepressant Prozac and other selective serotonin reuptake inhibitors (SSRIs), which are commonly given to PTSD patients, likely do not help and may actually worsen their symptoms. Prozac enhances the effects of serotonin by prolonging its exposure to brain cells. While this often helps those suffering from depression, “There’s no biological evidence to support the use of SSRIs for PTSD,” Goosens says.

“The consolidation of traumatic memories requires this serotonergic cascade and we want to block it, not enhance it,” she adds. “This study suggests we should rethink the use of SSRIs in PTSD and also be very careful about how they are used, particularly when somebody is recently traumatized and their memories are still being consolidated, or when a patient is undergoing cognitive behavior therapy where they’re recalling the memory of the trauma and the memory is going through the process of reconsolidation.”

Young brains can take on new functions

In 2011, MIT neuroscientist Rebecca Saxe and colleagues reported that in blind adults, brain regions normally dedicated to vision processing instead participate in language tasks such as speech and comprehension. Now, in a study of blind children, Saxe’s lab has found that this transformation occurs very early in life, before the age of 4.

The study, appearing in the Journal of Neuroscience, suggests that the brains of young children are highly plastic, meaning that regions usually specialized for one task can adapt to new and very different roles. The findings also help to define the extent to which this type of remodeling is possible.

“In some circumstances, patches of cortex appear to take on other roles than the ones that they most typically have,” says Saxe, a professor of cognitive neuroscience and an associate member of MIT’s McGovern Institute for Brain Research. “One question that arises from that is, ‘What is the range of possible differences between what a cortical region typically does and what it could possibly do?’”

The paper’s lead author is Marina Bedny, a former MIT postdoc who is now an assistant professor at Johns Hopkins University. MIT graduate student Hilary Richardson is also an author of the paper.

Brain reorganization

The brain’s cortex, which carries out high-level functions such as thought, sensory processing, and initiation of movement, is made of sheets of neurons, each dedicated to a certain role. Within the visual system, located primarily in the occipital lobe, most neurons are tuned to respond only to a very specific aspect of visual input, such as brightness, orientation, or location in the field of view.

“There’s this big fundamental question, which is, ‘How did that organization get there, and to what degree can it be changed?’” Saxe says.

One possibility is that neurons in each patch of cortex have evolved to carry out specific roles, and can do nothing else. At the other extreme is the possibility that any patch of cortex can be recruited to perform any kind of computational task.

“The reality is somewhere in between those two,” Saxe says.

To study the extent to which cortex can change its function, scientists have focused on the visual cortex because they can learn a great deal about it by studying people who were born blind.

A landmark 1996 study of blind people found that their visual regions could participate in a nonvisual task — reading Braille. Some scientists theorized that perhaps the visual cortex is recruited for reading Braille because like vision, it requires discriminating very fine-grained patterns.

However, in their 2011 study, Saxe and Bedny found that the visual cortex of blind adults also responds to spoken language. “That was weird, because processing auditory language doesn’t require the kind of fine-grained spatial discrimination that Braille does,” Saxe says.

She and Bedny hypothesized that auditory language processing may develop in the occipital cortex by piggybacking onto the Braille-reading function. To test that idea, they began studying congenitally blind children, including some who had not learned Braille yet. They reasoned that if their hypothesis were correct, the occipital lobe would be gradually recruited for language processing as the children learned Braille.

However, they found that this was not the case. Instead, children as young as 4 already have language-related activity in the occipital lobe.

“The response of occipital cortex to language is not affected by Braille acquisition,” Saxe says. “It happens before Braille and it doesn’t increase with Braille.”

Language-related occipital activity was similar among all of the 19 blind children, who ranged in age from 4 to 17, suggesting that the entire process of occipital recruitment for language processing takes place before the age of 4, Saxe says. Bedny and Saxe have previously shown that this transition occurs only in people blind from birth, suggesting that there is an early critical period after which the cortex loses much of its plasticity.

The new study represents a huge step forward in understanding how the occipital cortex can take on new functions, says Ione Fine, an associate professor of psychology at the University of Washington.

“One thing that has been missing is an understanding of the developmental timeline,” says Fine, who was not involved in the research. “The insight here is that you get plasticity for language separate from plasticity for Braille and separate from plasticity for auditory processing.”

Language skills

The findings raise the question of how the extra language-processing centers in the occipital lobe affect language skills.

“This is a question we’ve always wondered about,” Saxe says. “Does it mean you’re better at those functions because you have more of your cortex doing it? Does it mean you’re more resilient in those functions because now you have more redundancy in your mechanism for doing it? You could even imagine the opposite: Maybe you’re less good at those functions because they’re distributed in an inefficient or atypical way.”

There are hints that the occipital lobe’s contribution to language-related functions “takes the pressure off the frontal cortex,” where language processing normally occurs, Saxe says. Other researchers have shown that suppressing left frontal cortex activity with transcranial magnetic stimulation interferes with language function in sighted people, but not in the congenitally blind.

This leads to the intriguing prediction that a congenitally blind person who suffers a stroke in the left frontal cortex may retain much more language ability than a sighted person would, Saxe says, although that hypothesis has not been tested.

Saxe’s lab is now studying children under 4 to try to learn more about how cortical functions develop early in life, while Bedny is investigating whether the occipital lobe participates in functions other than language in congenitally blind people.

How we make emotional decisions

Some decisions arouse far more anxiety than others. Among the most anxiety-provoking are those that involve options with both positive and negative elements, such as choosing between taking a higher-paying job in a city far from family and friends, and staying put for less pay.

MIT researchers have now identified a neural circuit that appears to underlie decision-making in this type of situation, which is known as approach-avoidance conflict. The findings could help researchers to discover new ways to treat psychiatric disorders that feature impaired decision-making, such as depression, schizophrenia, and borderline personality disorder.

“In order to create a treatment for these types of disorders, we need to understand how the decision-making process is working,” says Alexander Friedman, a research scientist at MIT’s McGovern Institute for Brain Research and the lead author of a paper describing the findings in the May 28 issue of Cell.

Friedman and colleagues also demonstrated the first step toward developing possible therapies for these disorders: By manipulating this circuit in rodents, they were able to transform a preference for lower-risk, lower-payoff choices to a preference for bigger payoffs despite their bigger costs.

The paper’s senior author is Ann Graybiel, an MIT Institute Professor and member of the McGovern Institute. Other authors are postdoc Daigo Homma, research scientists Leif Gibb and Ken-ichi Amemori, undergraduates Samuel Rubin and Adam Hood, and technical assistant Michael Riad.

Making hard choices

The new study grew out of an effort to figure out the role of striosomes — clusters of cells distributed through the striatum, a large brain region involved in coordinating movement and emotion and implicated in some human disorders. Graybiel discovered striosomes many years ago, but their function had remained mysterious, in part because they are so small and deep within the brain that it is difficult to image them with functional magnetic resonance imaging (fMRI).

Previous studies from Graybiel’s lab identified regions of the brain’s prefrontal cortex that project to striosomes. These regions have been implicated in processing emotions, so the researchers suspected that this circuit might also be related to emotion.

To test this idea, the researchers studied rats as they performed five different types of behavioral tasks, including an approach-avoidance scenario. In that situation, rats running a maze had to choose between one option that included strong chocolate, which they like, and bright light, which they don’t, and an option with dimmer light but weaker chocolate.

When humans are forced to make these kinds of cost-benefit decisions, they usually experience anxiety, which influences the choices they make. “This type of task is potentially very relevant to anxiety disorders,” Gibb says. “If we could learn more about this circuitry, maybe we could help people with those disorders.”

The researchers also tested rats in four other scenarios in which the choices were easier and less fraught with anxiety.

“By comparing performance in these five tasks, we could look at cost-benefit decision-making versus other types of decision-making, allowing us to reach the conclusion that cost-benefit decision-making is unique,” Friedman says.

Using optogenetics, which allowed them to turn cortical input to the striosomes on or off by shining light on the cortical cells, the researchers found that the circuit connecting the cortex to the striosomes plays a causal role in influencing decisions in the approach-avoidance task, but none at all in other types of decision-making.

When the researchers shut off input to the striosomes from the cortex, they found that the rats began choosing the high-risk, high-reward option as much as 20 percent more often than they had previously chosen it. If the researchers stimulated input to the striosomes, the rats began choosing the high-cost, high-reward option less often.

Paul Glimcher, a professor of physiology and neuroscience at New York University, describes the study as a “masterpiece” and says he is particularly impressed by the use of a new technology, optogenetics, to solve a longstanding mystery. The study also opens up the possibility of studying striosome function in other types of decision-making, he adds.

“This cracks the 20-year puzzle that [Graybiel] wrote — what do the striosomes do?” says Glimcher, who was not part of the research team. “In 10 years we will have a much more complete picture, of which this paper is the foundational stone. She has demonstrated that we can answer this question, and answered it in one area. A lot of labs will now take this up and resolve it in other areas.”

Emotional gatekeeper

The findings suggest that the striatum, and the striosomes in particular, may act as a gatekeeper that absorbs sensory and emotional information coming from the cortex and integrates it to produce a decision on how to react, the researchers say.

That gatekeeper circuit also appears to include a part of the midbrain called the substantia nigra, which has dopamine-containing cells that play an important role in motivation and movement. The researchers believe that when activated by input from the striosomes, these substantia nigra cells produce a long-term effect on an animal or human patient’s decision-making attitudes.

“We would so like to find a way to use these findings to relieve anxiety disorder, and other disorders in which mood and emotion are affected,” Graybiel says. “That kind of work has a real priority to it.”

In addition to pursuing possible treatments for anxiety disorders, the researchers are now trying to better understand the role of the dopamine-containing substantia nigra cells in this circuit, which plays a critical role in Parkinson’s disease and may also be involved in related disorders.

The research was funded by the National Institute of Mental Health, the CHDI Foundation, the Defense Advanced Research Projects Agency, the U.S. Army Research Office, the Bachmann-Strauss Dystonia and Parkinson Foundation, and the William N. and Bernice E. Bumpus Foundation.

Study links brain anatomy, academic achievement, and family income

Many years of research have shown that for students from lower-income families, standardized test scores and other measures of academic success tend to lag behind those of wealthier students.

A new study led by researchers at MIT and Harvard University offers another dimension to this so-called “achievement gap”: After imaging the brains of high- and low-income students, they found that the higher-income students had thicker brain cortex in areas associated with visual perception and knowledge accumulation. Furthermore, these differences also correlated with one measure of academic achievement — performance on standardized tests.

“Just as you would expect, there’s a real cost to not living in a supportive environment. We can see it not only in test scores, in educational attainment, but within the brains of these children,” says MIT’s John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, professor of brain and cognitive sciences, and one of the study’s authors. “To me, it’s a call to action. You want to boost the opportunities for those for whom it doesn’t come easily in their environment.”

This study did not explore possible reasons for these differences in brain anatomy. However, previous studies have shown that lower-income students are more likely to suffer from stress in early childhood, have more limited access to educational resources, and receive less exposure to spoken language early in life. These factors have all been linked to lower academic achievement.

In recent years, the achievement gap in the United States between high- and low-income students has widened, even as gaps along lines of race and ethnicity have narrowed, says Martin West, an associate professor of education at the Harvard Graduate School of Education and an author of the new study.

“The gap in student achievement, as measured by test scores between low-income and high-income students, is a pervasive and longstanding phenomenon in American education, and indeed in education systems around the world,” he says. “There’s a lot of interest among educators and policymakers in trying to understand the sources of those achievement gaps, but even more interest in possible strategies to address them.”

Allyson Mackey, a postdoc at MIT’s McGovern Institute for Brain Research, is the lead author of the paper, which appears in the journal Psychological Science. Other authors are postdoc Amy Finn; graduate student Julia Leonard; Drew Jacoby-Senghor, a postdoc at Columbia Business School; and Christopher Gabrieli, chair of the nonprofit Transforming Education.

Explaining the gap

The study included 58 students — 23 from lower-income families and 35 from higher-income families, all aged 12 or 13. Low-income students were defined as those who qualify for a free or reduced-price school lunch.

The researchers compared students’ scores on the Massachusetts Comprehensive Assessment System (MCAS) with brain scans of a region known as the cortex, which is key to functions such as thought, language, sensory perception, and motor command.

Using magnetic resonance imaging (MRI), they discovered differences in the thickness of parts of the cortex in the temporal and occipital lobes, whose primary roles are in vision and storing knowledge. Those differences correlated to differences in both test scores and family income. In fact, differences in cortical thickness in these brain regions could explain as much as 44 percent of the income achievement gap found in this study.

Previous studies have also shown brain anatomy differences associated with income, but did not link those differences to academic achievement.

“A number of labs have reported differences in children’s brain structures as a function of family income, but this is the first to relate that to variation in academic achievement,” says Kimberly Noble, an assistant professor of pediatrics at Columbia University who was not part of the research team.

In most other measures of brain anatomy, the researchers found no significant differences. The amount of white matter — the bundles of axons that connect different parts of the brain — did not differ, nor did the overall surface area of the brain cortex.

The researchers point out that the structural differences they did find are not necessarily permanent. “There’s so much strong evidence that brains are highly plastic,” says Gabrieli, who is also a member of the McGovern Institute. “Our findings don’t mean that further educational support, home support, all those things, couldn’t make big differences.”

In a follow-up study, the researchers hope to learn more about what types of educational programs might help to close the achievement gap, and if possible, investigate whether these interventions also influence brain anatomy.

“Over the past decade we’ve been able to identify a growing number of educational interventions that have managed to have notable impacts on students’ academic achievement as measured by standardized tests,” West says. “What we don’t know anything about is the extent to which those interventions — whether it be attending a very high-performing charter school, or being assigned to a particularly effective teacher, or being exposed to a high-quality curricular program — improves test scores by altering some of the differences in brain structure that we’ve documented, or whether they had those effects by other means.”

The research was funded by the Bill and Melinda Gates Foundation and the National Institutes of Health.

Tasting light

Human taste receptors are specialized to distinguish several distinct compounds: sugars taste sweet, salts taste salty, and acidic compounds taste sour. Now a new study from MIT finds that the worm Caenorhabditis elegans has taken its powers of detection a step further: The worm can taste hydrogen peroxide, triggering it to stop eating the potentially dangerous substance.

Being able to taste hydrogen peroxide allows the worm to detect light, which generates hydrogen peroxide and other harmful reactive oxygen compounds both within the worm and in its environment.

“This is potentially a brand-new mechanism of sensing light,” says Nikhil Bhatla, the lead author of the paper and a postdoc in MIT’s Department of Biology. “All of the mechanisms of light detection we know about involve a chromophore — a small molecule that absorbs a photon and changes shape or transfers electrons. This seems to be the first example of behavioral light-sensing that requires the generation of a chemical in the process of detecting the light.”

Bhatla and Robert Horvitz, the David H. Koch Professor of Biology, describe the new hydrogen peroxide taste receptors in the Jan. 29 online issue of the journal Neuron.

Though it is not yet known whether there is a human equivalent of this system, the researchers say their discovery lends support to the idea that there may be human taste receptors dedicated to flavors other than the five canonical ones — sweet, salty, bitter, sour, and savory. It also opens the possibility that humans might be able to sense light in ways that are fundamentally different from those known to act in vision.

“I think we have underestimated our biological abilities,” Bhatla says. “Aside from those five, there are other flavors, such as burnt. How do we taste something as burnt? Or what about spicy, or metallic, or smoky? There’s this whole new area that hasn’t really been explored.”

Beyond bitter and sweet

One of the major functions of the sense of taste is to determine whether something is safe, or advantageous, to eat. For humans and other animals, bitterness often serves as a warning of poison, while sweetness can help to identify foods that are rich in energy.

For worms, hydrogen peroxide can be harmful because it can cause extensive cellular trauma, including damaging proteins, DNA, and other molecules in the body. In fact, certain strains of bacteria produce hydrogen peroxide that can kill C. elegans after being eaten. Worms might also ingest hydrogen peroxide from the soil where they live.

Bhatla and Horvitz found that worms stop eating both when they taste hydrogen peroxide and when light shines on them — especially high-energy light, such as violet or ultraviolet. The authors found the exact same feeding response when worms were exposed to either hydrogen peroxide or light, which suggested to them that the same mechanism might be controlling responses to both stimuli.

Worms are known to be averse to light: Previous research by others has shown that they flee when light shines on them. Bhatla and Horvitz have now found that this escape response, like the feeding response to light, is likely caused by light’s generation of chemicals such as hydrogen peroxide.

The C. elegans worm has a very simple and thoroughly mapped nervous system consisting of 302 neurons, 20 of which are located in the pharynx, the feeding organ that ingests and grinds food. Bhatla found that one pair of pharyngeal neurons, known as the I2 neurons, controls the animal’s response to both light and hydrogen peroxide. A particular molecular receptor in that neuron, gustatory receptor 3 (GUR-3), and a molecularly similar receptor found in other neurons (LITE-1) are critical to the response. However, each receptor appears to function in a slightly different way.

GUR-3 detects hydrogen peroxide, whether it is found naturally in the environment or generated by light. There are many GUR-3 receptors in the I2 neuron, and through a mechanism that remains unknown, hydrogen peroxide stimulation of GUR-3 causes the pharynx to stop grinding. Another molecule called peroxiredoxin, an antioxidant, appears to help GUR-3 detect hydrogen peroxide.

While the GUR-3 receptor responds much more strongly to hydrogen peroxide than to light, the LITE-1 receptor is much more sensitive to light than to hydrogen peroxide. LITE-1 has previously been implicated in detecting light, but until now, it has been a mystery how a taste receptor could respond to light. The new study suggests that like GUR-3, LITE-1 indirectly senses light by detecting reactive oxygen compounds generated by light — including, but not limited to, hydrogen peroxide.

Kenneth Miller of the Oklahoma Medical Research Foundation published a paper in 2008 describing LITE-1 and hypothesizing that it might work by detecting a chemical product of light interaction. “This paper goes one step beyond that and identifies molecules that LITE-1 could be sensing to identify the presence of light,” says Miller, who was not part of the new study. “I thought it was a fascinating look at the complex gustatory sensory mechanism for molecules like hydrogen peroxide.”

Not found in humans

The molecular family of receptors that includes GUR-3 and LITE-1 is specific to invertebrates, and is not found in humans. However, peroxiredoxin is found in humans, particularly in the eye, so the researchers suspect that peroxiredoxin might play a role in detecting reactive oxygen species generated by light in the eye.

The researchers are now trying to figure out the exact mechanism of hydrogen peroxide detection: For example, how exactly do these gustatory receptors detect reactive oxygen compounds? The researchers are also working to identify the neural circuit diagram that defines how the I2 neurons interact with other neurons to control the worms’ feeding behavior. Such neural circuit diagrams should provide insight into how the brains of worms, and people, generate behavior.

The research was funded by the National Science Foundation, the National Institutes of Health, and the Howard Hughes Medical Institute.