A radiation-free approach to imaging molecules in the brain

Scientists hoping to get a glimpse of molecules that control brain activity have devised a new probe that allows them to image these molecules without using any chemical or radioactive labels.

Currently, the gold-standard approach to imaging molecules in the brain is to tag them with radioactive probes. However, these probes offer low resolution and can’t easily be used to watch dynamic events, says Alan Jasanoff, an MIT professor of biological engineering.

Jasanoff and his colleagues have developed new sensors consisting of proteins designed to detect a particular target; when the target is present, the sensors dilate blood vessels in the immediate area. This produces a change in blood flow that can be imaged with magnetic resonance imaging (MRI) or other imaging techniques.

“This is an idea that enables us to detect molecules that are in the brain at biologically low levels, and to do that with these imaging agents or contrast agents that can ultimately be used in humans,” Jasanoff says. “We can also turn them on and off, and that’s really key to trying to detect dynamic processes in the brain.”

In a paper appearing in the Dec. 2 issue of Nature Communications, Jasanoff and his colleagues used these probes to detect enzymes called proteases, but their ultimate goal is to use them to monitor the activity of neurotransmitters, which act as chemical messengers between brain cells.

The paper’s lead authors are postdoc Mitul Desai and former MIT graduate student Adrian Slusarczyk. Recent MIT graduate Ashley Chapin and postdoc Mariya Barch are also authors of the paper.

Indirect imaging

To make their probes, the researchers modified a naturally occurring peptide called calcitonin gene-related peptide (CGRP), which is active primarily during migraines or inflammation. The researchers engineered the peptides so that they are trapped within a protein cage that keeps them from interacting with blood vessels. When the peptides encounter proteases in the brain, the proteases cut the cages open and the CGRP causes nearby blood vessels to dilate. Imaging this dilation with MRI allows the researchers to determine where the proteases were detected.

“These are molecules that aren’t visualized directly, but instead produce changes in the body that can then be visualized very effectively by imaging,” Jasanoff says.

Proteases are sometimes used as biomarkers to diagnose diseases such as cancer and Alzheimer’s disease. However, Jasanoff’s lab used them in this study mainly to demonstrate the validity of their approach. Now, they are working on adapting these imaging agents to monitor neurotransmitters, such as dopamine and serotonin, that are critical to cognition and processing emotions.

To do that, the researchers plan to modify the cages surrounding the CGRP so that they can be removed by interaction with a particular neurotransmitter.

“What we want to be able to do is detect levels of neurotransmitter that are 100-fold lower than what we’ve seen so far. We also want to be able to use far less of these molecular imaging agents in organisms. That’s one of the key hurdles to trying to bring this approach into people,” Jasanoff says.

Jeff Bulte, a professor of radiology and radiological science at the Johns Hopkins School of Medicine, described the technique as “original and innovative,” while adding that its safety and long-term physiological effects will require more study.

“It’s interesting that they have designed a reporter without using any kind of metal probe or contrast agent,” says Bulte, who was not involved in the research. “An MRI reporter that works really well is the holy grail in the field of molecular and cellular imaging.”

Tracking genes

Another possible application for this type of imaging is to engineer cells so that the gene for CGRP is turned on at the same time that a gene of interest is turned on. That way, scientists could use the CGRP-induced changes in blood flow to track which cells are expressing the target gene, which could help them determine the roles of those cells and genes in different behaviors. Jasanoff’s team demonstrated the feasibility of this approach by showing that implanted cells expressing CGRP could be recognized by imaging.

“Many behaviors involve turning on genes, and you could use this kind of approach to measure where and when the genes are turned on in different parts of the brain,” Jasanoff says.

His lab is also working on ways to deliver the peptides without injecting them, which would require finding a way to get them to pass through the blood-brain barrier. This barrier separates the brain from circulating blood and prevents large molecules from entering the brain.

The research was funded by the National Institutes of Health BRAIN Initiative, the MIT Simons Center for the Social Brain, and fellowships from the Boehringer Ingelheim Fonds and the Friends of the McGovern Institute.

Finding a way in

Our perception of the world arises within the brain, based on sensory information that is sometimes ambiguous, allowing more than one interpretation. Familiar demonstrations of this point include the famous Necker cube and the “duck-rabbit” drawing, in which two different interpretations flip back and forth over time.

Another example is binocular rivalry, in which the two eyes are presented with different images that are perceived in alternation. Several years ago, this phenomenon caught the eye of Caroline Robertson, who is now a Harvard Fellow working in the lab of McGovern Investigator Nancy Kanwisher. Back when she was a graduate student at Cambridge University, Robertson realized that binocular rivalry might be used to probe the basis of autism, among the most mysterious of all brain disorders.

Robertson’s idea was based on the hypothesis that autism involves an imbalance between excitation and inhibition within the brain. Although widely supported by indirect evidence, this has been very difficult to test directly in human patients. Robertson realized that binocular rivalry might provide a way to perform such a test. The perceptual switches that occur during rivalry are thought to involve competition between different groups of neurons in the visual cortex, each group reinforcing its own interpretation via excitatory connections while suppressing the alternative interpretation through inhibitory connections. Thus, if the balance is altered in the brains of people with autism, the frequency of switching might also be different, providing a simple and easily measurable marker of the disease state.

To test this idea, Robertson recruited adults with and without autism, and presented them with two distinct and differently colored images in each eye. As expected, their perceptions switched back and forth between the two images, with short periods of mixed perception in between. This was true for both groups, but when she measured the timing of these switches, Robertson found that individuals with autism do indeed see the world in a measurably different way than people without the disorder. Individuals with autism cycle between the left and right images more slowly, with the intervening periods of mixed perception lasting longer than in people without autism. The more severe their autistic symptoms, as determined by a standard clinical behavioral evaluation, the greater the difference.

Robertson had found a marker for autism that is more objective than current methods that involve one person assessing the behavior of another. The measure is immediate and relies on brain activity that happens automatically, without people thinking about it. “Sensation is a very simple place to probe,” she says.

A top-down approach

When she arrived in Kanwisher’s lab, Robertson wanted to use brain imaging to probe the basis for the perceptual phenomenon that she had discovered. With Kanwisher’s encouragement, she began by repeating the behavioral experiment with a new group of subjects, to check that her previous results were not a fluke. Having confirmed that the finding was real, she then scanned the subjects using an imaging method called magnetic resonance spectroscopy (MRS), in which an MRI scanner is reprogrammed to measure concentrations of neurotransmitters and other chemicals in the brain. Kanwisher had never used MRS before, but when Robertson proposed the experiment, she was happy to try it. “Nancy’s the kind of mentor who could support the idea of using a new technique and guide me to approach it rigorously,” says Robertson.

For each of her subjects, Robertson scanned their brains to measure the amounts of two key neurotransmitters, glutamate, which is the main excitatory transmitter in the brain, and GABA, which is the main source of inhibition. When she compared the brain chemistry to the behavioral results in the binocular rivalry task, she saw something intriguing and unexpected. In people without autism, the amount of GABA in the visual cortex was correlated with the strength of the suppression, consistent with the idea that GABA enables signals from one eye to inhibit those from the other eye. But surprisingly, there was no such correlation in the autistic individuals—suggesting that GABA was somehow unable to exert its normal suppressive effect. It isn’t yet clear exactly what is going wrong in the brains of these subjects, but it’s an early flag, says Robertson. “The next step is figuring out which part of the pathway is disrupted.”

A bottom-up approach

Robertson’s approach starts from the top down, working backward from a measurable behavior to look for brain differences, but it isn’t the only way in. Another approach is to start with genes that are linked to autism in humans, and to understand how they affect neurons and brain circuits. This is the bottom-up approach of McGovern Investigator Guoping Feng, who studies a gene called Shank3 that codes for a protein that helps build synapses, the connections through which neurons send signals to each other. Several years ago Feng knocked out Shank3 in mice, and found that the mice exhibited behaviors reminiscent of human autism, including repetitive grooming, anxiety, and impaired social interaction and motor control.

These earlier studies involved a variety of different mutations that disabled the Shank3 gene. But when postdoc Yang Zhou joined Feng’s lab, he brought a new perspective. Zhou had come from a medical background and wanted to do an experiment more directly connected to human disease. So he suggested making a mouse version of a Shank3 mutation seen in human patients, and testing its effects.

Zhou’s experiment would require precise editing of the mouse Shank3 gene, previously a difficult and time-consuming task. But help was at hand, in the form of a collaboration with McGovern Investigator Feng Zhang, a pioneer in the development of genome-editing methods.

Using Zhang’s techniques, Zhou was able to generate mice with two different mutations: one that had been linked to human autism, and another that had been discovered in a few patients with schizophrenia.

The researchers found that mice with the autism-related mutation exhibited behavioral changes at a young age that paralleled behaviors seen in children with autism. They also found early changes in synapses within a brain region called the striatum. In contrast, mice with the schizophrenia-related gene appeared normal until adolescence, and then began to exhibit changes in behavior and also changes in the prefrontal cortex, a brain region that is implicated in human schizophrenia. “The consequences of the two different Shank3 mutations were quite different in certain aspects, which was very surprising to us,” says Zhou.

The fact that different mutations in just one gene can produce such different results illustrates exactly how complex these neuropsychiatric disorders can be. “Not only do we need to study different genes, but we also have to understand different mutations and which brain regions have what defects,” says Feng, who received funding from the Poitras Center for Affective Disorders research and the Simons Center for the Social Brain. Robertson and Kanwisher were also supported by the Simons Center.

Surprising plasticity

The brain alterations that lead to autism are thought to arise early in development, long before the condition is diagnosed, raising concerns that it may be difficult to reverse the effects once the damage is done. With the Shank3 knockout mice, Feng and his team were able to approach this question in a new way, asking what would happen if the missing gene were to be restored in adulthood.

To find the answer, lab members Yuan Mei and Patricia Monteiro, along with Zhou, studied another strain of mice, in which the Shank3 gene was switched off but could be reactivated at any time by adding a drug to their diet. When adult mice were tested six weeks after the gene was switched back on, they no longer showed repetitive grooming behaviors, and they also showed normal levels of social interaction with other mice, despite having grown up without a functioning Shank3 gene. Examination of their brains confirmed that many of the synaptic alterations were also rescued when the gene was restored.

Not every symptom was reversed by this treatment; even after six weeks or more of restored Shank3 expression, the mice continued to show heightened anxiety and impaired motor control. But even these deficits could be prevented if the Shank3 gene was restored earlier in life, soon after birth.

The results are encouraging because they indicate a surprising degree of brain plasticity, persisting into adulthood. If the results can be extrapolated to human patients, they suggest that even in adulthood, autism may be at least partially reversible if the right treatment can be found. “This shows us the possibility,” says Zhou. “If we could somehow put back the gene in patients who are missing it, it could help improve their life quality.”

Converging paths

Robertson and Feng are approaching the challenge of autism from different starting points, but already there are signs of convergence. Feng is finding early signs that his Shank3 mutant mice may have an altered balance of inhibitory and excitatory circuits, consistent with what Robertson and Kanwisher have found in humans.

Feng is continuing to study these mice, and he also hopes to study the effects of a similar mutation in non-human primates, whose brains and behaviors are more similar to those of humans than those of rodents. Robertson, meanwhile, is planning to establish a version of the binocular rivalry test in animal models, where it is possible to alter the balance between inhibition and excitation experimentally (for example, via a genetic mutation or a drug treatment). If this leads to changes in binocular rivalry, it would strongly support the link to the perceptual changes seen in humans.

One challenge, says Robertson, will be to develop new methods to measure the perceptions of mice and other animals. “The mice can’t tell us what they are seeing,” she says. “But it would also be useful in humans, because it would allow us to study young children and patients who are non-verbal.”

A multi-pronged approach

The imbalance hypothesis is a promising lead, but no single explanation is likely to encompass all of autism, according to McGovern director Bob Desimone. “Autism is a notoriously heterogeneous condition,” he explains. “We need to try multiple approaches in order to maximize the chance of success.”

McGovern researchers are doing exactly that, with projects underway that range from scanning children to developing new molecular and microscopic methods for examining brain changes in animal disease models. Although genetic studies provide some of the strongest clues, Desimone notes that there is also evidence for environmental contributions to autism and other brain disorders. “One that’s especially interesting to us is maternal infection and inflammation, which in mice at least can affect brain development in ways we’re only beginning to understand.”

The ultimate goal, says Desimone, is to connect the dots and to understand how these diverse human risk factors affect brain function. “Ultimately, we want to know what these different pathways have in common,” he says. “Then we can come up with rational strategies for the development of new treatments.”

How the brain builds panoramic memory

When asked to visualize your childhood home, you can probably picture not only the house you lived in, but also the buildings next door and across the street. MIT neuroscientists have now identified two brain regions that are involved in creating these panoramic memories.

These brain regions help us to merge fleeting views of our surroundings into a seamless, 360-degree panorama, the researchers say.

“Our understanding of our environment is largely shaped by our memory for what’s currently out of sight,” says Caroline Robertson, a postdoc at MIT’s McGovern Institute for Brain Research and a junior fellow of the Harvard Society of Fellows. “What we were looking for are hubs in the brain where your memories for the panoramic environment are integrated with your current field of view.”

Robertson is the lead author of the study, which appears in the Sept. 8 issue of the journal Current Biology. Nancy Kanwisher, the Walter A. Rosenblith Professor of Brain and Cognitive Sciences and a member of the McGovern Institute, is the paper’s senior author.

Building memories

As we look at a scene, visual information flows from our retinas into the brain, which has regions that are responsible for processing different elements of what we see, such as faces or objects. The MIT team suspected that areas involved in processing scenes — the occipital place area (OPA), the retrosplenial complex (RSC), and parahippocampal place area (PPA) — might also be involved in generating panoramic memories of a place such as a street corner.

If this were true, when you saw two images of houses that you knew were across the street from each other, they would evoke similar patterns of activity in these specialized brain regions. Two houses from different streets would not induce similar patterns.

“Our hypothesis was that as we begin to build memory of the environment around us, there would be certain regions of the brain where the representation of a single image would start to overlap with representations of other views from the same scene,” Robertson says.
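The overlap the researchers were looking for can be quantified by correlating voxel-wise response patterns: two views of the same place should evoke more similar patterns in a panorama-building region than views of different places. The sketch below illustrates the idea with synthetic data; it is a hedged toy example, not the authors' actual analysis pipeline, and the variable names are only labels.

```python
import numpy as np

def pattern_similarity(pattern_a, pattern_b):
    """Pearson correlation between two voxel response vectors."""
    return float(np.corrcoef(pattern_a, pattern_b)[0, 1])

# Toy data: 50-voxel response patterns. Two views of the same street
# corner share an underlying signal (plus noise); a view of a different
# corner is independent.
rng = np.random.default_rng(0)
shared = rng.normal(size=50)
view_1 = shared + 0.5 * rng.normal(size=50)   # view A of corner X
view_2 = shared + 0.5 * rng.normal(size=50)   # view B of corner X
other = rng.normal(size=50)                   # view of corner Y

# Linked views should correlate more strongly than unlinked ones.
print(pattern_similarity(view_1, view_2) > pattern_similarity(view_1, other))
```

In the actual study, the comparison would be made within each candidate region (OPA, RSC, PPA), with the prediction borne out only in regions that integrate views into a panorama.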

The researchers explored this hypothesis using immersive virtual reality headsets, which allowed them to show people many different panoramic scenes. In this study, the researchers showed participants images from 40 street corners in Boston’s Beacon Hill neighborhood. The images were presented in two ways: Half the time, participants saw a 100-degree stretch of a 360-degree scene, but the other half of the time, they saw two noncontinuous stretches of a 360-degree scene.

After showing participants these panoramic environments, the researchers then showed them 40 pairs of images and asked if they came from the same street corner. Participants were much better able to determine if pairs came from the same corner if they had seen the two scenes linked in the 100-degree image than if they had seen them unlinked.

Brain scans revealed that when participants saw two images that they knew were linked, the response patterns in the RSC and OPA regions were similar. However, this was not the case for image pairs that the participants had not seen as linked. This suggests that the RSC and OPA, but not the PPA, are involved in building panoramic memories of our surroundings, the researchers say.

Priming the brain

In another experiment, the researchers tested whether one image could “prime” the brain to recall an image from the same panoramic scene. They showed participants a scene image and asked whether it had been on their left or right when they first saw it. Just before each test image, they showed either another image from the same street corner or an unrelated image. Participants performed much better when primed with the related image.

“After you have seen a series of views of a panoramic environment, you have explicitly linked them in memory to a known place,” Robertson says. “They also evoke overlapping visual representations in certain regions of the brain, which is implicitly guiding your upcoming perceptual experience.”

The research was funded by the National Science Foundation Science and Technology Center for Brains, Minds, and Machines; and the Harvard Milton Fund.

Study finds brain connections key to learning

A new study from MIT reveals that a brain region dedicated to reading has connections for that skill even before children learn to read.

By scanning the brains of children before and after they learned to read, the researchers found that they could predict the precise location where each child’s visual word form area (VWFA) would develop, based on the connections of that region to other parts of the brain.

Neuroscientists have long wondered why the brain has a region exclusively dedicated to reading — a skill that is unique to humans and only developed about 5,400 years ago, which is not enough time for evolution to have reshaped the brain for that specific task. The new study suggests that the VWFA, located in an area that receives visual input, has pre-existing connections to brain regions associated with language processing, making it ideally suited to become devoted to reading.

“Long-range connections that allow this region to talk to other areas of the brain seem to drive function,” says Zeynep Saygin, a postdoc at MIT’s McGovern Institute for Brain Research. “As far as we can tell, within this larger fusiform region of the brain, only the reading area has these particular sets of connections, and that’s how it’s distinguished from adjacent cortex.”

Saygin is the lead author of the study, which appears in the Aug. 8 issue of Nature Neuroscience. Nancy Kanwisher, the Walter A. Rosenblith Professor of Brain and Cognitive Sciences and a member of the McGovern Institute, is the paper’s senior author.

Specialized for reading

The brain’s cortex, where most cognitive functions occur, has areas specialized for reading as well as face recognition, language comprehension, and many other tasks. Neuroscientists have hypothesized that the locations of these functions may be determined by prewired connections to other parts of the brain, but they have had few good opportunities to test this hypothesis.

Reading presents a unique opportunity to study this question because it is not learned right away, giving scientists a chance to examine the brain region that will become the VWFA before children know how to read. This region, located in the fusiform gyrus, at the base of the brain, is responsible for recognizing strings of letters.

Children participating in the study were scanned twice — at 5 years of age, before learning to read, and at 8 years, after they learned to read. In the scans at age 8, the researchers precisely defined the VWFA for each child by using functional magnetic resonance imaging (fMRI) to measure brain activity as the children read. They also used a technique called diffusion-weighted imaging to trace the connections between the VWFA and other parts of the brain.

The researchers saw no indication from fMRI scans that the VWFA was responding to words at age 5. However, the region that would become the VWFA was already different from adjacent cortex in its connectivity patterns. These patterns were so distinctive that they could be used to accurately predict the precise location where each child’s VWFA would later develop.

Although the area that will become the VWFA does not respond preferentially to letters at age 5, Saygin says it is likely that the region is involved in some kind of high-level object recognition before it gets taken over for word recognition as a child learns to read. Still unknown is how and why the brain forms those connections early in life.

Pre-existing connections

Kanwisher and Saygin have found that the VWFA is connected to language regions of the brain in adults, but the new findings in children offer strong evidence that those connections exist before reading is learned, and are not the result of learning to read, according to Stanislas Dehaene, a professor and the chair of experimental cognitive psychology at the College de France, who wrote a commentary on the paper for Nature Neuroscience.

“To genuinely test the hypothesis that the VWFA owes its specialization to a pre-existing connectivity pattern, it was necessary to measure brain connectivity in children before they learned to read,” wrote Dehaene, who was not involved in the study. “Although many children, at the age of 5, did not have a VWFA yet, the connections that were already in place could be used to anticipate where the VWFA would appear once they learned to read.”

The MIT team now plans to study whether this kind of brain imaging could help identify children who are at risk of developing dyslexia and other reading difficulties.

“It’s really powerful to be able to predict functional development three years ahead of time,” Saygin says. “This could be a way to use neuroimaging to try to actually help individuals even before any problems occur.”

Diagnosing depression before it starts

A new brain imaging study from MIT and Harvard Medical School may lead to a screen that could identify children at high risk of developing depression later in life.

In the study, the researchers found distinctive brain differences in children known to be at high risk because of family history of depression. The finding suggests that this type of scan could be used to identify children whose risk was previously unknown, allowing them to undergo treatment before developing depression, says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology and a professor of brain and cognitive sciences at MIT.

“We’d like to develop the tools to be able to identify people at true risk, independent of why they got there, with the ultimate goal of maybe intervening early and not waiting for depression to strike the person,” says Gabrieli, an author of the study, which appears in the journal Biological Psychiatry.

Early intervention is important because once a person suffers from an episode of depression, they become more likely to have another. “If you can avoid that first bout, maybe it would put the person on a different trajectory,” says Gabrieli, who is a member of MIT’s McGovern Institute for Brain Research.

The paper’s lead author is McGovern Institute postdoc Xiaoqian Chai, and the senior author is Susan Whitfield-Gabrieli, a research scientist at the McGovern Institute.

Distinctive patterns

The study also helps to answer a key question about the brain structures of depressed patients. Previous imaging studies have revealed two brain regions that often show abnormal activity in these patients: the subgenual anterior cingulate cortex (sgACC) and the amygdala. However, it was unclear if those differences caused depression or if the brain changed as the result of a depressive episode.

To address that issue, the researchers decided to scan the brains of children who were not depressed, according to their scores on a commonly used diagnostic questionnaire, but who had a parent who had suffered from the disorder. Such children are three times more likely to become depressed later in life, usually between the ages of 15 and 30.

Gabrieli and colleagues studied 27 high-risk children, ranging in age from 8 to 14, and compared them with a group of 16 children with no known family history of depression.

Using functional magnetic resonance imaging (fMRI), the researchers measured synchronization of activity between different brain regions. Synchronization patterns that emerge when a person is not performing any particular task allow scientists to determine which regions naturally communicate with each other.
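Synchronization of this kind is typically measured as the correlation between the fMRI time series of two regions: regions that rise and fall together are considered functionally connected. The following is a minimal sketch with synthetic time series; the region names are only labels for illustration, and real analyses involve preprocessing steps not shown here.

```python
import numpy as np

def connectivity(ts_a, ts_b):
    """Synchronization between two regions, measured as the Pearson
    correlation of their activity time series."""
    return float(np.corrcoef(ts_a, ts_b)[0, 1])

# Toy time series (200 timepoints). Two regions driven by a common
# fluctuation are "synchronized"; a third fluctuates independently.
rng = np.random.default_rng(1)
common = rng.normal(size=200)
region_a = common + 0.6 * rng.normal(size=200)  # e.g., sgACC (label only)
region_b = common + 0.6 * rng.normal(size=200)  # e.g., a default-mode node
region_c = rng.normal(size=200)                 # unrelated region

# The coupled pair shows higher connectivity than the uncoupled pair.
print(connectivity(region_a, region_b) > connectivity(region_a, region_c))
```

Comparing such connectivity values between the at-risk and control groups is what reveals the abnormally strong or weak links described below.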

The researchers identified several distinctive patterns in the at-risk children. The strongest of these links was between the sgACC and the default mode network — a set of brain regions that is most active when the mind is unfocused. This abnormally high synchronization has also been seen in the brains of depressed adults.

The researchers also found hyperactive connections between the amygdala, which is important for processing emotion, and the inferior frontal gyrus, which is involved in language processing. Within areas of the frontal and parietal cortex, which are important for thinking and decision-making, they found lower than normal connectivity.

Cause and effect

These patterns are strikingly similar to those found in depressed adults, suggesting that these differences arise before depression occurs and may contribute to the development of the disorder, says Ian Gotlib, a professor of psychology at Stanford University.

“The findings are consistent with an explanation that this is contributing to the onset of the disease,” says Gotlib, who was not involved in the research. “The patterns are there before the depressive episode and are not due to the disorder.”

The MIT team is continuing to track the at-risk children and plans to investigate whether early treatment might prevent episodes of depression. They also hope to study how some children who are at high risk manage to avoid the disorder without treatment.

Other authors of the paper are Dina Hirshfeld-Becker, an associate professor of psychiatry at Harvard Medical School; Joseph Biederman, director of pediatric psychopharmacology at Massachusetts General Hospital (MGH); Mai Uchida, an assistant professor of psychiatry at Harvard Medical School; former MIT postdoc Oliver Doehrmann; MIT graduate student Julia Leonard; John Salvatore, a former McGovern technical assistant; MGH research assistants Tara Kenworthy and Elana Kagan; Harvard Medical School postdoc Ariel Brown; and former MIT technical assistant Carlo de los Angeles.

Study finds altered brain chemistry in people with autism

MIT and Harvard University neuroscientists have found a link between a behavioral symptom of autism and reduced activity of a neurotransmitter whose job is to dampen neuron excitation. The findings suggest that drugs that boost the action of this neurotransmitter, known as GABA, may improve some of the symptoms of autism, the researchers say.

Brain activity is controlled by a constant interplay of inhibition and excitation, which is mediated by different neurotransmitters. GABA is one of the most important inhibitory neurotransmitters, and studies of animals with autism-like symptoms have found reduced GABA activity in the brain. However, until now, there has been no direct evidence for such a link in humans.

“This is the first connection in humans between a neurotransmitter in the brain and an autistic behavioral symptom,” says Caroline Robertson, a postdoc at MIT’s McGovern Institute for Brain Research and a junior fellow of the Harvard Society of Fellows. “It’s possible that increasing GABA would help to ameliorate some of the symptoms of autism, but more work needs to be done.”

Robertson is the lead author of the study, which appears in the Dec. 17 online edition of Current Biology. The paper’s senior author is Nancy Kanwisher, the Walter A. Rosenblith Professor of Brain and Cognitive Sciences and a member of the McGovern Institute. Eva-Maria Ratai, an assistant professor of radiology at Massachusetts General Hospital, also contributed to the research.

Too little inhibition

Many symptoms of autism arise from hypersensitivity to sensory input. For example, children with autism are often very sensitive to things that wouldn’t bother other children as much, such as someone talking elsewhere in the room, or a scratchy sweater. Scientists have speculated that reduced brain inhibition might underlie this hypersensitivity by making it harder to tune out distracting sensations.

In this study, the researchers explored a visual task known as binocular rivalry, which requires brain inhibition and has been shown to be more difficult for people with autism. During the task, researchers show each participant two different images, one to each eye. To see the images, the brain must switch back and forth between input from the right and left eyes.

For the participant, it looks as though the two images are fading in and out, as input from each eye takes its turn inhibiting the input coming in from the other eye.

“Everybody has a different rate at which the brain naturally oscillates between these two images, and that rate is thought to map onto the strength of the inhibitory circuitry between these two populations of cells,” Robertson says.

She found that nonautistic adults switched back and forth between the images nine times per minute, on average, and one of the images fully suppressed the other about 70 percent of the time. However, autistic adults switched back and forth only half as often as nonautistic subjects, and one of the images fully suppressed the other only about 50 percent of the time.
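The two measures described here — how often perception switches between the images, and what fraction of the time one image fully suppresses the other — can be computed from a perceptual-report time series. Below is a minimal sketch on made-up data; the trace, its one-sample-per-second resolution, and the 'L'/'R'/'M' coding are all hypothetical, not taken from the study:

```python
# Toy perceptual-report trace from a binocular rivalry trial
# (hypothetical data; 1 sample per second; 'L'/'R' = one image fully
# dominant, 'M' = a mixed percept with neither image suppressed).
trace = list("LLLLMRRRRRMLLLLLRRRRMMLLLLRRRRLLLMRRRR")

# Switch rate: count transitions between full L- and R-dominance.
dominant = [s for s in trace if s in "LR"]
switches = sum(1 for a, b in zip(dominant, dominant[1:]) if a != b)

# Suppression: fraction of time one image fully suppressed the other.
frac_suppressed = len(dominant) / len(trace)

print(switches, round(frac_suppressed, 2))  # prints: 7 0.87
```

On this toy trace the percept switched 7 times and one image fully suppressed the other about 87 percent of the time; the study's comparison between autistic and nonautistic adults rests on exactly these two quantities.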

Performance on this task was also linked to patients’ scores on a clinical evaluation of communication and social interaction used to diagnose autism: Worse symptoms correlated with weaker inhibition during the visual task.

The researchers then measured GABA activity using a technique known as magnetic resonance spectroscopy, as autistic and typical subjects performed the binocular rivalry task. In nonautistic participants, higher levels of GABA correlated with a better ability to suppress the nondominant image. But in autistic subjects, there was no relationship between performance and GABA levels. This suggests that GABA is present in the brain but is not performing its usual function in autistic individuals, Robertson says.

“GABA is not reduced in the autistic brain, but the action of this inhibitory pathway is reduced,” she says. “The next step is figuring out which part of the pathway is disrupted.”

“This is a really great piece of work,” says Richard Edden, an associate professor of radiology at the Johns Hopkins University School of Medicine. “The role of inhibitory dysfunction in autism is strongly debated, with different camps arguing for elevated and reduced inhibition. This kind of study, which seeks to relate measures of inhibition directly to quantitative measures of function, is what we really need to tease things out.”

Early diagnosis

In addition to offering a possible new drug target, the new finding may also help researchers develop better diagnostic tools for autism, which is now diagnosed by evaluating children’s social interactions. To that end, Robertson is investigating the possibility of using EEG scans to measure brain responses during the binocular rivalry task.

“If autism does trace back on some level to circuitry differences that affect the visual cortex, you can measure those things in a kid who’s even nonverbal, as long as he can see,” she says. “We’d like it to move toward being useful for early diagnostic screenings.”

Music in the brain

Scientists have long wondered if the human brain contains neural mechanisms specific to music perception. Now, for the first time, MIT neuroscientists have identified a neural population in the human auditory cortex that responds selectively to sounds that people typically categorize as music, but not to speech or other environmental sounds.

“It has been the subject of widespread speculation,” says Josh McDermott, the Frederick A. and Carole J. Middleton Assistant Professor of Neuroscience in the Department of Brain and Cognitive Sciences at MIT. “One of the core debates surrounding music is to what extent it has dedicated mechanisms in the brain and to what extent it piggybacks off of mechanisms that primarily serve other functions.”

The finding was enabled by a new method designed to identify neural populations from functional magnetic resonance imaging (fMRI) data. Using this method, the researchers identified six neural populations with different functions, including the music-selective population and another set of neurons that responds selectively to speech.

“The music result is notable because people had not been able to clearly see highly selective responses to music before,” says Sam Norman-Haignere, a postdoc at MIT’s McGovern Institute for Brain Research.

“Our findings are hard to reconcile with the idea that music piggybacks entirely on neural machinery that is optimized for other functions, because the neural responses we see are highly specific to music,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research.

Norman-Haignere is the lead author of a paper describing the findings in the Dec. 16 online edition of Neuron. McDermott and Kanwisher are the paper’s senior authors.

Mapping responses to sound

For this study, the researchers scanned the brains of 10 human subjects listening to 165 natural sounds, including different types of speech and music, as well as everyday sounds such as footsteps, a car engine starting, and a telephone ringing.

The brain’s auditory system has proven difficult to map, in part because of the coarse spatial resolution of fMRI, which measures blood flow as an index of neural activity. In fMRI, “voxels” — the smallest unit of measurement — reflect the response of hundreds of thousands or millions of neurons.

“As a result, when you measure raw voxel responses you’re measuring something that reflects a mixture of underlying neural responses,” Norman-Haignere says.

To tease apart these responses, the researchers used a technique that models each voxel as a mixture of multiple underlying neural responses. Using this method, they identified six neural populations, each with a unique response pattern to the sounds in the experiment, that best explained the data.

“What we found is we could explain a lot of the response variation across tens of thousands of voxels with just six response patterns,” Norman-Haignere says.
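The core idea — explaining tens of thousands of voxel responses with a handful of shared response patterns — is a low-rank matrix factorization. The sketch below illustrates it with a plain SVD on simulated data; note that the study used its own decomposition method, and every number here (voxel count, mixing weights, noise level) is a hypothetical stand-in for real fMRI data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: responses of 2000 "voxels" to 165 sounds, generated as
# mixtures of 6 underlying response patterns plus measurement noise.
n_voxels, n_sounds, n_components = 2000, 165, 6
patterns = rng.normal(size=(n_components, n_sounds))      # latent response patterns
weights = rng.exponential(size=(n_voxels, n_components))  # per-voxel mixing weights
data = weights @ patterns + 0.1 * rng.normal(size=(n_voxels, n_sounds))

# Low-rank view of the data: how much of the voxel-response variance
# do the top 6 components capture?
U, s, Vt = np.linalg.svd(data - data.mean(axis=0), full_matrices=False)
explained = (s[:n_components] ** 2).sum() / (s ** 2).sum()
print(f"variance explained by 6 components: {explained:.1%}")
```

Because the toy data really are mixtures of six patterns, six components capture nearly all the variance; the empirical finding is that six patterns also sufficed for the real voxel data.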

One population responded most to music, another to speech, and the other four to different acoustic properties such as pitch and frequency.

The key to this advance is the researchers’ new approach to analyzing fMRI data, says Josef Rauschecker, a professor of physiology and biophysics at Georgetown University.

“The whole field is interested in finding specialized areas like those that have been found in the visual cortex, but the problem is the voxel is just not small enough. You have hundreds of thousands of neurons in a voxel, and how do you separate the information they’re encoding? This is a study of the highest caliber of data analysis,” says Rauschecker, who was not part of the research team.

Layers of sound processing

The four acoustically responsive neural populations overlap with regions of “primary” auditory cortex, which performs the first stage of cortical processing of sound. Speech and music-selective neural populations lie beyond this primary region.

“We think this provides evidence that there’s a hierarchy of processing where there are responses to relatively simple acoustic dimensions in this primary auditory area. That’s followed by a second stage of processing that represents more abstract properties of sound related to speech and music,” Norman-Haignere says.

The researchers believe there may be other brain regions involved in processing music, including its emotional components. “It’s inappropriate at this point to conclude that this is the seat of music in the brain,” McDermott says. “This is where you see most of the responses within the auditory cortex, but there’s a lot of the brain that we didn’t even look at.”

Kanwisher also notes that “the existence of music-selective responses in the brain does not imply that the responses reflect an innate brain system. An important question for the future will be how this system arises in development: How early is it found in infancy or childhood, and how dependent is it on experience?”

The researchers are now investigating whether the music-selective population identified in this study contains subpopulations of neurons that respond to different aspects of music, including rhythm, melody, and beat. They also hope to study how musical experience and training might affect this neural population.

Engineers design magnetic cell sensors

MIT engineers have designed magnetic protein nanoparticles that can be used to track cells or to monitor interactions within cells. The particles, described today in Nature Communications, are an enhanced version of a naturally occurring, weakly magnetic protein called ferritin.

“Ferritin, which is as close as biology has given us to a naturally magnetic protein nanoparticle, is really not that magnetic. That’s what this paper is addressing,” says Alan Jasanoff, an MIT professor of biological engineering and the paper’s senior author. “We used the tools of protein engineering to try to boost the magnetic characteristics of this protein.”

The new “hypermagnetic” protein nanoparticles can be produced within cells, allowing the cells to be imaged or sorted using magnetic techniques. This eliminates the need to tag cells with synthetic particles and allows the particles to sense other molecules inside cells.

The paper’s lead author is former MIT graduate student Yuri Matsumoto. Other authors are graduate student Ritchie Chen and Polina Anikeeva, an assistant professor of materials science and engineering.

Magnetic pull

Previous research has yielded synthetic magnetic particles for imaging or tracking cells, but it can be difficult to deliver these particles into the target cells.

In the new study, Jasanoff and colleagues set out to create magnetic particles that are genetically encoded. With this approach, the researchers deliver a gene for a magnetic protein into the target cells, prompting them to start producing the protein on their own.

“Rather than actually making a nanoparticle in the lab and attaching it to cells or injecting it into cells, all we have to do is introduce a gene that encodes this protein,” says Jasanoff, who is also an associate member of MIT’s McGovern Institute for Brain Research.

As a starting point, the researchers used ferritin, which carries a supply of iron atoms that every cell needs as components of metabolic enzymes. In hopes of creating a more magnetic version of ferritin, the researchers created about 10 million variants and tested them in yeast cells.

After repeated rounds of screening, the researchers used one of the most promising candidates to create a magnetic sensor consisting of enhanced ferritin modified with a protein tag that binds with another protein called streptavidin. This allowed them to detect whether streptavidin was present in yeast cells; however, this approach could also be tailored to target other interactions.

The mutated protein appears to successfully overcome one of the key shortcomings of natural ferritin, which is that it is difficult to load with iron, says Alan Koretsky, a senior investigator at the National Institute of Neurological Disorders and Stroke.

“To be able to make more magnetic indicators for MRI would be fabulous, and this is an important step toward making that type of indicator more robust,” says Koretsky, who was not part of the research team.

Sensing cell signals

Because the engineered ferritins are genetically encoded, they can be manufactured within cells that are programmed to make them respond only under certain circumstances, such as when the cell receives some kind of external signal, when it divides, or when it differentiates into another type of cell. Researchers could track this activity using magnetic resonance imaging (MRI), potentially allowing them to observe communication between neurons, activation of immune cells, or stem cell differentiation, among other phenomena.

Such sensors could also be used to monitor the effectiveness of stem cell therapies, Jasanoff says.

“As stem cell therapies are developed, it’s going to be necessary to have noninvasive tools that enable you to measure them,” he says. Without this kind of monitoring, it would be difficult to determine what effect the treatment is having, or why it might not be working.

The researchers are now working on adapting the magnetic sensors to work in mammalian cells. They are also trying to make the engineered ferritin even more strongly magnetic.

Young brains can take on new functions

In 2011, MIT neuroscientist Rebecca Saxe and colleagues reported that in blind adults, brain regions normally dedicated to vision processing instead participate in language tasks such as speech and comprehension. Now, in a study of blind children, Saxe’s lab has found that this transformation occurs very early in life, before the age of 4.

The study, appearing in the Journal of Neuroscience, suggests that the brains of young children are highly plastic, meaning that regions usually specialized for one task can adapt to new and very different roles. The findings also help to define the extent to which this type of remodeling is possible.

“In some circumstances, patches of cortex appear to take on other roles than the ones that they most typically have,” says Saxe, a professor of cognitive neuroscience and an associate member of MIT’s McGovern Institute for Brain Research. “One question that arises from that is, ‘What is the range of possible differences between what a cortical region typically does and what it could possibly do?’”

The paper’s lead author is Marina Bedny, a former MIT postdoc who is now an assistant professor at Johns Hopkins University. MIT graduate student Hilary Richardson is also an author of the paper.

Brain reorganization

The brain’s cortex, which carries out high-level functions such as thought, sensory processing, and initiation of movement, is made of sheets of neurons, each dedicated to a certain role. Within the visual system, located primarily in the occipital lobe, most neurons are tuned to respond only to a very specific aspect of visual input, such as brightness, orientation, or location in the field of view.

“There’s this big fundamental question, which is, ‘How did that organization get there, and to what degree can it be changed?’” Saxe says.

One possibility is that neurons in each patch of cortex have evolved to carry out specific roles, and can do nothing else. At the other extreme is the possibility that any patch of cortex can be recruited to perform any kind of computational task.

“The reality is somewhere in between those two,” Saxe says.

To study the extent to which cortex can change its function, scientists have focused on the visual cortex because they can learn a great deal about it by studying people who were born blind.

A landmark 1996 study of blind people found that their visual regions could participate in a nonvisual task — reading Braille. Some scientists theorized that perhaps the visual cortex is recruited for reading Braille because, like vision, it requires discriminating very fine-grained patterns.

However, in their 2011 study, Saxe and Bedny found that the visual cortex of blind adults also responds to spoken language. “That was weird, because processing auditory language doesn’t require the kind of fine-grained spatial discrimination that Braille does,” Saxe says.

She and Bedny hypothesized that auditory language processing may develop in the occipital cortex by piggybacking onto the Braille-reading function. To test that idea, they began studying congenitally blind children, including some who had not learned Braille yet. They reasoned that if their hypothesis were correct, the occipital lobe would be gradually recruited for language processing as the children learned Braille.

However, they found that this was not the case. Instead, children as young as 4 already have language-related activity in the occipital lobe.

“The response of occipital cortex to language is not affected by Braille acquisition,” Saxe says. “It happens before Braille and it doesn’t increase with Braille.”

Language-related occipital activity was similar among all of the 19 blind children, who ranged in age from 4 to 17, suggesting that the entire process of occipital recruitment for language processing takes place before the age of 4, Saxe says. Bedny and Saxe have previously shown that this transition occurs only in people blind from birth, suggesting that there is an early critical period after which the cortex loses much of its plasticity.

The new study represents a huge step forward in understanding how the occipital cortex can take on new functions, says Ione Fine, an associate professor of psychology at the University of Washington.

“One thing that has been missing is an understanding of the developmental timeline,” says Fine, who was not involved in the research. “The insight here is that you get plasticity for language separate from plasticity for Braille and separate from plasticity for auditory processing.”

Language skills

The findings raise the question of how the extra language-processing centers in the occipital lobe affect language skills.

“This is a question we’ve always wondered about,” Saxe says. “Does it mean you’re better at those functions because you have more of your cortex doing it? Does it mean you’re more resilient in those functions because now you have more redundancy in your mechanism for doing it? You could even imagine the opposite: Maybe you’re less good at those functions because they’re distributed in an inefficient or atypical way.”

There are hints that the occipital lobe’s contribution to language-related functions “takes the pressure off the frontal cortex,” where language processing normally occurs, Saxe says. Other researchers have shown that suppressing left frontal cortex activity with transcranial magnetic stimulation interferes with language function in sighted people, but not in the congenitally blind.

This leads to the intriguing prediction that a congenitally blind person who suffers a stroke in the left frontal cortex may retain much more language ability than a sighted person would, Saxe says, although that hypothesis has not been tested.

Saxe’s lab is now studying children under 4 to try to learn more about how cortical functions develop early in life, while Bedny is investigating whether the occipital lobe participates in functions other than language in congenitally blind people.

Study links brain anatomy, academic achievement, and family income

Many years of research have shown that for students from lower-income families, standardized test scores and other measures of academic success tend to lag behind those of wealthier students.

A new study led by researchers at MIT and Harvard University offers another dimension to this so-called “achievement gap”: After imaging the brains of high- and low-income students, they found that the higher-income students had thicker brain cortex in areas associated with visual perception and knowledge accumulation. Furthermore, these differences also correlated with one measure of academic achievement — performance on standardized tests.

“Just as you would expect, there’s a real cost to not living in a supportive environment. We can see it not only in test scores, in educational attainment, but within the brains of these children,” says MIT’s John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, professor of brain and cognitive sciences, and one of the study’s authors. “To me, it’s a call to action. You want to boost the opportunities for those for whom it doesn’t come easily in their environment.”

This study did not explore possible reasons for these differences in brain anatomy. However, previous studies have shown that lower-income students are more likely to suffer from stress in early childhood, have more limited access to educational resources, and receive less exposure to spoken language early in life. These factors have all been linked to lower academic achievement.

In recent years, the achievement gap in the United States between high- and low-income students has widened, even as gaps along lines of race and ethnicity have narrowed, says Martin West, an associate professor of education at the Harvard Graduate School of Education and an author of the new study.

“The gap in student achievement, as measured by test scores between low-income and high-income students, is a pervasive and longstanding phenomenon in American education, and indeed in education systems around the world,” he says. “There’s a lot of interest among educators and policymakers in trying to understand the sources of those achievement gaps, but even more interest in possible strategies to address them.”

Allyson Mackey, a postdoc at MIT’s McGovern Institute for Brain Research, is the lead author of the paper, which appears in the journal Psychological Science. Other authors are postdoc Amy Finn; graduate student Julia Leonard; Drew Jacoby-Senghor, a postdoc at Columbia Business School; and Christopher Gabrieli, chair of the nonprofit Transforming Education.

Explaining the gap

The study included 58 students — 23 from lower-income families and 35 from higher-income families, all aged 12 or 13. Low-income students were defined as those who qualify for a free or reduced-price school lunch.

The researchers compared students’ scores on the Massachusetts Comprehensive Assessment System (MCAS) with brain scans of a region known as the cortex, which is key to functions such as thought, language, sensory perception, and motor command.

Using magnetic resonance imaging (MRI), they discovered differences in the thickness of parts of the cortex in the temporal and occipital lobes, whose primary roles are in vision and storing knowledge. Those differences correlated to differences in both test scores and family income. In fact, differences in cortical thickness in these brain regions could explain as much as 44 percent of the income achievement gap found in this study.
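“Variance explained” is the square of a correlation coefficient: if cortical thickness accounted for 44 percent of the variance in scores, the underlying correlation would be about sqrt(0.44) ≈ 0.66. The toy illustration below uses entirely simulated numbers (thickness values, score model, and noise level are invented for the example, not drawn from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a cohort the size of the study's sample (58 students).
n = 58
thickness = rng.normal(2.5, 0.2, n)              # cortical thickness in mm (simulated)
scores = 10 * thickness + rng.normal(0, 2.3, n)  # test scores (simulated)

# Pearson correlation between thickness and scores, and the
# corresponding fraction of score variance "explained."
r = np.corrcoef(thickness, scores)[0, 1]
r_squared = r ** 2
print(f"r = {r:.2f}, variance explained = {r_squared:.0%}")
```

With these simulation settings the population correlation is roughly 0.66, so r² lands in the neighborhood of the 44 percent figure reported in the study; the point of the sketch is only how the statistic is computed, not its value.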

Previous studies have also shown brain anatomy differences associated with income, but did not link those differences to academic achievement.

“A number of labs have reported differences in children’s brain structures as a function of family income, but this is the first to relate that to variation in academic achievement,” says Kimberly Noble, an assistant professor of pediatrics at Columbia University who was not part of the research team.

In most other measures of brain anatomy, the researchers found no significant differences. The amount of white matter — the bundles of axons that connect different parts of the brain — did not differ, nor did the overall surface area of the brain cortex.

The researchers point out that the structural differences they did find are not necessarily permanent. “There’s so much strong evidence that brains are highly plastic,” says Gabrieli, who is also a member of the McGovern Institute. “Our findings don’t mean that further educational support, home support, all those things, couldn’t make big differences.”

In a follow-up study, the researchers hope to learn more about what types of educational programs might help to close the achievement gap, and if possible, investigate whether these interventions also influence brain anatomy.

“Over the past decade we’ve been able to identify a growing number of educational interventions that have managed to have notable impacts on students’ academic achievement as measured by standardized tests,” West says. “What we don’t know anything about is the extent to which those interventions — whether it be attending a very high-performing charter school, or being assigned to a particularly effective teacher, or being exposed to a high-quality curricular program — improve test scores by altering some of the differences in brain structure that we’ve documented, or whether they have those effects by other means.”

The research was funded by the Bill and Melinda Gates Foundation and the National Institutes of Health.