Rethinking mental illness treatment

McGovern researchers are finding neural markers that could help improve treatment for psychiatric patients.

Ten years ago, Jim and Pat Poitras committed $20M to the McGovern Institute to establish the Poitras Center for Affective Disorders Research. The Poitras family had been longtime supporters of MIT, and because they had seen mental illness in their own family, they decided to support an ambitious new program at the McGovern Institute, with the goal of understanding the fundamental biological basis of depression, bipolar disorder, schizophrenia and other major psychiatric disorders.

The gift came at an opportune time, as the field was entering a new phase of discovery, with rapid advances in psychiatric genomics and brain imaging, and with the emergence of new technologies for genome editing and for the development of animal models. Over the past ten years, the Poitras Center has supported work in each of these areas, including Feng Zhang’s work on CRISPR-based genome editing, and Guoping Feng’s work on animal models for autism, schizophrenia and other psychiatric disorders.

This reflects a long-term strategy, says Robert Desimone, director of the McGovern Institute, who oversees the Poitras Center. “But we must not lose sight of the overall goal, which is to benefit human patients. Insights from animal models and genomic medicine have the potential to transform the treatments of the future, but we are also interested in the nearer term, and in what we can do right now.”

One area where technology can have a near-term impact is human brain imaging, and in collaboration with clinical researchers at McLean Hospital, Massachusetts General Hospital and other institutions, the Poitras Center has supported an ambitious program to bring human neuroimaging closer to the clinic.

Discovering psychiatry’s crystal ball

A fundamental problem in psychiatry is that there are no biological markers for diagnosing mental illness or for indicating how best to treat it. Treatment decisions are based entirely on symptoms, and doctors and their patients will typically try one treatment, then if it does not work, try another, and perhaps another. The success rates for the first treatments are often less than 50%, and finding what works for an individual patient often means a long and painful process of trial and error.


McGovern research scientist Susan Whitfield-Gabrieli and her colleagues are hoping to change this picture, with the help of brain imaging. Their findings suggest that brain scans can hold valuable information for psychiatrists and their patients. “We need a paradigm shift in how we use imaging. It can be used for more than research,” says Whitfield-Gabrieli, who is a member of McGovern Investigator John Gabrieli’s lab. “It would be a really big boost to be able to use it to personalize psychiatric medicine.”

One of Whitfield-Gabrieli’s goals is to find markers that can predict which treatments will work for which patients. Another is to find markers that can predict a person’s risk of developing disease in the future, allowing doctors to intervene before symptoms first appear. All of these markers need further validation before they are ready for the clinic, but they have the potential to address a dire need for better treatment of psychiatric disease.

A brain at rest

For Whitfield-Gabrieli, who both collaborates with and is married to Gabrieli, that paradigm shift began when she started to study the resting brain using functional magnetic resonance imaging (fMRI). Most brain imaging studies require the subject to perform a mental task in the scanner, but these tasks are time-consuming and often hard to replicate in a clinical setting. In contrast, resting state imaging requires no task: the subject simply lies in the scanner and lets the mind wander. The patterns of activity can reveal functional connections within the brain, and are reliably consistent from study to study.
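In computational terms, a resting-state scan reduces to one signal time course per brain region, and the “functional connections” are typically estimated as correlations between those time courses. The sketch below is a minimal illustration using simulated data; a real analysis pipeline adds preprocessing steps such as motion correction and nuisance regression, which are omitted here.

```python
import numpy as np

def connectivity_matrix(roi_timeseries):
    """Pearson-correlation functional connectivity.

    roi_timeseries: (n_timepoints, n_rois) array, one BOLD time
    course per region of interest.
    """
    return np.corrcoef(roi_timeseries, rowvar=False)

# Simulated example: 200 timepoints, 4 regions sharing a common
# slow fluctuation, as regions of one network would at rest.
rng = np.random.default_rng(0)
shared = rng.standard_normal((200, 1))
data = 0.7 * shared + rng.standard_normal((200, 4))

fc = connectivity_matrix(data)
print(fc.shape)  # a 4-by-4 matrix: entry [i, j] estimates the coupling of regions i and j
```

Because the regions share a common fluctuation, the off-diagonal correlations come out positive, mimicking the coherent within-network activity seen in real resting-state networks such as the default mode network.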

Whitfield-Gabrieli thought resting state scanning had the potential to help patients because it is simple and easy to perform.

“Even a 5-minute scan can contain useful information that could help people,” says Satrajit Ghosh, a principal research scientist in the Gabrieli lab who works closely with Whitfield-Gabrieli.

Whitfield-Gabrieli and her clinical collaborator Larry Seidman at Harvard Medical School decided to study resting state activity in patients with schizophrenia. They found a pattern of activity strikingly different from that of typical brains. The patients showed unusually strong activity in a set of interconnected brain regions known as the default mode network, which is typically activated during introspection. It is normally suppressed when a person attends to the outside world, but schizophrenia patients failed to show this suppression.

“The patient isn’t able to toggle between internal processing and external processing the way a typical individual can,” says Whitfield-Gabrieli, whose work is supported by the Poitras Center for Affective Disorders Research.

Since then, the team has observed similar disturbances in the default network in other disorders, including depression, anxiety, bipolar disorder, and ADHD. “We knew we were onto something interesting,” says Whitfield-Gabrieli. “But we kept coming back to the question: how can brain imaging help patients?”

fMRI on patients

Many imaging studies aim to understand the biological basis of disease and ultimately to guide the development of new drugs or other treatments. But this is a long-term goal, and Whitfield-Gabrieli wanted to find ways that brain imaging could have a more immediate impact. So she and Ghosh decided to use fMRI to look at differences among individual patients, and to focus on differences in how they responded to treatment.

“It gave us something objective to measure,” explains Ghosh. “Someone goes through a treatment, and they either get better or they don’t.” The project also had appeal for Ghosh because it was an opportunity for him to use his expertise in machine learning and other computational tools to build systems-level models of the brain.

For the first study, the team decided to focus on social anxiety disorder (SAD), which is typically treated with either prescription drugs or cognitive behavioral therapy (CBT). Both are moderately effective, but many patients do not respond to the first treatment they try.

The team began with a small study to test whether scans performed before the onset of treatment could predict who would respond best to it. Working with Stefan Hofmann, a clinical psychologist at Boston University, they scanned 38 SAD patients before they began a 12-week course of CBT. At the end of the treatment, the patients were evaluated for clinical improvement, and the researchers examined the scans for patterns of activity that correlated with that improvement. The results were encouraging: predictions based on the scan data were five-fold better than predictions based on the existing method, symptom severity at the time of diagnosis.

The researchers then turned to another condition, ADHD, which presents a similar clinical challenge, in that commonly used drugs—such as Adderall or Ritalin—work well, but not for everyone. So the McGovern team began a collaboration with psychiatrist Joseph Biederman, Chief of Clinical and Research Programs in Pediatric Psychopharmacology and Adult ADHD at Massachusetts General Hospital, on a similar study looking for markers of treatment response.

The study is still ongoing, and it will be some time before results emerge, but the researchers are optimistic. “If we could predict who would respond to which treatment and avoid months of trial and error, it would be totally transformative for ADHD,” says Biederman.

Another goal is to predict in advance who is likely to develop a given disease in the future. The researchers have scanned children who have close relatives with schizophrenia or depression, and who are therefore at increased risk of developing these disorders themselves. Surprisingly, the children show patterns of resting state connectivity similar to those of patients.

“I was really intrigued by this,” says Whitfield-Gabrieli. “Even though these children are not sick, they have the same profile as adults who are.”

Whitfield-Gabrieli and Seidman are now expanding their study through a collaboration with clinical researchers at the Shanghai Mental Institute in China, who plan to image and then follow 225 people who are showing early risk signs for schizophrenia. They hope to find markers that predict who will develop the disease and who will not.

“While there are no drugs available to prevent schizophrenia, it may be possible to reduce the risk or severity of the disorder through CBT, or through interventions that reduce stress and improve sleep and well-being,” says Whitfield-Gabrieli. “One likely key to success is early identification of those at highest risk. If we could diagnose early, we could do early interventions and potentially prevent disorders.”

From association to prediction

The search for predictive markers represents a departure from traditional psychiatric imaging studies, in which a group of patients is compared with a control group of healthy subjects. Studies of this type can reveal average differences between the groups, which may provide clues to the underlying biology of the disease. But they don’t provide information about individual patients, and so they have not been incorporated into clinical practice.

The difference is critical for clinicians, says Biederman. “I treat individuals, not groups. To bring predictive scans to the clinic, we need to be sure the individual scan is informative for the person you are treating.”

To develop these predictions, Whitfield-Gabrieli and Ghosh must first use sophisticated computational methods such as ‘deep learning’ to identify patterns in their data and to build models that relate the patterns to the clinical outcomes. They must then show that these models can generalize beyond the original study population—for example, that predictions based on patients from Boston can be applied to patients from Shanghai. The eventual goal is a model that can analyze a previously unseen brain scan from any individual, and predict with high confidence whether that person will (for example) develop schizophrenia or respond successfully to a particular therapy.
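As a schematic of that train-here, generalize-there logic, the sketch below fits a simple linear classifier (a stand-in for the deep-learning models the text mentions) on simulated connectivity features from one cohort and scores it on a second, held-out cohort. All data, sizes, and names are illustrative, not the team’s actual method or results.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_site(n):
    """Hypothetical cohort: one row of connectivity features per
    patient; label +1 = responded to treatment, -1 = did not.
    Feature 0 carries the (simulated) predictive signal."""
    X = rng.standard_normal((n, 10))
    y = np.sign(X[:, 0] + 0.5 * rng.standard_normal(n))
    return X, y

X_train, y_train = make_site(100)  # e.g. a Boston cohort
X_test, y_test = make_site(50)     # e.g. a Shanghai cohort

# Least-squares linear classifier: w = argmin ||Xw - y||^2,
# trained on one site only.
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# The generalization test: accuracy on the unseen site.
pred = np.sign(X_test @ w)
accuracy = np.mean(pred == y_test)
print(f"cross-site accuracy: {accuracy:.2f}")
```

The key point is the evaluation design, not the model: a marker is only clinically useful if a model trained on one population stays accurate on patients it has never seen, scanned elsewhere.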

Achieving this will be challenging, because it will require scanning and following large numbers of subjects from diverse demographic groups—thousands of people, not just the tens or hundreds enrolled in most clinical studies. Collaborations with large hospitals, such as the one in Shanghai, can help. Whitfield-Gabrieli has also received funding to collect imaging, clinical, and behavioral data from over 200 adolescents with depression and anxiety, as part of the National Institutes of Health’s Human Connectome effort. These data, collected in collaboration with clinicians at McLean Hospital, MGH and Boston University, will be available not only to the Gabrieli team, but to researchers anywhere to analyze. This is important, because no one team or center can do it alone, says Ghosh. “Data must be collected by many and shared by all.”

The ultimate goal is to study as many patients as possible now so that the tools can help many more later. “Someday, a person will be able to go to a hospital, get a brain scan, charge it to their insurance, and know that it helped the doctor select the best treatment,” says Ghosh. “We’re still far away from that. But that is what we want to work towards.”

Neuroscientists get a glimpse into the workings of the baby brain

In adults, certain regions of the brain’s visual cortex respond preferentially to specific types of input, such as faces or objects — but how and when those preferences arise has long puzzled neuroscientists.

One way to help answer that question is to study the brains of very young infants and compare them to adult brains. However, scanning the brains of awake babies in an MRI machine has proven difficult.

Now, neuroscientists at MIT have overcome that obstacle, adapting their MRI scanner to make it easier to scan infants’ brains as the babies watch movies featuring different types of visual input. Using these data, the team found that in some ways, the organization of infants’ brains is surprisingly similar to that of adults. Specifically, brain regions that respond to faces in adults do the same in babies, as do regions that respond to scenes.

“It suggests that there’s a stronger biological predisposition than I would have guessed for specific cortical regions to end up with specific functions,” says Rebecca Saxe, a professor of brain and cognitive sciences and member of MIT’s McGovern Institute for Brain Research.

Saxe is the senior author of the study, which appears in the Jan. 10 issue of Nature Communications. The paper’s lead author is former MIT graduate student Ben Deen, who is now a postdoc at Rockefeller University.

MRI adaptations

Functional magnetic resonance imaging (fMRI) is the go-to technique for studying brain function in adults. However, very few researchers have taken on the challenge of trying to scan babies’ brains, especially while the babies are awake.

“Babies and MRI machines have very different needs,” Saxe points out. “Babies would like to do activities for two or three minutes and then move on. They would like to be sitting in a comfortable position, and in charge of what they’re looking at.”

On the other hand, “MRI machines would like to be loud and dark and have a person show up on schedule, stay still for the entire time, pay attention to one thing for two hours, and follow instructions closely,” she says.

To make the setup more comfortable for babies, the researchers made several modifications to the MRI machine and to their usual experimental protocols. First, they built a special coil (part of the MRI scanner that acts as a radio antenna) that allows the baby to recline in a seat similar to a car seat. A mirror in front of the baby’s face allows him or her to watch videos, and there is space in the machine for a parent or one of the researchers to sit with the baby.

The researchers also made the scanner much less noisy than a typical MRI machine. “It’s quieter than a loud restaurant,” Saxe says. “The baby can hear their parent talking over the sound of the scanner.”

Once the babies, who were 4 to 6 months old, were in the scanner, the researchers played the movies continuously while scanning the babies’ brains. However, they only used data from the time periods when the babies were actively watching the movies. From 26 hours of scanning 17 babies, the researchers obtained four hours of usable data from nine babies.

“The sheer tenacity of this work is truly amazing,” says Charles Nelson, a professor of pediatrics at Boston Children’s Hospital, who was not involved in the research. “The fact that they pulled this off is incredibly novel.”

Obtaining this data allowed the MIT team to study how infants’ brains respond to specific types of sensory input, and to compare their responses with those of adults.

“The big-picture question is, how does the adult brain come to have the structure and function that you see in adulthood? How does it get like that?” Saxe says. “A lot of the answer to that question will depend on having the tools to be able to see the baby brain in action. The more we can see, the more we can ask that kind of question.”

Distinct preferences

The researchers showed the babies videos of either smiling children or outdoor scenes such as a suburban street seen from a moving car. Distinguishing social scenes from the physical environment is one of the main high-level divisions that our brains make when interpreting the world.

“The questions we’re asking are about how you understand and organize your world, with vision as the main modality for getting you into these very different mindsets,” Saxe says. “In adults, there are brain regions that prefer to look at faces and socially relevant things, and brain regions that prefer to look at environments and objects.”

The scans revealed that many regions of the babies’ visual cortex showed the same preferences for scenes or faces seen in adult brains. This suggests that these preferences form within the first few months of life and refutes the hypothesis that it takes years of experience interpreting the world for the brain to develop the responses that it shows in adulthood.

The researchers also found some differences in the way that babies’ brains respond to visual stimuli. One is that they do not seem to have regions found in the adult brain that are “highly selective,” meaning these regions prefer features such as human faces over any other kind of input, including human bodies or the faces of other animals. The babies also showed some differences in their responses when shown examples from four different categories — not just faces and scenes but also bodies and objects.

“We believe that the adult-like organization of infant visual cortex provides a scaffolding that guides the subsequent refinement of responses via experience, ultimately leading to the strongly specialized regions observed in adults,” Deen says.

Saxe and colleagues now hope to try to scan more babies between the ages of 3 and 8 months so they can get a better idea of how these vision-processing regions change over the first several months of life. They also hope to study even younger babies to help them discover when these distinctive brain responses first appear.

Distinctive brain pattern may underlie dyslexia

A distinctive neural signature found in the brains of people with dyslexia may explain why these individuals have difficulty learning to read, according to a new study from MIT neuroscientists.

The researchers discovered that in people with dyslexia, the brain has a diminished ability to acclimate to a repeated input — a trait known as neural adaptation. For example, when dyslexic students see the same word repeatedly, brain regions involved in reading do not show the same adaptation seen in typical readers.

This suggests that the brain’s plasticity, which underpins its ability to learn new things, is reduced, says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

“It’s a difference in the brain that’s not about reading per se, but it’s a difference in perceptual learning that’s pretty broad,” says Gabrieli, who is the study’s senior author. “This is a path by which a brain difference could influence learning to read, which involves so many demands on plasticity.”

Former MIT graduate student Tyler Perrachione, who is now an assistant professor at Boston University, is the lead author of the study, which appears in the Dec. 21 issue of Neuron.

Reduced plasticity

The MIT team used magnetic resonance imaging (MRI) to scan the brains of young adults with and without reading difficulties as they performed a variety of tasks. In the first experiment, the subjects listened to a series of words read by either four different speakers or a single speaker.

The MRI scans revealed distinctive patterns of activity in each group of subjects. In nondyslexic people, areas of the brain that are involved in language showed neural adaption after hearing words said by the same speaker, but not when different speakers said the words. However, the dyslexic subjects showed much less adaptation to hearing words said by a single speaker.

Neurons that respond to a particular sensory input usually react strongly at first, but their response becomes muted as the input continues. This neural adaptation reflects chemical changes in neurons that make it easier for them to respond to a familiar stimulus, Gabrieli says. This phenomenon, known as plasticity, is key to learning new skills.

“You learn something upon the initial presentation that makes you better able to do it the second time, and the ease is marked by reduced neural activity,” Gabrieli says. “Because you’ve done something before, it’s easier to do it again.”

The researchers then ran a series of experiments to test how broad this effect might be. They asked subjects to look at a series of the same or different words; pictures of the same or different objects; and pictures of the same or different faces. In each case, they found that in people with dyslexia, the brain regions devoted to interpreting words, objects, and faces, respectively, did not show neural adaptation when the same stimuli were repeated multiple times.

“The brain location changed depending on the nature of the content that was being perceived, but the reduced adaptation was consistent across very different domains,” Gabrieli says.
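The adaptation measure behind these comparisons is conceptually simple: compare a region’s average response to novel versus repeated stimuli. Below is a hypothetical sketch with made-up numbers, not the study’s data or its actual analysis.

```python
import numpy as np

def adaptation_index(novel, repeated):
    """Fractional drop in response for repeated vs. novel stimuli.

    novel, repeated: arrays of per-trial response estimates (e.g.
    fMRI beta weights) for one region of interest. Near 0 means no
    adaptation; larger values mean stronger adaptation.
    """
    return (np.mean(novel) - np.mean(repeated)) / np.mean(novel)

# Illustrative values only: a typical reader's word-sensitive region
# responds less on repeats; in dyslexia the drop is much smaller.
typical = adaptation_index(novel=np.array([1.0, 1.1, 0.9]),
                           repeated=np.array([0.6, 0.5, 0.7]))
reduced = adaptation_index(novel=np.array([1.0, 1.1, 0.9]),
                           repeated=np.array([0.95, 1.0, 0.9]))
print(typical, reduced)  # the first index is markedly larger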

He was surprised to see that this effect was so widespread, appearing even during tasks that have nothing to do with reading; people with dyslexia have no documented difficulties in recognizing objects or faces.

He hypothesizes that the impairment shows up primarily in reading because deciphering letters and mapping them to sounds is such a demanding cognitive task. “There are probably few tasks people undertake that require as much plasticity as reading,” Gabrieli says.

Early appearance

In their final experiment, the researchers tested first and second graders with and without reading difficulties, and they found the same disparity in neural adaptation.

“We got almost the identical reduction in plasticity, which suggests that this is occurring quite early in learning to read,” Gabrieli says. “It’s not a consequence of a different learning experience over the years in struggling to read.”

Gabrieli’s lab now plans to study younger children to see if these differences might be apparent even before children begin to learn to read. They also hope to use other types of brain measurements such as magnetoencephalography (MEG) to follow the time course of the neural adaptation more closely.

The research was funded by the Ellison Medical Foundation, the National Institutes of Health, and a National Science Foundation Graduate Research Fellowship.

A radiation-free approach to imaging molecules in the brain

Scientists hoping to get a glimpse of molecules that control brain activity have devised a new probe that allows them to image these molecules without using any chemical or radioactive labels.

Currently the gold standard approach to imaging molecules in the brain is to tag them with radioactive probes. However, these probes offer low resolution and they can’t easily be used to watch dynamic events, says Alan Jasanoff, an MIT professor of biological engineering.

Jasanoff and his colleagues have developed new sensors consisting of proteins designed to detect a particular target; when the target is present, the sensors dilate blood vessels in the immediate area. This produces a change in blood flow that can be imaged with magnetic resonance imaging (MRI) or other imaging techniques.

“This is an idea that enables us to detect molecules that are in the brain at biologically low levels, and to do that with these imaging agents or contrast agents that can ultimately be used in humans,” Jasanoff says. “We can also turn them on and off, and that’s really key to trying to detect dynamic processes in the brain.”

In a paper appearing in the Dec. 2 issue of Nature Communications, Jasanoff and his colleagues used these probes to detect enzymes called proteases, but their ultimate goal is to use them to monitor the activity of neurotransmitters, which act as chemical messengers between brain cells.

The paper’s lead authors are postdoc Mitul Desai and former MIT graduate student Adrian Slusarczyk. Recent MIT graduate Ashley Chapin and postdoc Mariya Barch are also authors of the paper.

Indirect imaging

To make their probes, the researchers modified a naturally occurring peptide called calcitonin gene-related peptide (CGRP), which is active primarily during migraines or inflammation. The researchers engineered the peptides so that they are trapped within a protein cage that keeps them from interacting with blood vessels. When the peptides encounter proteases in the brain, the proteases cut the cages open and the CGRP causes nearby blood vessels to dilate. Imaging this dilation with MRI allows the researchers to determine where the proteases were detected.

“These are molecules that aren’t visualized directly, but instead produce changes in the body that can then be visualized very effectively by imaging,” Jasanoff says.

Proteases are sometimes used as biomarkers to diagnose diseases such as cancer and Alzheimer’s disease, but Jasanoff’s lab used them in this study mainly to demonstrate the validity of their approach. The researchers are now working on adapting these imaging agents to monitor neurotransmitters, such as dopamine and serotonin, that are critical to cognition and the processing of emotions.

To do that, the researchers plan to modify the cages surrounding the CGRP so that they can be removed by interaction with a particular neurotransmitter.

“What we want to be able to do is detect levels of neurotransmitter that are 100-fold lower than what we’ve seen so far. We also want to be able to use far less of these molecular imaging agents in organisms. That’s one of the key hurdles to trying to bring this approach into people,” Jasanoff says.

Jeff Bulte, a professor of radiology and radiological science at the Johns Hopkins School of Medicine, described the technique as “original and innovative,” while adding that its safety and long-term physiological effects will require more study.

“It’s interesting that they have designed a reporter without using any kind of metal probe or contrast agent,” says Bulte, who was not involved in the research. “An MRI reporter that works really well is the holy grail in the field of molecular and cellular imaging.”

Tracking genes

Another possible application for this type of imaging is to engineer cells so that the gene for CGRP is turned on at the same time that a gene of interest is turned on. That way, scientists could use the CGRP-induced changes in blood flow to track which cells are expressing the target gene, which could help them determine the roles of those cells and genes in different behaviors. Jasanoff’s team demonstrated the feasibility of this approach by showing that implanted cells expressing CGRP could be recognized by imaging.

“Many behaviors involve turning on genes, and you could use this kind of approach to measure where and when the genes are turned on in different parts of the brain,” Jasanoff says.

His lab is also working on ways to deliver the peptides without injecting them, which would require finding a way to get them to pass through the blood-brain barrier. This barrier separates the brain from circulating blood and prevents large molecules from entering the brain.

The research was funded by the National Institutes of Health BRAIN Initiative, the MIT Simons Center for the Social Brain, and fellowships from the Boehringer Ingelheim Fonds and the Friends of the McGovern Institute.

Finding a way in

Our perception of the world arises within the brain, based on sensory information that is sometimes ambiguous, allowing more than one interpretation. Familiar demonstrations of this point include the famous Necker cube and the “duck-rabbit” drawing, in which two different interpretations flip back and forth over time.

Another example is binocular rivalry, in which the two eyes are presented with different images that are perceived in alternation. Several years ago, this phenomenon caught the eye of Caroline Robertson, who is now a Harvard Fellow working in the lab of McGovern Investigator Nancy Kanwisher. Back when she was a graduate student at Cambridge University, Robertson realized that binocular rivalry might be used to probe the basis of autism, among the most mysterious of all brain disorders.

Robertson’s idea was based on the hypothesis that autism involves an imbalance between excitation and inhibition within the brain. Although widely supported by indirect evidence, this has been very difficult to test directly in human patients. Robertson realized that binocular rivalry might provide a way to perform such a test. The perceptual switches that occur during rivalry are thought to involve competition between different groups of neurons in the visual cortex, each group reinforcing its own interpretation via excitatory connections while suppressing the alternative interpretation through inhibitory connections. Thus, if the balance is altered in the brains of people with autism, the frequency of switching might also be different, providing a simple and easily measurable marker of the disease state.

To test this idea, Robertson recruited adults with and without autism, and presented them with two distinct and differently colored images in each eye. As expected, their perceptions switched back and forth between the two images, with short periods of mixed perception in between. This was true for both groups, but when she measured the timing of these switches, Robertson found that individuals with autism do indeed see the world in a measurably different way than people without the disorder. Individuals with autism cycle between the left and right images more slowly, with the intervening periods of mixed perception lasting longer than in people without autism. The more severe their autistic symptoms, as determined by a standard clinical behavioral evaluation, the greater the difference.
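The measures Robertson extracted reduce to duration statistics over the observer’s stream of perceptual reports. The toy sketch below, with invented report data, shows how switch counts and mixed-percept durations might be computed; the sampling scheme and labels are assumptions for illustration only.

```python
from itertools import groupby

# Perceptual report sampled once per second: 'L' = left-eye image
# dominant, 'R' = right-eye image dominant, 'M' = mixed percept.
# (Made-up data for illustration.)
reports = "LLLLMMRRRRRMLLLLLMMMRRRR"

# Collapse into runs of (percept, duration in seconds).
runs = [(p, len(list(g))) for p, g in groupby(reports)]

# Switch count: L/R alternations, ignoring mixed periods in between.
dominant = [p for p, _ in runs if p != "M"]
n_switches = sum(a != b for a, b in zip(dominant, dominant[1:]))

# Mean duration of mixed percepts, the measure that was longer
# in the participants with autism.
mixed = [d for p, d in runs if p == "M"]
mean_mixed = sum(mixed) / len(mixed)
print(n_switches, mean_mixed)
```

Slower cycling shows up as fewer switches per unit time, and the excitation/inhibition hypothesis predicts exactly the pattern Robertson observed: fewer L/R switches and longer mixed periods.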

Robertson had found a marker for autism that is more objective than current methods that involve one person assessing the behavior of another. The measure is immediate and relies on brain activity that happens automatically, without people thinking about it. “Sensation is a very simple place to probe,” she says.

A top-down approach

When she arrived in Kanwisher’s lab, Robertson wanted to use brain imaging to probe the basis for the perceptual phenomenon that she had discovered. With Kanwisher’s encouragement, she began by repeating the behavioral experiment with a new group of subjects, to check that her previous results were not a fluke. Having confirmed that the finding was real, she then scanned the subjects using an imaging method called Magnetic Resonance Spectroscopy (MRS), in which an MRI scanner is reprogrammed to measure concentrations of neurotransmitters and other chemicals in the brain. Kanwisher had never used MRS before, but when Robertson proposed the experiment, she was happy to try it. “Nancy’s the kind of mentor who could support the idea of using a new technique and guide me to approach it rigorously,” says Robertson.

For each of her subjects, Robertson scanned their brains to measure the amounts of two key neurotransmitters, glutamate, which is the main excitatory transmitter in the brain, and GABA, which is the main source of inhibition. When she compared the brain chemistry to the behavioral results in the binocular rivalry task, she saw something intriguing and unexpected. In people without autism, the amount of GABA in the visual cortex was correlated with the strength of the suppression, consistent with the idea that GABA enables signals from one eye to inhibit those from the other eye. But surprisingly, there was no such correlation in the autistic individuals—suggesting that GABA was somehow unable to exert its normal suppressive effect. It isn’t yet clear exactly what is going wrong in the brains of these subjects, but it’s an early flag, says Robertson. “The next step is figuring out which part of the pathway is disrupted.”

A bottom-up approach

Robertson’s approach starts from the top down, working backward from a measurable behavior to look for brain differences, but it isn’t the only way in. Another approach is to start with genes that are linked to autism in humans, and to understand how they affect neurons and brain circuits. This is the bottom-up approach of McGovern Investigator Guoping Feng, who studies a gene called Shank3 that codes for a protein that helps build synapses, the connections through which neurons send signals to each other. Several years ago Feng knocked out Shank3 in mice, and found that the mice exhibited behaviors reminiscent of human autism, including repetitive grooming, anxiety, and impaired social interaction and motor control.

These earlier studies involved a variety of different mutations that disabled the Shank3 gene. But when postdoc Yang Zhou joined Feng’s lab, he brought a new perspective. Zhou had come from a medical background and wanted to do an experiment more directly connected to human disease. So he suggested making a mouse version of a Shank3 mutation seen in human patients, and testing its effects.

Zhou’s experiment would require precise editing of the mouse Shank3 gene, previously a difficult and time-consuming task. But help was at hand, in the form of a collaboration with McGovern Investigator Feng Zhang, a pioneer in the development of genome-editing methods.

Using Zhang’s techniques, Zhou was able to generate mice with two different mutations: one that had been linked to human autism, and another that had been discovered in a few patients with schizophrenia.

The researchers found that mice with the autism-related mutation exhibited behavioral changes at a young age that paralleled behaviors seen in children with autism. They also found early changes in synapses within a brain region called the striatum. In contrast, mice with the schizophrenia-related gene appeared normal until adolescence, and then began to exhibit changes in behavior and also changes in the prefrontal cortex, a brain region that is implicated in human schizophrenia. “The consequences of the two different Shank3 mutations were quite different in certain aspects, which was very surprising to us,” says Zhou.

The fact that different mutations in just one gene can produce such different results illustrates exactly how complex these neuropsychiatric disorders can be. “Not only do we need to study different genes, but we also have to understand different mutations and which brain regions have what defects,” says Feng, who received funding from the Poitras Center for Affective Disorders Research and the Simons Center for the Social Brain. Robertson and Kanwisher were also supported by the Simons Center.

Surprising plasticity

The brain alterations that lead to autism are thought to arise early in development, long before the condition is diagnosed, raising concerns that it may be difficult to reverse the effects once the damage is done. With the Shank3 knockout mice, Feng and his team were able to approach this question in a new way, asking what would happen if the missing gene were to be restored in adulthood.

To find the answer, lab members Yuan Mei and Patricia Monteiro, along with Zhou, studied another strain of mice, in which the Shank3 gene was switched off but could be reactivated at any time by adding a drug to their diet. When adult mice were tested six weeks after the gene was switched back on, they no longer showed repetitive grooming behaviors, and they also showed normal levels of social interaction with other mice, despite having grown up without a functioning Shank3 gene. Examination of their brains confirmed that many of the synaptic alterations were also rescued when the gene was restored.

Not every symptom was reversed by this treatment; even after six weeks or more of restored Shank3 expression, the mice continued to show heightened anxiety and impaired motor control. But even these deficits could be prevented if the Shank3 gene was restored earlier in life, soon after birth.

The results are encouraging because they indicate a surprising degree of brain plasticity, persisting into adulthood. If the results can be extrapolated to human patients, they suggest that even in adulthood, autism may be at least partially reversible if the right treatment can be found. “This shows us the possibility,” says Zhou. “If we could somehow put back the gene in patients who are missing it, it could help improve their life quality.”

Converging paths

Robertson and Feng are approaching the challenge of autism from different starting points, but already there are signs of convergence. Feng is finding early signs that his Shank3 mutant mice may have an altered balance of inhibitory and excitatory circuits, consistent with what Robertson and Kanwisher have found in humans.

Feng is continuing to study these mice, and he also hopes to study the effects of a similar mutation in non-human primates, whose brains and behaviors are more similar to those of humans than those of rodents. Robertson, meanwhile, is planning to establish a version of the binocular rivalry test in animal models, where it is possible to alter the balance between inhibition and excitation experimentally (for example, via a genetic mutation or a drug treatment). If this leads to changes in binocular rivalry, it would strongly support the link to the perceptual changes seen in humans.

One challenge, says Robertson, will be to develop new methods to measure the perceptions of mice and other animals. “The mice can’t tell us what they are seeing,” she says. “But it would also be useful in humans, because it would allow us to study young children and patients who are non-verbal.”

A multi-pronged approach

The imbalance hypothesis is a promising lead, but no single explanation is likely to encompass all of autism, according to McGovern director Bob Desimone. “Autism is a notoriously heterogeneous condition,” he explains. “We need to try multiple approaches in order to maximize the chance of success.”

McGovern researchers are doing exactly that, with projects underway that range from scanning children to developing new molecular and microscopic methods for examining brain changes in animal disease models. Although genetic studies provide some of the strongest clues, Desimone notes that there is also evidence for environmental contributions to autism and other brain disorders. “One that’s especially interesting to us is maternal infection and inflammation, which in mice at least can affect brain development in ways we’re only beginning to understand.”

The ultimate goal, says Desimone, is to connect the dots and to understand how these diverse human risk factors affect brain function. “Ultimately, we want to know what these different pathways have in common,” he says. “Then we can come up with rational strategies for the development of new treatments.”

How the brain builds panoramic memory

When asked to visualize your childhood home, you can probably picture not only the house you lived in, but also the buildings next door and across the street. MIT neuroscientists have now identified two brain regions that are involved in creating these panoramic memories.

These brain regions help us to merge fleeting views of our surroundings into a seamless, 360-degree panorama, the researchers say.

“Our understanding of our environment is largely shaped by our memory for what’s currently out of sight,” says Caroline Robertson, a postdoc at MIT’s McGovern Institute for Brain Research and a junior fellow of the Harvard Society of Fellows. “What we were looking for are hubs in the brain where your memories for the panoramic environment are integrated with your current field of view.”

Robertson is the lead author of the study, which appears in the Sept. 8 issue of the journal Current Biology. Nancy Kanwisher, the Walter A. Rosenblith Professor of Brain and Cognitive Sciences and a member of the McGovern Institute, is the paper’s senior author.

Building memories

As we look at a scene, visual information flows from our retinas into the brain, which has regions that are responsible for processing different elements of what we see, such as faces or objects. The MIT team suspected that areas involved in processing scenes — the occipital place area (OPA), the retrosplenial complex (RSC), and parahippocampal place area (PPA) — might also be involved in generating panoramic memories of a place such as a street corner.

If this were true, when you saw two images of houses that you knew were across the street from each other, they would evoke similar patterns of activity in these specialized brain regions. Two houses from different streets would not induce similar patterns.

“Our hypothesis was that as we begin to build memory of the environment around us, there would be certain regions of the brain where the representation of a single image would start to overlap with representations of other views from the same scene,” Robertson says.

The researchers explored this hypothesis using immersive virtual reality headsets, which allowed them to show people many different panoramic scenes. In this study, the researchers showed participants images from 40 street corners in Boston’s Beacon Hill neighborhood. The images were presented in two ways: Half the time, participants saw a 100-degree stretch of a 360-degree scene, but the other half of the time, they saw two noncontinuous stretches of a 360-degree scene.

After showing participants these panoramic environments, the researchers then showed them 40 pairs of images and asked if they came from the same street corner. Participants were much better able to determine if pairs came from the same corner if they had seen the two scenes linked in the 100-degree image than if they had seen them unlinked.

Brain scans revealed that when participants saw two images that they knew were linked, the response patterns in the RSC and OPA regions were similar. However, this was not the case for image pairs that the participants had not seen as linked. This suggests that the RSC and OPA, but not the PPA, are involved in building panoramic memories of our surroundings, the researchers say.
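The comparison in these scans is a pattern-similarity analysis: each image evokes a pattern of activity across many voxels in a region, and two patterns are compared by correlating them. The sketch below uses tiny made-up "voxel" vectors purely to show the logic; the real analysis used full fMRI response patterns.

```python
# Toy sketch of pattern-similarity analysis, with invented "voxel" vectors.
from math import sqrt

def pattern_similarity(a, b):
    """Pearson correlation between two multi-voxel activation patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sqrt(sum((x - ma) ** 2 for x in a))
    sb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Hypothetical response patterns in a scene region (one number per voxel).
corner_view_left  = [0.9, 0.1, 0.8, 0.2, 0.7]   # one view of a corner
corner_view_right = [0.8, 0.2, 0.9, 0.1, 0.6]   # linked view, same corner
other_corner_view = [0.1, 0.9, 0.2, 0.8, 0.3]   # view of a different corner

linked   = pattern_similarity(corner_view_left, corner_view_right)
unlinked = pattern_similarity(corner_view_left, other_corner_view)
print(f"linked pair similarity:   {linked:+.2f}")
print(f"unlinked pair similarity: {unlinked:+.2f}")
```

In regions that build panoramic memories, linked views yield high pattern similarity, as in this toy example, while views of unrelated places do not.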

Priming the brain

In another experiment, the researchers tested whether one image could “prime” the brain to recall an image from the same panoramic scene. To do this, they showed participants a scene and asked them whether it had been on their left or right when they first saw it. Just beforehand, they showed them either another image from the same street corner or an unrelated image. Participants performed much better when primed with the related image.

“After you have seen a series of views of a panoramic environment, you have explicitly linked them in memory to a known place,” Robertson says. “They also evoke overlapping visual representations in certain regions of the brain, which is implicitly guiding your upcoming perceptual experience.”

The research was funded by the National Science Foundation Science and Technology Center for Brains, Minds, and Machines; and the Harvard Milton Fund.

Study finds brain connections key to learning

A new study from MIT reveals that a brain region dedicated to reading has connections for that skill even before children learn to read.

By scanning the brains of children before and after they learned to read, the researchers found that they could predict the precise location where each child’s visual word form area (VWFA) would develop, based on the connections of that region to other parts of the brain.

Neuroscientists have long wondered why the brain has a region exclusively dedicated to reading — a skill that is unique to humans and only developed about 5,400 years ago, which is not enough time for evolution to have reshaped the brain for that specific task. The new study suggests that the VWFA, located in an area that receives visual input, has pre-existing connections to brain regions associated with language processing, making it ideally suited to become devoted to reading.

“Long-range connections that allow this region to talk to other areas of the brain seem to drive function,” says Zeynep Saygin, a postdoc at MIT’s McGovern Institute for Brain Research. “As far as we can tell, within this larger fusiform region of the brain, only the reading area has these particular sets of connections, and that’s how it’s distinguished from adjacent cortex.”

Saygin is the lead author of the study, which appears in the Aug. 8 issue of Nature Neuroscience. Nancy Kanwisher, the Walter A. Rosenblith Professor of Brain and Cognitive Sciences and a member of the McGovern Institute, is the paper’s senior author.

Specialized for reading

The brain’s cortex, where most cognitive functions occur, has areas specialized for reading as well as face recognition, language comprehension, and many other tasks. Neuroscientists have hypothesized that the locations of these functions may be determined by prewired connections to other parts of the brain, but they have had few good opportunities to test this hypothesis.

Reading presents a unique opportunity to study this question because it is not learned right away, giving scientists a chance to examine the brain region that will become the VWFA before children know how to read. This region, located in the fusiform gyrus, at the base of the brain, is responsible for recognizing strings of letters.

Children participating in the study were scanned twice — at 5 years of age, before learning to read, and at 8 years, after they learned to read. In the scans at age 8, the researchers precisely defined the VWFA for each child by using functional magnetic resonance imaging (fMRI) to measure brain activity as the children read. They also used a technique called diffusion-weighted imaging to trace the connections between the VWFA and other parts of the brain.

The researchers saw no indication from fMRI scans that the VWFA was responding to words at age 5. However, the region that would become the VWFA was already different from adjacent cortex in its connectivity patterns. These patterns were so distinctive that they could be used to accurately predict the precise location where each child’s VWFA would later develop.
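The prediction logic can be pictured as a nearest-template match over connectivity fingerprints. The sketch below is schematic and every region name and number is invented for illustration; the actual study used rich diffusion-imaging connectivity data and a more sophisticated statistical model.

```python
# Schematic sketch of fingerprint-based prediction (all numbers invented):
# each candidate patch of fusiform cortex gets a "connectivity fingerprint"
# (connection strengths to a few other regions), and the future VWFA is
# predicted as the patch most similar to a template fingerprint derived
# from other children.

def similarity(a, b):
    """Negative Euclidean distance: higher means more similar."""
    return -sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Template: strong connections to language regions, weak to a face region.
vwfa_template = [0.9, 0.8, 0.1]  # [language region A, language region B, face region]

# Fingerprints of three candidate fusiform patches in one child at age 5.
candidates = {
    "anterior patch":  [0.2, 0.3, 0.8],
    "middle patch":    [0.85, 0.75, 0.15],  # the language-connected patch
    "posterior patch": [0.4, 0.1, 0.6],
}

predicted = max(candidates, key=lambda k: similarity(candidates[k], vwfa_template))
print("predicted VWFA location:", predicted)
```

The point of the sketch is simply that a patch's connections, measured before reading is learned, carry enough signal to pick out where the reading area will later appear.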

Although the area that will become the VWFA does not respond preferentially to letters at age 5, Saygin says it is likely that the region is involved in some kind of high-level object recognition before it gets taken over for word recognition as a child learns to read. Still unknown is how and why the brain forms those connections early in life.

Pre-existing connections

Kanwisher and Saygin have found that the VWFA is connected to language regions of the brain in adults, but the new findings in children offer strong evidence that those connections exist before reading is learned, and are not the result of learning to read, according to Stanislas Dehaene, a professor and the chair of experimental cognitive psychology at the Collège de France, who wrote a commentary on the paper for Nature Neuroscience.

“To genuinely test the hypothesis that the VWFA owes its specialization to a pre-existing connectivity pattern, it was necessary to measure brain connectivity in children before they learned to read,” wrote Dehaene, who was not involved in the study. “Although many children, at the age of 5, did not have a VWFA yet, the connections that were already in place could be used to anticipate where the VWFA would appear once they learned to read.”

The MIT team now plans to study whether this kind of brain imaging could help identify children who are at risk of developing dyslexia and other reading difficulties.

“It’s really powerful to be able to predict functional development three years ahead of time,” Saygin says. “This could be a way to use neuroimaging to try to actually help individuals even before any problems occur.”

Diagnosing depression before it starts

A new brain imaging study from MIT and Harvard Medical School may lead to a screen that could identify children at high risk of developing depression later in life.

In the study, the researchers found distinctive brain differences in children known to be at high risk because of family history of depression. The finding suggests that this type of scan could be used to identify children whose risk was previously unknown, allowing them to undergo treatment before developing depression, says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology and a professor of brain and cognitive sciences at MIT.

“We’d like to develop the tools to be able to identify people at true risk, independent of why they got there, with the ultimate goal of maybe intervening early and not waiting for depression to strike the person,” says Gabrieli, an author of the study, which appears in the journal Biological Psychiatry.

Early intervention is important because once a person suffers from an episode of depression, they become more likely to have another. “If you can avoid that first bout, maybe it would put the person on a different trajectory,” says Gabrieli, who is a member of MIT’s McGovern Institute for Brain Research.

The paper’s lead author is McGovern Institute postdoc Xiaoqian Chai, and the senior author is Susan Whitfield-Gabrieli, a research scientist at the McGovern Institute.

Distinctive patterns

The study also helps to answer a key question about the brain structures of depressed patients. Previous imaging studies have revealed two brain regions that often show abnormal activity in these patients: the subgenual anterior cingulate cortex (sgACC) and the amygdala. However, it was unclear if those differences caused depression or if the brain changed as the result of a depressive episode.

To address that issue, the researchers decided to scan the brains of children who were not depressed, according to their scores on a commonly used diagnostic questionnaire, but who had a parent who had suffered from the disorder. Such children are three times more likely to become depressed later in life, usually between the ages of 15 and 30.

Gabrieli and colleagues studied 27 high-risk children, ranging in age from 8 to 14, and compared them with a group of 16 children with no known family history of depression.

Using functional magnetic resonance imaging (fMRI), the researchers measured synchronization of activity between different brain regions. Synchronization patterns that emerge when a person is not performing any particular task allow scientists to determine which regions naturally communicate with each other.
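In resting-state fMRI, "synchronization" between two regions is typically quantified as the correlation between their activity time courses. Here is a minimal sketch of that idea using synthetic signals; the names and waveforms are invented for illustration only.

```python
# Minimal sketch of resting-state "synchronization": functional connectivity
# between two regions is the correlation of their time courses.
# All signals here are synthetic, purely for illustration.
from math import sqrt, sin

def correlation(xs, ys):
    """Pearson correlation of two equal-length time series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

t = [i * 0.1 for i in range(200)]
sgacc        = [sin(x) for x in t]                     # one region's time course
default_mode = [sin(x) + 0.2 * sin(7 * x) for x in t]  # synchronized, plus noise
unrelated    = [sin(3 * x + 1.0) for x in t]           # an independent rhythm

print(f"sgACC-DMN connectivity:       {correlation(sgacc, default_mode):+.2f}")
print(f"sgACC-unrelated connectivity: {correlation(sgacc, unrelated):+.2f}")
```

Regions whose fluctuations rise and fall together yield a correlation near 1; regions with independent rhythms yield a value near 0. The "abnormally high synchronization" described below corresponds to an elevated correlation of this kind.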

The researchers identified several distinctive patterns in the at-risk children. The strongest of these was a link between the sgACC and the default mode network, a set of brain regions that is most active when the mind is unfocused. This abnormally high synchronization has also been seen in the brains of depressed adults.

The researchers also found hyperactive connections between the amygdala, which is important for processing emotion, and the inferior frontal gyrus, which is involved in language processing. Within areas of the frontal and parietal cortex, which are important for thinking and decision-making, they found lower than normal connectivity.

Cause and effect

These patterns are strikingly similar to those found in depressed adults, suggesting that these differences arise before depression occurs and may contribute to the development of the disorder, says Ian Gotlib, a professor of psychology at Stanford University.

“The findings are consistent with an explanation that this is contributing to the onset of the disease,” says Gotlib, who was not involved in the research. “The patterns are there before the depressive episode and are not due to the disorder.”

The MIT team is continuing to track the at-risk children and plans to investigate whether early treatment might prevent episodes of depression. They also hope to study how some children who are at high risk manage to avoid the disorder without treatment.

Other authors of the paper are Dina Hirshfeld-Becker, an associate professor of psychiatry at Harvard Medical School; Joseph Biederman, director of pediatric psychopharmacology at Massachusetts General Hospital (MGH); Mai Uchida, an assistant professor of psychiatry at Harvard Medical School; former MIT postdoc Oliver Doehrmann; MIT graduate student Julia Leonard; John Salvatore, a former McGovern technical assistant; MGH research assistants Tara Kenworthy and Elana Kagan; Harvard Medical School postdoc Ariel Brown; and former MIT technical assistant Carlo de los Angeles.

Study finds altered brain chemistry in people with autism

MIT and Harvard University neuroscientists have found a link between a behavioral symptom of autism and reduced activity of a neurotransmitter whose job is to dampen neuron excitation. The findings suggest that drugs that boost the action of this neurotransmitter, known as GABA, may improve some of the symptoms of autism, the researchers say.

Brain activity is controlled by a constant interplay of inhibition and excitation, which is mediated by different neurotransmitters. GABA is one of the most important inhibitory neurotransmitters, and studies of animals with autism-like symptoms have found reduced GABA activity in the brain. However, until now, there has been no direct evidence for such a link in humans.

“This is the first connection in humans between a neurotransmitter in the brain and an autistic behavioral symptom,” says Caroline Robertson, a postdoc at MIT’s McGovern Institute for Brain Research and a junior fellow of the Harvard Society of Fellows. “It’s possible that increasing GABA would help to ameliorate some of the symptoms of autism, but more work needs to be done.”

Robertson is the lead author of the study, which appears in the Dec. 17 online edition of Current Biology. The paper’s senior author is Nancy Kanwisher, the Walter A. Rosenblith Professor of Brain and Cognitive Sciences and a member of the McGovern Institute. Eva-Maria Ratai, an assistant professor of radiology at Massachusetts General Hospital, also contributed to the research.

Too little inhibition

Many symptoms of autism arise from hypersensitivity to sensory input. For example, children with autism are often very sensitive to things that wouldn’t bother other children as much, such as someone talking elsewhere in the room, or a scratchy sweater. Scientists have speculated that reduced brain inhibition might underlie this hypersensitivity by making it harder to tune out distracting sensations.

In this study, the researchers explored a visual task known as binocular rivalry, which requires brain inhibition and has been shown to be more difficult for people with autism. During the task, researchers show each participant two different images, one to each eye. To see the images, the brain must switch back and forth between input from the right and left eyes.

For the participant, it looks as though the two images are fading in and out, as input from each eye takes its turn inhibiting the input coming in from the other eye.

“Everybody has a different rate at which the brain naturally oscillates between these two images, and that rate is thought to map onto the strength of the inhibitory circuitry between these two populations of cells,” Robertson says.

She found that nonautistic adults switched back and forth between the images nine times per minute, on average, and one of the images fully suppressed the other about 70 percent of the time. However, autistic adults switched back and forth only half as often as nonautistic subjects, and one of the images fully suppressed the other only about 50 percent of the time.

Performance on this task was also linked to patients’ scores on a clinical evaluation of communication and social interaction used to diagnose autism: Worse symptoms correlated with weaker inhibition during the visual task.

The researchers then measured GABA activity using a technique known as magnetic resonance spectroscopy, as autistic and typical subjects performed the binocular rivalry task. In nonautistic participants, higher levels of GABA correlated with a better ability to suppress the nondominant image. But in autistic subjects, there was no relationship between performance and GABA levels. This suggests that GABA is present in the brain but is not performing its usual function in autistic individuals, Robertson says.

“GABA is not reduced in the autistic brain, but the action of this inhibitory pathway is reduced,” she says. “The next step is figuring out which part of the pathway is disrupted.”

“This is a really great piece of work,” says Richard Edden, an associate professor of radiology at the Johns Hopkins University School of Medicine. “The role of inhibitory dysfunction in autism is strongly debated, with different camps arguing for elevated and reduced inhibition. This kind of study, which seeks to relate measures of inhibition directly to quantitative measures of function, is what we really need to tease things out.”

Early diagnosis

In addition to offering a possible new drug target, the new finding may also help researchers develop better diagnostic tools for autism, which is now diagnosed by evaluating children’s social interactions. To that end, Robertson is investigating the possibility of using EEG scans to measure brain responses during the binocular rivalry task.

“If autism does trace back on some level to circuitry differences that affect the visual cortex, you can measure those things in a kid who’s even nonverbal, as long as he can see,” she says. “We’d like it to move toward being useful for early diagnostic screenings.”

Music in the brain

Scientists have long wondered if the human brain contains neural mechanisms specific to music perception. Now, for the first time, MIT neuroscientists have identified a neural population in the human auditory cortex that responds selectively to sounds that people typically categorize as music, but not to speech or other environmental sounds.

“It has been the subject of widespread speculation,” says Josh McDermott, the Frederick A. and Carole J. Middleton Assistant Professor of Neuroscience in the Department of Brain and Cognitive Sciences at MIT. “One of the core debates surrounding music is to what extent it has dedicated mechanisms in the brain and to what extent it piggybacks off of mechanisms that primarily serve other functions.”

The finding was enabled by a new method designed to identify neural populations from functional magnetic resonance imaging (fMRI) data. Using this method, the researchers identified six neural populations with different functions, including the music-selective population and another set of neurons that responds selectively to speech.

“The music result is notable because people had not been able to clearly see highly selective responses to music before,” says Sam Norman-Haignere, a postdoc at MIT’s McGovern Institute for Brain Research.

“Our findings are hard to reconcile with the idea that music piggybacks entirely on neural machinery that is optimized for other functions, because the neural responses we see are highly specific to music,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research.

Norman-Haignere is the lead author of a paper describing the findings in the Dec. 16 online edition of Neuron. McDermott and Kanwisher are the paper’s senior authors.

Mapping responses to sound

For this study, the researchers scanned the brains of 10 human subjects listening to 165 natural sounds, including different types of speech and music, as well as everyday sounds such as footsteps, a car engine starting, and a telephone ringing.

The brain’s auditory system has proven difficult to map, in part because of the coarse spatial resolution of fMRI, which measures blood flow as an index of neural activity. In fMRI, “voxels” — the smallest unit of measurement — reflect the response of hundreds of thousands or millions of neurons.

“As a result, when you measure raw voxel responses you’re measuring something that reflects a mixture of underlying neural responses,” Norman-Haignere says.

To tease apart these responses, the researchers used a technique that models each voxel as a mixture of multiple underlying neural responses. Using this method, they identified six neural populations, each with a unique response pattern to the sounds in the experiment, that best explained the data.

“What we found is we could explain a lot of the response variation across tens of thousands of voxels with just six response patterns,” Norman-Haignere says.
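The mixture idea can be made concrete with a toy example. This is not the paper's algorithm, which infers the component response profiles themselves from the data; it only illustrates the modeling assumption that a voxel's responses are a weighted sum of a few underlying profiles. All profiles and weights below are invented.

```python
# Conceptual sketch: a voxel's response to a set of sounds is modeled as a
# weighted sum of underlying component response profiles. Given candidate
# profiles, a voxel's weights can be recovered by least squares.
# (Hypothetical profiles; the paper inferred its components from the data.)

# Two made-up component response profiles across five sounds.
music_profile  = [1.0, 0.9, 0.1, 0.0, 0.2]   # responds to musical sounds
speech_profile = [0.1, 0.0, 1.0, 0.9, 0.1]   # responds to speech sounds

# A voxel mixing the two neural populations: 0.7 * music + 0.3 * speech.
voxel = [0.7 * m + 0.3 * s for m, s in zip(music_profile, speech_profile)]

def solve_weights(v, p1, p2):
    """Least-squares weights for v ~= w1*p1 + w2*p2 (2x2 normal equations)."""
    a11 = sum(x * x for x in p1)
    a22 = sum(x * x for x in p2)
    a12 = sum(x * y for x, y in zip(p1, p2))
    b1 = sum(x * y for x, y in zip(p1, v))
    b2 = sum(x * y for x, y in zip(p2, v))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

w_music, w_speech = solve_weights(voxel, music_profile, speech_profile)
print(f"recovered weights: music={w_music:.2f}, speech={w_speech:.2f}")
```

Because the toy voxel is an exact mixture, least squares recovers the weights exactly; with real, noisy voxels the fit is approximate, and the hard part, which the researchers' method addresses, is discovering the response profiles themselves rather than assuming them.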

One population responded most to music, another to speech, and the other four to different acoustic properties such as pitch and frequency.

The key to this advance is the researchers’ new approach to analyzing fMRI data, says Josef Rauschecker, a professor of physiology and biophysics at Georgetown University.

“The whole field is interested in finding specialized areas like those that have been found in the visual cortex, but the problem is the voxel is just not small enough. You have hundreds of thousands of neurons in a voxel, and how do you separate the information they’re encoding? This is a study of the highest caliber of data analysis,” says Rauschecker, who was not part of the research team.

Layers of sound processing

The four acoustically responsive neural populations overlap with regions of “primary” auditory cortex, which performs the first stage of cortical processing of sound. The speech- and music-selective neural populations lie beyond this primary region.

“We think this provides evidence that there’s a hierarchy of processing where there are responses to relatively simple acoustic dimensions in this primary auditory area. That’s followed by a second stage of processing that represents more abstract properties of sound related to speech and music,” Norman-Haignere says.

The researchers believe there may be other brain regions involved in processing music, including its emotional components. “It’s inappropriate at this point to conclude that this is the seat of music in the brain,” McDermott says. “This is where you see most of the responses within the auditory cortex, but there’s a lot of the brain that we didn’t even look at.”

Kanwisher also notes that “the existence of music-selective responses in the brain does not imply that the responses reflect an innate brain system. An important question for the future will be how this system arises in development: How early is it found in infancy or childhood, and how dependent is it on experience?”

The researchers are now investigating whether the music-selective population identified in this study contains subpopulations of neurons that respond to different aspects of music, including rhythm, melody, and beat. They also hope to study how musical experience and training might affect this neural population.