Socioeconomic background linked to reading improvement

About 20 percent of children in the United States have difficulty learning to read, and educators have devised a variety of interventions to try to help them. Not every program helps every student, however, in part because the origins of their struggles are not identical.

MIT neuroscientist John Gabrieli is trying to identify factors that may help to predict individual children’s responses to different types of reading interventions. As part of that effort, he recently found that children from lower-income families responded much better to a summer reading program than children from a higher socioeconomic background.

Using magnetic resonance imaging (MRI), the research team also found anatomical changes in the brains of children whose reading abilities improved — in particular, a thickening of the cortex in parts of the brain known to be involved in reading.

“If you just left these children [with reading difficulties] alone on the developmental path they’re on, they would have terrible troubles reading in school. We’re taking them on a neuroanatomical detour that seems to go with real gains in reading ability,” says Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Rachel Romeo, a graduate student in the Harvard-MIT Program in Health Sciences and Technology, and Joanna Christodoulou, an assistant professor of communication sciences and disorders at the Massachusetts General Hospital Institute of Health Professions, are the lead authors of the paper, which appears in the June 7 issue of the journal Cerebral Cortex.

Predicting improvement

In hopes of identifying factors that influence children’s responses to reading interventions, the MIT team set up two summer schools based on a program known as Lindamood-Bell. The researchers recruited students from a wide income range, although socioeconomic status was not the original focus of their study.

The Lindamood-Bell program focuses on helping students develop the sensory and cognitive processing necessary for reading, such as thinking about words as units of sound, and translating printed letters into word meanings.

Children participating in the study, who ranged from 6 to 9 years old, spent four hours a day, five days a week in the program, for six weeks. Before and after the program, their brains were scanned with MRI and they were given some commonly used tests of reading proficiency.

In tests taken before the program started, children from higher and lower socioeconomic status (SES) backgrounds fared equally poorly in most areas, with one exception: children from higher SES backgrounds had higher vocabulary scores, a difference that has also been seen in studies comparing nondyslexic readers from different SES backgrounds.

“There’s a strong trend in these studies that higher SES families tend to talk more with their kids and also use more complex and diverse language. That tends to be where the vocabulary correlation comes from,” Romeo says.

The researchers also found differences in brain anatomy before the reading program started. Children from higher socioeconomic backgrounds had thicker cortex in a part of the brain known as Broca’s area, which is necessary for language production and comprehension. These anatomical differences could account for the vocabulary gap between the two groups.

Based on a limited number of previous studies, the researchers hypothesized that the reading program would have more of an impact on the students from higher socioeconomic backgrounds. In fact, they found the opposite. About half of the students improved their scores, while the other half worsened or stayed the same. When the researchers analyzed the data for possible explanations, family income level was the one factor that proved significant.

“Socioeconomic status just showed up as the piece that was most predictive of treatment response,” Romeo says.

The same children whose reading scores improved also displayed changes in their brain anatomy. Specifically, the researchers found a thickening of the cortex in a part of the brain known as the temporo-occipital region, which comprises a large network of structures involved in reading.

“Mix of causes”

The researchers believe their results may have differed from those of previous studies of reading interventions in low-SES students because their program ran during the summer, rather than during the school year.

“Summer is when socioeconomic status takes its biggest toll. Low SES kids typically have less academic content in their summer activities compared to high SES, and that results in a slump in their skills,” Romeo says. “This may have been particularly beneficial for them because it may have been out of the realm of their typical summer.”

The researchers also hypothesize that reading difficulties may arise in slightly different ways among children of different SES backgrounds.

“There could be a different mix of causes,” Gabrieli says. “Reading is a complicated skill, so there could be a number of different factors that would make you do better or do worse. It could be that those factors are a little bit different in children with more enriched or less enriched environments.”

The researchers are hoping to identify more precisely the factors related to socioeconomic status, other environmental factors, or genetic components that could predict which types of reading interventions will be successful for individual students.

“In medicine, people call it personalized medicine: this idea that some people will really benefit from one intervention and not so much from another,” Gabrieli says. “We’re interested in understanding the match between the student and the kind of educational support that would be helpful for that particular student.”

The research was funded by the Ellison Medical Foundation, the Halis Family Foundation, Lindamood-Bell Learning Processes, and the National Institutes of Health.

Rethinking mental illness treatment

McGovern researchers are finding neural markers that could help improve treatment for psychiatric patients.

Ten years ago, Jim and Pat Poitras committed $20 million to the McGovern Institute to establish the Poitras Center for Affective Disorders Research. The Poitras family had been longtime supporters of MIT, and because they had seen mental illness in their own family, they decided to support an ambitious new program at the McGovern Institute, with the goal of understanding the fundamental biological basis of depression, bipolar disorder, schizophrenia and other major psychiatric disorders.

The gift came at an opportune time, as the field was entering a new phase of discovery, with rapid advances in psychiatric genomics and brain imaging, and with the emergence of new technologies for genome editing and for the development of animal models. Over the past ten years, the Poitras Center has supported work in each of these areas, including Feng Zhang’s work on CRISPR-based genome editing, and Guoping Feng’s work on animal models for autism, schizophrenia and other psychiatric disorders.

This reflects a long-term strategy, says Robert Desimone, director of the McGovern Institute, who oversees the Poitras Center. “But we must not lose sight of the overall goal, which is to benefit human patients. Insights from animal models and genomic medicine have the potential to transform the treatments of the future, but we are also interested in the nearer term, and in what we can do right now.”

One area where technology can have a near-term impact is human brain imaging, and in collaboration with clinical researchers at McLean Hospital, Massachusetts General Hospital and other institutions, the Poitras Center has supported an ambitious program to bring human neuroimaging closer to the clinic.

Discovering psychiatry’s crystal ball

A fundamental problem in psychiatry is that there are no biological markers for diagnosing mental illness or for indicating how best to treat it. Treatment decisions are based entirely on symptoms, and doctors and their patients will typically try one treatment, then if it does not work, try another, and perhaps another. The success rates for the first treatments are often less than 50 percent, and finding what works for an individual patient often means a long and painful process of trial and error.

McGovern research scientist Susan Whitfield-Gabrieli and her colleagues are hoping to change this picture, with the help of brain imaging. Their findings suggest that brain scans can hold valuable information for psychiatrists and their patients. “We need a paradigm shift in how we use imaging. It can be used for more than research,” says Whitfield-Gabrieli, who is a member of McGovern Investigator John Gabrieli’s lab. “It would be a really big boost to be able to use it to personalize psychiatric medicine.”

One of Whitfield-Gabrieli’s goals is to find markers that can predict which treatments will work for which patients. Another is to find markers that can predict the likely risk of disease in the future, allowing doctors to intervene before symptoms first develop. All of these markers need further validation before they are ready for the clinic, but they have the potential to meet a dire need to improve treatment for psychiatric disease.

A brain at rest

For Whitfield-Gabrieli, who both collaborates with and is married to Gabrieli, that paradigm shift began when she started to study the resting brain using functional magnetic resonance imaging (fMRI). Most brain imaging studies require the subject to perform a mental task in the scanner, but these are time-consuming and often hard to replicate in a clinical setting. In contrast, resting state imaging requires no task. The subject simply lies in the scanner and lets the mind wander. The patterns of activity can reveal functional connections within the brain, and are reliably consistent from study to study.
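
In computational terms, those functional connections are typically estimated as correlations between regional activity time courses. The sketch below is purely illustrative, using simulated signals rather than any data from the studies described here, and assumes regional time series have already been extracted from the scans.

```python
# A minimal sketch of resting-state functional connectivity, assuming
# `timeseries` holds BOLD signals already extracted from brain regions.
# The data here are simulated purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_regions = 240, 10      # e.g., a few minutes of scanning
timeseries = rng.standard_normal((n_timepoints, n_regions))

# Functional connectivity: pairwise correlation of regional time courses
# recorded while the subject simply rests and lets the mind wander.
connectivity = np.corrcoef(timeseries, rowvar=False)  # (n_regions, n_regions)

# The Fisher z-transform stabilizes correlation variance, which helps
# when comparing connectivity across subjects or groups.
z_connectivity = np.arctanh(np.clip(connectivity, -0.999999, 0.999999))
print(z_connectivity.shape)
```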

Whitfield-Gabrieli thought resting state scanning had the potential to help patients because it is simple and easy to perform.

“Even a 5-minute scan can contain useful information that could help people,” says Satrajit Ghosh, a principal research scientist in the Gabrieli lab who works closely with Whitfield-Gabrieli.

Whitfield-Gabrieli and her clinical collaborator Larry Seidman at Harvard Medical School decided to study resting state activity in patients with schizophrenia. They found a pattern of activity strikingly different from that of typical brains. The patients showed unusually strong activity in a set of interconnected brain regions known as the default mode network, which is typically activated during introspection. It is normally suppressed when a person attends to the outside world, but schizophrenia patients failed to show this suppression.

“The patient isn’t able to toggle between internal processing and external processing the way a typical individual can,” says Whitfield-Gabrieli, whose work is supported by the Poitras Center for Affective Disorders Research.

Since then, the team has observed similar disturbances in the default network in other disorders, including depression, anxiety, bipolar disorder, and ADHD. “We knew we were onto something interesting,” says Whitfield-Gabrieli. “But we kept coming back to the question: how can brain imaging help patients?”

fMRI on patients

Many imaging studies aim to understand the biological basis of disease and ultimately to guide the development of new drugs or other treatments. But this is a long-term goal, and Whitfield-Gabrieli wanted to find ways that brain imaging could have a more immediate impact. So she and Ghosh decided to use fMRI to look at differences among individual patients, and to focus on differences in how they responded to treatment.

“It gave us something objective to measure,” explains Ghosh. “Someone goes through a treatment, and they either get better or they don’t.” The project also had appeal for Ghosh because it was an opportunity for him to use his expertise in machine learning and other computational tools to build systems-level models of the brain.

For the first study, the team decided to focus on social anxiety disorder (SAD), which is typically treated with either prescription drugs or cognitive behavioral therapy (CBT). Both are moderately effective, but many patients do not respond to the first treatment they try.

The team began with a small study to test whether scans performed before the onset of treatment could predict who would respond best to it. Working with Stefan Hofmann, a clinical psychologist at Boston University, they scanned 38 SAD patients before they began a 12-week course of CBT. At the end of the treatment, the patients were evaluated for clinical improvement, and the researchers examined the scans for patterns of activity that correlated with the improvement. The results were very encouraging: predictions based on scan data were fivefold better than those of existing methods based on severity of symptoms at the time of diagnosis.
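
The article does not detail the team’s models, but the general recipe for this kind of study is to predict each patient’s later improvement from pre-treatment scan features using cross-validation, so that no patient’s outcome informs the model that predicts it. A hedged sketch with simulated data, not the study’s actual features or model:

```python
# A sketch of predicting treatment response from pre-treatment scan
# features. Features and outcomes are invented for illustration.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict, KFold

rng = np.random.default_rng(1)
n_patients, n_features = 38, 50            # 38 SAD patients, as in the study
X = rng.standard_normal((n_patients, n_features))   # e.g., connectivity values
y = rng.standard_normal(n_patients)                 # clinical improvement scores

# Cross-validation keeps each patient's outcome out of the model that
# predicts it, which is essential for an honest accuracy estimate.
model = RidgeCV(alphas=np.logspace(-2, 3, 20))
predicted = cross_val_predict(model, X, y, cv=KFold(5, shuffle=True, random_state=0))
print(np.corrcoef(predicted, y)[0, 1])     # predicted vs. observed improvement
```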

The researchers then turned to another condition, ADHD, which presents a similar clinical challenge, in that commonly used drugs — such as Adderall or Ritalin — work well, but not for everyone. So the McGovern team began a collaboration with psychiatrist Joseph Biederman, chief of clinical and research programs in pediatric psychopharmacology and adult ADHD at Massachusetts General Hospital, on a similar study, looking for markers of treatment response.

The study is still ongoing, and it will be some time before results emerge, but the researchers are optimistic. “If we could predict who would respond to which treatment and avoid months of trial and error, it would be totally transformative for ADHD,” says Biederman.

Another goal is to predict in advance who is likely to develop a given disease in the future. The researchers have scanned children who have close relatives with schizophrenia or depression, and who are therefore at increased risk of developing these disorders themselves. Surprisingly, the children show patterns of resting state connectivity similar to those of patients.

“I was really intrigued by this,” says Whitfield-Gabrieli. “Even though these children are not sick, they have the same profile as adults who are.”

Whitfield-Gabrieli and Seidman are now expanding their study through a collaboration with clinical researchers at the Shanghai Mental Health Center in China, who plan to image and then follow 225 people who are showing early risk signs for schizophrenia. They hope to find markers that predict who will develop the disease and who will not.

“While there are no drugs available to prevent schizophrenia, it may be possible to reduce the risk or severity of the disorder through CBT, or through interventions that reduce stress and improve sleep and well-being,” says Whitfield-Gabrieli. “One likely key to success is early identification of those at highest risk. If we could diagnose early, we could do early interventions and potentially prevent disorders.”

From association to prediction

The search for predictive markers represents a departure from traditional psychiatric imaging studies, in which a group of patients is compared with a control group of healthy subjects. Studies of this type can reveal average differences between the groups, which may provide clues to the underlying biology of the disease. But they don’t provide information about individual patients, and so they have not been incorporated into clinical practice.

The difference is critical for clinicians, says Biederman. “I treat individuals, not groups. To bring predictive scans to the clinic, we need to be sure the individual scan is informative for the person you are treating.”

To develop these predictions, Whitfield-Gabrieli and Ghosh must first use sophisticated computational methods such as ‘deep learning’ to identify patterns in their data and to build models that relate the patterns to the clinical outcomes. They must then show that these models can generalize beyond the original study population—for example, that predictions based on patients from Boston can be applied to patients from Shanghai. The eventual goal is a model that can analyze a previously unseen brain scan from any individual, and predict with high confidence whether that person will (for example) develop schizophrenia or respond successfully to a particular therapy.
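
That generalization step can be made concrete with leave-one-site-out validation: train on all sites but one, then test on the held-out site. The sketch below is a schematic illustration with simulated data and invented site labels, not the team’s actual pipeline.

```python
# A sketch of testing whether a model generalizes across study sites.
# Subjects, features, and site labels are all simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(2)
n_subjects, n_features = 300, 40
X = rng.standard_normal((n_subjects, n_features))
y = rng.integers(0, 2, n_subjects)                 # e.g., responder vs. non-responder
site = rng.choice(["boston", "mclean", "shanghai"], n_subjects)

# Each fold trains on all sites but one and tests on the held-out site,
# mimicking the question "do Boston-trained predictions hold in Shanghai?"
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=site, cv=LeaveOneGroupOut())
print(dict(zip(sorted(set(site)), scores)))
```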

Achieving this goal will be challenging, because it will require scanning and following large numbers of subjects from diverse demographic groups — thousands of people, not just the tens or hundreds enrolled in most clinical studies. Collaborations with large hospitals, such as the one in Shanghai, can help.

Whitfield-Gabrieli has also received funding to collect imaging, clinical, and behavioral data from over 200 adolescents with depression and anxiety, as part of the National Institutes of Health’s Human Connectome effort. These data, collected in collaboration with clinicians at McLean Hospital, MGH, and Boston University, will be available not only to the Gabrieli team but to researchers anywhere to analyze. This is important, because no one team or center can do it alone, says Ghosh. “Data must be collected by many and shared by all.”

The ultimate goal is to study as many patients as possible now so that the tools can help many more later. “Someday, a person will be able to go to a hospital, get a brain scan, charge it to their insurance, and know that it helped the doctor select the best treatment,” says Ghosh. “We’re still far away from that. But that is what we want to work towards.”

Neuroscientists get a glimpse into the workings of the baby brain

In adults, certain regions of the brain’s visual cortex respond preferentially to specific types of input, such as faces or objects — but how and when those preferences arise has long puzzled neuroscientists.

One way to help answer that question is to study the brains of very young infants and compare them to adult brains. However, scanning the brains of awake babies in an MRI machine has proven difficult.

Now, neuroscientists at MIT have overcome that obstacle, adapting their MRI scanner to make it easier to scan infants’ brains as the babies watch movies featuring different types of visual input. Using these data, the team found that in some ways, the organization of infants’ brains is surprisingly similar to that of adults. Specifically, brain regions that respond to faces in adults do the same in babies, as do regions that respond to scenes.

“It suggests that there’s a stronger biological predisposition than I would have guessed for specific cortical regions to end up with specific functions,” says Rebecca Saxe, a professor of brain and cognitive sciences and member of MIT’s McGovern Institute for Brain Research.

Saxe is the senior author of the study, which appears in the Jan. 10 issue of Nature Communications. The paper’s lead author is former MIT graduate student Ben Deen, who is now a postdoc at Rockefeller University.

MRI adaptations

Functional magnetic resonance imaging (fMRI) is the go-to technique for studying brain function in adults. However, very few researchers have taken on the challenge of trying to scan babies’ brains, especially while they are awake.

“Babies and MRI machines have very different needs,” Saxe points out. “Babies would like to do activities for two or three minutes and then move on. They would like to be sitting in a comfortable position, and in charge of what they’re looking at.”

On the other hand, “MRI machines would like to be loud and dark and have a person show up on schedule, stay still for the entire time, pay attention to one thing for two hours, and follow instructions closely,” she says.

To make the setup more comfortable for babies, the researchers made several modifications to the MRI machine and to their usual experimental protocols. First, they built a special coil (part of the MRI scanner that acts as a radio antenna) that allows the baby to recline in a seat similar to a car seat. A mirror in front of the baby’s face allows him or her to watch videos, and there is space in the machine for a parent or one of the researchers to sit with the baby.

The researchers also made the scanner much less noisy than a typical MRI machine. “It’s quieter than a loud restaurant,” Saxe says. “The baby can hear their parent talking over the sound of the scanner.”

Once the babies, who were 4 to 6 months old, were in the scanner, the researchers played the movies continuously while scanning the babies’ brains. However, they only used data from the time periods when the babies were actively watching the movies. From 26 hours of scanning 17 babies, the researchers obtained four hours of usable data from nine babies.

“The sheer tenacity of this work is truly amazing,” says Charles Nelson, a professor of pediatrics at Boston Children’s Hospital, who was not involved in the research. “The fact that they pulled this off is incredibly novel.”

Obtaining this data allowed the MIT team to study how infants’ brains respond to specific types of sensory input, and to compare their responses with those of adults.

“The big-picture question is, how does the adult brain come to have the structure and function that you see in adulthood? How does it get like that?” Saxe says. “A lot of the answer to that question will depend on having the tools to be able to see the baby brain in action. The more we can see, the more we can ask that kind of question.”

Distinct preferences

The researchers showed the babies videos of either smiling children or outdoor scenes such as a suburban street seen from a moving car. Distinguishing social scenes from the physical environment is one of the main high-level divisions that our brains make when interpreting the world.

“The questions we’re asking are about how you understand and organize your world, with vision as the main modality for getting you into these very different mindsets,” Saxe says. “In adults, there are brain regions that prefer to look at faces and socially relevant things, and brain regions that prefer to look at environments and objects.”

The scans revealed that many regions of the babies’ visual cortex showed the same preferences for scenes or faces seen in adult brains. This suggests that these preferences form within the first few months of life, and it argues against the hypothesis that it takes years of experience interpreting the world for the brain to develop the responses that it shows in adulthood.

The researchers also found some differences in the way that babies’ brains respond to visual stimuli. One is that they do not seem to have regions found in the adult brain that are “highly selective,” meaning these regions prefer features such as human faces over any other kind of input, including human bodies or the faces of other animals. The babies also showed some differences in their responses when shown examples from four different categories — not just faces and scenes but also bodies and objects.

“We believe that the adult-like organization of infant visual cortex provides a scaffolding that guides the subsequent refinement of responses via experience, ultimately leading to the strongly specialized regions observed in adults,” Deen says.

Saxe and colleagues now hope to try to scan more babies between the ages of 3 and 8 months so they can get a better idea of how these vision-processing regions change over the first several months of life. They also hope to study even younger babies to help them discover when these distinctive brain responses first appear.

Distinctive brain pattern may underlie dyslexia

A distinctive neural signature found in the brains of people with dyslexia may explain why these individuals have difficulty learning to read, according to a new study from MIT neuroscientists.

The researchers discovered that in people with dyslexia, the brain has a diminished ability to acclimate to a repeated input — a trait known as neural adaptation. For example, when dyslexic students see the same word repeatedly, brain regions involved in reading do not show the same adaptation seen in typical readers.

This suggests that the brain’s plasticity, which underpins its ability to learn new things, is reduced, says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

“It’s a difference in the brain that’s not about reading per se, but it’s a difference in perceptual learning that’s pretty broad,” says Gabrieli, who is the study’s senior author. “This is a path by which a brain difference could influence learning to read, which involves so many demands on plasticity.”

Former MIT graduate student Tyler Perrachione, who is now an assistant professor at Boston University, is the lead author of the study, which appears in the Dec. 21 issue of Neuron.

Reduced plasticity

The MIT team used magnetic resonance imaging (MRI) to scan the brains of young adults with and without reading difficulties as they performed a variety of tasks. In the first experiment, the subjects listened to a series of words read by either four different speakers or a single speaker.

The MRI scans revealed distinctive patterns of activity in each group of subjects. In nondyslexic people, areas of the brain that are involved in language showed neural adaptation after hearing words said by the same speaker, but not when different speakers said the words. However, the dyslexic subjects showed much less adaptation to hearing words said by a single speaker.

Neurons that respond to a particular sensory input usually react strongly at first, but their response becomes muted as the input continues. This neural adaptation reflects chemical changes in neurons that make it easier for them to respond to a familiar stimulus, Gabrieli says. This phenomenon, known as plasticity, is key to learning new skills.

“You learn something upon the initial presentation that makes you better able to do it the second time, and the ease is marked by reduced neural activity,” Gabrieli says. “Because you’ve done something before, it’s easier to do it again.”
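
This repetition effect is often summarized as a simple adaptation index: how much the response to a repeated stimulus drops relative to a novel one. A toy illustration with simulated values, not data from the study:

```python
# A minimal sketch of quantifying neural adaptation (repetition
# suppression) in an fMRI region of interest. Responses are simulated;
# only the contrast follows the logic described in the article.
import numpy as np

rng = np.random.default_rng(3)
# Mean BOLD response in a language region per subject, by condition.
novel = 1.0 + 0.2 * rng.standard_normal(20)      # different speakers / items
repeated = 0.7 + 0.2 * rng.standard_normal(20)   # same speaker / item repeated

# Adaptation index: the fractional drop in response for repeated input.
# Typical readers show a sizable drop; dyslexic readers show less.
adaptation = (novel - repeated) / novel
print(adaptation.mean())
```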

The researchers then ran a series of experiments to test how broad this effect might be. They asked subjects to look at sequences of the same word or different words; pictures of the same object or different objects; and pictures of the same face or different faces. In each case, they found that in people with dyslexia, brain regions devoted to interpreting words, objects, and faces, respectively, did not show neural adaptation when the same stimuli were repeated multiple times.

“The brain location changed depending on the nature of the content that was being perceived, but the reduced adaptation was consistent across very different domains,” Gabrieli says.

He was surprised to see that this effect was so widespread, appearing even during tasks that have nothing to do with reading; people with dyslexia have no documented difficulties in recognizing objects or faces.

He hypothesizes that the impairment shows up primarily in reading because deciphering letters and mapping them to sounds is such a demanding cognitive task. “There are probably few tasks people undertake that require as much plasticity as reading,” Gabrieli says.

Early appearance

In their final experiment, the researchers tested first and second graders with and without reading difficulties, and they found the same disparity in neural adaptation.

“We got almost the identical reduction in plasticity, which suggests that this is occurring quite early in learning to read,” Gabrieli says. “It’s not a consequence of a different learning experience over the years in struggling to read.”

Gabrieli’s lab now plans to study younger children to see if these differences might be apparent even before children begin to learn to read. They also hope to use other types of brain measurements such as magnetoencephalography (MEG) to follow the time course of the neural adaptation more closely.

The research was funded by the Ellison Medical Foundation, the National Institutes of Health, and a National Science Foundation Graduate Research Fellowship.

How the brain builds panoramic memory

When asked to visualize your childhood home, you can probably picture not only the house you lived in, but also the buildings next door and across the street. MIT neuroscientists have now identified two brain regions that are involved in creating these panoramic memories.

These brain regions help us to merge fleeting views of our surroundings into a seamless, 360-degree panorama, the researchers say.

“Our understanding of our environment is largely shaped by our memory for what’s currently out of sight,” says Caroline Robertson, a postdoc at MIT’s McGovern Institute for Brain Research and a junior fellow of the Harvard Society of Fellows. “What we were looking for are hubs in the brain where your memories for the panoramic environment are integrated with your current field of view.”

Robertson is the lead author of the study, which appears in the Sept. 8 issue of the journal Current Biology. Nancy Kanwisher, the Walter A. Rosenblith Professor of Brain and Cognitive Sciences and a member of the McGovern Institute, is the paper’s senior author.

Building memories

As we look at a scene, visual information flows from our retinas into the brain, which has regions that are responsible for processing different elements of what we see, such as faces or objects. The MIT team suspected that areas involved in processing scenes — the occipital place area (OPA), the retrosplenial complex (RSC), and the parahippocampal place area (PPA) — might also be involved in generating panoramic memories of a place such as a street corner.

If this were true, when you saw two images of houses that you knew were across the street from each other, they would evoke similar patterns of activity in these specialized brain regions. Two houses from different streets would not induce similar patterns.

“Our hypothesis was that as we begin to build memory of the environment around us, there would be certain regions of the brain where the representation of a single image would start to overlap with representations of other views from the same scene,” Robertson says.

The researchers explored this hypothesis using immersive virtual reality headsets, which allowed them to show people many different panoramic scenes. In this study, the researchers showed participants images from 40 street corners in Boston’s Beacon Hill neighborhood. The images were presented in two ways: Half the time, participants saw a 100-degree stretch of a 360-degree scene, but the other half of the time, they saw two noncontinuous stretches of a 360-degree scene.

After showing participants these panoramic environments, the researchers then showed them 40 pairs of images and asked if they came from the same street corner. Participants were much better able to determine if pairs came from the same corner if they had seen the two scenes linked in the 100-degree image than if they had seen them unlinked.

Brain scans revealed that when participants saw two images that they knew were linked, the response patterns in the RSC and OPA regions were similar. However, this was not the case for image pairs that the participants had not seen as linked. This suggests that the RSC and OPA, but not the PPA, are involved in building panoramic memories of our surroundings, the researchers say.
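
The underlying measurement is pattern similarity: if two views are linked in memory, their multivoxel activity patterns in a region such as the RSC should correlate more strongly than patterns from unlinked views. A simulated illustration of that logic, not the study’s analysis code:

```python
# A sketch of pattern-similarity logic for linked vs. unlinked views.
# Voxel patterns are simulated for illustration.
import numpy as np

rng = np.random.default_rng(4)
n_voxels = 200
base = rng.standard_normal(n_voxels)   # shared panoramic-memory component

# Linked views share the panoramic component; the unlinked view doesn't.
view_a = base + 0.8 * rng.standard_normal(n_voxels)
view_b_linked = base + 0.8 * rng.standard_normal(n_voxels)
view_b_unlinked = rng.standard_normal(n_voxels)

def pattern_similarity(p, q):
    """Pearson correlation between two voxelwise activity patterns."""
    return np.corrcoef(p, q)[0, 1]

print(pattern_similarity(view_a, view_b_linked))    # higher
print(pattern_similarity(view_a, view_b_unlinked))  # near zero
```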

Priming the brain

In another experiment, the researchers tested whether one image could “prime” the brain to recall an image from the same panoramic scene. They first showed participants either another image from the same street corner or an unrelated image; they then showed them a scene and asked whether it had been on their left or right when they first saw it. Participants performed much better when primed with the related image.

“After you have seen a series of views of a panoramic environment, you have explicitly linked them in memory to a known place,” Robertson says. “They also evoke overlapping visual representations in certain regions of the brain, which is implicitly guiding your upcoming perceptual experience.”

The research was funded by the National Science Foundation Science and Technology Center for Brains, Minds, and Machines; and the Harvard Milton Fund.

Study finds brain connections key to learning

A new study from MIT reveals that a brain region dedicated to reading has connections for that skill even before children learn to read.

By scanning the brains of children before and after they learned to read, the researchers found that they could predict the precise location where each child’s visual word form area (VWFA) would develop, based on the connections of that region to other parts of the brain.

Neuroscientists have long wondered why the brain has a region exclusively dedicated to reading — a skill that is unique to humans and only developed about 5,400 years ago, which is not enough time for evolution to have reshaped the brain for that specific task. The new study suggests that the VWFA, located in an area that receives visual input, has pre-existing connections to brain regions associated with language processing, making it ideally suited to become devoted to reading.

“Long-range connections that allow this region to talk to other areas of the brain seem to drive function,” says Zeynep Saygin, a postdoc at MIT’s McGovern Institute for Brain Research. “As far as we can tell, within this larger fusiform region of the brain, only the reading area has these particular sets of connections, and that’s how it’s distinguished from adjacent cortex.”

Saygin is the lead author of the study, which appears in the Aug. 8 issue of Nature Neuroscience. Nancy Kanwisher, the Walter A. Rosenblith Professor of Brain and Cognitive Sciences and a member of the McGovern Institute, is the paper’s senior author.

Specialized for reading

The brain’s cortex, where most cognitive functions occur, has areas specialized for reading as well as face recognition, language comprehension, and many other tasks. Neuroscientists have hypothesized that the locations of these functions may be determined by prewired connections to other parts of the brain, but they have had few good opportunities to test this hypothesis.

Reading presents a unique opportunity to study this question because it is not learned right away, giving scientists a chance to examine the brain region that will become the VWFA before children know how to read. This region, located in the fusiform gyrus, at the base of the brain, is responsible for recognizing strings of letters.

Children participating in the study were scanned twice — at 5 years of age, before learning to read, and at 8 years, after they learned to read. In the scans at age 8, the researchers precisely defined the VWFA for each child by using functional magnetic resonance imaging (fMRI) to measure brain activity as the children read. They also used a technique called diffusion-weighted imaging to trace the connections between the VWFA and other parts of the brain.

The researchers saw no indication from fMRI scans that the VWFA was responding to words at age 5. However, the region that would become the VWFA was already different from adjacent cortex in its connectivity patterns. These patterns were so distinctive that they could be used to accurately predict the precise location where each child’s VWFA would later develop.
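
In effect, each voxel’s pattern of connections serves as a “fingerprint” from which later function can be predicted. The sketch below is a loose, simulated illustration of that idea; the study’s actual model, trained across children on diffusion-imaging data, is more involved.

```python
# A hedged sketch: predict which voxels will become the VWFA from each
# voxel's connectivity "fingerprint." Data and labels are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n_voxels, n_targets = 2000, 30
# Row v = strength of voxel v's connections to 30 candidate brain regions.
fingerprints = rng.standard_normal((n_voxels, n_targets))
# Toy label: voxels strongly connected to two "language" targets.
is_vwfa = (fingerprints[:, 0] + fingerprints[:, 1] > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    fingerprints, is_vwfa, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(clf.score(X_test, y_test))   # how well connectivity localizes the VWFA
```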

Although the area that will become the VWFA does not respond preferentially to letters at age 5, Saygin says it is likely that the region is involved in some kind of high-level object recognition before it gets taken over for word recognition as a child learns to read. Still unknown is how and why the brain forms those connections early in life.

Pre-existing connections

Kanwisher and Saygin have found that the VWFA is connected to language regions of the brain in adults, but the new findings in children offer strong evidence that those connections exist before reading is learned, and are not the result of learning to read, according to Stanislas Dehaene, a professor and the chair of experimental cognitive psychology at the Collège de France, who wrote a commentary on the paper for Nature Neuroscience.

“To genuinely test the hypothesis that the VWFA owes its specialization to a pre-existing connectivity pattern, it was necessary to measure brain connectivity in children before they learned to read,” wrote Dehaene, who was not involved in the study. “Although many children, at the age of 5, did not have a VWFA yet, the connections that were already in place could be used to anticipate where the VWFA would appear once they learned to read.”

The MIT team now plans to study whether this kind of brain imaging could help identify children who are at risk of developing dyslexia and other reading difficulties.

“It’s really powerful to be able to predict functional development three years ahead of time,” Saygin says. “This could be a way to use neuroimaging to try to actually help individuals even before any problems occur.”

Diagnosing depression before it starts

A new brain imaging study from MIT and Harvard Medical School may lead to a screen that could identify children at high risk of developing depression later in life.

In the study, the researchers found distinctive brain differences in children known to be at high risk because of family history of depression. The finding suggests that this type of scan could be used to identify children whose risk was previously unknown, allowing them to undergo treatment before developing depression, says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology and a professor of brain and cognitive sciences at MIT.

“We’d like to develop the tools to be able to identify people at true risk, independent of why they got there, with the ultimate goal of maybe intervening early and not waiting for depression to strike the person,” says Gabrieli, an author of the study, which appears in the journal Biological Psychiatry.

Early intervention is important because once a person suffers from an episode of depression, they become more likely to have another. “If you can avoid that first bout, maybe it would put the person on a different trajectory,” says Gabrieli, who is a member of MIT’s McGovern Institute for Brain Research.

The paper’s lead author is McGovern Institute postdoc Xiaoqian Chai, and the senior author is Susan Whitfield-Gabrieli, a research scientist at the McGovern Institute.

Distinctive patterns

The study also helps to answer a key question about the brain structures of depressed patients. Previous imaging studies have revealed two brain regions that often show abnormal activity in these patients: the subgenual anterior cingulate cortex (sgACC) and the amygdala. However, it was unclear if those differences caused depression or if the brain changed as the result of a depressive episode.

To address that issue, the researchers decided to scan the brains of children who were not depressed, according to their scores on a commonly used diagnostic questionnaire, but who had a parent who had suffered from the disorder. Such children are three times more likely to become depressed later in life, usually between the ages of 15 and 30.

Gabrieli and colleagues studied 27 high-risk children, ranging in age from 8 to 14, and compared them with a group of 16 children with no known family history of depression.

Using functional magnetic resonance imaging (fMRI), the researchers measured synchronization of activity between different brain regions. Synchronization patterns that emerge when a person is not performing any particular task allow scientists to determine which regions naturally communicate with each other.

The researchers identified several distinctive patterns in the at-risk children. The strongest of these links was between the sgACC and the default mode network — a set of brain regions that is most active when the mind is unfocused. This abnormally high synchronization has also been seen in the brains of depressed adults.
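
Synchronization here means that two regions’ spontaneous activity rises and falls together over the scan. A simulated sketch of the seed-based version of this measurement, with region names standing in loosely for the real anatomy:

```python
# A minimal sketch of seed-based synchronization: correlate the sgACC
# time course with other regions' time courses. Signals are simulated.
import numpy as np

rng = np.random.default_rng(6)
n_timepoints = 300
sgacc = rng.standard_normal(n_timepoints)

regions = {
    # Default-mode regions simulated to track the sgACC signal.
    "medial_prefrontal": sgacc + 0.8 * rng.standard_normal(n_timepoints),
    "posterior_cingulate": sgacc + 0.8 * rng.standard_normal(n_timepoints),
    # A control region with an independent time course.
    "motor_cortex": rng.standard_normal(n_timepoints),
}

for name, ts in regions.items():
    r = np.corrcoef(sgacc, ts)[0, 1]
    print(f"sgACC <-> {name}: r = {r:.2f}")
```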

The researchers also found hyperactive connections between the amygdala, which is important for processing emotion, and the inferior frontal gyrus, which is involved in language processing. Within areas of the frontal and parietal cortex, which are important for thinking and decision-making, they found lower than normal connectivity.

Cause and effect

These patterns are strikingly similar to those found in depressed adults, suggesting that these differences arise before depression occurs and may contribute to the development of the disorder, says Ian Gotlib, a professor of psychology at Stanford University.

“The findings are consistent with an explanation that this is contributing to the onset of the disease,” says Gotlib, who was not involved in the research. “The patterns are there before the depressive episode and are not due to the disorder.”

The MIT team is continuing to track the at-risk children and plans to investigate whether early treatment might prevent episodes of depression. They also hope to study how some children who are at high risk manage to avoid the disorder without treatment.

Other authors of the paper are Dina Hirshfeld-Becker, an associate professor of psychiatry at Harvard Medical School; Joseph Biederman, director of pediatric psychopharmacology at Massachusetts General Hospital (MGH); Mai Uchida, an assistant professor of psychiatry at Harvard Medical School; former MIT postdoc Oliver Doehrmann; MIT graduate student Julia Leonard; John Salvatore, a former McGovern technical assistant; MGH research assistants Tara Kenworthy and Elana Kagan; Harvard Medical School postdoc Ariel Brown; and former MIT technical assistant Carlo de los Angeles.

Music in the brain

Scientists have long wondered if the human brain contains neural mechanisms specific to music perception. Now, for the first time, MIT neuroscientists have identified a neural population in the human auditory cortex that responds selectively to sounds that people typically categorize as music, but not to speech or other environmental sounds.

“It has been the subject of widespread speculation,” says Josh McDermott, the Frederick A. and Carole J. Middleton Assistant Professor of Neuroscience in the Department of Brain and Cognitive Sciences at MIT. “One of the core debates surrounding music is to what extent it has dedicated mechanisms in the brain and to what extent it piggybacks off of mechanisms that primarily serve other functions.”

The finding was enabled by a new method designed to identify neural populations from functional magnetic resonance imaging (fMRI) data. Using this method, the researchers identified six neural populations with different functions, including the music-selective population and another set of neurons that responds selectively to speech.

“The music result is notable because people had not been able to clearly see highly selective responses to music before,” says Sam Norman-Haignere, a postdoc at MIT’s McGovern Institute for Brain Research.

“Our findings are hard to reconcile with the idea that music piggybacks entirely on neural machinery that is optimized for other functions, because the neural responses we see are highly specific to music,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research.

Norman-Haignere is the lead author of a paper describing the findings in the Dec. 16 online edition of Neuron. McDermott and Kanwisher are the paper’s senior authors.

Mapping responses to sound

For this study, the researchers scanned the brains of 10 human subjects listening to 165 natural sounds, including different types of speech and music, as well as everyday sounds such as footsteps, a car engine starting, and a telephone ringing.

The brain’s auditory system has proven difficult to map, in part because of the coarse spatial resolution of fMRI, which measures blood flow as an index of neural activity. In fMRI, “voxels” — the smallest units of measurement — each reflect the response of hundreds of thousands or millions of neurons.

“As a result, when you measure raw voxel responses you’re measuring something that reflects a mixture of underlying neural responses,” Norman-Haignere says.

To tease apart these responses, the researchers used a technique that models each voxel as a mixture of multiple underlying neural responses. Using this method, they identified six neural populations, each with a unique response pattern to the sounds in the experiment, that best explained the data.

“What we found is we could explain a lot of the response variation across tens of thousands of voxels with just six response patterns,” Norman-Haignere says.
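
The team’s decomposition method is a custom algorithm, but the underlying idea is matrix factorization: express the large sounds-by-voxels response matrix as a product of a few shared response patterns and per-voxel weights. As a rough, simulated stand-in for it, non-negative matrix factorization shows the shape of the computation:

```python
# An illustrative stand-in (NMF) for the team's custom decomposition:
# express each voxel's responses to 165 sounds as a mixture of 6
# shared response patterns. Data are simulated.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(7)
n_sounds, n_voxels, n_components = 165, 5000, 6

# Simulated data: voxel responses built from 6 latent patterns.
true_patterns = rng.random((n_sounds, n_components))
true_weights = rng.random((n_components, n_voxels))
responses = true_patterns @ true_weights + 0.05 * rng.random((n_sounds, n_voxels))

model = NMF(n_components=n_components, init="nndsvda", max_iter=500, random_state=0)
patterns = model.fit_transform(responses)   # (165 sounds, 6 response patterns)
voxel_weights = model.components_           # how much each voxel expresses each one
print(patterns.shape, voxel_weights.shape)
```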

One population responded most to music, another to speech, and the other four to different acoustic properties such as pitch and frequency.

The key to this advance is the researchers’ new approach to analyzing fMRI data, says Josef Rauschecker, a professor of physiology and biophysics at Georgetown University.

“The whole field is interested in finding specialized areas like those that have been found in the visual cortex, but the problem is the voxel is just not small enough. You have hundreds of thousands of neurons in a voxel, and how do you separate the information they’re encoding? This is a study of the highest caliber of data analysis,” says Rauschecker, who was not part of the research team.

Layers of sound processing

The four acoustically responsive neural populations overlap with regions of “primary” auditory cortex, which performs the first stage of cortical processing of sound. Speech and music-selective neural populations lie beyond this primary region.

“We think this provides evidence that there’s a hierarchy of processing where there are responses to relatively simple acoustic dimensions in this primary auditory area. That’s followed by a second stage of processing that represents more abstract properties of sound related to speech and music,” Norman-Haignere says.

The researchers believe there may be other brain regions involved in processing music, including its emotional components. “It’s inappropriate at this point to conclude that this is the seat of music in the brain,” McDermott says. “This is where you see most of the responses within the auditory cortex, but there’s a lot of the brain that we didn’t even look at.”

Kanwisher also notes that “the existence of music-selective responses in the brain does not imply that the responses reflect an innate brain system. An important question for the future will be how this system arises in development: How early is it found in infancy or childhood, and how dependent is it on experience?”

The researchers are now investigating whether the music-selective population identified in this study contains subpopulations of neurons that respond to different aspects of music, including rhythm, melody, and beat. They also hope to study how musical experience and training might affect this neural population.

Young brains can take on new functions

In 2011, MIT neuroscientist Rebecca Saxe and colleagues reported that in blind adults, brain regions normally dedicated to vision processing instead participate in language tasks such as speech and comprehension. Now, in a study of blind children, Saxe’s lab has found that this transformation occurs very early in life, before the age of 4.

The study, appearing in the Journal of Neuroscience, suggests that the brains of young children are highly plastic, meaning that regions usually specialized for one task can adapt to new and very different roles. The findings also help to define the extent to which this type of remodeling is possible.

“In some circumstances, patches of cortex appear to take on other roles than the ones that they most typically have,” says Saxe, a professor of cognitive neuroscience and an associate member of MIT’s McGovern Institute for Brain Research. “One question that arises from that is, ‘What is the range of possible differences between what a cortical region typically does and what it could possibly do?’”

The paper’s lead author is Marina Bedny, a former MIT postdoc who is now an assistant professor at Johns Hopkins University. MIT graduate student Hilary Richardson is also an author of the paper.

Brain reorganization

The brain’s cortex, which carries out high-level functions such as thought, sensory processing, and initiation of movement, is made of sheets of neurons, each dedicated to a certain role. Within the visual system, located primarily in the occipital lobe, most neurons are tuned to respond only to a very specific aspect of visual input, such as brightness, orientation, or location in the field of view.

“There’s this big fundamental question, which is, ‘How did that organization get there, and to what degree can it be changed?’” Saxe says.

One possibility is that neurons in each patch of cortex have evolved to carry out specific roles, and can do nothing else. At the other extreme is the possibility that any patch of cortex can be recruited to perform any kind of computational task.

“The reality is somewhere in between those two,” Saxe says.

To study the extent to which cortex can change its function, scientists have focused on the visual cortex because they can learn a great deal about it by studying people who were born blind.

A landmark 1996 study of blind people found that their visual regions could participate in a nonvisual task — reading Braille. Some scientists theorized that perhaps the visual cortex is recruited for reading Braille because like vision, it requires discriminating very fine-grained patterns.

However, in their 2011 study, Saxe and Bedny found that the visual cortex of blind adults also responds to spoken language. “That was weird, because processing auditory language doesn’t require the kind of fine-grained spatial discrimination that Braille does,” Saxe says.

She and Bedny hypothesized that auditory language processing may develop in the occipital cortex by piggybacking onto the Braille-reading function. To test that idea, they began studying congenitally blind children, including some who had not learned Braille yet. They reasoned that if their hypothesis were correct, the occipital lobe would be gradually recruited for language processing as the children learned Braille.

However, they found that this was not the case. Instead, children as young as 4 already have language-related activity in the occipital lobe.

“The response of occipital cortex to language is not affected by Braille acquisition,” Saxe says. “It happens before Braille and it doesn’t increase with Braille.”

Language-related occipital activity was similar among all of the 19 blind children, who ranged in age from 4 to 17, suggesting that the entire process of occipital recruitment for language processing takes place before the age of 4, Saxe says. Bedny and Saxe have previously shown that this transition occurs only in people blind from birth, suggesting that there is an early critical period after which the cortex loses much of its plasticity.

The new study represents a huge step forward in understanding how the occipital cortex can take on new functions, says Ione Fine, an associate professor of psychology at the University of Washington.

“One thing that has been missing is an understanding of the developmental timeline,” says Fine, who was not involved in the research. “The insight here is that you get plasticity for language separate from plasticity for Braille and separate from plasticity for auditory processing.”

Language skills

The findings raise the question of how the extra language-processing centers in the occipital lobe affect language skills.

“This is a question we’ve always wondered about,” Saxe says. “Does it mean you’re better at those functions because you have more of your cortex doing it? Does it mean you’re more resilient in those functions because now you have more redundancy in your mechanism for doing it? You could even imagine the opposite: Maybe you’re less good at those functions because they’re distributed in an inefficient or atypical way.”

There are hints that the occipital lobe’s contribution to language-related functions “takes the pressure off the frontal cortex,” where language processing normally occurs, Saxe says. Other researchers have shown that suppressing left frontal cortex activity with transcranial magnetic stimulation interferes with language function in sighted people, but not in the congenitally blind.

This leads to the intriguing prediction that a congenitally blind person who suffers a stroke in the left frontal cortex may retain much more language ability than a sighted person would, Saxe says, although that hypothesis has not been tested.

Saxe’s lab is now studying children under 4 to try to learn more about how cortical functions develop early in life, while Bedny is investigating whether the occipital lobe participates in functions other than language in congenitally blind people.

Study links brain anatomy, academic achievement, and family income

Many years of research have shown that for students from lower-income families, standardized test scores and other measures of academic success tend to lag behind those of wealthier students.

A new study led by researchers at MIT and Harvard University offers another dimension to this so-called “achievement gap”: After imaging the brains of high- and low-income students, they found that the higher-income students had thicker brain cortex in areas associated with visual perception and knowledge accumulation. Furthermore, these differences also correlated with one measure of academic achievement — performance on standardized tests.

“Just as you would expect, there’s a real cost to not living in a supportive environment. We can see it not only in test scores, in educational attainment, but within the brains of these children,” says MIT’s John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, professor of brain and cognitive sciences, and one of the study’s authors. “To me, it’s a call to action. You want to boost the opportunities for those for whom it doesn’t come easily in their environment.”

This study did not explore possible reasons for these differences in brain anatomy. However, previous studies have shown that lower-income students are more likely to suffer from stress in early childhood, have more limited access to educational resources, and receive less exposure to spoken language early in life. These factors have all been linked to lower academic achievement.

In recent years, the achievement gap in the United States between high- and low-income students has widened, even as gaps along lines of race and ethnicity have narrowed, says Martin West, an associate professor of education at the Harvard Graduate School of Education and an author of the new study.

“The gap in student achievement, as measured by test scores between low-income and high-income students, is a pervasive and longstanding phenomenon in American education, and indeed in education systems around the world,” he says. “There’s a lot of interest among educators and policymakers in trying to understand the sources of those achievement gaps, but even more interest in possible strategies to address them.”

Allyson Mackey, a postdoc at MIT’s McGovern Institute for Brain Research, is the lead author of the paper, which appears in the journal Psychological Science. Other authors are postdoc Amy Finn; graduate student Julia Leonard; Drew Jacoby-Senghor, a postdoc at Columbia Business School; and Christopher Gabrieli, chair of the nonprofit Transforming Education.

Explaining the gap

The study included 58 students — 23 from lower-income families and 35 from higher-income families, all aged 12 or 13. Low-income students were defined as those who qualify for a free or reduced-price school lunch.

The researchers compared students’ scores on the Massachusetts Comprehensive Assessment System (MCAS) with brain scans of a region known as the cortex, which is key to functions such as thought, language, sensory perception, and motor command.

Using magnetic resonance imaging (MRI), they discovered differences in the thickness of parts of the cortex in the temporal and occipital lobes, whose primary roles are in vision and storing knowledge. Those differences correlated to differences in both test scores and family income. In fact, differences in cortical thickness in these brain regions could explain as much as 44 percent of the income achievement gap found in this study.
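
Estimates of the form “could explain as much as 44 percent of the gap” typically come from comparing regressions with and without the brain measure: how much the income coefficient shrinks once cortical thickness enters the model. The sketch below illustrates that bookkeeping on simulated data; it is not the study’s actual analysis.

```python
# A hedged sketch of how a brain measure can "account for" part of an
# income-achievement gap. All data are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 58                                          # sample size in the study
income = rng.integers(0, 2, n).astype(float)    # 0 = lower-, 1 = higher-income
thickness = 0.5 * income + rng.standard_normal(n)        # thickness tracks income
test_score = 0.6 * thickness + 0.3 * income + rng.standard_normal(n)

total = sm.OLS(test_score, sm.add_constant(income)).fit()
adjusted = sm.OLS(test_score,
                  sm.add_constant(np.column_stack([income, thickness]))).fit()

gap_total = total.params[1]        # raw income gap in test scores
gap_adjusted = adjusted.params[1]  # gap remaining after thickness is included
print(1 - gap_adjusted / gap_total)   # share of the gap "explained"
```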

Previous studies have also shown brain anatomy differences associated with income, but did not link those differences to academic achievement.

“A number of labs have reported differences in children’s brain structures as a function of family income, but this is the first to relate that to variation in academic achievement,” says Kimberly Noble, an assistant professor of pediatrics at Columbia University who was not part of the research team.

In most other measures of brain anatomy, the researchers found no significant differences. The amount of white matter — the bundles of axons that connect different parts of the brain — did not differ, nor did the overall surface area of the brain cortex.

The researchers point out that the structural differences they did find are not necessarily permanent. “There’s so much strong evidence that brains are highly plastic,” says Gabrieli, who is also a member of the McGovern Institute. “Our findings don’t mean that further educational support, home support, all those things, couldn’t make big differences.”

In a follow-up study, the researchers hope to learn more about what types of educational programs might help to close the achievement gap, and if possible, investigate whether these interventions also influence brain anatomy.

“Over the past decade we’ve been able to identify a growing number of educational interventions that have managed to have notable impacts on students’ academic achievement as measured by standardized tests,” West says. “What we don’t know anything about is the extent to which those interventions — whether it be attending a very high-performing charter school, or being assigned to a particularly effective teacher, or being exposed to a high-quality curricular program — improves test scores by altering some of the differences in brain structure that we’ve documented, or whether they had those effects by other means.”

The research was funded by the Bill and Melinda Gates Foundation and the National Institutes of Health.