Neuroscientists get a glimpse into the workings of the baby brain

In adults, certain regions of the brain’s visual cortex respond preferentially to specific types of input, such as faces or objects — but how and when those preferences arise has long puzzled neuroscientists.

One way to help answer that question is to study the brains of very young infants and compare them to adult brains. However, scanning the brains of awake babies in an MRI machine has proven difficult.

Now, neuroscientists at MIT have overcome that obstacle, adapting their MRI scanner to make it easier to scan infants’ brains as the babies watch movies featuring different types of visual input. Using these data, the team found that in some ways, the organization of infants’ brains is surprisingly similar to that of adults. Specifically, brain regions that respond to faces in adults do the same in babies, as do regions that respond to scenes.

“It suggests that there’s a stronger biological predisposition than I would have guessed for specific cortical regions to end up with specific functions,” says Rebecca Saxe, a professor of brain and cognitive sciences and member of MIT’s McGovern Institute for Brain Research.

Saxe is the senior author of the study, which appears in the Jan. 10 issue of Nature Communications. The paper’s lead author is former MIT graduate student Ben Deen, who is now a postdoc at Rockefeller University.

MRI adaptations

Functional magnetic resonance imaging (fMRI) is the go-to technique for studying brain function in adults. However, very few researchers have taken on the challenge of trying to scan babies’ brains, especially while the babies are awake.

“Babies and MRI machines have very different needs,” Saxe points out. “Babies would like to do activities for two or three minutes and then move on. They would like to be sitting in a comfortable position, and in charge of what they’re looking at.”

On the other hand, “MRI machines would like to be loud and dark and have a person show up on schedule, stay still for the entire time, pay attention to one thing for two hours, and follow instructions closely,” she says.

To make the setup more comfortable for babies, the researchers made several modifications to the MRI machine and to their usual experimental protocols. First, they built a special coil (part of the MRI scanner that acts as a radio antenna) that allows the baby to recline in a seat similar to a car seat. A mirror in front of the baby’s face allows him or her to watch videos, and there is space in the machine for a parent or one of the researchers to sit with the baby.

The researchers also made the scanner much less noisy than a typical MRI machine. “It’s quieter than a loud restaurant,” Saxe says. “The baby can hear their parent talking over the sound of the scanner.”

Once the babies, who were 4 to 6 months old, were in the scanner, the researchers played the movies continuously while scanning the babies’ brains. However, they only used data from the time periods when the babies were actively watching the movies. From 26 hours of scanning 17 babies, the researchers obtained four hours of usable data from nine babies.

“The sheer tenacity of this work is truly amazing,” says Charles Nelson, a professor of pediatrics at Boston Children’s Hospital, who was not involved in the research. “The fact that they pulled this off is incredibly novel.”

Obtaining these data allowed the MIT team to study how infants’ brains respond to specific types of sensory input, and to compare their responses with those of adults.

“The big-picture question is, how does the adult brain come to have the structure and function that you see in adulthood? How does it get like that?” Saxe says. “A lot of the answer to that question will depend on having the tools to be able to see the baby brain in action. The more we can see, the more we can ask that kind of question.”

Distinct preferences

The researchers showed the babies videos of either smiling children or outdoor scenes such as a suburban street seen from a moving car. Distinguishing social scenes from the physical environment is one of the main high-level divisions that our brains make when interpreting the world.

“The questions we’re asking are about how you understand and organize your world, with vision as the main modality for getting you into these very different mindsets,” Saxe says. “In adults, there are brain regions that prefer to look at faces and socially relevant things, and brain regions that prefer to look at environments and objects.”

The scans revealed that many regions of the babies’ visual cortex showed the same preferences for scenes or faces seen in adult brains. This suggests that these preferences form within the first few months of life and refutes the hypothesis that it takes years of experience interpreting the world for the brain to develop the responses that it shows in adulthood.
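One way to convey what such a comparison involves, sketched below purely as an illustration (the study’s own analysis was more involved), is to compute each voxel’s face-versus-scene preference in infant and adult data and then correlate the two preference maps. All data and variable names here are simulated placeholders.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_voxels = 5000

# Simulated mean responses per voxel (placeholder data, not real scans):
# the two stimulus categories used in the study, faces and scenes.
shared_pref = rng.normal(size=n_voxels)             # preference shared across ages
adult_faces   = shared_pref + rng.normal(scale=0.5, size=n_voxels)
adult_scenes  = -shared_pref + rng.normal(scale=0.5, size=n_voxels)
infant_faces  = 0.6 * shared_pref + rng.normal(scale=0.8, size=n_voxels)
infant_scenes = -0.6 * shared_pref + rng.normal(scale=0.8, size=n_voxels)

# A simple "preference" per voxel: face response minus scene response.
adult_contrast = adult_faces - adult_scenes
infant_contrast = infant_faces - infant_scenes

# If infant visual cortex is organized like the adult's, the two contrast
# maps should be positively correlated across voxels.
r, p = pearsonr(adult_contrast, infant_contrast)
print(f"adult-infant preference map correlation: r = {r:.2f}, p = {p:.2g}")
```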

The researchers also found some differences in the way that babies’ brains respond to visual stimuli. One is that they do not seem to have regions found in the adult brain that are “highly selective,” meaning these regions prefer features such as human faces over any other kind of input, including human bodies or the faces of other animals. The babies also showed some differences in their responses when shown examples from four different categories — not just faces and scenes but also bodies and objects.

“We believe that the adult-like organization of infant visual cortex provides a scaffolding that guides the subsequent refinement of responses via experience, ultimately leading to the strongly specialized regions observed in adults,” Deen says.

Saxe and colleagues now hope to try to scan more babies between the ages of 3 and 8 months so they can get a better idea of how these vision-processing regions change over the first several months of life. They also hope to study even younger babies to help them discover when these distinctive brain responses first appear.

Distinctive brain pattern may underlie dyslexia

A distinctive neural signature found in the brains of people with dyslexia may explain why these individuals have difficulty learning to read, according to a new study from MIT neuroscientists.

The researchers discovered that in people with dyslexia, the brain has a diminished ability to acclimate to a repeated input — a trait known as neural adaptation. For example, when dyslexic students see the same word repeatedly, brain regions involved in reading do not show the same adaptation seen in typical readers.

This suggests that the brain’s plasticity, which underpins its ability to learn new things, is reduced, says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

“It’s a difference in the brain that’s not about reading per se, but it’s a difference in perceptual learning that’s pretty broad,” says Gabrieli, who is the study’s senior author. “This is a path by which a brain difference could influence learning to read, which involves so many demands on plasticity.”

Former MIT graduate student Tyler Perrachione, who is now an assistant professor at Boston University, is the lead author of the study, which appears in the Dec. 21 issue of Neuron.

Reduced plasticity

The MIT team used magnetic resonance imaging (MRI) to scan the brains of young adults with and without reading difficulties as they performed a variety of tasks. In the first experiment, the subjects listened to a series of words read by either four different speakers or a single speaker.

The MRI scans revealed distinctive patterns of activity in each group of subjects. In nondyslexic people, areas of the brain that are involved in language showed neural adaptation after hearing words said by the same speaker, but not when different speakers said the words. However, the dyslexic subjects showed much less adaptation to hearing words said by a single speaker.

Neurons that respond to a particular sensory input usually react strongly at first, but their response becomes muted as the input continues. This neural adaptation reflects chemical changes in neurons that make it easier for them to respond to a familiar stimulus, Gabrieli says. This phenomenon, known as plasticity, is key to learning new skills.

“You learn something upon the initial presentation that makes you better able to do it the second time, and the ease is marked by reduced neural activity,” Gabrieli says. “Because you’ve done something before, it’s easier to do it again.”
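A common way to summarize this effect, shown here only as a generic sketch (not the study’s actual pipeline), is an adaptation index: the fractional drop in a region’s response from novel to repeated presentations, which can then be compared between typical and dyslexic readers. All values below are simulated.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

def adaptation_index(novel, repeated):
    """Fractional reduction in response when a stimulus repeats."""
    return (novel - repeated) / novel

# Simulated mean responses (arbitrary units) of a language region
# to words from different speakers (novel) vs. one speaker (repeated).
typical_novel     = rng.normal(1.0, 0.1, size=20)
typical_repeated  = rng.normal(0.7, 0.1, size=20)    # strong adaptation
dyslexic_novel    = rng.normal(1.0, 0.1, size=20)
dyslexic_repeated = rng.normal(0.95, 0.1, size=20)   # weak adaptation

ai_typical  = adaptation_index(typical_novel, typical_repeated)
ai_dyslexic = adaptation_index(dyslexic_novel, dyslexic_repeated)

t, p = ttest_ind(ai_typical, ai_dyslexic)
print(f"typical readers:  mean adaptation = {ai_typical.mean():.2f}")
print(f"dyslexic readers: mean adaptation = {ai_dyslexic.mean():.2f}")
print(f"group difference: t = {t:.2f}, p = {p:.2g}")
```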

The researchers then ran a series of experiments to test how broad this effect might be. They asked subjects to look at a series of the same word or different words; pictures of the same object or different objects; and pictures of the same face or different faces. In each case, they found that in people with dyslexia, brain regions devoted to interpreting words, objects, and faces, respectively, did not show neural adaptation when the same stimuli were repeated multiple times.

“The brain location changed depending on the nature of the content that was being perceived, but the reduced adaptation was consistent across very different domains,” Gabrieli says.

He was surprised to see that this effect was so widespread, appearing even during tasks that have nothing to do with reading; people with dyslexia have no documented difficulties in recognizing objects or faces.

He hypothesizes that the impairment shows up primarily in reading because deciphering letters and mapping them to sounds is such a demanding cognitive task. “There are probably few tasks people undertake that require as much plasticity as reading,” Gabrieli says.

Early appearance

In their final experiment, the researchers tested first and second graders with and without reading difficulties, and they found the same disparity in neural adaptation.

“We got almost the identical reduction in plasticity, which suggests that this is occurring quite early in learning to read,” Gabrieli says. “It’s not a consequence of a different learning experience over the years in struggling to read.”

Gabrieli’s lab now plans to study younger children to see if these differences might be apparent even before children begin to learn to read. They also hope to use other types of brain measurements such as magnetoencephalography (MEG) to follow the time course of the neural adaptation more closely.

The research was funded by the Ellison Medical Foundation, the National Institutes of Health, and a National Science Foundation Graduate Research Fellowship.

How the brain builds panoramic memory

When asked to visualize your childhood home, you can probably picture not only the house you lived in, but also the buildings next door and across the street. MIT neuroscientists have now identified two brain regions that are involved in creating these panoramic memories.

These brain regions help us to merge fleeting views of our surroundings into a seamless, 360-degree panorama, the researchers say.

“Our understanding of our environment is largely shaped by our memory for what’s currently out of sight,” says Caroline Robertson, a postdoc at MIT’s McGovern Institute for Brain Research and a junior fellow of the Harvard Society of Fellows. “What we were looking for are hubs in the brain where your memories for the panoramic environment are integrated with your current field of view.”

Robertson is the lead author of the study, which appears in the Sept. 8 issue of the journal Current Biology. Nancy Kanwisher, the Walter A. Rosenblith Professor of Brain and Cognitive Sciences and a member of the McGovern Institute, is the paper’s senior author.

Building memories

As we look at a scene, visual information flows from our retinas into the brain, which has regions that are responsible for processing different elements of what we see, such as faces or objects. The MIT team suspected that areas involved in processing scenes — the occipital place area (OPA), the retrosplenial complex (RSC), and the parahippocampal place area (PPA) — might also be involved in generating panoramic memories of a place such as a street corner.

If this were true, when you saw two images of houses that you knew were across the street from each other, they would evoke similar patterns of activity in these specialized brain regions. Two houses from different streets would not induce similar patterns.

“Our hypothesis was that as we begin to build memory of the environment around us, there would be certain regions of the brain where the representation of a single image would start to overlap with representations of other views from the same scene,” Robertson says.
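In spirit, this is a pattern-similarity analysis: correlate the multivoxel response patterns evoked by two views and ask whether views of the same street corner are more similar than views of different corners. The sketch below illustrates the idea on simulated patterns; it is not the paper’s exact method, and the region label is only a placeholder.

```python
import numpy as np
from scipy.stats import pearsonr, ttest_rel

rng = np.random.default_rng(2)
n_voxels, n_corners = 200, 40

# Simulated multivoxel patterns in a scene region (e.g., RSC): each street
# corner gets a "base" pattern, and each of its two views is a noisy copy.
base = rng.normal(size=(n_corners, n_voxels))
view_a = base + rng.normal(scale=1.0, size=base.shape)
view_b = base + rng.normal(scale=1.0, size=base.shape)

# Similarity of view pairs from the same corner vs. from different corners.
same_corner = np.array([pearsonr(view_a[i], view_b[i])[0]
                        for i in range(n_corners)])
diff_corner = np.array([pearsonr(view_a[i], view_b[(i + 1) % n_corners])[0]
                        for i in range(n_corners)])

t, p = ttest_rel(same_corner, diff_corner)
print(f"same-corner similarity:      {same_corner.mean():.2f}")
print(f"different-corner similarity: {diff_corner.mean():.2f}")
print(f"paired comparison: t = {t:.2f}, p = {p:.2g}")
```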

The researchers explored this hypothesis using immersive virtual reality headsets, which allowed them to show people many different panoramic scenes. In this study, the researchers showed participants images from 40 street corners in Boston’s Beacon Hill neighborhood. The images were presented in two ways: Half the time, participants saw a 100-degree stretch of a 360-degree scene; the other half of the time, they saw two noncontinuous stretches of a 360-degree scene.

After showing participants these panoramic environments, the researchers then showed them 40 pairs of images and asked if they came from the same street corner. Participants were much better able to determine if pairs came from the same corner if they had seen the two scenes linked in the 100-degree image than if they had seen them unlinked.

Brain scans revealed that when participants saw two images that they knew were linked, the response patterns in the RSC and OPA regions were similar. However, this was not the case for image pairs that the participants had not seen as linked. This suggests that the RSC and OPA, but not the PPA, are involved in building panoramic memories of our surroundings, the researchers say.

Priming the brain

In another experiment, the researchers tested whether one image could “prime” the brain to recall an image from the same panoramic scene. Participants were first shown either an image from the same street corner as an upcoming test scene or an unrelated image; they were then shown the test scene and asked whether it had been on their left or right when they first saw it. Participants performed much better when primed with the related image.

“After you have seen a series of views of a panoramic environment, you have explicitly linked them in memory to a known place,” Robertson says. “They also evoke overlapping visual representations in certain regions of the brain, which is implicitly guiding your upcoming perceptual experience.”

The research was funded by the National Science Foundation Science and Technology Center for Brains, Minds, and Machines; and the Harvard Milton Fund.

Study finds brain connections key to learning

A new study from MIT reveals that a brain region dedicated to reading has connections for that skill even before children learn to read.

By scanning the brains of children before and after they learned to read, the researchers found that they could predict the precise location where each child’s visual word form area (VWFA) would develop, based on the connections of that region to other parts of the brain.

Neuroscientists have long wondered why the brain has a region exclusively dedicated to reading — a skill that is unique to humans and only developed about 5,400 years ago, which is not enough time for evolution to have reshaped the brain for that specific task. The new study suggests that the VWFA, located in an area that receives visual input, has pre-existing connections to brain regions associated with language processing, making it ideally suited to become devoted to reading.

“Long-range connections that allow this region to talk to other areas of the brain seem to drive function,” says Zeynep Saygin, a postdoc at MIT’s McGovern Institute for Brain Research. “As far as we can tell, within this larger fusiform region of the brain, only the reading area has these particular sets of connections, and that’s how it’s distinguished from adjacent cortex.”

Saygin is the lead author of the study, which appears in the Aug. 8 issue of Nature Neuroscience. Nancy Kanwisher, the Walter A. Rosenblith Professor of Brain and Cognitive Sciences and a member of the McGovern Institute, is the paper’s senior author.

Specialized for reading

The brain’s cortex, where most cognitive functions occur, has areas specialized for reading as well as face recognition, language comprehension, and many other tasks. Neuroscientists have hypothesized that the locations of these functions may be determined by prewired connections to other parts of the brain, but they have had few good opportunities to test this hypothesis.

Reading presents a unique opportunity to study this question because it is not learned right away, giving scientists a chance to examine the brain region that will become the VWFA before children know how to read. This region, located in the fusiform gyrus, at the base of the brain, is responsible for recognizing strings of letters.

Children participating in the study were scanned twice — at 5 years of age, before learning to read, and at 8 years, after they learned to read. In the scans at age 8, the researchers precisely defined the VWFA for each child by using functional magnetic resonance imaging (fMRI) to measure brain activity as the children read. They also used a technique called diffusion-weighted imaging to trace the connections between the VWFA and other parts of the brain.

The researchers saw no indication from fMRI scans that the VWFA was responding to words at age 5. However, the region that would become the VWFA was already different from adjacent cortex in its connectivity patterns. These patterns were so distinctive that they could be used to accurately predict the precise location where each child’s VWFA would later develop.
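The logic of that prediction can be illustrated, very roughly, as a classification problem: given each fusiform voxel’s connectivity “fingerprint” at age 5, predict whether that voxel will fall inside the functionally defined VWFA at age 8. The sketch below uses simulated fingerprints and an off-the-shelf logistic-regression classifier as a stand-in for the study’s actual diffusion-imaging analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_voxels, n_targets = 2000, 50   # fusiform voxels x connectivity targets

# Simulated connectivity fingerprints at age 5 and VWFA labels at age 8
# (placeholder data: VWFA voxels share a distinctive connectivity profile).
is_vwfa = rng.random(n_voxels) < 0.1
signal = np.outer(is_vwfa.astype(float), rng.normal(size=n_targets))
fingerprints = signal + rng.normal(scale=1.0, size=(n_voxels, n_targets))

# Cross-validated prediction of future VWFA membership from connectivity.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, fingerprints, is_vwfa, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {scores.mean():.2f}")
```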

Although the area that will become the VWFA does not respond preferentially to letters at age 5, Saygin says it is likely that the region is involved in some kind of high-level object recognition before it gets taken over for word recognition as a child learns to read. Still unknown is how and why the brain forms those connections early in life.

Pre-existing connections

Kanwisher and Saygin have found that the VWFA is connected to language regions of the brain in adults, but the new findings in children offer strong evidence that those connections exist before reading is learned, and are not the result of learning to read, according to Stanislas Dehaene, a professor and the chair of experimental cognitive psychology at the Collège de France, who wrote a commentary on the paper for Nature Neuroscience.

“To genuinely test the hypothesis that the VWFA owes its specialization to a pre-existing connectivity pattern, it was necessary to measure brain connectivity in children before they learned to read,” wrote Dehaene, who was not involved in the study. “Although many children, at the age of 5, did not have a VWFA yet, the connections that were already in place could be used to anticipate where the VWFA would appear once they learned to read.”

The MIT team now plans to study whether this kind of brain imaging could help identify children who are at risk of developing dyslexia and other reading difficulties.

“It’s really powerful to be able to predict functional development three years ahead of time,” Saygin says. “This could be a way to use neuroimaging to try to actually help individuals even before any problems occur.”

Diagnosing depression before it starts

A new brain imaging study from MIT and Harvard Medical School may lead to a screen that could identify children at high risk of developing depression later in life.

In the study, the researchers found distinctive brain differences in children known to be at high risk because of family history of depression. The finding suggests that this type of scan could be used to identify children whose risk was previously unknown, allowing them to undergo treatment before developing depression, says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology and a professor of brain and cognitive sciences at MIT.

“We’d like to develop the tools to be able to identify people at true risk, independent of why they got there, with the ultimate goal of maybe intervening early and not waiting for depression to strike the person,” says Gabrieli, an author of the study, which appears in the journal Biological Psychiatry.

Early intervention is important because once a person suffers from an episode of depression, they become more likely to have another. “If you can avoid that first bout, maybe it would put the person on a different trajectory,” says Gabrieli, who is a member of MIT’s McGovern Institute for Brain Research.

The paper’s lead author is McGovern Institute postdoc Xiaoqian Chai, and the senior author is Susan Whitfield-Gabrieli, a research scientist at the McGovern Institute.

Distinctive patterns

The study also helps to answer a key question about the brain structures of depressed patients. Previous imaging studies have revealed two brain regions that often show abnormal activity in these patients: the subgenual anterior cingulate cortex (sgACC) and the amygdala. However, it was unclear if those differences caused depression or if the brain changed as the result of a depressive episode.

To address that issue, the researchers decided to scan the brains of children who were not depressed, according to their scores on a commonly used diagnostic questionnaire, but who had a parent who had suffered from the disorder. Such children are three times more likely to become depressed later in life, usually between the ages of 15 and 30.

Gabrieli and colleagues studied 27 high-risk children, ranging in age from 8 to 14, and compared them with a group of 16 children with no known family history of depression.

Using functional magnetic resonance imaging (fMRI), the researchers measured synchronization of activity between different brain regions. Synchronization patterns that emerge when a person is not performing any particular task allow scientists to determine which regions naturally communicate with each other.
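In practice, this kind of synchronization is typically summarized as the correlation between two regions’ fMRI time series recorded while the participant rests. The minimal sketch below computes such a connectivity value for simulated time courses; the region names are taken from the study, but the data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
n_timepoints = 300

# Simulated resting-state time courses for regions of interest.
shared = rng.normal(size=n_timepoints)                 # common fluctuation
sgACC = shared + rng.normal(scale=1.0, size=n_timepoints)
default_mode = shared + rng.normal(scale=1.0, size=n_timepoints)
control_region = rng.normal(size=n_timepoints)         # unrelated region

def connectivity(x, y):
    """Functional connectivity as the Pearson correlation of two time series."""
    return np.corrcoef(x, y)[0, 1]

print(f"sgACC - default mode network: r = {connectivity(sgACC, default_mode):.2f}")
print(f"sgACC - control region:       r = {connectivity(sgACC, control_region):.2f}")
```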

The researchers identified several distinctive patterns in the at-risk children. The strongest of these links was between the sgACC and the default mode network — a set of brain regions that is most active when the mind is unfocused. This abnormally high synchronization has also been seen in the brains of depressed adults.

The researchers also found hyperactive connections between the amygdala, which is important for processing emotion, and the inferior frontal gyrus, which is involved in language processing. Within areas of the frontal and parietal cortex, which are important for thinking and decision-making, they found lower than normal connectivity.

Cause and effect

These patterns are strikingly similar to those found in depressed adults, suggesting that these differences arise before depression occurs and may contribute to the development of the disorder, says Ian Gotlib, a professor of psychology at Stanford University.

“The findings are consistent with an explanation that this is contributing to the onset of the disease,” says Gotlib, who was not involved in the research. “The patterns are there before the depressive episode and are not due to the disorder.”

The MIT team is continuing to track the at-risk children and plans to investigate whether early treatment might prevent episodes of depression. They also hope to study how some children who are at high risk manage to avoid the disorder without treatment.

Other authors of the paper are Dina Hirshfeld-Becker, an associate professor of psychiatry at Harvard Medical School; Joseph Biederman, director of pediatric psychopharmacology at Massachusetts General Hospital (MGH); Mai Uchida, an assistant professor of psychiatry at Harvard Medical School; former MIT postdoc Oliver Doehrmann; MIT graduate student Julia Leonard; John Salvatore, a former McGovern technical assistant; MGH research assistants Tara Kenworthy and Elana Kagan; Harvard Medical School postdoc Ariel Brown; and former MIT technical assistant Carlo de los Angeles.

Music in the brain

Scientists have long wondered if the human brain contains neural mechanisms specific to music perception. Now, for the first time, MIT neuroscientists have identified a neural population in the human auditory cortex that responds selectively to sounds that people typically categorize as music, but not to speech or other environmental sounds.

“It has been the subject of widespread speculation,” says Josh McDermott, the Frederick A. and Carole J. Middleton Assistant Professor of Neuroscience in the Department of Brain and Cognitive Sciences at MIT. “One of the core debates surrounding music is to what extent it has dedicated mechanisms in the brain and to what extent it piggybacks off of mechanisms that primarily serve other functions.”

The finding was enabled by a new method designed to identify neural populations from functional magnetic resonance imaging (fMRI) data. Using this method, the researchers identified six neural populations with different functions, including the music-selective population and another set of neurons that responds selectively to speech.

“The music result is notable because people had not been able to clearly see highly selective responses to music before,” says Sam Norman-Haignere, a postdoc at MIT’s McGovern Institute for Brain Research.

“Our findings are hard to reconcile with the idea that music piggybacks entirely on neural machinery that is optimized for other functions, because the neural responses we see are highly specific to music,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research.

Norman-Haignere is the lead author of a paper describing the findings in the Dec. 16 online edition of Neuron. McDermott and Kanwisher are the paper’s senior authors.

Mapping responses to sound

For this study, the researchers scanned the brains of 10 human subjects listening to 165 natural sounds, including different types of speech and music, as well as everyday sounds such as footsteps, a car engine starting, and a telephone ringing.

The brain’s auditory system has proven difficult to map, in part because of the coarse spatial resolution of fMRI, which measures blood flow as an index of neural activity. In fMRI, “voxels” — the smallest unit of measurement — reflect the response of hundreds of thousands or millions of neurons.

“As a result, when you measure raw voxel responses you’re measuring something that reflects a mixture of underlying neural responses,” Norman-Haignere says.

To tease apart these responses, the researchers used a technique that models each voxel as a mixture of multiple underlying neural responses. Using this method, they identified six neural populations, each with a unique response pattern to the sounds in the experiment, that best explained the data.

“What we found is we could explain a lot of the response variation across tens of thousands of voxels with just six response patterns,” Norman-Haignere says.
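Conceptually, this amounts to factoring a voxels-by-sounds response matrix into a small number of response profiles and per-voxel weights. The sketch below uses non-negative matrix factorization from scikit-learn as a stand-in for the paper’s own decomposition method, applied to simulated data.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(5)
n_voxels, n_sounds, n_components = 10000, 165, 6

# Simulated data: each voxel's response to each sound is (approximately)
# a non-negative mixture of a few underlying response profiles.
true_weights = rng.gamma(shape=1.0, scale=1.0, size=(n_voxels, n_components))
true_profiles = rng.gamma(shape=1.0, scale=1.0, size=(n_components, n_sounds))
responses = true_weights @ true_profiles + rng.normal(scale=0.1, size=(n_voxels, n_sounds))
responses = np.clip(responses, 0, None)   # keep the matrix non-negative

# Factor the voxel-by-sound matrix into 6 response profiles and weights.
model = NMF(n_components=n_components, init="nndsvda", max_iter=500, random_state=0)
voxel_weights = model.fit_transform(responses)    # (n_voxels, 6)
response_profiles = model.components_             # (6, n_sounds)

reconstruction = voxel_weights @ response_profiles
var_explained = 1 - np.var(responses - reconstruction) / np.var(responses)
print(f"variance explained by 6 components: {var_explained:.2f}")
```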

One population responded most to music, another to speech, and the other four to different acoustic properties such as pitch and frequency.

The key to this advance is the researchers’ new approach to analyzing fMRI data, says Josef Rauschecker, a professor of physiology and biophysics at Georgetown University.

“The whole field is interested in finding specialized areas like those that have been found in the visual cortex, but the problem is the voxel is just not small enough. You have hundreds of thousands of neurons in a voxel, and how do you separate the information they’re encoding? This is a study of the highest caliber of data analysis,” says Rauschecker, who was not part of the research team.

Layers of sound processing

The four acoustically responsive neural populations overlap with regions of “primary” auditory cortex, which performs the first stage of cortical processing of sound. Speech- and music-selective neural populations lie beyond this primary region.

“We think this provides evidence that there’s a hierarchy of processing where there are responses to relatively simple acoustic dimensions in this primary auditory area. That’s followed by a second stage of processing that represents more abstract properties of sound related to speech and music,” Norman-Haignere says.

The researchers believe there may be other brain regions involved in processing music, including its emotional components. “It’s inappropriate at this point to conclude that this is the seat of music in the brain,” McDermott says. “This is where you see most of the responses within the auditory cortex, but there’s a lot of the brain that we didn’t even look at.”

Kanwisher also notes that “the existence of music-selective responses in the brain does not imply that the responses reflect an innate brain system. An important question for the future will be how this system arises in development: How early it is found in infancy or childhood, and how dependent it is on experience?”

The researchers are now investigating whether the music-selective population identified in this study contains subpopulations of neurons that respond to different aspects of music, including rhythm, melody, and beat. They also hope to study how musical experience and training might affect this neural population.

Young brains can take on new functions

In 2011, MIT neuroscientist Rebecca Saxe and colleagues reported that in blind adults, brain regions normally dedicated to vision processing instead participate in language tasks such as speech and comprehension. Now, in a study of blind children, Saxe’s lab has found that this transformation occurs very early in life, before the age of 4.

The study, appearing in the Journal of Neuroscience, suggests that the brains of young children are highly plastic, meaning that regions usually specialized for one task can adapt to new and very different roles. The findings also help to define the extent to which this type of remodeling is possible.

“In some circumstances, patches of cortex appear to take on other roles than the ones that they most typically have,” says Saxe, a professor of cognitive neuroscience and an associate member of MIT’s McGovern Institute for Brain Research. “One question that arises from that is, ‘What is the range of possible differences between what a cortical region typically does and what it could possibly do?’”

The paper’s lead author is Marina Bedny, a former MIT postdoc who is now an assistant professor at Johns Hopkins University. MIT graduate student Hilary Richardson is also an author of the paper.

Brain reorganization

The brain’s cortex, which carries out high-level functions such as thought, sensory processing, and initiation of movement, is made of sheets of neurons, each dedicated to a certain role. Within the visual system, located primarily in the occipital lobe, most neurons are tuned to respond only to a very specific aspect of visual input, such as brightness, orientation, or location in the field of view.

“There’s this big fundamental question, which is, ‘How did that organization get there, and to what degree can it be changed?’” Saxe says.

One possibility is that neurons in each patch of cortex have evolved to carry out specific roles, and can do nothing else. At the other extreme is the possibility that any patch of cortex can be recruited to perform any kind of computational task.

“The reality is somewhere in between those two,” Saxe says.

To study the extent to which cortex can change its function, scientists have focused on the visual cortex because they can learn a great deal about it by studying people who were born blind.

A landmark 1996 study of blind people found that their visual regions could participate in a nonvisual task — reading Braille. Some scientists theorized that perhaps the visual cortex is recruited for reading Braille because like vision, it requires discriminating very fine-grained patterns.

However, in their 2011 study, Saxe and Bedny found that the visual cortex of blind adults also responds to spoken language. “That was weird, because processing auditory language doesn’t require the kind of fine-grained spatial discrimination that Braille does,” Saxe says.

She and Bedny hypothesized that auditory language processing may develop in the occipital cortex by piggybacking onto the Braille-reading function. To test that idea, they began studying congenitally blind children, including some who had not learned Braille yet. They reasoned that if their hypothesis were correct, the occipital lobe would be gradually recruited for language processing as the children learned Braille.

However, they found that this was not the case. Instead, children as young as 4 already have language-related activity in the occipital lobe.

“The response of occipital cortex to language is not affected by Braille acquisition,” Saxe says. “It happens before Braille and it doesn’t increase with Braille.”

Language-related occipital activity was similar among all of the 19 blind children, who ranged in age from 4 to 17, suggesting that the entire process of occipital recruitment for language processing takes place before the age of 4, Saxe says. Bedny and Saxe have previously shown that this transition occurs only in people blind from birth, suggesting that there is an early critical period after which the cortex loses much of its plasticity.

The new study represents a huge step forward in understanding how the occipital cortex can take on new functions, says Ione Fine, an associate professor of psychology at the University of Washington.

“One thing that has been missing is an understanding of the developmental timeline,” says Fine, who was not involved in the research. “The insight here is that you get plasticity for language separate from plasticity for Braille and separate from plasticity for auditory processing.”

Language skills

The findings raise the question of how the extra language-processing centers in the occipital lobe affect language skills.

“This is a question we’ve always wondered about,” Saxe says. “Does it mean you’re better at those functions because you have more of your cortex doing it? Does it mean you’re more resilient in those functions because now you have more redundancy in your mechanism for doing it? You could even imagine the opposite: Maybe you’re less good at those functions because they’re distributed in an inefficient or atypical way.”

There are hints that the occipital lobe’s contribution to language-related functions “takes the pressure off the frontal cortex,” where language processing normally occurs, Saxe says. Other researchers have shown that suppressing left frontal cortex activity with transcranial magnetic stimulation interferes with language function in sighted people, but not in the congenitally blind.

This leads to the intriguing prediction that a congenitally blind person who suffers a stroke in the left frontal cortex may retain much more language ability than a sighted person would, Saxe says, although that hypothesis has not been tested.

Saxe’s lab is now studying children under 4 to try to learn more about how cortical functions develop early in life, while Bedny is investigating whether the occipital lobe participates in functions other than language in congenitally blind people.

Study links brain anatomy, academic achievement, and family income

Many years of research have shown that for students from lower-income families, standardized test scores and other measures of academic success tend to lag behind those of wealthier students.

A new study led by researchers at MIT and Harvard University offers another dimension to this so-called “achievement gap”: After imaging the brains of high- and low-income students, they found that the higher-income students had thicker brain cortex in areas associated with visual perception and knowledge accumulation. Furthermore, these differences also correlated with one measure of academic achievement — performance on standardized tests.

“Just as you would expect, there’s a real cost to not living in a supportive environment. We can see it not only in test scores, in educational attainment, but within the brains of these children,” says MIT’s John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, professor of brain and cognitive sciences, and one of the study’s authors. “To me, it’s a call to action. You want to boost the opportunities for those for whom it doesn’t come easily in their environment.”

This study did not explore possible reasons for these differences in brain anatomy. However, previous studies have shown that lower-income students are more likely to suffer from stress in early childhood, have more limited access to educational resources, and receive less exposure to spoken language early in life. These factors have all been linked to lower academic achievement.

In recent years, the achievement gap in the United States between high- and low-income students has widened, even as gaps along lines of race and ethnicity have narrowed, says Martin West, an associate professor of education at the Harvard Graduate School of Education and an author of the new study.

“The gap in student achievement, as measured by test scores between low-income and high-income students, is a pervasive and longstanding phenomenon in American education, and indeed in education systems around the world,” he says. “There’s a lot of interest among educators and policymakers in trying to understand the sources of those achievement gaps, but even more interest in possible strategies to address them.”

Allyson Mackey, a postdoc at MIT’s McGovern Institute for Brain Research, is the lead author of the paper, which appears in the journal Psychological Science. Other authors are postdoc Amy Finn; graduate student Julia Leonard; Drew Jacoby-Senghor, a postdoc at Columbia Business School; and Christopher Gabrieli, chair of the nonprofit Transforming Education.

Explaining the gap

The study included 58 students — 23 from lower-income families and 35 from higher-income families, all aged 12 or 13. Low-income students were defined as those who qualify for a free or reduced-price school lunch.

The researchers compared students’ scores on the Massachusetts Comprehensive Assessment System (MCAS) with brain scans of a region known as the cortex, which is key to functions such as thought, language, sensory perception, and motor command.

Using magnetic resonance imaging (MRI), they discovered differences in the thickness of parts of the cortex in the temporal and occipital lobes, whose primary roles are in vision and storing knowledge. Those differences correlated to differences in both test scores and family income. In fact, differences in cortical thickness in these brain regions could explain as much as 44 percent of the income achievement gap found in this study.
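One simple way to read “explain as much as 44 percent of the gap” (sketched here only to convey the idea, not to reproduce the study’s statistics) is a mediation-style comparison: how much the income-group difference in test scores shrinks once cortical thickness enters the regression. The data below are simulated.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n = 58

# Simulated data: income group (1 = higher income), cortical thickness,
# and test score, with thickness partly carrying the income effect.
income = (rng.random(n) < 0.6).astype(float)
thickness = 2.5 + 0.15 * income + rng.normal(scale=0.1, size=n)
score = 50 + 5 * income + 20 * (thickness - 2.5) + rng.normal(scale=3, size=n)

# Total income gap in test scores (regression on income alone).
total_gap = LinearRegression().fit(income.reshape(-1, 1), score).coef_[0]

# Income gap remaining after adjusting for cortical thickness.
X = np.column_stack([income, thickness])
direct_gap = LinearRegression().fit(X, score).coef_[0]

explained = 1 - direct_gap / total_gap
print(f"total gap: {total_gap:.1f} points; after adjusting for thickness: {direct_gap:.1f}")
print(f"share of the gap statistically accounted for by thickness: {explained:.0%}")
```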

Previous studies have also shown brain anatomy differences associated with income, but did not link those differences to academic achievement.

“A number of labs have reported differences in children’s brain structures as a function of family income, but this is the first to relate that to variation in academic achievement,” says Kimberly Noble, an assistant professor of pediatrics at Columbia University who was not part of the research team.

In most other measures of brain anatomy, the researchers found no significant differences. The amount of white matter — the bundles of axons that connect different parts of the brain — did not differ, nor did the overall surface area of the brain cortex.

The researchers point out that the structural differences they did find are not necessarily permanent. “There’s so much strong evidence that brains are highly plastic,” says Gabrieli, who is also a member of the McGovern Institute. “Our findings don’t mean that further educational support, home support, all those things, couldn’t make big differences.”

In a follow-up study, the researchers hope to learn more about what types of educational programs might help to close the achievement gap, and if possible, investigate whether these interventions also influence brain anatomy.

“Over the past decade we’ve been able to identify a growing number of educational interventions that have managed to have notable impacts on students’ academic achievement as measured by standardized tests,” West says. “What we don’t know anything about is the extent to which those interventions — whether it be attending a very high-performing charter school, or being assigned to a particularly effective teacher, or being exposed to a high-quality curricular program — improves test scores by altering some of the differences in brain structure that we’ve documented, or whether they had those effects by other means.”

The research was funded by the Bill and Melinda Gates Foundation and the National Institutes of Health.

Try, try again? Study says no

When it comes to learning languages, adults and children have different strengths. Adults excel at absorbing the vocabulary needed to navigate a grocery store or order food in a restaurant, but children have an uncanny ability to pick up on subtle nuances of language that often elude adults. Within months of living in a foreign country, a young child may speak a second language like a native speaker.

Brain structure plays an important role in this “sensitive period” for learning language, which is believed to end around adolescence. The young brain is equipped with neural circuits that can analyze sounds and build a coherent set of rules for constructing words and sentences out of those sounds. Once these language structures are established, it’s difficult to build another one for a new language.

In a new study, a team of neuroscientists and psychologists led by Amy Finn, a postdoc at MIT’s McGovern Institute for Brain Research, has found evidence for another factor that contributes to adults’ language difficulties: When learning certain elements of language, adults’ more highly developed cognitive skills actually get in the way. The researchers discovered that the harder adults tried to learn an artificial language, the worse they were at deciphering the language’s morphology — the structure and deployment of linguistic units such as root words, suffixes, and prefixes.

“We found that effort helps you in most situations, for things like figuring out what the units of language that you need to know are, and basic ordering of elements. But when trying to learn morphology, at least in this artificial language we created, it’s actually worse when you try,” Finn says.

Finn and colleagues from the University of California at Santa Barbara, Stanford University, and the University of British Columbia describe their findings in the July 21 issue of PLoS One. Carla Hudson Kam, an associate professor of linguistics at British Columbia, is the paper’s senior author.

Too much brainpower

Linguists have known for decades that children are skilled at absorbing certain tricky elements of language, such as irregular past participles (examples of which, in English, include “gone” and “been”) or complicated verb tenses like the subjunctive.

“Children will ultimately perform better than adults in terms of their command of the grammar and the structural components of language — some of the more idiosyncratic, difficult-to-articulate aspects of language that even most native speakers don’t have conscious awareness of,” Finn says.

In 1990, linguist Elissa Newport hypothesized that adults have trouble learning those nuances because they try to analyze too much information at once. Adults have a much more highly developed prefrontal cortex than children, and they tend to throw all of that brainpower at learning a second language. This high-powered processing may actually interfere with certain elements of learning language.

“It’s an idea that’s been around for a long time, but there hasn’t been any data that experimentally show that it’s true,” Finn says.

Finn and her colleagues designed an experiment to test whether exerting more effort would help or hinder success. First, they created nine nonsense words, each with two syllables. Each word fell into one of three categories (A, B, and C), defined by the order of consonant and vowel sounds.

Study subjects listened to the artificial language for about 10 minutes. One group of subjects was told not to overanalyze what they heard, but not to tune it out either. To help them not overthink the language, they were given the option of completing a puzzle or coloring while they listened. The other group was told to try to identify the words they were hearing.

Each group heard the same recording, which was a series of three-word sequences — first a word from category A, then one from category B, then category C — with no pauses between words. Previous studies have shown that adults, babies, and even monkeys can parse this kind of information into word units, a task known as word segmentation.
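The statistical cue that makes word segmentation possible is the transitional probability between syllables: within a word, the first syllable reliably predicts the second, while across a word boundary the next syllable is far less predictable. The toy stream below, built from made-up words (not the study’s actual stimuli), shows how those probabilities fall out of a continuous sequence.

```python
from collections import Counter
import random

random.seed(7)

# Toy version of the artificial language: three two-syllable nonsense words
# in each of three categories (A, B, C), played as A-B-C "sentences" with
# no pauses between words. These items are invented for illustration only.
category_A = ["tibu", "gola", "daku"]
category_B = ["pemi", "rofa", "nusi"]
category_C = ["kevo", "ludi", "sota"]

stream = []
for _ in range(500):
    for category in (category_A, category_B, category_C):
        w = random.choice(category)
        stream += [w[:2], w[2:]]          # split each word into its syllables

# Transitional probability P(next syllable | current syllable).
pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def transition_prob(a, b):
    return pair_counts[(a, b)] / first_counts[a]

# Within a word, the second syllable follows the first every time;
# across a word boundary, the next syllable is far less predictable.
print("within 'tibu':          P(bu | ti) =", round(transition_prob("ti", "bu"), 2))
print("across a word boundary: P(pe | bu) =", round(transition_prob("bu", "pe"), 2))
```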

Subjects from both groups were successful at word segmentation, although the group that tried harder performed a little better. Both groups also performed well in a task called word ordering, which required subjects to choose between a correct word sequence (ABC) and an incorrect sequence (such as ACB) of words they had previously heard.

The final test measured skill in identifying the language’s morphology. The researchers played a three-word sequence that included a word the subjects had not heard before, but which fit into one of the three categories. When asked to judge whether this new word was in the correct location, the subjects who had been asked to pay closer attention to the original word stream performed much worse than those who had listened more passively.

Turning off effort

The findings support a theory of language acquisition that suggests that some parts of language are learned through procedural memory, while others are learned through declarative memory. Under this theory, declarative memory, which stores knowledge and facts, would be more useful for learning vocabulary and certain rules of grammar. Procedural memory, which guides tasks we perform without conscious awareness of how we learned them, would be more useful for learning subtle rules related to language morphology.

“It’s likely to be the procedural memory system that’s really important for learning these difficult morphological aspects of language. In fact, when you use the declarative memory system, it doesn’t help you, it harms you,” Finn says.

Still unresolved is the question of whether adults can overcome this language-learning obstacle. Finn says she does not have a good answer yet but she is now testing the effects of “turning off” the adult prefrontal cortex using a technique called transcranial magnetic stimulation. Other interventions she plans to study include distracting the prefrontal cortex by forcing it to perform other tasks while language is heard, and treating subjects with drugs that impair activity in that brain region.

The research was funded by the National Institute of Child Health and Human Development and the National Science Foundation.

When good people do bad things

When people get together in groups, unusual things can happen — both good and bad. Groups create important social institutions that an individual could not achieve alone, but there can be a darker side to such alliances: Belonging to a group makes people more likely to harm others outside the group.

“Although humans exhibit strong preferences for equity and moral prohibitions against harm in many contexts, people’s priorities change when there is an ‘us’ and a ‘them,’” says Rebecca Saxe, an associate professor of cognitive neuroscience at MIT. “A group of people will often engage in actions that are contrary to the private moral standards of each individual in that group, sweeping otherwise decent individuals into ‘mobs’ that commit looting, vandalism, even physical brutality.”

Several factors play into this transformation. When people are in a group, they feel more anonymous, and less likely to be caught doing something wrong. They may also feel a diminished sense of personal responsibility for collective actions.

Saxe and colleagues recently studied a third factor that cognitive scientists believe may be involved in this group dynamic: the hypothesis that when people are in groups, they “lose touch” with their own morals and beliefs, and become more likely to do things that they would normally believe are wrong.

In a study that recently went online in the journal NeuroImage, the researchers measured brain activity in a part of the brain involved in thinking about oneself. They found that in some people, this activity was reduced when the subjects participated in a competition as part of a group, compared with when they competed as individuals. Those people were more likely to harm their competitors than people who did not exhibit this decreased brain activity.

“This process alone does not account for intergroup conflict: Groups also promote anonymity, diminish personal responsibility, and encourage reframing harmful actions as ‘necessary for the greater good.’ Still, these results suggest that at least in some cases, explicitly reflecting on one’s own personal moral standards may help to attenuate the influence of ‘mob mentality,’” says Mina Cikara, a former MIT postdoc and lead author of the NeuroImage paper.

Group dynamics

Cikara, who is now an assistant professor at Carnegie Mellon University, started this research project after experiencing the consequences of a “mob mentality”: During a visit to Yankee Stadium, her husband was ceaselessly heckled by Yankees fans for wearing a Red Sox cap. “What I decided to do was take the hat from him, thinking I would be a lesser target by virtue of the fact that I was a woman,” Cikara says. “I was so wrong. I have never been called names like that in my entire life.”

The harassment, which continued throughout the trip back to Manhattan, provoked a strong reaction in Cikara, who isn’t even a Red Sox fan.

“It was a really amazing experience because what I realized was I had gone from being an individual to being seen as a member of ‘Red Sox Nation.’ And the way that people responded to me, and the way I felt myself responding back, had changed, by virtue of this visual cue — the baseball hat,” she says. “Once you start feeling attacked on behalf of your group, however arbitrary, it changes your psychology.”

Cikara, then a third-year graduate student at Princeton University, started to investigate the neural mechanisms behind the group dynamics that produce bad behavior. In the new study, done at MIT, Cikara, Saxe (who is also an associate member of MIT’s McGovern Institute for Brain Research), former Harvard University graduate student Anna Jenkins, and former MIT lab manager Nicholas Dufour focused on a part of the brain called the medial prefrontal cortex. When someone is reflecting on himself or herself, this part of the brain lights up in functional magnetic resonance imaging (fMRI) brain scans.

A couple of weeks before the study participants came in for the experiment, the researchers surveyed each of them about their social-media habits, as well as their moral beliefs and behavior. This allowed the researchers to create individualized statements for each subject that were true for that person — for example, “I have stolen food from shared refrigerators” or “I always apologize after bumping into someone.”

When the subjects arrived at the lab, their brains were scanned as they played a game once on their own and once as part of a team. The purpose of the game was to press a button if they saw a statement related to social media, such as “I have more than 600 Facebook friends.”

The subjects also saw their personalized moral statements mixed in with sentences about social media. Brain scans revealed that when subjects were playing for themselves, the medial prefrontal cortex lit up much more when they read moral statements about themselves than statements about others, consistent with previous findings. However, during the team competition, some people showed a much smaller difference in medial prefrontal cortex activation when they saw the moral statements about themselves compared to those about other people.

Those people also turned out to be much more likely to harm members of the competing group during a task performed after the game. Each subject was asked to select photos that would appear with the published study, from a set of four photos apiece of two teammates and two members of the opposing team. The subjects with suppressed medial prefrontal cortex activity chose the least flattering photos of the opposing team members, but not of their own teammates.
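A rough sketch of how that brain-behavior link might be quantified (not the paper’s exact analysis) is to compute each participant’s “self minus other” medial prefrontal contrast in solo versus team play and correlate the drop with a harm score, such as how many unflattering opposing-team photos they chose. All values below are simulated.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(8)
n_subjects = 23

# Simulated mPFC "self > other" contrasts (arbitrary units) per condition.
self_vs_other_solo = rng.normal(1.0, 0.3, size=n_subjects)
reduction = np.clip(rng.normal(0.4, 0.3, size=n_subjects), 0, None)
self_vs_other_team = self_vs_other_solo - reduction

# Simulated harm score: number of unflattering opposing-team photos chosen,
# made to depend partly on how much the self > other signal dropped.
harm_score = np.clip(np.round(2 * reduction + rng.normal(0, 0.5, n_subjects)), 0, 4)

drop = self_vs_other_solo - self_vs_other_team
r, p = pearsonr(drop, harm_score)
print(f"mPFC self-signal drop vs. harm score: r = {r:.2f}, p = {p:.2g}")
```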

“This is a nice way of using neuroimaging to try to get insight into something that behaviorally has been really hard to explore,” says David Rand, an assistant professor of psychology at Yale University who was not involved in the research. “It’s been hard to get a direct handle on the extent to which people within a group are tapping into their own understanding of things versus the group’s understanding.”

Getting lost

The researchers also found that after the game, people with reduced medial prefrontal cortex activity had more difficulty remembering the moral statements they had heard during the game.

“If you need to encode something with regard to the self and that ability is somehow undermined when you’re competing with a group, then you should have poor memory associated with that reduction in medial prefrontal cortex signal, and that’s exactly what we see,” Cikara says.

Cikara hopes to follow up on these findings to investigate what makes some people more likely to become “lost” in a group than others. She is also interested in studying whether people are slower to recognize themselves or pick themselves out of a photo lineup after being absorbed in a group activity.

The research was funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the Air Force Office of Scientific Research, and the Packard Foundation.