Brain’s language center has multiple roles

A century and a half ago, French physician Pierre Paul Broca found that patients with damage to part of the brain’s frontal lobe were unable to speak more than a few words. Later dubbed Broca’s area, this region is believed to be critical for speech production and some aspects of language comprehension.

However, in recent years neuroscientists have observed activity in Broca’s area when people perform cognitive tasks that have nothing to do with language, such as solving math problems or holding information in working memory. Those findings have stimulated debate over whether Broca’s area is specific to language or plays a more general role in cognition.

A new study from MIT may help resolve this longstanding question. The researchers, led by Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, found that Broca’s area actually consists of two distinct subunits. One of these focuses selectively on language processing, while the other is part of a brainwide network that appears to act as a central processing unit for general cognitive functions.

“I think we’ve shown pretty convincingly that there are two distinct bits that we should not be treating as a single region, and perhaps we shouldn’t even be talking about ‘Broca’s area’ because it’s not a functional unit,” says Evelina Fedorenko, a research scientist in Kanwisher’s lab and lead author of the new study, which recently appeared in the journal Current Biology.

Kanwisher and Fedorenko are members of MIT’s Department of Brain and Cognitive Sciences and the McGovern Institute for Brain Research. John Duncan, a professor of neuroscience at the Cognition and Brain Sciences Unit of the Medical Research Council in the United Kingdom, is also an author of the paper.

A general role

Broca’s area is located in the left inferior frontal cortex, above and behind the left eye. For this study, the researchers set out to pinpoint the functions of distinct sections of Broca’s area by scanning subjects with functional magnetic resonance imaging (fMRI) as they performed a variety of cognitive tasks.

To locate language-selective areas, the researchers asked subjects to read either meaningful sentences or sequences of nonwords. A subset of Broca’s area lit up much more when the subjects processed meaningful sentences than when they had to interpret nonwords.

The researchers then measured brain activity as the subjects performed easy and difficult versions of general cognitive tasks, such as doing a math problem or holding a set of locations in memory. Parts of Broca’s area lit up during the more demanding versions of those tasks. Critically, however, these regions were spatially distinct from the regions involved in the language task.

These data allowed the researchers to map, for each subject, two distinct regions of Broca’s area — one selectively involved in language, the other involved in responding to many demanding cognitive tasks. The general region surrounds the language region, but the exact shapes and locations of the borders between the two vary from person to person.
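The subject-by-subject mapping described above can be sketched in miniature. The following is an illustrative toy example with invented numbers, not the study’s actual analysis pipeline: each voxel’s mean response to the four conditions is simulated, and the two contrasts (sentences vs. nonwords, hard vs. easy tasks) define the two functional maps.

```python
import numpy as np

# Hypothetical per-subject data: one mean response per voxel per condition.
rng = np.random.default_rng(0)
n_voxels = 1000
sentences = rng.normal(1.0, 0.5, n_voxels)
nonwords  = rng.normal(0.2, 0.5, n_voxels)
hard_task = rng.normal(0.8, 0.5, n_voxels)
easy_task = rng.normal(0.3, 0.5, n_voxels)

# Language-selective voxels: respond more to sentences than to nonwords.
language_map = (sentences - nonwords) > 0.5
# Domain-general ("multiple demand") voxels: respond more to hard
# than to easy versions of non-linguistic tasks.
demand_map = (hard_task - easy_task) > 0.5

# The two maps are defined by independent contrasts, so a voxel could in
# principle fall in both; the study found them spatially distinct.
overlap = language_map & demand_map
print(language_map.sum(), demand_map.sum(), overlap.sum())
```

Because the borders between the two regions vary from person to person, this kind of map has to be computed separately for each subject rather than averaged across a group.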

The general-function region of Broca’s area appears to be part of a larger network sometimes called the multiple demand network, which is active when the brain is tackling a challenging task that requires a great deal of focus. This network is distributed across frontal and parietal lobes in both hemispheres of the brain, and all of its components appear to communicate with one another. The language-selective section of Broca’s area also appears to be part of a larger network devoted to language processing, spread throughout the brain’s left hemisphere.

Mapping functions

The findings provide evidence that Broca’s area should not be considered to have uniform functionality, says Peter Hagoort, a professor of cognitive neuroscience at Radboud University Nijmegen in the Netherlands. Hagoort, who was not involved in this study, adds that more work is needed to determine whether the language-selective areas might also be involved in any other aspects of cognitive function. “For instance, the language-selective region might play a role in the perception of music, which was not tested in the current study,” he says.

The researchers are now trying to determine how the components of the language network and the multiple demand network communicate internally, and how the two networks communicate with each other. They also hope to further investigate the functions of the two components of Broca’s area.

“In future studies, we should examine those subregions separately and try to characterize them in terms of their contribution to various language processes and other cognitive processes,” Fedorenko says.

The team is also working with scientists at Massachusetts General Hospital to study patients with a form of neurodegeneration that gradually causes loss of the ability to speak and understand language. This disorder, known as primary progressive aphasia, appears to selectively target the language-selective network, including the language component of Broca’s area.

The research was funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the Ellison Medical Foundation and the U.K. Medical Research Council.

Predicting how patients respond to therapy

Social anxiety is usually treated with either cognitive behavioral therapy or medications. However, it is currently impossible to predict which treatment will work best for a particular patient. A team of researchers from MIT, Boston University (BU) and Massachusetts General Hospital (MGH) found that the effectiveness of therapy could be predicted by measuring patients’ brain activity as they looked at photos of faces, before the therapy sessions began.

The findings, published this week in the Archives of General Psychiatry, may help doctors more accurately choose treatments for social anxiety disorder, which is estimated to affect around 15 million people in the United States.

“Our vision is that some of these measures might direct individuals to treatments that are more likely to work for them,” says John Gabrieli, the Grover M. Hermann Professor of Brain and Cognitive Sciences at MIT, a member of the McGovern Institute for Brain Research and senior author of the paper.

Lead authors of the paper are MIT postdoc Oliver Doehrmann and Satrajit Ghosh, a research scientist in the McGovern Institute.

Choosing treatments

Sufferers of social anxiety disorder experience intense fear in social situations, interfering with their ability to function in daily life. Cognitive behavioral therapy aims to change the thought and behavior patterns that lead to anxiety. For social anxiety disorder patients, that might include learning to reverse the belief that others are watching or judging them.

The new paper is part of a larger study that MGH and BU recently ran on cognitive behavioral therapy for social anxiety, led by Mark Pollack, director of the Center for Anxiety and Traumatic Stress Disorders at MGH, and Stefan Hofmann, director of the Social Anxiety Program at BU.

“This was a chance to ask if these brain measures, taken before treatment, would be informative in ways above and beyond what physicians can measure now, and determine who would be responsive to this treatment,” Gabrieli says.

Currently, doctors might choose a treatment based on factors such as the ease of taking pills versus going to therapy, the possibility of drug side effects, or what a patient’s insurance will cover. “From a science perspective there’s very little evidence about which treatment is optimal for a person,” Gabrieli says.

The researchers used functional magnetic resonance imaging (fMRI) to image the brains of patients before and after treatment. There have been many imaging studies showing brain differences between healthy people and patients with neuropsychiatric disorders, but so far imaging has not been established as a way to predict patient response to particular treatments.

Measuring brain activity

In the new study, the researchers measured differences in brain activity as patients looked at images of angry or neutral faces. After 12 weeks of cognitive behavioral therapy, patients’ social anxiety levels were tested. The researchers found that patients who had shown a greater difference in activity in high-level visual processing areas while viewing the two types of faces showed the most improvement after therapy.
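The logic of this prediction can be sketched with invented numbers (this is a hypothetical illustration, not the study’s data or statistical pipeline): each patient has a pre-treatment activity difference between angry and neutral faces, and a symptom-improvement score after therapy, and the two are correlated.

```python
import numpy as np

# Hypothetical data: 20 patients, simulated so that the brain measure
# strongly predicts improvement, as the study reported.
rng = np.random.default_rng(1)
n_patients = 20
face_contrast = rng.normal(0.5, 0.2, n_patients)  # angry minus neutral, pre-treatment
improvement = 2.0 * face_contrast + rng.normal(0, 0.1, n_patients)

# Pearson correlation: a strong positive value would mean the
# pre-treatment brain measure predicts who benefits most from therapy.
r = np.corrcoef(face_contrast, improvement)[0, 1]
print(f"r = {r:.2f}")
```

In practice a predictive claim like this would also need to be validated on held-out patients, not just measured as an in-sample correlation.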

The findings are an important step towards improving doctors’ ability to choose the right treatment for psychiatric disorders, says Greg Siegle, associate professor of psychiatry at the University of Pittsburgh. “It’s really critical that somebody do this work, and they did it very well,” says Siegle, who was not part of the research team. “It moves the field forward, and brings psychology into more of a rigorous science, using neuroscience to distinguish between clinical cases that at first appear homogeneous.”

Gabrieli says it’s unclear why activity in brain regions involved with visual processing would be a good predictor of treatment outcome. One possibility is that patients who benefited more were those whose brains were already adept at segregating different types of experiences, Gabrieli says.

The researchers are now planning a follow-up study to investigate whether brain scans can predict differences in response between cognitive behavioral therapy and drug treatment.

“Right now, all by itself, we’re just giving somebody encouraging or discouraging news about the likely outcome of therapy,” Gabrieli says. “The really valuable thing would be if it turns out to be differentially sensitive to different treatment choices.”

The research was funded by the Poitras Center for Affective Disorders Research and the National Institute of Mental Health.

Thinking about others is not child’s play

When you try to read other people’s thoughts, or guess why they are behaving a certain way, you employ a skill known as theory of mind. This skill, as measured by false-belief tests, takes time to develop: In children, it doesn’t start appearing until the age of 4 or 5.

Several years ago, MIT neuroscientist Rebecca Saxe showed that in adults, theory of mind is seated in a specific brain region known as the right temporo-parietal junction (TPJ). Saxe and colleagues at MIT have now shown how brain activity in the TPJ changes as children learn to reason about others’ thoughts and feelings.

The findings suggest that the right TPJ becomes more specific to theory of mind as children age, taking on adult patterns of activity over time. The researchers also showed that the more selectively the right TPJ is activated when children listen to stories about other people’s thoughts, the better those children perform in tasks that require theory of mind.

The paper, published in the July 31 online edition of the journal Child Development, lays the groundwork for exploring theory-of-mind impairments in autistic children, says Hyowon Gweon, a graduate student in Saxe’s lab and lead author of the paper.

“Given that we know this is what typically developing kids show, the next question to ask is how it compares to autistic children who exhibit marked impairments in their ability to think about other people’s minds,” Gweon says. “Do they show differences from typically developing kids in their neural activity?”

Saxe, an associate professor of brain and cognitive sciences and associate member of MIT’s McGovern Institute for Brain Research, is senior author of the Child Development paper. Other authors are Marina Bedny, a postdoc in Saxe’s lab, and David Dodell-Feder, a graduate student at Harvard University.

Tracking theory of mind

The classic test for theory of mind is the false-belief test, sometimes called the Sally-Anne test. Experimenters often use dolls or puppets to perform a short skit: Sally takes a marble and hides it in her basket, then leaves the room. Anne then removes the marble and puts it in her own box. When Sally returns, the child watching the skit is asked: Where will Sally look for her marble?

Children with well-developed theory of mind realize that Sally will look where she thinks the marble is: her own basket. However, before children develop this skill, they don’t realize that Sally’s beliefs may not correspond to reality. Therefore, they believe she will look for the marble where it actually is, in Anne’s box.

Previous studies have shown that children start making accurate predictions in the false-belief test around age 4, but this happens much later, if ever, in autistic children.

In this study, the researchers used functional magnetic resonance imaging (fMRI) to look for a link between the development of theory of mind and changes in neural activity in the TPJ. They studied 20 children, ranging from 5 to 11 years old.

Each child participated in two sets of experiments. First, the child was scanned in the MRI machine as he or she listened to different types of stories. One type focused on people’s mental states, another also focused on people but only on their physical appearances or actions, and a third type of story focused on physical objects.

The researchers measured activity across the brain as the children listened to the different stories. By subtracting the neural activity evoked by stories about physical objects from the activity evoked by stories about people’s mental states, the researchers could determine which brain regions are specific to interpreting people’s mental states.

In younger children, both the left and right TPJ were active in response to stories about people’s mental states, but they were also active when the children listened to stories about people’s appearances or actions. However, in older children, both regions became more specifically tuned to interpreting people’s thoughts and emotions, and were no longer responsive to people’s appearances or actions.

For the second task, done outside of the scanner, the researchers gave children tests similar to the classic Sally-Anne test, as well as harder questions that required making moral judgments, to measure their theory-of-mind abilities. They found that the degree to which activity in the right TPJ was specific to others’ mental states correlated with the children’s performance in theory-of-mind tasks.
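That brain–behavior link can be illustrated with a toy calculation (invented numbers, not the study’s data): each child gets a selectivity index for the right TPJ, the response to mental-state stories minus the response to appearance/action stories divided by their sum, which is then correlated with theory-of-mind task performance.

```python
import numpy as np

# Hypothetical per-child responses of the right TPJ to two story types.
rng = np.random.default_rng(3)
n_children = 20
mental = rng.uniform(0.5, 1.5, n_children)      # mental-state stories
appearance = rng.uniform(0.2, 1.0, n_children)  # appearance/action stories

# Normalized selectivity index: +1 means purely mental-state driven,
# 0 means equally responsive to both story types.
selectivity = (mental - appearance) / (mental + appearance)

# Hypothetical behavioral scores simulated to track selectivity,
# mirroring the correlation the study reported.
tom_score = 10 * selectivity + rng.normal(0, 0.5, n_children)

r = np.corrcoef(selectivity, tom_score)[0, 1]
print(f"r = {r:.2f}")
```

Normalizing by the summed response is one common way to keep the index comparable across children whose overall signal strength differs.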

Kristin Lagattuta, an associate professor of psychology at the University of California at Davis, says the paper makes an important contribution to understanding how theory of mind develops in older children. “Getting more insight into the neural basis of the behavioral development we’re seeing at these ages is exciting,” says Lagattuta, who was not involved in the research.

In an ongoing study of autistic children undergoing the same type of tests, the researchers hope to learn more about the neural basis of the theory-of-mind impairments seen in autistic children.

“So little is known about differences in neural mechanisms that contribute to these kinds of impairments,” Gweon says. “Understanding the developmental changes in brain regions related to theory of mind is going to be critical for designing measures that can help these children in the real world.”

The research was funded by the Ellison Medical Foundation, the Packard Foundation, the John Merck Scholars Program, a National Science Foundation Career Award and an Ewha 21st Century Scholarship.

Detecting the brain’s magnetic signals with MEG

Magnetoencephalography (MEG) is a noninvasive technique for measuring neuronal activity in the human brain. Electrical currents flowing through neurons generate weak magnetic fields that can be recorded at the surface of the head using very sensitive magnetic detectors known as superconducting quantum interference devices (SQUIDs).

MEG is a purely passive method that relies on detection of signals that are produced naturally by the brain. It does not involve exposure to radiation or strong magnetic fields, and there are no known hazards associated with MEG.


Magnetic signals from the brain are very small compared to the magnetic fluctuations that are produced by interfering sources such as nearby electrical equipment or moving metal objects. Therefore MEG scans are typically performed within a special magnetically shielded room that blocks this external interference.
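An order-of-magnitude comparison shows just how demanding this is. The figures below are typical textbook values from the MEG literature, not measurements from this article: cortical signals are on the order of 100 femtotesla, while environmental magnetic fluctuations in an urban setting are on the order of 100 nanotesla.

```python
# Typical order-of-magnitude values (from the MEG literature, not this article).
brain_signal_T = 100e-15  # ~100 femtotesla: typical cortical MEG signal
urban_noise_T = 100e-9    # ~100 nanotesla: typical environmental fluctuations

# The interference exceeds the signal by roughly a factor of a million,
# which is why shielding plus sensitive SQUID detectors are both needed.
ratio = urban_noise_T / brain_signal_T
print(f"Environmental noise exceeds the brain signal by a factor of ~{ratio:.0e}")
```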

It is fitting that MIT should have a state-of-the-art MEG scanner, since the MEG technology was pioneered by David Cohen in the early 1970s while he was a member of MIT’s Francis Bitter Magnet Laboratory.

MEG can detect the timing of magnetic signals with millisecond precision. This is the timescale on which neurons communicate, and MEG is thus well suited to measuring the rapid signals that reflect communication between different parts of the human brain.

MEG is complementary to other brain imaging modalities such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), which depend on changes in blood flow, and which have higher spatial resolution but much lower temporal resolution than MEG.

Our MEG scanner, an Elekta Neuromag Triux with 306 channels plus 128 channels for EEG, was installed in 2011 and is the first of its kind in North America. It is housed within a magnetically shielded room to reduce background noise.

The MEG lab, part of the Martinos Imaging Center at MIT, operates as a core facility accessible to all members of the local research community. Potential users should contact Dimitrios Pantazis for more information.

The MEG Lab was made possible through a grant from the National Science Foundation and through the generous support of the following donors: Thomas F. Peterson, Jr. ’57; Edward and Kay Poitras; The Simons Foundation; and an anonymous donor.

Faces have a special place in the brain

Are you tempted to trade in last year’s digital camera for a newer model with even more megapixels? Researchers who make images of the human brain have the same obsession with increasing their pixel count, which increases the sharpness (or “spatial resolution”) of their images. And improvements in spatial resolution are happening as fast in brain imaging research as they are in digital camera technology.

Nancy Kanwisher, Rebecca Frye Schwarzlose and Christopher Baker at the McGovern Institute for Brain Research at MIT are now using their higher-resolution scans to produce much more detailed images of the brain than were possible just a couple of years ago. Just as “hi-def” TV shows clearer views of a football game, these finely grained images are providing new answers to some very old questions in brain research.

One such question is whether the brain is composed of highly specialized parts, each optimized to carry out a single, very specific function, or whether it is instead a general-purpose device that handles many tasks but specializes in none.

Using the higher-resolution scans, the Kanwisher team now provides some of the strongest evidence ever reported for extreme specialization. Their study appeared in the Nov. 23 issue of the Journal of Neuroscience.

The study focuses on face recognition, long considered an example of brain specialization. In the 1990s, researchers including Kanwisher identified a region known as the fusiform face area (FFA) as a potential brain center for face recognition. They pointed to evidence from brain-imaging experiments, and to the fact that people with damage to this brain region cannot recognize faces, even those of their family and closest friends.

However, more recent brain-imaging experiments have challenged this claimed specialization by showing that this region also responds strongly when people see images of bodies and body parts, not just faces. The new study now answers this challenge and supports the original specialization theory.

Schwarzlose suspected that the strong response of the face area to both faces and bodies might result from the blurring together of two distinct but neighboring brain regions that are too close together to distinguish at standard scanning resolutions.

To test this idea, Schwarzlose and her colleagues increased the resolution of their images (like increasing the megapixels on a digital camera) ten-fold to get sharper images of brain function. Indeed, at this higher resolution they could clearly distinguish two neighboring regions. One was primarily active when people saw faces (not bodies), and the other when people saw bodies (not faces).
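The blurring effect Schwarzlose suspected can be demonstrated with a toy one-dimensional simulation (invented numbers, not real imaging data): two adjacent selective regions look like one mixed face-and-body region when a voxel is wide enough to span both, and separate cleanly when the voxels are finer.

```python
import numpy as np

# Toy 1-D strip of cortex: positions 0-9 are face-selective,
# positions 10-19 are body-selective.
face_resp = np.array([1.0] * 10 + [0.0] * 10)  # response to faces
body_resp = np.array([0.0] * 10 + [1.0] * 10)  # response to bodies

def downsample(signal, voxel_size):
    """Average the signal within voxels of the given width."""
    return signal.reshape(-1, voxel_size).mean(axis=1)

# One coarse voxel spanning both regions appears to respond to
# faces AND bodies: the two regions are blurred together.
coarse_face = downsample(face_resp, 20)
coarse_body = downsample(body_resp, 20)
print(coarse_face, coarse_body)  # both [0.5]

# At finer resolution, the face and body responses separate cleanly.
fine_face = downsample(face_resp, 10)
fine_body = downsample(body_resp, 10)
print(fine_face, fine_body)  # [1. 0.] vs [0. 1.]
```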

This finding supports the original claim that the face area is in fact dedicated exclusively to face processing. The results further demonstrate a similar degree of specialization for the new “body region” next door.

The team’s new discovery highlights the importance of improved spatial resolution in studying the structure of the human brain. Just as a higher megapixel digital camera can show greater detail, new brain imaging methods are revealing the finer-grained structure of the human brain. Schwarzlose and her colleagues plan to use the new scanning methods to look for even finer levels of organization within the newly distinguished face and body areas. They also want to figure out how and why the brain regions for faces and bodies land next to each other in the first place.

Kanwisher is the Ellen Swallow Richards Professor of Cognitive Neuroscience. Her colleagues on this work are Schwarzlose, a graduate student in brain and cognitive sciences, and Baker, a postdoctoral researcher in the department.

The research was supported by the National Institutes of Health, the National Center for Research Resources, the Mind Institute, and the National Science Foundation’s Graduate Research Fellowship Program.