Delving deep into the brain

Launched in 2013, the national BRAIN Initiative aims to revolutionize our understanding of cognition by mapping the activity of every neuron in the human brain, revealing how brain circuits interact to create memories, learn new skills, and interpret the world around us.

Before that can happen, neuroscientists need new tools that will let them probe the brain more deeply and in greater detail, says Alan Jasanoff, an MIT associate professor of biological engineering. “There’s a general recognition that in order to understand the brain’s processes in comprehensive detail, we need ways to monitor neural function deep in the brain with spatial, temporal, and functional precision,” he says.

Jasanoff and colleagues have now taken a step toward that goal: They have established a technique that allows them to track neural communication in the brain over time, using magnetic resonance imaging (MRI) along with a specialized molecular sensor. This is the first time anyone has been able to map neural signals with high precision over large brain regions in living animals, offering a new window on brain function, says Jasanoff, who is also an associate member of MIT’s McGovern Institute for Brain Research.

His team used this molecular imaging approach, described in the May 1 online edition of Science, to study the neurotransmitter dopamine in a region called the ventral striatum, which is involved in motivation, reward, and reinforcement of behavior. In future studies, Jasanoff plans to combine dopamine imaging with functional MRI techniques that measure overall brain activity to gain a better understanding of how dopamine levels influence neural circuitry.

“We want to be able to relate dopamine signaling to other neural processes that are going on,” Jasanoff says. “We can look at different types of stimuli and try to understand what dopamine is doing in different brain regions and relate it to other measures of brain function.”

Tracking dopamine

Dopamine is one of many neurotransmitters that help neurons to communicate with each other over short distances. Much of the brain’s dopamine is produced by a structure called the ventral tegmental area (VTA). This dopamine travels through the mesolimbic pathway to the ventral striatum, where it combines with sensory information from other parts of the brain to reinforce behavior and help the brain learn new tasks and motor functions. This circuit also plays a major role in addiction.

To track dopamine’s role in neural communication, the researchers used an MRI sensor they had previously designed, consisting of an iron-containing protein that acts as a weak magnet. When the sensor binds to dopamine, its magnetic interactions with the surrounding tissue weaken, which dims the tissue’s MRI signal. This allows the researchers to see where in the brain dopamine is being released. The researchers also developed an algorithm that lets them calculate the precise amount of dopamine present in each fraction of a cubic millimeter of the ventral striatum.
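As a rough illustration of that last step, the conversion from signal dimming to concentration can be sketched in a few lines. Everything below is hypothetical: the linear calibration, the slope value, and the numbers are stand-ins for the paper's actual calibration procedure.

```python
import numpy as np

def dopamine_map(signal, baseline, slope=0.05):
    """Estimate dopamine concentration per voxel from MRI signal dimming.

    signal, baseline: arrays of MRI intensity with and without dopamine.
    slope: hypothetical calibration constant (fractional dimming per unit
    of concentration), standing in for the real calibration curve.
    """
    # Fractional signal decrease caused by dopamine binding to the sensor
    dimming = (baseline - signal) / baseline
    # Invert the assumed linear calibration; clip negative noise to zero
    return np.clip(dimming / slope, 0.0, None)

baseline = np.full((4, 4, 4), 100.0)   # image before stimulation
signal = baseline.copy()
signal[1, 1, 1] = 90.0                 # 10% dimming in one voxel
conc = dopamine_map(signal, baseline)
print(conc[1, 1, 1])                   # → 2.0 (arbitrary concentration units)
```

Applied voxel by voxel, a map like this is what lets the researchers report dopamine levels for each fraction of a cubic millimeter.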

After delivering the MRI sensor to the ventral striatum of rats, Jasanoff’s team electrically stimulated the mesolimbic pathway and was able to detect exactly where in the ventral striatum dopamine was released. An area known as the nucleus accumbens core, known to be one of the main targets of dopamine from the VTA, showed the highest levels. The researchers also saw that some dopamine is released in neighboring regions such as the ventral pallidum, which regulates motivation and emotions, and parts of the thalamus, which relays sensory and motor signals in the brain.

Each dopamine stimulation lasted for 16 seconds and the researchers took an MRI image every eight seconds, allowing them to track how dopamine levels changed as the neurotransmitter was released from cells and then disappeared. “We could divide up the map into different regions of interest and determine dynamics separately for each of those regions,” Jasanoff says.

He and his colleagues plan to build on this work by expanding their studies to other parts of the brain, including the areas most affected by Parkinson’s disease, which is caused by the death of dopamine-generating cells. Jasanoff’s lab is also working on sensors to track other neurotransmitters, allowing them to study interactions between neurotransmitters during different tasks.

The paper’s lead author is postdoc Taekwan Lee. Technical assistant Lili Cai and postdocs Victor Lelyveld and Aviad Hai also contributed to the research, which was funded by the National Institutes of Health and the Defense Advanced Research Projects Agency.

How the brain pays attention

Picking out a face in a crowd is a complicated task: Your brain has to retrieve the memory of the face you’re seeking, then hold it in mind while scanning the crowd, paying special attention to finding a match.

A new study by MIT neuroscientists reveals how the brain achieves this type of focused attention on faces or other objects: A part of the prefrontal cortex known as the inferior frontal junction (IFJ) controls visual processing areas that are tuned to recognize a specific category of objects, the researchers report in the April 10 online edition of Science.

Scientists know much less about this type of attention, known as object-based attention, than spatial attention, which involves focusing on what’s happening in a particular location. However, the new findings suggest that these two types of attention have similar mechanisms involving related brain regions, says Robert Desimone, the Doris and Don Berkey Professor of Neuroscience, director of MIT’s McGovern Institute for Brain Research, and senior author of the paper.

“The interactions are surprisingly similar to those seen in spatial attention,” Desimone says. “It seems like it’s a parallel process involving different areas.”

In both cases, the prefrontal cortex — the control center for most cognitive functions — appears to take charge of the brain’s attention and control relevant parts of the visual cortex, which receives sensory input. For spatial attention, that involves regions of the visual cortex that map to a particular area within the visual field.

In the new study, the researchers found that IFJ coordinates with a brain region that processes faces, known as the fusiform face area (FFA), and a region that interprets information about places, known as the parahippocampal place area (PPA). The FFA and PPA were first identified in the human cortex by Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT.

The IFJ has previously been implicated in a cognitive ability known as working memory, which is what allows us to gather and coordinate information while performing a task — such as remembering and dialing a phone number, or doing a math problem.

For this study, the researchers used magnetoencephalography (MEG) to scan human subjects as they viewed a series of overlapping images of faces and houses. Unlike functional magnetic resonance imaging (fMRI), which is commonly used to measure brain activity, MEG can reveal the precise timing of neural activity, down to the millisecond. The researchers presented the overlapping streams at two different rhythms — two images per second and 1.5 images per second — allowing them to identify brain regions responding to those stimuli.

“We wanted to frequency-tag each stimulus with different rhythms. When you look at all of the brain activity, you can tell apart signals that are engaged in processing each stimulus,” says Daniel Baldauf, a postdoc at the McGovern Institute and the lead author of the paper.
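A minimal sketch of frequency tagging, on synthetic data: two responses oscillating at the study's presentation rates are mixed into one noisy trace, and a Fourier transform pulls them apart by frequency. The amplitudes, noise level, and recording length here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated sensor trace containing two "frequency-tagged" responses:
# faces presented at 2.0 Hz and houses at 1.5 Hz, as in the study
fs = 120                                   # sampling rate, Hz
t = np.arange(0, 30, 1 / fs)               # 30 s of data
trace = (0.8 * np.sin(2 * np.pi * 2.0 * t)      # face-tagged component
         + 0.5 * np.sin(2 * np.pi * 1.5 * t)    # house-tagged component
         + 0.3 * rng.standard_normal(t.size))   # background noise

# A Fourier transform separates the streams by their tag frequencies
amp = np.abs(np.fft.rfft(trace)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

face_amp = amp[np.argmin(np.abs(freqs - 2.0))]
house_amp = amp[np.argmin(np.abs(freqs - 1.5))]
print(face_amp > house_amp)   # True: the face tag dominates this trace
```

Because each stimulus stream flickers at its own rate, brain regions tracking that stream show activity concentrated at the matching frequency, which is how the signals can be told apart even when the images overlap in space.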

Each subject was told to pay attention to either faces or houses; because the houses and faces were in the same spot, the brain could not use spatial information to distinguish them. When the subjects were told to look for faces, activity in the FFA and the IFJ became synchronized, suggesting that they were communicating with each other. When the subjects paid attention to houses, the IFJ synchronized instead with the PPA.

The researchers also found that the communication was initiated by the IFJ and the activity was staggered by 20 milliseconds — about the amount of time it would take for neurons to electrically convey information from the IFJ to either the FFA or PPA. The researchers believe that the IFJ holds onto the idea of the object that the brain is looking for and directs the correct part of the brain to look for it.

Further bolstering this idea, the researchers used an MRI-based method to measure the white matter that connects different brain regions and found that the IFJ is highly connected with both the FFA and PPA.
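The kind of lag estimate described above can be illustrated with cross-correlation on synthetic signals: given two activity traces, cross-correlation finds the delay at which they align best. The 5 Hz oscillation, noise level, and sampling rate below are arbitrary choices, not the study's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 1000                                  # 1 kHz sampling = 1 ms resolution
t = np.arange(0, 2, 1 / fs)
ifj = np.sin(2 * np.pi * 5 * t) + 0.2 * rng.standard_normal(t.size)
ffa = np.roll(ifj, 20)                     # a copy delayed by 20 samples

# Cross-correlate the traces and locate the delay of maximum alignment
xcorr = np.correlate(ffa - ffa.mean(), ifj - ifj.mean(), mode="full")
lags = np.arange(-t.size + 1, t.size)      # sample offset for each xcorr bin
lag_ms = lags[np.argmax(xcorr)] * 1000 / fs
print(lag_ms)   # 20.0: the "FFA" trace trails the "IFJ" trace by 20 ms
```

A positive lag at the correlation peak indicates which signal leads, which is the logic behind concluding that the IFJ initiates the communication.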

Members of Desimone’s lab are now studying how the brain shifts its focus between different types of sensory input, such as vision and hearing. They are also investigating whether it might be possible to train people to better focus their attention by controlling the brain interactions involved in this process.

“You have to identify the basic neural mechanisms and do basic research studies, which sometimes generate ideas for things that could be of practical benefit,” Desimone says. “It’s too early to say whether this training is even going to work at all, but it’s something that we’re actively pursuing.”

The research was funded by the National Institutes of Health and the National Science Foundation.

MRI reveals genetic activity

Doctors commonly use magnetic resonance imaging (MRI) to diagnose tumors, damage from stroke, and many other medical conditions. Neuroscientists also rely on it as a research tool for identifying parts of the brain that carry out different cognitive functions.

Now, a team of biological engineers at MIT is trying to adapt MRI to a much smaller scale, allowing researchers to visualize gene activity inside the brains of living animals. Tracking these genes with MRI would enable scientists to learn more about how the genes control processes such as forming memories and learning new skills, says Alan Jasanoff, an MIT associate professor of biological engineering and leader of the research team.

“The dream of molecular imaging is to provide information about the biology of intact organisms, at the molecule level,” says Jasanoff, who is also an associate member of MIT’s McGovern Institute for Brain Research. “The goal is to not have to chop up the brain, but instead to actually see things that are happening inside.”

To help reach that goal, Jasanoff and colleagues have developed a new way to image a “reporter gene” — an artificial gene that turns on or off to signal events in the body, much like an indicator light on a car’s dashboard. In the new study, the reporter gene encodes an enzyme that interacts with a magnetic contrast agent injected into the brain, making the agent visible with MRI. This approach, described in a recent issue of the journal Chemical Biology, allows researchers to determine when and where that reporter gene is turned on.

An on/off switch

MRI uses magnetic fields and radio waves that interact with protons in the body to produce detailed images of the body’s interior. In brain studies, neuroscientists commonly use functional MRI to measure blood flow, which reveals which parts of the brain are active during a particular task. When scanning other organs, doctors sometimes use magnetic “contrast agents” to boost the visibility of certain tissues.

The new MIT approach includes a contrast agent called a manganese porphyrin and the new reporter gene, which codes for a genetically engineered enzyme that alters the electric charge on the contrast agent. Jasanoff and colleagues designed the contrast agent so that it is soluble in water and readily eliminated from the body, making it difficult to detect by MRI. However, when the engineered enzyme, known as SEAP, cleaves phosphate groups from the manganese porphyrin, the contrast agent becomes insoluble and starts to accumulate in brain tissues, allowing it to be seen.

The natural version of SEAP is found in the placenta, but not in other tissues. By injecting a virus carrying the SEAP gene into the brain cells of mice, the researchers were able to incorporate the gene into the cells’ own genome. Brain cells then started producing the SEAP protein, which is secreted from the cells and can be anchored to their outer surfaces. That’s important, Jasanoff says, because it means that the contrast agent doesn’t have to penetrate the cells to interact with the enzyme.

Researchers can then find out where SEAP is active by injecting the MRI contrast agent, which spreads throughout the brain but accumulates only near cells producing the SEAP protein.

Exploring brain function

In this study, which was designed to test this general approach, the detection system revealed only whether the SEAP gene had been successfully incorporated into brain cells. However, in future studies, the researchers intend to engineer the SEAP gene so it is only active when a particular gene of interest is turned on.

Jasanoff first plans to link the SEAP gene with so-called “immediate early genes,” which are necessary for brain plasticity — the weakening and strengthening of connections between neurons, which is essential to learning and memory.

“As people who are interested in brain function, the top questions we want to address are about how brain function changes patterns of gene expression in the brain,” Jasanoff says. “We also imagine a future where we might turn the reporter enzyme on and off when it binds to neurotransmitters, so we can detect changes in neurotransmitter levels as well.”

Assaf Gilad, an assistant professor of radiology at Johns Hopkins University, says the MIT team has taken a “very creative approach” to developing noninvasive, real-time imaging of gene activity. “These kinds of genetically engineered reporters have the potential to revolutionize our understanding of many biological processes,” says Gilad, who was not involved in the study.

The research was funded by the Raymond and Beverly Sackler Foundation, the National Institutes of Health, and an MIT-Germany Seed Fund grant. The paper’s lead author is former MIT postdoc Gil Westmeyer; other authors are former MIT technical assistant Yelena Emer and Jutta Lintelmann of the German Research Center for Environmental Health.

Expanding our view of vision

Every time you open your eyes, visual information flows into your brain, which interprets what you’re seeing. Now, for the first time, MIT neuroscientists have noninvasively mapped this flow of information in the human brain with unique accuracy, using a novel brain-scanning technique.

This technique, which combines two existing technologies, allows researchers to identify precisely both the location and timing of human brain activity. Using this new approach, the MIT researchers scanned individuals’ brains as they looked at different images and were able to pinpoint, to the millisecond, when the brain recognizes and categorizes an object, and where these processes occur.

“This method gives you a visualization of ‘when’ and ‘where’ at the same time. It’s a window into processes happening at the millisecond and millimeter scale,” says Aude Oliva, a principal research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

Oliva is the senior author of a paper describing the findings in the Jan. 26 issue of Nature Neuroscience. Lead author of the paper is CSAIL postdoc Radoslaw Cichy. Dimitrios Pantazis, a research scientist at MIT’s McGovern Institute for Brain Research, is also an author of the paper.

When and where

Until now, scientists have been able to observe the location or timing of human brain activity at high resolution, but not both, because different imaging techniques are not easily combined. The most commonly used type of brain scan, functional magnetic resonance imaging (fMRI), measures changes in blood flow, revealing which parts of the brain are involved in a particular task. However, it works too slowly to keep up with the brain’s millisecond-by-millisecond dynamics.

Another imaging technique, known as magnetoencephalography (MEG), uses an array of hundreds of sensors encircling the head to measure magnetic fields produced by neuronal activity in the brain. These sensors offer a dynamic portrait of brain activity over time, down to the millisecond, but do not tell the precise location of the signals.

To combine the time and location information generated by these two scanners, the researchers used a computational technique called representational similarity analysis, which relies on the fact that two similar objects (such as two human faces) that provoke similar signals in fMRI will also produce similar signals in MEG. This method has been used before to link fMRI with recordings of neuronal electrical activity in monkeys, but the MIT researchers are the first to use it to link fMRI and MEG data from human subjects.
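In outline, representational similarity analysis compares pairwise dissimilarity patterns across modalities. Here is a toy sketch with synthetic patterns standing in for fMRI voxel responses and MEG sensor responses; the data, noise levels, and the simple Pearson comparison are illustrative simplifications of the published method.

```python
import numpy as np

rng = np.random.default_rng(0)

def rdm(patterns):
    """Representational dissimilarity: 1 - correlation for each stimulus pair."""
    z = patterns - patterns.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True)
    corr = z @ z.T / z.shape[1]              # stimulus-by-stimulus correlations
    i, j = np.triu_indices(len(patterns), k=1)
    return 1.0 - corr[i, j]                  # flattened upper triangle

# Toy patterns for 8 stimuli sharing one underlying geometry, measured
# with independent noise in each "modality"
latent = rng.standard_normal((8, 50))
fmri_rdm = rdm(latent + 0.2 * rng.standard_normal((8, 50)))   # "voxels"
meg_rdm = rdm(latent + 0.2 * rng.standard_normal((8, 50)))    # "sensors"

# Similar representational geometries yield highly correlated RDMs
rho = np.corrcoef(fmri_rdm, meg_rdm)[0, 1]
print(round(rho, 2))
```

Because an MEG dissimilarity pattern can be computed at every millisecond and an fMRI pattern at every brain location, matching the two links "when" to "where."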

In the study, the researchers scanned 16 human volunteers as they looked at a series of 92 images, including faces, animals, and natural and manmade objects. Each image was shown for half a second.

“We wanted to measure how visual information flows through the brain. It’s just pure automatic machinery that starts every time you open your eyes, and it’s incredibly fast,” Cichy says. “This is a very complex process, and we have not yet looked at higher cognitive processes that come later, such as recalling thoughts and memories when you are watching objects.”

Each subject underwent the test multiple times — twice in an fMRI scanner and twice in an MEG scanner — giving the researchers a huge set of data on the timing and location of brain activity. All of the scanning was done at the Athinoula A. Martinos Imaging Center at the McGovern Institute.

Millisecond by millisecond

By analyzing this data, the researchers produced a timeline of the brain’s object-recognition pathway that is very similar to results previously obtained by recording electrical signals in the visual cortex of monkeys, a technique that is extremely accurate but too invasive to use in humans.

About 50 milliseconds after subjects saw an image, visual information entered a part of the brain called the primary visual cortex, or V1, which recognizes basic elements of a shape, such as whether it is round or elongated. The information then flowed to the inferotemporal cortex, where the brain identified the object as early as 120 milliseconds. Within 160 milliseconds, all objects had been classified into categories such as plant or animal.

The MIT team’s strategy “provides a rich new source of evidence on this highly dynamic process,” says Nikolaus Kriegeskorte, a principal investigator in cognition and brain sciences at Cambridge University.

“The combination of MEG and fMRI in humans is no surrogate for invasive animal studies with techniques that simultaneously have high spatial and temporal precision, but Cichy et al. come closer to characterizing the dynamic emergence of representational geometries across stages of processing in humans than any previous work. The approach will be useful for future studies elucidating other perceptual and cognitive processes,” says Kriegeskorte, who was not part of the research team.

The MIT researchers are now using representational similarity analysis to study the accuracy of computer models of vision by comparing brain scan data with the models’ predictions of how vision works.

Using this approach, scientists should also be able to study how the human brain analyzes other types of information such as motor, verbal, or sensory signals, the researchers say. It could also shed light on processes that underlie conditions such as memory disorders or dyslexia, and could benefit patients suffering from paralysis or neurodegenerative diseases.

“This is the first time that MEG and fMRI have been connected in this way, giving us a unique perspective,” Pantazis says. “We now have the tools to precisely map brain function both in space and time, opening up tremendous possibilities to study the human brain.”

The research was funded by the National Eye Institute, the National Science Foundation, and a Feodor Lynen Research Fellowship from the Humboldt Foundation.

Even when test scores go up, some cognitive abilities don’t

To evaluate school quality, states require students to take standardized tests; in many cases, passing those tests is necessary to receive a high-school diploma. These high-stakes tests have also been shown to predict students’ future educational attainment and adult employment and income.

Such tests are designed to measure the knowledge and skills that students have acquired in school — what psychologists call “crystallized intelligence.” However, schools whose students have the highest gains on test scores do not produce similar gains in “fluid intelligence” — the ability to analyze abstract problems and think logically — according to a new study from MIT neuroscientists working with education researchers at Harvard University and Brown University.

In a study of nearly 1,400 eighth-graders in the Boston public school system, the researchers found that some schools have successfully raised their students’ scores on the Massachusetts Comprehensive Assessment System (MCAS). However, those schools had almost no effect on students’ performance on tests of fluid intelligence skills, such as working memory capacity, speed of information processing, and ability to solve abstract problems.

“Our original question was this: If you have a school that’s effectively helping kids from lower socioeconomic environments by moving up their scores and improving their chances to go to college, then are those changes accompanied by gains in additional cognitive skills?” says John Gabrieli, the Grover M. Hermann Professor of Health Sciences and Technology, professor of brain and cognitive sciences, and senior author of a forthcoming Psychological Science paper describing the findings.

Instead, the researchers found that educational practices designed to raise knowledge and boost test scores do not improve fluid intelligence. “It doesn’t seem like you get these skills for free in the way that you might hope, despite learning a lot by being a good student,” says Gabrieli, who is also a member of MIT’s McGovern Institute for Brain Research.

Measuring cognition

This study grew out of a larger effort to find measures beyond standardized tests that can predict long-term success for students. “As we started that study, it struck us that there’s been surprisingly little evaluation of different kinds of cognitive abilities and how they relate to educational outcomes,” Gabrieli says.

The data for the Psychological Science study came from students attending traditional, charter, and exam schools in Boston. Some of those schools have had great success improving their students’ MCAS scores — a boost that studies have found also translates to better performance on the SAT and Advanced Placement tests.

The researchers calculated how much of the variation in MCAS scores was due to the school that students attended. For MCAS scores in English, schools accounted for 24 percent of the variation, and they accounted for 34 percent of the math MCAS variation. However, the schools accounted for very little of the variation in fluid cognitive skills — less than 3 percent for all three skills combined.
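The “variation due to school” figure is the share of score variance explained by school membership. A minimal sketch of that calculation follows, with made-up scores for two hypothetical schools (the study's actual models control for more factors).

```python
import numpy as np

def variance_explained_by_school(scores, schools):
    """Share of score variance accounted for by which school a student
    attends: the R-squared of predicting each student by their school's
    mean score."""
    scores = np.asarray(scores, dtype=float)
    schools = np.asarray(schools)
    predicted = np.empty_like(scores)
    for s in np.unique(schools):
        mask = schools == s
        predicted[mask] = scores[mask].mean()   # school-average prediction
    return np.var(predicted) / np.var(scores)

# Made-up scores for two hypothetical schools, A and B
scores = [60, 70, 80, 90, 70, 80, 90, 100]
schools = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(round(variance_explained_by_school(scores, schools), 2))   # → 0.17
```

By this logic, a small between-school share (under 3 percent for the fluid skills) means students' fluid-intelligence scores barely depend on which school they attend.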

In one example of a test of fluid reasoning, students were asked to choose which of six pictures completed the missing pieces of a puzzle — a task requiring integration of information such as shape, pattern, and orientation.

“It’s not always clear what dimensions you have to pay attention to in order to get the problem correct. That’s why we call it fluid, because it’s the application of reasoning skills in novel contexts,” says Amy Finn, an MIT postdoc and lead author of the paper.

Even stronger evidence came from a comparison of about 200 students who had entered a lottery for admittance to a handful of Boston’s oversubscribed charter schools, many of which achieve strong improvement in MCAS scores. The researchers found that students who were randomly selected to attend high-performing charter schools did significantly better on the math MCAS than those who were not chosen, but there was no corresponding increase in fluid intelligence scores.

However, the researchers say their study is not about comparing charter schools and district schools. Rather, the study showed that while schools of both types varied in their impact on test scores, they did not vary in their impact on fluid cognitive skills.

“What’s nice about this study is it seems to narrow down the possibilities of what educational interventions are achieving,” says Daniel Willingham, a professor of psychology at the University of Virginia who was not part of the research team. “We’re usually primarily concerned with outcomes in schools, but the underlying mechanisms are also important.”

The researchers plan to continue tracking these students, who are now in 10th grade, to see how their academic performance and other life outcomes evolve. They have also begun to participate in a new study of high school seniors to track how their standardized test scores and cognitive abilities influence their rates of college attendance and graduation.

Implications for education

Gabrieli notes that the study should not be interpreted as critical of schools that are improving their students’ MCAS scores. “It’s valuable to push up the crystallized abilities, because if you can do more math, if you can read a paragraph and answer comprehension questions, all those things are positive,” he says.

He hopes that the findings will encourage educational policymakers to consider adding practices that enhance cognitive skills. Although many studies have shown that students’ fluid cognitive skills predict their academic performance, such skills are seldom explicitly taught.

“Schools can improve crystallized abilities, and now it might be a priority to see if there are some methods for enhancing the fluid ones as well,” Gabrieli says.

Some studies have found that educational programs that focus on improving memory, attention, executive function, and inductive reasoning can boost fluid intelligence, but there is still much disagreement over what programs are consistently effective.

The research was a collaboration with the Center for Education Policy Research at Harvard University, Transforming Education, and Brown University, and was funded by the Bill and Melinda Gates Foundation and the National Institutes of Health.

Brain scans may help diagnose dyslexia

About 10 percent of the U.S. population suffers from dyslexia, a condition that makes learning to read difficult. Dyslexia is usually diagnosed around second grade, but the results of a new study from MIT could help identify those children before they even begin reading, so they can be given extra help earlier.

The study, done with researchers at Boston Children’s Hospital, found a correlation between poor pre-reading skills in kindergartners and the size of a brain structure that connects two language-processing areas.

Previous studies have shown that in adults with poor reading skills, this structure, known as the arcuate fasciculus, is smaller and less organized than in adults who read normally. However, it was unknown if these differences cause reading difficulties or result from lack of reading experience.

“We were very interested in looking at children prior to reading instruction and whether you would see these kinds of differences,” says John Gabrieli, the Grover M. Hermann Professor of Health Sciences and Technology, professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

Gabrieli and Nadine Gaab, an assistant professor of pediatrics at Boston Children’s Hospital, are the senior authors of a paper describing the results in the Aug. 14 issue of the Journal of Neuroscience. Lead authors of the paper are MIT postdocs Zeynep Saygin and Elizabeth Norton.

The path to reading

The new study is part of a larger effort involving approximately 1,000 children at schools throughout Massachusetts and Rhode Island. At the beginning of kindergarten, children whose parents give permission to participate are assessed for pre-reading skills, such as being able to put words together from sounds.

“From that, we’re able to provide — at the beginning of kindergarten — a snapshot of how that child’s pre-reading abilities look relative to others in their classroom or other peers, which is a real benefit to the child’s parents and teachers,” Norton says.

The researchers then invite a subset of the children to come to MIT for brain imaging. The Journal of Neuroscience study included 40 children who had their brains scanned using a technique known as diffusion-weighted imaging, which is based on magnetic resonance imaging (MRI).

This type of imaging reveals the size and organization of the brain’s white matter — bundles of nerves that carry information between brain regions. The researchers focused on three white-matter tracts associated with reading skill, all located on the left side of the brain: the arcuate fasciculus, the inferior longitudinal fasciculus (ILF) and the superior longitudinal fasciculus (SLF).

When comparing the brain scans and the results of several different types of pre-reading tests, the researchers found a correlation between the size and organization of the arcuate fasciculus and performance on tests of phonological awareness — the ability to identify and manipulate the sounds of language.
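At its core, the reported association is a correlation, across children, between a tract measure and a test score. A toy sketch with simulated data (the cohort size matches the study's 40 children, but the numbers and effect size are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data for 40 kindergartners: a white-matter measure of the
# arcuate fasciculus (arbitrary units) and a phonological-awareness score.
# Scores are simulated to partly track the tract measure, plus noise.
n = 40
tract_measure = rng.normal(10.0, 2.0, n)
phono_score = 5.0 * tract_measure + rng.normal(0.0, 10.0, n)

# Pearson correlation quantifies the association across children
r = np.corrcoef(tract_measure, phono_score)[0, 1]
print(round(r, 2))
```

A correlation like this says nothing about direction of causation on its own, which is why the researchers emphasize following the children over time.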

Phonological awareness can be measured by testing how well children can segment sounds, identify them in isolation, and rearrange them to make new words. Strong phonological skills have previously been linked with ease of learning to read. “The first step in reading is to match the printed letters with the sounds of letters that you know exist in the world,” Norton says.

The researchers also tested the children on two other skills that have been shown to predict reading ability — rapid naming, which is the ability to name a series of familiar objects as quickly as you can, and the ability to name letters. They did not find any correlation between these skills and the size or organization of the white-matter structures scanned in this study.

Early intervention

The left arcuate fasciculus connects Broca’s area, which is involved in speech production, and Wernicke’s area, which is involved in understanding written and spoken language. A larger and more organized arcuate fasciculus could aid in communication between those two regions, the researchers say.

Gabrieli points out that the structural differences found in the study don’t necessarily reflect genetic differences; environmental influences could also be involved. “At the moment when the children arrive at kindergarten, which is approximately when we scan them, we don’t know what factors lead to these brain differences,” he says.

The researchers plan to follow three waves of children as they progress to second grade and evaluate whether the brain measures they have identified predict poor reading skills.

“We don’t know yet how it plays out over time, and that’s the big question: Can we, through a combination of behavioral and brain measures, get a lot more accurate at seeing who will become a dyslexic child, with the hope that that would motivate aggressive interventions that would help these children right from the start, instead of waiting for them to fail?” Gabrieli says.

For at least some dyslexic children, offering extra training in phonological skills can help them improve their reading skills later on, studies have shown.

The research was funded by the National Institutes of Health, the Poitras Center for Affective Disorders Research, the Ellison Medical Foundation and the Halis Family Foundation.

Brain’s language center has multiple roles

A century and a half ago, French physician Pierre Paul Broca found that patients with damage to part of the brain’s frontal lobe were unable to speak more than a few words. Later dubbed Broca’s area, this region is believed to be critical for speech production and some aspects of language comprehension.

However, in recent years neuroscientists have observed activity in Broca’s area when people perform cognitive tasks that have nothing to do with language, such as solving math problems or holding information in working memory. Those findings have stimulated debate over whether Broca’s area is specific to language or plays a more general role in cognition.

A new study from MIT may help resolve this longstanding question. The researchers, led by Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, found that Broca’s area actually consists of two distinct subunits. One of these focuses selectively on language processing, while the other is part of a brainwide network that appears to act as a central processing unit for general cognitive functions.

“I think we’ve shown pretty convincingly that there are two distinct bits that we should not be treating as a single region, and perhaps we shouldn’t even be talking about ‘Broca’s area’ because it’s not a functional unit,” says Evelina Fedorenko, a research scientist in Kanwisher’s lab and lead author of the new study, which recently appeared in the journal Current Biology.

Kanwisher and Fedorenko are members of MIT’s Department of Brain and Cognitive Sciences and the McGovern Institute for Brain Research. John Duncan, a professor of neuroscience at the Cognition and Brain Sciences Unit of the Medical Research Council in the United Kingdom, is also an author of the paper.

A general role

Broca’s area is located in the left inferior frontal cortex, above and behind the left eye. For this study, the researchers set out to pinpoint the functions of distinct sections of Broca’s area by scanning subjects with functional magnetic resonance imaging (fMRI) as they performed a variety of cognitive tasks.

To locate language-selective areas, the researchers asked subjects to read either meaningful sentences or sequences of nonwords. A subset of Broca’s area lit up much more when the subjects processed meaningful sentences than when they had to interpret nonwords.

The researchers then measured brain activity as the subjects performed easy and difficult versions of general cognitive tasks, such as doing a math problem or holding a set of locations in memory. Parts of Broca’s area lit up during the more demanding versions of those tasks. Critically, however, these regions were spatially distinct from the regions involved in the language task.

These data allowed the researchers to map, for each subject, two distinct regions of Broca’s area — one selectively involved in language, the other involved in responding to many demanding cognitive tasks. The general region surrounds the language region, but the exact shapes and locations of the borders between the two vary from person to person.
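The two-contrast logic described above can be sketched in a few lines of code. This is a simplified illustration with made-up NumPy arrays, not the study's actual analysis pipeline: one contrast (sentences vs. nonwords) flags language-selective voxels, a second (hard vs. easy tasks) flags demand-responsive voxels, and the two sets can then be compared within each subject.

```python
import numpy as np

# Hypothetical per-voxel response magnitudes within one subject's Broca's
# area; all values here are random placeholders, not real fMRI data.
rng = np.random.default_rng(1)
n_voxels = 200
sentences = rng.normal(size=n_voxels)
nonwords = rng.normal(size=n_voxels)
hard_task = rng.normal(size=n_voxels)
easy_task = rng.normal(size=n_voxels)

# Language localizer: voxels responding more to sentences than nonwords.
language_voxels = (sentences - nonwords) > 1.0

# Multiple-demand localizer: voxels responding more to hard than easy tasks.
demand_voxels = (hard_task - easy_task) > 1.0

# The study's key observation, in miniature: the two voxel sets are
# largely non-overlapping, defining two distinct subregions.
overlap = language_voxels & demand_voxels
print(language_voxels.sum(), demand_voxels.sum(), overlap.sum())
```

A real analysis would use statistical thresholds rather than the arbitrary cutoff of 1.0, but the structure — two independent contrasts mapped voxel-by-voxel in each individual — is the same.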

The general-function region of Broca’s area appears to be part of a larger network sometimes called the multiple demand network, which is active when the brain is tackling a challenging task that requires a great deal of focus. This network is distributed across frontal and parietal lobes in both hemispheres of the brain, and all of its components appear to communicate with one another. The language-selective section of Broca’s area also appears to be part of a larger network devoted to language processing, spread throughout the brain’s left hemisphere.

Mapping functions

The findings provide evidence that Broca’s area should not be considered to have uniform functionality, says Peter Hagoort, a professor of cognitive neuroscience at Radboud University Nijmegen in the Netherlands. Hagoort, who was not involved in this study, adds that more work is needed to determine whether the language-selective areas might also be involved in any other aspects of cognitive function. “For instance, the language-selective region might play a role in the perception of music, which was not tested in the current study,” he says.

The researchers are now trying to determine how the components of the language network and the multiple demand network communicate internally, and how the two networks communicate with each other. They also hope to further investigate the functions of the two components of Broca’s area.

“In future studies, we should examine those subregions separately and try to characterize them in terms of their contribution to various language processes and other cognitive processes,” Fedorenko says.

The team is also working with scientists at Massachusetts General Hospital to study patients with a form of neurodegeneration that gradually causes loss of the ability to speak and understand language. This disorder, known as primary progressive aphasia, appears to selectively target the language-selective network, including the language component of Broca’s area.

The research was funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the Ellison Medical Foundation and the U.K. Medical Research Council.

Predicting how patients respond to therapy

Social anxiety is usually treated with either cognitive behavioral therapy or medications. However, it is currently impossible to predict which treatment will work best for a particular patient. A team of researchers from MIT, Boston University (BU) and Massachusetts General Hospital (MGH) found that the effectiveness of therapy could be predicted by measuring patients’ brain activity as they looked at photos of faces, before the therapy sessions began.

The findings, published this week in the Archives of General Psychiatry, may help doctors more accurately choose treatments for social anxiety disorder, which is estimated to affect around 15 million people in the United States.

“Our vision is that some of these measures might direct individuals to treatments that are more likely to work for them,” says John Gabrieli, the Grover M. Hermann Professor of Brain and Cognitive Sciences at MIT, a member of the McGovern Institute for Brain Research and senior author of the paper.

Lead authors of the paper are MIT postdoc Oliver Doehrmann and Satrajit Ghosh, a research scientist at the McGovern Institute.

Choosing treatments

Sufferers of social anxiety disorder experience intense fear in social situations, interfering with their ability to function in daily life. Cognitive behavioral therapy aims to change the thought and behavior patterns that lead to anxiety. For social anxiety disorder patients, that might include learning to reverse the belief that others are watching or judging them.

The new paper is part of a larger study that MGH and BU recently ran on cognitive behavioral therapy for social anxiety, led by Mark Pollack, director of the Center for Anxiety and Traumatic Stress Disorders at MGH, and Stefan Hofmann, director of the Social Anxiety Program at BU.

“This was a chance to ask if these brain measures, taken before treatment, would be informative in ways above and beyond what physicians can measure now, and determine who would be responsive to this treatment,” Gabrieli says.

Currently, doctors might choose a treatment based on factors such as the ease of taking pills versus attending therapy sessions, the possibility of drug side effects, or what a patient’s insurance will cover. “From a science perspective there’s very little evidence about which treatment is optimal for a person,” Gabrieli says.

The researchers used functional magnetic resonance imaging (fMRI) to image the brains of patients before and after treatment. There have been many imaging studies showing brain differences between healthy people and patients with neuropsychiatric disorders, but so far imaging has not been established as a way to predict patient response to particular treatments.

Measuring brain activity

In the new study, the researchers measured differences in brain activity as patients looked at images of angry or neutral faces. After 12 weeks of cognitive behavioral therapy, patients’ social anxiety levels were tested. The researchers found that patients who had shown a greater difference in activity in high-level visual processing areas while viewing the two types of faces showed the most improvement after therapy.

The findings are an important step towards improving doctors’ ability to choose the right treatment for psychiatric disorders, says Greg Siegle, associate professor of psychiatry at the University of Pittsburgh. “It’s really critical that somebody do this work, and they did it very well,” says Siegle, who was not part of the research team. “It moves the field forward, and brings psychology into more of a rigorous science, using neuroscience to distinguish between clinical cases that at first appear homogeneous.”

Gabrieli says it’s unclear why activity in brain regions involved with visual processing would be a good predictor of treatment outcome. One possibility is that patients who benefited more were those whose brains were already adept at segregating different types of experiences, Gabrieli says.

The researchers are now planning a follow-up study to investigate whether brain scans can predict differences in response between cognitive behavioral therapy and drug treatment.

“Right now, all by itself, we’re just giving somebody encouraging or discouraging news about the likely outcome of therapy,” Gabrieli says. “The really valuable thing would be if it turns out to be differentially sensitive to different treatment choices.”

The research was funded by the Poitras Center for Affective Disorders Research and the National Institute of Mental Health.

Thinking about others is not child’s play

When you try to read other people’s thoughts, or guess why they are behaving a certain way, you employ a skill known as theory of mind. This skill, as measured by false-belief tests, takes time to develop: In children, it doesn’t start appearing until the age of 4 or 5.

Several years ago, MIT neuroscientist Rebecca Saxe showed that in adults, theory of mind is seated in a specific brain region known as the right temporo-parietal junction (TPJ). Saxe and colleagues at MIT have now shown how brain activity in the TPJ changes as children learn to reason about others’ thoughts and feelings.

The findings suggest that the right TPJ becomes more specific to theory of mind as children age, taking on adult patterns of activity over time. The researchers also showed that the more selectively the right TPJ is activated when children listen to stories about other people’s thoughts, the better those children perform in tasks that require theory of mind.

The paper, published in the July 31 online edition of the journal Child Development, lays the groundwork for exploring theory-of-mind impairments in autistic children, says Hyowon Gweon, a graduate student in Saxe’s lab and lead author of the paper.

“Given that we know this is what typically developing kids show, the next question to ask is how it compares to autistic children who exhibit marked impairments in their ability to think about other people’s minds,” Gweon says. “Do they show differences from typically developing kids in their neural activity?”

Saxe, an associate professor of brain and cognitive sciences and associate member of MIT’s McGovern Institute for Brain Research, is senior author of the Child Development paper. Other authors are Marina Bedny, a postdoc in Saxe’s lab, and David Dodell-Feder, a graduate student at Harvard University.

Tracking theory of mind

The classic test for theory of mind is the false-belief test, sometimes called the Sally-Anne test. Experimenters often use dolls or puppets to perform a short skit: Sally takes a marble and hides it in her basket, then leaves the room. Anne then removes the marble and puts it in her own box. When Sally returns, the child watching the skit is asked: Where will Sally look for her marble?

Children with well-developed theory of mind realize that Sally will look where she thinks the marble is: her own basket. However, before children develop this skill, they don’t realize that Sally’s beliefs may not correspond to reality. Therefore, they believe she will look for the marble where it actually is, in Anne’s box.

Previous studies have shown that children start making accurate predictions in the false-belief test around age 4, but this happens much later, if ever, in autistic children.

In this study, the researchers used functional magnetic resonance imaging (fMRI) to look for a link between the development of theory of mind and changes in neural activity in the TPJ. They studied 20 children, ranging from 5 to 11 years old.

Each child participated in two sets of experiments. First, the child was scanned in the MRI machine as he or she listened to different types of stories. One type focused on people’s mental states, another also focused on people but only on their physical appearances or actions, and a third type of story focused on physical objects.

The researchers measured activity across the brain as the children listened to the different stories. By subtracting the neural activity recorded while children listened to stories about physical objects from the activity recorded while they listened to stories about people’s mental states, the researchers could determine which brain regions are selectively engaged in interpreting mental states.
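The subtraction analysis described above can be illustrated with a small sketch. This is a toy example using hypothetical NumPy arrays of per-voxel activation, not the study's real data or pipeline: the difference between two condition maps is thresholded to find voxels that respond preferentially to mental-state stories.

```python
import numpy as np

# Hypothetical voxel-wise activation maps (e.g., averaged fMRI responses)
# for two story conditions; shapes and values are illustrative only.
rng = np.random.default_rng(0)
mental_state = rng.normal(loc=1.0, scale=0.5, size=(4, 4, 4))  # stories about thoughts
physical = rng.normal(loc=0.2, scale=0.5, size=(4, 4, 4))      # stories about objects

# Subtraction contrast: positive values mark voxels that respond more
# strongly to mental-state stories than to stories about objects.
contrast = mental_state - physical

# Threshold the contrast map -- a stand-in for the statistical tests
# a real fMRI analysis would apply.
selective_mask = contrast > 0.5
print(f"{selective_mask.sum()} of {selective_mask.size} voxels favor mental-state stories")
```

In practice the same contrast is computed per subject and evaluated statistically, but the core idea — condition A minus condition B, voxel by voxel — is exactly this subtraction.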

In younger children, both the left and right TPJ were active in response to stories about people’s mental states, but they were also active when the children listened to stories about people’s appearances or actions. However, in older children, both regions became more specifically tuned to interpreting people’s thoughts and emotions, and were no longer responsive to people’s appearances or actions.

For the second task, done outside of the scanner, the researchers gave children tests similar to the classic Sally-Anne test, as well as harder questions that required making moral judgments, to measure their theory-of-mind abilities. They found that the degree to which activity in the right TPJ was specific to others’ mental states correlated with the children’s performance in theory-of-mind tasks.

Kristin Lagattuta, an associate professor of psychology at the University of California at Davis, says the paper makes an important contribution to understanding how theory of mind develops in older children. “Getting more insight into the neural basis of the behavioral development we’re seeing at these ages is exciting,” says Lagattuta, who was not involved in the research.

In an ongoing study of autistic children undergoing the same type of tests, the researchers hope to learn more about the neural basis of the theory-of-mind impairments seen in autistic children.

“So little is known about differences in neural mechanisms that contribute to these kinds of impairments,” Gweon says. “Understanding the developmental changes in brain regions related to theory of mind is going to be critical to think of measures that can help them in the real world.”

The research was funded by the Ellison Medical Foundation, the Packard Foundation, the John Merck Scholars Program, a National Science Foundation Career Award and an Ewha 21st Century Scholarship.

Detecting the brain’s magnetic signals with MEG

Magnetoencephalography (MEG) is a noninvasive technique for measuring neuronal activity in the human brain. Electrical currents flowing through neurons generate weak magnetic fields that can be recorded at the surface of the head using very sensitive magnetic detectors known as superconducting quantum interference devices (SQUIDs).

MEG is a purely passive method that relies on detection of signals that are produced naturally by the brain. It does not involve exposure to radiation or strong magnetic fields, and there are no known hazards associated with MEG.


Magnetic signals from the brain are very small compared to the magnetic fluctuations that are produced by interfering sources such as nearby electrical equipment or moving metal objects. Therefore MEG scans are typically performed within a special magnetically shielded room that blocks this external interference.

It is fitting that MIT should have a state-of-the-art MEG scanner, since the MEG technology was pioneered by David Cohen in the early 1970s while he was a member of MIT’s Francis Bitter Magnet Laboratory.

MEG can detect the timing of magnetic signals with millisecond precision. This is the timescale on which neurons communicate, and MEG is thus well suited to measuring the rapid signals that reflect communication between different parts of the human brain.

MEG is complementary to other brain imaging modalities such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), which depend on changes in blood flow, and which have higher spatial resolution but much lower temporal resolution than MEG.

Our MEG scanner, an Elekta Neuromag Triux with 306 channels plus 128 channels for EEG, was installed in 2011 and is the first of its kind in North America. It is housed within a magnetically shielded room to reduce background noise.

The MEG lab is part of the Martinos Imaging Center at MIT, operating as a core facility accessible to all members of the local research community. Potential users should contact Dimitrios Pantazis for more information.

The MEG Lab was made possible through a grant from the National Science Foundation and through the generous support of the following donors: Thomas F. Peterson, Jr. ’57; Edward and Kay Poitras; The Simons Foundation; and an anonymous donor.