Expanding our view of vision

Every time you open your eyes, visual information flows into your brain, which interprets what you’re seeing. Now, for the first time, MIT neuroscientists have noninvasively mapped this flow of information in the human brain with unique accuracy, using a novel brain-scanning technique.

This technique, which combines two existing technologies, allows researchers to identify precisely both the location and timing of human brain activity. Using this new approach, the MIT researchers scanned individuals’ brains as they looked at different images and were able to pinpoint, to the millisecond, when the brain recognizes and categorizes an object, and where these processes occur.

“This method gives you a visualization of ‘when’ and ‘where’ at the same time. It’s a window into processes happening at the millisecond and millimeter scale,” says Aude Oliva, a principal research scientist in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

Oliva is the senior author of a paper describing the findings in the Jan. 26 issue of Nature Neuroscience. Lead author of the paper is CSAIL postdoc Radoslaw Cichy. Dimitrios Pantazis, a research scientist at MIT’s McGovern Institute for Brain Research, is also an author of the paper.

When and where

Until now, scientists have been able to observe the location or timing of human brain activity at high resolution, but not both, because different imaging techniques are not easily combined. The most commonly used type of brain scan, functional magnetic resonance imaging (fMRI), measures changes in blood flow, revealing which parts of the brain are involved in a particular task. However, it works too slowly to keep up with the brain’s millisecond-by-millisecond dynamics.

Another imaging technique, known as magnetoencephalography (MEG), uses an array of hundreds of sensors encircling the head to measure magnetic fields produced by neuronal activity in the brain. These sensors offer a dynamic portrait of brain activity over time, down to the millisecond, but do not tell the precise location of the signals.

To combine the time and location information generated by these two scanners, the researchers used a computational technique called representational similarity analysis, which relies on the fact that two similar objects (such as two human faces) that provoke similar signals in fMRI will also produce similar signals in MEG. This method has been used before to link fMRI with recordings of neuronal electrical activity in monkeys, but the MIT researchers are the first to use it to link fMRI and MEG data from human subjects.
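The logic of representational similarity analysis can be sketched in a few lines. The toy example below uses random numbers standing in for real recordings (the array shapes and variable names are illustrative, not the authors' actual pipeline): it builds a dissimilarity matrix for each modality and then correlates the two matrices rather than the raw signals.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix. patterns is (stimuli x features);
    entry (i, j) is 1 minus the correlation between responses to stimuli i and j."""
    return 1.0 - np.corrcoef(patterns)

def upper_triangle(m):
    """Off-diagonal upper triangle of a square matrix, flattened."""
    i, j = np.triu_indices(m.shape[0], k=1)
    return m[i, j]

rng = np.random.default_rng(0)
n_stimuli = 92   # number of images in the study
# Toy stand-ins: fMRI patterns over voxels, and MEG patterns over sensors
# at one time point, constructed to share some structure
fmri = rng.normal(size=(n_stimuli, 500))
meg = fmri[:, :300] + rng.normal(scale=0.5, size=(n_stimuli, 300))

# Link the two modalities through their RDMs, not their raw signals
r_link = float(np.corrcoef(upper_triangle(rdm(fmri)),
                           upper_triangle(rdm(meg)))[0, 1])
```

In practice a rank (Spearman) correlation is usually preferred, and the MEG matrix is recomputed at every millisecond; the time point at which it best matches the fMRI matrix for a region indicates when that region's representation emerges.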

In the study, the researchers scanned 16 human volunteers as they looked at a series of 92 images, including faces, animals, and natural and manmade objects. Each image was shown for half a second.

“We wanted to measure how visual information flows through the brain. It’s just pure automatic machinery that starts every time you open your eyes, and it’s incredibly fast,” Cichy says. “This is a very complex process, and we have not yet looked at higher cognitive processes that come later, such as recalling thoughts and memories when you are watching objects.”

Each subject underwent the test multiple times — twice in an fMRI scanner and twice in an MEG scanner — giving the researchers a huge set of data on the timing and location of brain activity. All of the scanning was done at the Athinoula A. Martinos Imaging Center at the McGovern Institute.

Millisecond by millisecond

By analyzing this data, the researchers produced a timeline of the brain’s object-recognition pathway that is very similar to results previously obtained by recording electrical signals in the visual cortex of monkeys, a technique that is extremely accurate but too invasive to use in humans.

About 50 milliseconds after subjects saw an image, visual information entered a part of the brain called the primary visual cortex, or V1, which recognizes basic elements of a shape, such as whether it is round or elongated. The information then flowed to the inferotemporal cortex, where the brain identified the object as early as 120 milliseconds after the image appeared. Within 160 milliseconds, all objects had been classified into categories such as plant or animal.

The MIT team’s strategy “provides a rich new source of evidence on this highly dynamic process,” says Nikolaus Kriegeskorte, a principal investigator in cognition and brain sciences at Cambridge University.

“The combination of MEG and fMRI in humans is no surrogate for invasive animal studies with techniques that simultaneously have high spatial and temporal precision, but Cichy et al. come closer to characterizing the dynamic emergence of representational geometries across stages of processing in humans than any previous work. The approach will be useful for future studies elucidating other perceptual and cognitive processes,” says Kriegeskorte, who was not part of the research team.

The MIT researchers are now using representational similarity analysis to study the accuracy of computer models of vision by comparing brain scan data with the models’ predictions of how vision works.

Using this approach, scientists should also be able to study how the human brain analyzes other types of information such as motor, verbal, or sensory signals, the researchers say. It could also shed light on processes that underlie conditions such as memory disorders or dyslexia, and could benefit patients suffering from paralysis or neurodegenerative diseases.

“This is the first time that MEG and fMRI have been connected in this way, giving us a unique perspective,” Pantazis says. “We now have the tools to precisely map brain function both in space and time, opening up tremendous possibilities to study the human brain.”

The research was funded by the National Eye Institute, the National Science Foundation, and a Feodor Lynen Research Fellowship from the Humboldt Foundation.

Even when test scores go up, some cognitive abilities don’t

To evaluate school quality, states require students to take standardized tests; in many cases, passing those tests is necessary to receive a high-school diploma. These high-stakes tests have also been shown to predict students’ future educational attainment and adult employment and income.

Such tests are designed to measure the knowledge and skills that students have acquired in school — what psychologists call “crystallized intelligence.” However, schools whose students have the highest gains on test scores do not produce similar gains in “fluid intelligence” — the ability to analyze abstract problems and think logically — according to a new study from MIT neuroscientists working with education researchers at Harvard University and Brown University.

In a study of nearly 1,400 eighth-graders in the Boston public school system, the researchers found that some schools have successfully raised their students’ scores on the Massachusetts Comprehensive Assessment System (MCAS). However, those schools had almost no effect on students’ performance on tests of fluid intelligence skills, such as working memory capacity, speed of information processing, and ability to solve abstract problems.

“Our original question was this: If you have a school that’s effectively helping kids from lower socioeconomic environments by moving up their scores and improving their chances to go to college, then are those changes accompanied by gains in additional cognitive skills?” says John Gabrieli, the Grover M. Hermann Professor of Health Sciences and Technology, professor of brain and cognitive sciences, and senior author of a forthcoming Psychological Science paper describing the findings.

Instead, the researchers found that educational practices designed to raise knowledge and boost test scores do not improve fluid intelligence. “It doesn’t seem like you get these skills for free in the way that you might hope, despite learning a lot by being a good student,” says Gabrieli, who is also a member of MIT’s McGovern Institute for Brain Research.

Measuring cognition

This study grew out of a larger effort to find measures beyond standardized tests that can predict long-term success for students. “As we started that study, it struck us that there’s been surprisingly little evaluation of different kinds of cognitive abilities and how they relate to educational outcomes,” Gabrieli says.

The data for the Psychological Science study came from students attending traditional, charter, and exam schools in Boston. Some of those schools have had great success improving their students’ MCAS scores — a boost that studies have found also translates to better performance on the SAT and Advanced Placement tests.

The researchers calculated how much of the variation in MCAS scores was due to the school that students attended. For MCAS scores in English, schools accounted for 24 percent of the variation, and they accounted for 34 percent of the math MCAS variation. However, the schools accounted for very little of the variation in fluid cognitive skills — less than 3 percent for all three skills combined.
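The "percent of the variation" figures come from partitioning score variance by school. A minimal sketch of that computation on toy numbers (eta squared, i.e. between-school sum of squares over total sum of squares; this is a simplification, not the study's full model, which also adjusted for student background):

```python
import numpy as np

def variance_explained_by_group(scores, groups):
    """Fraction of score variance accounted for by group membership
    (eta squared: between-group sum of squares / total sum of squares)."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    grand_mean = scores.mean()
    ss_total = ((scores - grand_mean) ** 2).sum()
    ss_between = 0.0
    for g in np.unique(groups):
        s = scores[groups == g]                       # scores within one school
        ss_between += len(s) * (s.mean() - grand_mean) ** 2
    return ss_between / ss_total

# Toy data: three schools whose mean scores differ sharply
scores = [78, 82, 80, 65, 60, 62, 90, 95, 93]
schools = ["A", "A", "A", "B", "B", "B", "C", "C", "C"]
share = variance_explained_by_group(scores, schools)  # close to 1 here, since
# nearly all the spread in this toy data lies between schools
```

In the study's terms: the analogous share was 24 percent for English MCAS, 34 percent for math, but under 3 percent for the fluid-intelligence measures.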

In one example of a test of fluid reasoning, students were asked to choose which of six pictures completed the missing pieces of a puzzle — a task requiring integration of information such as shape, pattern, and orientation.

“It’s not always clear which dimensions you have to pay attention to in order to get the problem correct. That’s why we call it fluid, because it’s the application of reasoning skills in novel contexts,” says Amy Finn, an MIT postdoc and lead author of the paper.

Even stronger evidence came from a comparison of about 200 students who had entered a lottery for admittance to a handful of Boston’s oversubscribed charter schools, many of which achieve strong improvement in MCAS scores. The researchers found that students who were randomly selected to attend high-performing charter schools did significantly better on the math MCAS than those who were not chosen, but there was no corresponding increase in fluid intelligence scores.

However, the researchers say their study is not about comparing charter schools and district schools. Rather, the study showed that while schools of both types varied in their impact on test scores, they did not vary in their impact on fluid cognitive skills.

“What’s nice about this study is it seems to narrow down the possibilities of what educational interventions are achieving,” says Daniel Willingham, a professor of psychology at the University of Virginia who was not part of the research team. “We’re usually primarily concerned with outcomes in schools, but the underlying mechanisms are also important.”

The researchers plan to continue tracking these students, who are now in 10th grade, to see how their academic performance and other life outcomes evolve. They have also begun to participate in a new study of high school seniors to track how their standardized test scores and cognitive abilities influence their rates of college attendance and graduation.

Implications for education

Gabrieli notes that the study should not be interpreted as critical of schools that are improving their students’ MCAS scores. “It’s valuable to push up the crystallized abilities, because if you can do more math, if you can read a paragraph and answer comprehension questions, all those things are positive,” he says.

He hopes that the findings will encourage educational policymakers to consider adding practices that enhance cognitive skills. Although many studies have shown that students’ fluid cognitive skills predict their academic performance, such skills are seldom explicitly taught.

“Schools can improve crystallized abilities, and now it might be a priority to see if there are some methods for enhancing the fluid ones as well,” Gabrieli says.

Some studies have found that educational programs that focus on improving memory, attention, executive function, and inductive reasoning can boost fluid intelligence, but there is still much disagreement over what programs are consistently effective.

The research was a collaboration with the Center for Education Policy Research at Harvard University, Transforming Education, and Brown University, and was funded by the Bill and Melinda Gates Foundation and the National Institutes of Health.

Brain scans may help diagnose dyslexia

About 10 percent of the U.S. population suffers from dyslexia, a condition that makes learning to read difficult. Dyslexia is usually diagnosed around second grade, but the results of a new study from MIT could help identify those children before they even begin reading, so they can be given extra help earlier.

The study, done with researchers at Boston Children’s Hospital, found a correlation between poor pre-reading skills in kindergartners and the size of a brain structure that connects two language-processing areas.

Previous studies have shown that in adults with poor reading skills, this structure, known as the arcuate fasciculus, is smaller and less organized than in adults who read normally. However, it was unknown whether these differences cause reading difficulties or result from a lack of reading experience.

“We were very interested in looking at children prior to reading instruction and whether you would see these kinds of differences,” says John Gabrieli, the Grover M. Hermann Professor of Health Sciences and Technology, professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

Gabrieli and Nadine Gaab, an assistant professor of pediatrics at Boston Children’s Hospital, are the senior authors of a paper describing the results in the Aug. 14 issue of the Journal of Neuroscience. Lead authors of the paper are MIT postdocs Zeynep Saygin and Elizabeth Norton.

The path to reading

The new study is part of a larger effort involving approximately 1,000 children at schools throughout Massachusetts and Rhode Island. At the beginning of kindergarten, children whose parents give permission to participate are assessed for pre-reading skills, such as being able to put words together from sounds.

“From that, we’re able to provide — at the beginning of kindergarten — a snapshot of how that child’s pre-reading abilities look relative to others in their classroom or other peers, which is a real benefit to the child’s parents and teachers,” Norton says.

The researchers then invite a subset of the children to come to MIT for brain imaging. The Journal of Neuroscience study included 40 children who had their brains scanned using a technique known as diffusion-weighted imaging, which is based on magnetic resonance imaging (MRI).

This type of imaging reveals the size and organization of the brain’s white matter — bundles of nerves that carry information between brain regions. The researchers focused on three white-matter tracts associated with reading skill, all located on the left side of the brain: the arcuate fasciculus, the inferior longitudinal fasciculus (ILF) and the superior longitudinal fasciculus (SLF).

When comparing the brain scans and the results of several different types of pre-reading tests, the researchers found a correlation between the size and organization of the arcuate fasciculus and performance on tests of phonological awareness — the ability to identify and manipulate the sounds of language.

Phonological awareness can be measured by testing how well children can segment sounds, identify them in isolation, and rearrange them to make new words. Strong phonological skills have previously been linked with ease of learning to read. “The first step in reading is to match the printed letters with the sounds of letters that you know exist in the world,” Norton says.

The researchers also tested the children on two other skills that have been shown to predict reading ability — rapid naming, which is the ability to name a series of familiar objects as quickly as you can, and the ability to name letters. They did not find any correlation between these skills and the size or organization of the white-matter structures scanned in this study.
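At its core, the analysis behind these findings is a correlation, across children, between a white-matter measure and a behavioral score. A hedged sketch with simulated numbers (using fractional anisotropy as the tract measure is an assumption for illustration; the variable names and effect sizes are invented, not the study's values):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two score vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

rng = np.random.default_rng(1)
n_children = 40                      # sample size in the study
# Toy stand-ins: tract organization and two behavioral scores
arcuate_fa = rng.normal(0.45, 0.05, n_children)                  # e.g. fractional anisotropy
phonological = 10 * arcuate_fa + rng.normal(0, 0.2, n_children)  # related by construction
rapid_naming = rng.normal(50, 8, n_children)                     # unrelated by construction

r_phon = pearson_r(arcuate_fa, phonological)   # strong positive correlation
r_rapid = pearson_r(arcuate_fa, rapid_naming)  # near zero
```

The qualitative pattern mirrors the study's result: the tract measure tracks phonological awareness but not rapid naming or letter knowledge.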

Early intervention

The left arcuate fasciculus connects Broca’s area, which is involved in speech production, and Wernicke’s area, which is involved in understanding written and spoken language. A larger and more organized arcuate fasciculus could aid in communication between those two regions, the researchers say.

Gabrieli points out that the structural differences found in the study don’t necessarily reflect genetic differences; environmental influences could also be involved. “At the moment when the children arrive at kindergarten, which is approximately when we scan them, we don’t know what factors lead to these brain differences,” he says.

The researchers plan to follow three waves of children as they progress to second grade and evaluate whether the brain measures they have identified predict poor reading skills.

“We don’t know yet how it plays out over time, and that’s the big question: Can we, through a combination of behavioral and brain measures, get a lot more accurate at seeing who will become a dyslexic child, with the hope that that would motivate aggressive interventions that would help these children right from the start, instead of waiting for them to fail?” Gabrieli says.

For at least some dyslexic children, offering extra training in phonological skills can help them improve their reading skills later on, studies have shown.

The research was funded by the National Institutes of Health, the Poitras Center for Affective Disorders Research, the Ellison Medical Foundation and the Halis Family Foundation.

Brain’s language center has multiple roles

A century and a half ago, French physician Pierre Paul Broca found that patients with damage to part of the brain’s frontal lobe were unable to speak more than a few words. Later dubbed Broca’s area, this region is believed to be critical for speech production and some aspects of language comprehension.

However, in recent years neuroscientists have observed activity in Broca’s area when people perform cognitive tasks that have nothing to do with language, such as solving math problems or holding information in working memory. Those findings have stimulated debate over whether Broca’s area is specific to language or plays a more general role in cognition.

A new study from MIT may help resolve this longstanding question. The researchers, led by Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, found that Broca’s area actually consists of two distinct subunits. One of these focuses selectively on language processing, while the other is part of a brainwide network that appears to act as a central processing unit for general cognitive functions.

“I think we’ve shown pretty convincingly that there are two distinct bits that we should not be treating as a single region, and perhaps we shouldn’t even be talking about ‘Broca’s area’ because it’s not a functional unit,” says Evelina Fedorenko, a research scientist in Kanwisher’s lab and lead author of the new study, which recently appeared in the journal Current Biology.

Kanwisher and Fedorenko are members of MIT’s Department of Brain and Cognitive Sciences and the McGovern Institute for Brain Research. John Duncan, a professor of neuroscience at the Cognition and Brain Sciences Unit of the Medical Research Council in the United Kingdom, is also an author of the paper.

A general role

Broca’s area is located in the left inferior frontal cortex, above and behind the left eye. For this study, the researchers set out to pinpoint the functions of distinct sections of Broca’s area by scanning subjects with functional magnetic resonance imaging (fMRI) as they performed a variety of cognitive tasks.

To locate language-selective areas, the researchers asked subjects to read either meaningful sentences or sequences of nonwords. A subset of Broca’s area lit up much more when the subjects processed meaningful sentences than when they had to interpret nonwords.

The researchers then measured brain activity as the subjects performed easy and difficult versions of general cognitive tasks, such as doing a math problem or holding a set of locations in memory. Parts of Broca’s area lit up during the more demanding versions of those tasks. Critically, however, these regions were spatially distinct from the regions involved in the language task.

These data allowed the researchers to map, for each subject, two distinct regions of Broca’s area — one selectively involved in language, the other involved in responding to many demanding cognitive tasks. The general region surrounds the language region, but the exact shapes and locations of the borders between the two vary from person to person.

The general-function region of Broca’s area appears to be part of a larger network sometimes called the multiple demand network, which is active when the brain is tackling a challenging task that requires a great deal of focus. This network is distributed across frontal and parietal lobes in both hemispheres of the brain, and all of its components appear to communicate with one another. The language-selective section of Broca’s area also appears to be part of a larger network devoted to language processing, spread throughout the brain’s left hemisphere.

Mapping functions

The findings provide evidence that Broca’s area should not be considered to have uniform functionality, says Peter Hagoort, a professor of cognitive neuroscience at Radboud University Nijmegen in the Netherlands. Hagoort, who was not involved in this study, adds that more work is needed to determine whether the language-selective areas might also be involved in any other aspects of cognitive function. “For instance, the language-selective region might play a role in the perception of music, which was not tested in the current study,” he says.

The researchers are now trying to determine how the components of the language network and the multiple demand network communicate internally, and how the two networks communicate with each other. They also hope to further investigate the functions of the two components of Broca’s area.

“In future studies, we should examine those subregions separately and try to characterize them in terms of their contribution to various language processes and other cognitive processes,” Fedorenko says.

The team is also working with scientists at Massachusetts General Hospital to study patients with a form of neurodegeneration that gradually causes loss of the ability to speak and understand language. This disorder, known as primary progressive aphasia, appears to selectively target the language-selective network, including the language component of Broca’s area.

The research was funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the Ellison Medical Foundation and the U.K. Medical Research Council.

Predicting how patients respond to therapy

Social anxiety is usually treated with either cognitive behavioral therapy or medications. However, it is currently impossible to predict which treatment will work best for a particular patient. A team of researchers from MIT, Boston University (BU) and Massachusetts General Hospital (MGH) has found that the effectiveness of therapy could be predicted by measuring patients’ brain activity as they looked at photos of faces, before the therapy sessions began.

The findings, published this week in the Archives of General Psychiatry, may help doctors more accurately choose treatments for social anxiety disorder, which is estimated to affect around 15 million people in the United States.

“Our vision is that some of these measures might direct individuals to treatments that are more likely to work for them,” says John Gabrieli, the Grover M. Hermann Professor of Brain and Cognitive Sciences at MIT, a member of the McGovern Institute for Brain Research and senior author of the paper.

Lead authors of the paper are MIT postdoc Oliver Doehrmann and Satrajit Ghosh, a research scientist in the McGovern Institute.

Choosing treatments

People with social anxiety disorder experience intense fear in social situations, which interferes with their ability to function in daily life. Cognitive behavioral therapy aims to change the thought and behavior patterns that lead to anxiety. For patients with social anxiety disorder, that might include learning to reverse the belief that others are watching or judging them.

The new paper is part of a larger study that MGH and BU recently ran on cognitive behavioral therapy for social anxiety, led by Mark Pollack, director of the Center for Anxiety and Traumatic Stress Disorders at MGH, and Stefan Hofmann, director of the Social Anxiety Program at BU.

“This was a chance to ask if these brain measures, taken before treatment, would be informative in ways above and beyond what physicians can measure now, and determine who would be responsive to this treatment,” Gabrieli says.

Currently, doctors might choose a treatment based on factors such as the ease of taking pills versus going to therapy, the possibility of drug side effects, or what the patient’s insurance will cover. “From a science perspective there’s very little evidence about which treatment is optimal for a person,” Gabrieli says.

The researchers used functional magnetic resonance imaging (fMRI) to image the brains of patients before and after treatment. There have been many imaging studies showing brain differences between healthy people and patients with neuropsychiatric disorders, but so far imaging has not been established as a way to predict patient response to particular treatments.

Measuring brain activity

In the new study, the researchers measured differences in brain activity as patients looked at images of angry or neutral faces. After 12 weeks of cognitive behavioral therapy, patients’ social anxiety levels were tested. The researchers found that patients who had shown a greater difference in activity in high-level visual processing areas during the face-response task showed the most improvement after therapy.
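The prediction logic amounts to fitting a simple relationship between a pre-treatment brain measure and later improvement. A toy sketch (simulated data; the variable names and effect sizes are illustrative assumptions, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(7)
n_patients = 30
# Toy stand-ins: pre-treatment activity difference (angry minus neutral faces)
# in higher-level visual areas, and symptom improvement after 12 weeks of therapy
activity_diff = rng.normal(0.0, 1.0, n_patients)
improvement = 5.0 + 4.0 * activity_diff + rng.normal(0, 2.0, n_patients)  # related by construction

# Fit a simple linear predictor: improvement ~ slope * activity + intercept
slope, intercept = np.polyfit(activity_diff, improvement, 1)
predicted = slope * activity_diff + intercept
r = float(np.corrcoef(improvement, predicted)[0, 1])   # in-sample fit quality
```

A genuine predictive claim would require evaluating the fit on held-out patients, for example with leave-one-out cross-validation, rather than this in-sample correlation.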

The findings are an important step towards improving doctors’ ability to choose the right treatment for psychiatric disorders, says Greg Siegle, associate professor of psychiatry at the University of Pittsburgh. “It’s really critical that somebody do this work, and they did it very well,” says Siegle, who was not part of the research team. “It moves the field forward, and brings psychology into more of a rigorous science, using neuroscience to distinguish between clinical cases that at first appear homogeneous.”

Gabrieli says it’s unclear why activity in brain regions involved with visual processing would be a good predictor of treatment outcome. One possibility is that patients who benefited more were those whose brains were already adept at segregating different types of experiences, Gabrieli says.

The researchers are now planning a follow-up study to investigate whether brain scans can predict differences in response between cognitive behavioral therapy and drug treatment.

“Right now, all by itself, we’re just giving somebody encouraging or discouraging news about the likely outcome of therapy,” Gabrieli says. “The really valuable thing would be if it turns out to be differentially sensitive to different treatment choices.”

The research was funded by the Poitras Center for Affective Disorders Research and the National Institute of Mental Health.

Thinking about others is not child’s play

When you try to read other people’s thoughts, or guess why they are behaving a certain way, you employ a skill known as theory of mind. This skill, as measured by false-belief tests, takes time to develop: In children, it doesn’t start appearing until the age of 4 or 5.

Several years ago, MIT neuroscientist Rebecca Saxe showed that in adults, theory of mind is seated in a specific brain region known as the right temporo-parietal junction (TPJ). Saxe and colleagues at MIT have now shown how brain activity in the TPJ changes as children learn to reason about others’ thoughts and feelings.

The findings suggest that the right TPJ becomes more specific to theory of mind as children age, taking on adult patterns of activity over time. The researchers also showed that the more selectively the right TPJ is activated when children listen to stories about other people’s thoughts, the better those children perform in tasks that require theory of mind.

The paper, published in the July 31 online edition of the journal Child Development, lays the groundwork for exploring theory-of-mind impairments in autistic children, says Hyowon Gweon, a graduate student in Saxe’s lab and lead author of the paper.

“Given that we know this is what typically developing kids show, the next question to ask is how it compares to autistic children who exhibit marked impairments in their ability to think about other people’s minds,” Gweon says. “Do they show differences from typically developing kids in their neural activity?”

Saxe, an associate professor of brain and cognitive sciences and associate member of MIT’s McGovern Institute for Brain Research, is senior author of the Child Development paper. Other authors are Marina Bedny, a postdoc in Saxe’s lab, and David Dodell-Feder, a graduate student at Harvard University.

Tracking theory of mind

The classic test for theory of mind is the false-belief test, sometimes called the Sally-Anne test. Experimenters often use dolls or puppets to perform a short skit: Sally takes a marble and hides it in her basket, then leaves the room. Anne then removes the marble and puts it in her own box. When Sally returns, the child watching the skit is asked: Where will Sally look for her marble?

Children with well-developed theory of mind realize that Sally will look where she thinks the marble is: her own basket. However, before children develop this skill, they don’t realize that Sally’s beliefs may not correspond to reality. Therefore, they believe she will look for the marble where it actually is, in Anne’s box.

Previous studies have shown that children start making accurate predictions in the false-belief test around age 4 — but this happens much later, if ever, in autistic children.

In this study, the researchers used functional magnetic resonance imaging (fMRI) to look for a link between the development of theory of mind and changes in neural activity in the TPJ. They studied 20 children, ranging from 5 to 11 years old.

Each child participated in two sets of experiments. First, the child was scanned in the MRI machine as he or she listened to different types of stories. One type focused on people’s mental states, another also focused on people but only on their physical appearances or actions, and a third type of story focused on physical objects.

The researchers measured activity across the brain as the children listened to the different stories. By subtracting the neural activity recorded as the children listened to stories about physical objects from the activity recorded as they listened to stories about people’s mental states, the researchers could determine which brain regions were selectively involved in interpreting mental states.
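That subtraction is the classic fMRI contrast. A toy sketch with simulated voxel responses (the voxel counts and threshold are illustrative; real analyses use proper statistical tests rather than a raw cutoff):

```python
import numpy as np

rng = np.random.default_rng(3)
n_voxels = 1000
# Toy activity maps: trials x voxels for two story conditions
mental = rng.normal(0.0, 1.0, (40, n_voxels))
physical = rng.normal(0.0, 1.0, (40, n_voxels))
mental[:, :50] += 1.5   # by construction, the first 50 voxels respond
                        # selectively to mental-state stories

# The subtraction logic: average each condition, then difference per voxel
contrast = mental.mean(axis=0) - physical.mean(axis=0)
selective = np.where(contrast > 0.75)[0]   # voxels driven by mental-state content
```

Running this recovers essentially the 50 planted "mental-state" voxels, which is the same logic the study applies to find regions such as the right TPJ.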

In younger children, both the left and right TPJ were active in response to stories about people’s mental states, but they were also active when the children listened to stories about people’s appearances or actions. However, in older children, both regions became more specifically tuned to interpreting people’s thoughts and emotions, and were no longer responsive to people’s appearances or actions.

For the second task, done outside of the scanner, the researchers gave children tests similar to the classic Sally-Anne test, as well as harder questions that required making moral judgments, to measure their theory-of-mind abilities. They found that the degree to which activity in the right TPJ was specific to others’ mental states correlated with the children’s performance in theory-of-mind tasks.

Kristin Lagattuta, an associate professor of psychology at the University of California at Davis, says the paper makes an important contribution to understanding how theory of mind develops in older children. “Getting more insight into the neural basis of the behavioral development we’re seeing at these ages is exciting,” says Lagattuta, who was not involved in the research.

In an ongoing study of autistic children undergoing the same type of tests, the researchers hope to learn more about the neural basis of the theory-of-mind impairments seen in autistic children.

“So little is known about differences in neural mechanisms that contribute to these kinds of impairments,” Gweon says. “Understanding the developmental changes in brain regions related to theory of mind is going to be critical to think of measures that can help them in the real world.”

The research was funded by the Ellison Medical Foundation, the Packard Foundation, the John Merck Scholars Program, a National Science Foundation Career Award and an Ewha 21st Century Scholarship.

Detecting the brain’s magnetic signals with MEG

Magnetoencephalography (MEG) is a noninvasive technique for measuring neuronal activity in the human brain. Electrical currents flowing through neurons generate weak magnetic fields that can be recorded at the surface of the head using very sensitive magnetic detectors known as superconducting quantum interference devices (SQUIDs).

MEG is a purely passive method that relies on detection of signals that are produced naturally by the brain. It does not involve exposure to radiation or strong magnetic fields, and there are no known hazards associated with MEG.

MEG was developed at MIT in the early 1970s by physicist David Cohen. (Photo: David Cohen)

Magnetic signals from the brain are very small compared with the magnetic fluctuations produced by interfering sources such as nearby electrical equipment or moving metal objects. MEG scans are therefore typically performed within a special magnetically shielded room that blocks this external interference.

It is fitting that MIT should have a state-of-the-art MEG scanner, since the MEG technology was pioneered by David Cohen in the early 1970s while he was a member of MIT’s Francis Bitter Magnet Laboratory.

MEG can detect the timing of magnetic signals with millisecond precision. This is the timescale on which neurons communicate, and MEG is thus well suited to measuring the rapid signals that reflect communication between different parts of the human brain.

MEG is complementary to other brain imaging modalities such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), which depend on changes in blood flow, and which have higher spatial resolution but much lower temporal resolution than MEG.

Our MEG scanner, an Elekta Neuromag Triux with 306 channels plus 128 channels for EEG, was installed in 2011 and is the first of its kind in North America. It is housed within a magnetically shielded room to reduce background noise.

The MEG lab is part of the Martinos Imaging Center at MIT. It operates as a core facility and is accessible to all members of the local research community. Potential users should contact Dimitrios Pantazis for more information.

The MEG Lab was made possible through a grant from the National Science Foundation and through the generous support of the following donors: Thomas F. Peterson, Jr. ’57; Edward and Kay Poitras; The Simons Foundation; and an anonymous donor.

Faces have a special place in the brain

Are you tempted to trade in last year’s digital camera for a newer model with even more megapixels? Researchers who make images of the human brain have the same obsession with increasing their pixel count, which increases the sharpness (or “spatial resolution”) of their images. And improvements in spatial resolution are happening as fast in brain imaging research as they are in digital camera technology.

Nancy Kanwisher, Rebecca Frye Schwarzlose and Christopher Baker at the McGovern Institute for Brain Research at MIT are now using their higher-resolution scans to produce much more detailed images of the brain than were possible just a couple of years ago. Just as “hi-def” TV shows clearer views of a football game, these finely grained images are providing new answers to some very old questions in brain research.

One such question is whether the brain is composed of highly specialized parts, each optimized to perform a single, very specific function, or whether it is instead a general-purpose device that handles many tasks but specializes in none.

Using the higher-resolution scans, the Kanwisher team now provides some of the strongest evidence ever reported for extreme specialization. Their study appeared in the Nov. 23 issue of the Journal of Neuroscience.

The study focuses on face recognition, long considered an example of brain specialization. In the 1990s, researchers including Kanwisher identified a region known as the fusiform face area (FFA) as a potential brain center for face recognition. They pointed to evidence from brain-imaging experiments, and to the fact that people with damage to this brain region cannot recognize faces, even those of their family and closest friends.

However, more recent brain-imaging experiments have challenged this claimed specialization by showing that this region also responds strongly when people see images of bodies and body parts, not just faces. The new study now answers this challenge and supports the original specialization theory.

Schwarzlose suspected that the strong response of the face area to both faces and bodies might result from the blurring together of two distinct but neighboring brain regions that are too close together to distinguish at standard scanning resolutions.

To test this idea, Schwarzlose and her colleagues increased the resolution of their images (like increasing the megapixels on a digital camera) ten-fold to get sharper images of brain function. Indeed, at this higher resolution they could clearly distinguish two neighboring regions. One was primarily active when people saw faces (not bodies), and the other when people saw bodies (not faces).
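A one-dimensional analogy shows why the resolution increase matters: two nearby sources of activity blur into a single peak when the signal is averaged over coarse “voxels,” but separate cleanly at ten times the resolution. The peak positions, widths, and amplitudes below are arbitrary, chosen only to illustrate the effect:

```python
import numpy as np

# Two hypothetical selective regions along one spatial dimension.
x = np.linspace(0, 10, 1000)
face_region = np.exp(-((x - 4.65) ** 2) / 0.1)        # arbitrary peak
body_region = 0.8 * np.exp(-((x - 5.35) ** 2) / 0.1)  # arbitrary peak
signal = face_region + body_region

def downsample(sig, n_bins):
    """Average the signal into n_bins equal-width bins (coarse voxels)."""
    return sig.reshape(n_bins, -1).mean(axis=1)

def count_peaks(sig):
    """Count interior local maxima that exceed half the global maximum."""
    interior = (sig[1:-1] > sig[:-2]) & (sig[1:-1] > sig[2:])
    return int(np.sum(interior & (sig[1:-1] > 0.5 * sig.max())))

coarse = downsample(signal, 10)   # low resolution: regions merge
fine = downsample(signal, 100)    # 10x resolution: regions separate
print(count_peaks(coarse), count_peaks(fine))
```

At the coarse binning the two peaks fall into adjacent wide bins and register as one blob; at ten-fold finer binning each peak is resolved on its own, which is the same qualitative story as distinguishing the face and body regions.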

This finding supports the original claim that the face area is in fact dedicated exclusively to face processing. The results further demonstrate a similar degree of specialization for the new “body region” next door.

The team’s new discovery highlights the importance of improved spatial resolution in studying the structure of the human brain. Just as a higher megapixel digital camera can show greater detail, new brain imaging methods are revealing the finer-grained structure of the human brain. Schwarzlose and her colleagues plan to use the new scanning methods to look for even finer levels of organization within the newly distinguished face and body areas. They also want to figure out how and why the brain regions for faces and bodies land next to each other in the first place.

Kanwisher is the Ellen Swallow Richards Professor of Cognitive Neuroscience. Her colleagues on this work are Schwarzlose, a graduate student in brain and cognitive sciences, and Baker, a postdoctoral researcher in the department.

The research was supported by the National Institutes of Health, the National Center for Research Resources, the Mind Institute, and the National Science Foundation’s Graduate Research Fellowship Program.