Study finds altered brain chemistry in people with autism

MIT and Harvard University neuroscientists have found a link between a behavioral symptom of autism and reduced activity of a neurotransmitter whose job is to dampen neuron excitation. The findings suggest that drugs that boost the action of this neurotransmitter, known as GABA, may improve some of the symptoms of autism, the researchers say.

Brain activity is controlled by a constant interplay of inhibition and excitation, which is mediated by different neurotransmitters. GABA is one of the most important inhibitory neurotransmitters, and studies of animals with autism-like symptoms have found reduced GABA activity in the brain. However, until now, there has been no direct evidence for such a link in humans.

“This is the first connection in humans between a neurotransmitter in the brain and an autistic behavioral symptom,” says Caroline Robertson, a postdoc at MIT’s McGovern Institute for Brain Research and a junior fellow of the Harvard Society of Fellows. “It’s possible that increasing GABA would help to ameliorate some of the symptoms of autism, but more work needs to be done.”

Robertson is the lead author of the study, which appears in the Dec. 17 online edition of Current Biology. The paper’s senior author is Nancy Kanwisher, the Walter A. Rosenblith Professor of Brain and Cognitive Sciences and a member of the McGovern Institute. Eva-Maria Ratai, an assistant professor of radiology at Massachusetts General Hospital, also contributed to the research.

Too little inhibition

Many symptoms of autism arise from hypersensitivity to sensory input. For example, children with autism are often very sensitive to things that wouldn’t bother other children as much, such as someone talking elsewhere in the room, or a scratchy sweater. Scientists have speculated that reduced brain inhibition might underlie this hypersensitivity by making it harder to tune out distracting sensations.

In this study, the researchers explored a visual task known as binocular rivalry, which requires brain inhibition and has been shown to be more difficult for people with autism. During the task, researchers show each participant two different images, one to each eye. To see the images, the brain must switch back and forth between input from the right and left eyes.

For the participant, it looks as though the two images are fading in and out, as input from each eye takes its turn inhibiting the input coming in from the other eye.

“Everybody has a different rate at which the brain naturally oscillates between these two images, and that rate is thought to map onto the strength of the inhibitory circuitry between these two populations of cells,” Robertson says.

She found that nonautistic adults switched back and forth between the images nine times per minute, on average, and one of the images fully suppressed the other about 70 percent of the time. However, autistic adults switched back and forth only half as often as nonautistic subjects, and one of the images fully suppressed the other only about 50 percent of the time.
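
To make these two measures concrete, here is a minimal sketch of how a switch rate and a suppression index could be computed from a participant's perceptual reports. The sampling rate, report labels, and data below are hypothetical stand-ins, not the study's actual materials.

```python
import numpy as np

SAMPLE_HZ = 4  # hypothetical rate at which percepts are reported
rng = np.random.default_rng(0)

# Hypothetical 60-second run: at each sample the participant reports
# seeing the left image, the right image, or a mixed percept.
reports = rng.choice(["left", "right", "mixed"],
                     size=60 * SAMPLE_HZ, p=[0.35, 0.35, 0.30])

# Switch rate: how often the dominant percept changes, per minute.
dominant = reports[reports != "mixed"]
switches = np.sum(dominant[1:] != dominant[:-1])
minutes = reports.size / SAMPLE_HZ / 60
print(f"switches per minute: {switches / minutes:.1f}")

# Suppression: fraction of time one image fully suppresses the other,
# the quantity reported as about 70 percent in nonautistic adults and
# about 50 percent in autistic adults.
print(f"fully suppressed: {np.mean(reports != 'mixed'):.0%}")
```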

Performance on this task was also linked to participants’ scores on a clinical evaluation of communication and social interaction used to diagnose autism: Worse symptoms correlated with weaker inhibition during the visual task.

The researchers then measured GABA activity using a technique known as magnetic resonance spectroscopy, as autistic and typical subjects performed the binocular rivalry task. In nonautistic participants, higher levels of GABA correlated with a better ability to suppress the nondominant image. But in autistic subjects, there was no relationship between performance and GABA levels. This suggests that GABA is present in the brain but is not performing its usual function in autistic individuals, Robertson says.
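
That group-level pattern can be illustrated with a toy analysis: correlate GABA concentration with suppression strength separately within each group. All numbers below are synthetic and chosen only to mimic the reported pattern, a coupling in nonautistic participants and none in autistic participants.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 20  # hypothetical participants per group

# Synthetic data mimicking the reported pattern: similar GABA levels in
# both groups, but suppression tracks GABA only in the nonautistic group.
gaba_na = rng.normal(2.0, 0.2, n)
supp_na = 0.3 * gaba_na + rng.normal(0, 0.04, n)  # coupled
gaba_au = rng.normal(2.0, 0.2, n)
supp_au = rng.normal(0.6, 0.08, n)                # decoupled

for label, gaba, supp in [("nonautistic", gaba_na, supp_na),
                          ("autistic", gaba_au, supp_au)]:
    r, p = pearsonr(gaba, supp)
    print(f"{label}: r = {r:+.2f}, p = {p:.3f}")
```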

“GABA is not reduced in the autistic brain, but the action of this inhibitory pathway is reduced,” she says. “The next step is figuring out which part of the pathway is disrupted.”

“This is a really great piece of work,” says Richard Edden, an associate professor of radiology at the Johns Hopkins University School of Medicine. “The role of inhibitory dysfunction in autism is strongly debated, with different camps arguing for elevated and reduced inhibition. This kind of study, which seeks to relate measures of inhibition directly to quantitative measures of function, is what we really need to tease things out.”

Early diagnosis

In addition to offering a possible new drug target, the new finding may also help researchers develop better diagnostic tools for autism, which is now diagnosed by evaluating children’s social interactions. To that end, Robertson is investigating the possibility of using EEG scans to measure brain responses during the binocular rivalry task.

“If autism does trace back on some level to circuitry differences that affect the visual cortex, you can measure those things in a kid who’s even nonverbal, as long as he can see,” she says. “We’d like it to move toward being useful for early diagnostic screenings.”

Music in the brain

Scientists have long wondered if the human brain contains neural mechanisms specific to music perception. Now, for the first time, MIT neuroscientists have identified a neural population in the human auditory cortex that responds selectively to sounds that people typically categorize as music, but not to speech or other environmental sounds.

“It has been the subject of widespread speculation,” says Josh McDermott, the Frederick A. and Carole J. Middleton Assistant Professor of Neuroscience in the Department of Brain and Cognitive Sciences at MIT. “One of the core debates surrounding music is to what extent it has dedicated mechanisms in the brain and to what extent it piggybacks off of mechanisms that primarily serve other functions.”

The finding was enabled by a new method designed to identify neural populations from functional magnetic resonance imaging (fMRI) data. Using this method, the researchers identified six neural populations with different functions, including the music-selective population and another set of neurons that responds selectively to speech.

“The music result is notable because people had not been able to clearly see highly selective responses to music before,” says Sam Norman-Haignere, a postdoc at MIT’s McGovern Institute for Brain Research.

“Our findings are hard to reconcile with the idea that music piggybacks entirely on neural machinery that is optimized for other functions, because the neural responses we see are highly specific to music,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research.

Norman-Haignere is the lead author of a paper describing the findings in the Dec. 16 online edition of Neuron. McDermott and Kanwisher are the paper’s senior authors.

Mapping responses to sound

For this study, the researchers scanned the brains of 10 human subjects listening to 165 natural sounds, including different types of speech and music, as well as everyday sounds such as footsteps, a car engine starting, and a telephone ringing.

The brain’s auditory system has proven difficult to map, in part because of the coarse spatial resolution of fMRI, which measures blood flow as an index of neural activity. In fMRI, “voxels” — the smallest units of measurement — each reflect the response of hundreds of thousands or millions of neurons.

“As a result, when you measure raw voxel responses you’re measuring something that reflects a mixture of underlying neural responses,” Norman-Haignere says.

To tease apart these responses, the researchers used a technique that models each voxel as a mixture of multiple underlying neural responses. Using this method, they identified the six neural populations that best explained the data, each with a distinct response pattern to the sounds in the experiment.

“What we found is we could explain a lot of the response variation across tens of thousands of voxels with just six response patterns,” Norman-Haignere says.

One population responded most to music, another to speech, and the other four to different acoustic properties such as pitch and frequency.
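
The paper introduces its own decomposition method, but the core idea, expressing each voxel’s responses as a weighted mixture of a few shared response profiles, can be sketched with off-the-shelf non-negative matrix factorization. Apart from the matrix dimensions, everything below is synthetic, and NMF merely stands in for the authors’ actual algorithm.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
n_voxels, n_sounds, n_components = 10_000, 165, 6

# Synthetic stand-in: voxel responses built as nonnegative mixtures of
# six underlying component response profiles, plus a little noise.
true_profiles = rng.gamma(2.0, 1.0, (n_components, n_sounds))
true_weights = rng.gamma(1.0, 1.0, (n_voxels, n_components))
responses = (true_weights @ true_profiles
             + rng.normal(0, 0.1, (n_voxels, n_sounds))).clip(min=0)

# Factor the voxel-by-sound matrix into per-voxel weights and shared
# response profiles; the paper's own decomposition differs in detail.
model = NMF(n_components=n_components, init="nndsvda",
            max_iter=400, random_state=0)
voxel_weights = model.fit_transform(responses)  # shape (n_voxels, 6)
profiles = model.components_                    # shape (6, n_sounds)
print("recovered response profiles:", profiles.shape)
```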

The key to this advance is the researchers’ new approach to analyzing fMRI data, says Josef Rauschecker, a professor of physiology and biophysics at Georgetown University.

“The whole field is interested in finding specialized areas like those that have been found in the visual cortex, but the problem is the voxel is just not small enough. You have hundreds of thousands of neurons in a voxel, and how do you separate the information they’re encoding? This is a study of the highest caliber of data analysis,” says Rauschecker, who was not part of the research team.

Layers of sound processing

The four acoustically responsive neural populations overlap with regions of “primary” auditory cortex, which performs the first stage of cortical processing of sound. Speech- and music-selective neural populations lie beyond this primary region.

“We think this provides evidence that there’s a hierarchy of processing where there are responses to relatively simple acoustic dimensions in this primary auditory area. That’s followed by a second stage of processing that represents more abstract properties of sound related to speech and music,” Norman-Haignere says.

The researchers believe there may be other brain regions involved in processing music, including its emotional components. “It’s inappropriate at this point to conclude that this is the seat of music in the brain,” McDermott says. “This is where you see most of the responses within the auditory cortex, but there’s a lot of the brain that we didn’t even look at.”

Kanwisher also notes that “the existence of music-selective responses in the brain does not imply that the responses reflect an innate brain system. An important question for the future will be how this system arises in development: How early is it found in infancy or childhood, and how dependent is it on experience?”

The researchers are now investigating whether the music-selective population identified in this study contains subpopulations of neurons that respond to different aspects of music, including rhythm, melody, and beat. They also hope to study how musical experience and training might affect this neural population.
How the brain pays attention

Picking out a face in the crowd is a complicated task: Your brain has to retrieve the memory of the face you’re seeking, then hold it in place while scanning the crowd, paying special attention to finding a match.

A new study by MIT neuroscientists reveals how the brain achieves this type of focused attention on faces or other objects: A part of the prefrontal cortex known as the inferior frontal junction (IFJ) controls visual processing areas that are tuned to recognize a specific category of objects, the researchers report in the April 10 online edition of Science.

Scientists know much less about this type of attention, known as object-based attention, than about spatial attention, which involves focusing on what’s happening in a particular location. However, the new findings suggest that these two types of attention have similar mechanisms involving related brain regions, says Robert Desimone, the Doris and Don Berkey Professor of Neuroscience, director of MIT’s McGovern Institute for Brain Research, and senior author of the paper.

“The interactions are surprisingly similar to those seen in spatial attention,” Desimone says. “It seems like it’s a parallel process involving different areas.”

In both cases, the prefrontal cortex — the control center for most cognitive functions — appears to take charge of the brain’s attention and control relevant parts of the visual cortex, which receives sensory input. For spatial attention, that involves regions of the visual cortex that map to a particular area within the visual field.

In the new study, the researchers found that IFJ coordinates with a brain region that processes faces, known as the fusiform face area (FFA), and a region that interprets information about places, known as the parahippocampal place area (PPA). The FFA and PPA were first identified in the human cortex by Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT.

The IFJ has previously been implicated in a cognitive ability known as working memory, which is what allows us to gather and coordinate information while performing a task — such as remembering and dialing a phone number, or doing a math problem.

For this study, the researchers used magnetoencephalography (MEG) to scan human subjects as they viewed a series of overlapping images of faces and houses. Unlike functional magnetic resonance imaging (fMRI), which is commonly used to measure brain activity, MEG can reveal the precise timing of neural activity, down to the millisecond. The researchers presented the overlapping streams at two different rhythms — two images per second and 1.5 images per second — allowing them to identify brain regions responding to those stimuli.

“We wanted to frequency-tag each stimulus with different rhythms. When you look at all of the brain activity, you can tell apart signals that are engaged in processing each stimulus,” says Daniel Baldauf, a postdoc at the McGovern Institute and the lead author of the paper.
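
A toy version of frequency tagging: build a sensor signal containing responses entrained at the two stimulus rates, then read off the power at each tagged frequency. The sampling rate, amplitudes, noise level, and the assignment of rates to streams are all illustrative assumptions.

```python
import numpy as np

FS = 600.0                    # illustrative MEG sampling rate, in Hz
t = np.arange(0, 30, 1 / FS)  # 30 seconds of signal
rng = np.random.default_rng(3)

# Hypothetical sensor trace: the attended stream (tagged here at 1.5 Hz)
# drives a larger entrained response than the unattended stream (2 Hz).
signal = (1.0 * np.sin(2 * np.pi * 1.5 * t)
          + 0.4 * np.sin(2 * np.pi * 2.0 * t)
          + rng.normal(0, 1.0, t.size))

# The tagged frequencies stand out in the power spectrum, so each
# stimulus stream can be tracked separately through the brain.
power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / FS)
for f in (1.5, 2.0):
    print(f"power at {f} Hz: {power[np.argmin(np.abs(freqs - f))]:.0f}")
```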

Each subject was told to pay attention to either faces or houses; because the houses and faces were in the same spot, the brain could not use spatial information to distinguish them. When the subjects were told to look for faces, activity in the FFA and the IFJ became synchronized, suggesting that they were communicating with each other. When the subjects paid attention to houses, the IFJ synchronized instead with the PPA.

The researchers also found that the communication was initiated by the IFJ and the activity was staggered by 20 milliseconds — about the amount of time it would take for neurons to electrically convey information from the IFJ to either the FFA or PPA. The researchers believe that the IFJ holds onto the idea of the object that the brain is looking for and directs the correct part of the brain to look for it.

Further bolstering this idea, the researchers used an MRI-based method to measure the white matter that connects different brain regions and found that the IFJ is highly connected with both the FFA and PPA.
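
One generic way to expose such a stagger is to cross-correlate the two regions’ time courses and find the delay at which they align best. The sketch below recovers a built-in 20-millisecond delay from synthetic signals; it illustrates the idea rather than the paper’s actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)

# 1 kHz sampling, so one sample corresponds to one millisecond.
ifj = rng.normal(0, 1, 5000)                           # hypothetical IFJ trace
ffa = np.roll(ifj, 20) + rng.normal(0, 0.5, ifj.size)  # delayed noisy copy

# Scan candidate lags; the correlation peaks at the true 20 ms delay.
best = max(range(-100, 101),
           key=lambda lag: np.corrcoef(np.roll(ifj, lag), ffa)[0, 1])
print(f"estimated IFJ -> FFA lag: {best} ms")
```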

Members of Desimone’s lab are now studying how the brain shifts its focus between different types of sensory input, such as vision and hearing. They are also investigating whether it might be possible to train people to better focus their attention by controlling the brain interactions involved in this process.

“You have to identify the basic neural mechanisms and do basic research studies, which sometimes generate ideas for things that could be of practical benefit,” Desimone says. “It’s too early to say whether this training is even going to work at all, but it’s something that we’re actively pursuing.”

The research was funded by the National Institutes of Health and the National Science Foundation.

Brain’s language center has multiple roles

A century and a half ago, French physician Pierre Paul Broca found that patients with damage to part of the brain’s frontal lobe were unable to speak more than a few words. Later dubbed Broca’s area, this region is believed to be critical for speech production and some aspects of language comprehension.

However, in recent years neuroscientists have observed activity in Broca’s area when people perform cognitive tasks that have nothing to do with language, such as solving math problems or holding information in working memory. Those findings have stimulated debate over whether Broca’s area is specific to language or plays a more general role in cognition.

A new study from MIT may help resolve this longstanding question. The researchers, led by Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, found that Broca’s area actually consists of two distinct subunits. One of these focuses selectively on language processing, while the other is part of a brainwide network that appears to act as a central processing unit for general cognitive functions.

“I think we’ve shown pretty convincingly that there are two distinct bits that we should not be treating as a single region, and perhaps we shouldn’t even be talking about ‘Broca’s area’ because it’s not a functional unit,” says Evelina Fedorenko, a research scientist in Kanwisher’s lab and lead author of the new study, which recently appeared in the journal Current Biology.

Kanwisher and Fedorenko are members of MIT’s Department of Brain and Cognitive Sciences and the McGovern Institute for Brain Research. John Duncan, a professor of neuroscience at the Cognition and Brain Sciences Unit of the Medical Research Council in the United Kingdom, is also an author of the paper.

A general role

Broca’s area is located in the left inferior frontal cortex, above and behind the left eye. For this study, the researchers set out to pinpoint the functions of distinct sections of Broca’s area by scanning subjects with functional magnetic resonance imaging (fMRI) as they performed a variety of cognitive tasks.

To locate language-selective areas, the researchers asked subjects to read either meaningful sentences or sequences of nonwords. A subset of Broca’s area lit up much more when the subjects processed meaningful sentences than when they had to interpret nonwords.

The researchers then measured brain activity as the subjects performed easy and difficult versions of general cognitive tasks, such as doing a math problem or holding a set of locations in memory. Parts of Broca’s area lit up during the more demanding versions of those tasks. Critically, however, these regions were spatially distinct from the regions involved in the language task.

These data allowed the researchers to map, for each subject, two distinct regions of Broca’s area — one selectively involved in language, the other involved in responding to many demanding cognitive tasks. The general region surrounds the language region, but the exact shapes and locations of the borders between the two vary from person to person.
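
The localizer logic amounts to a voxelwise contrast: test, for every voxel, whether responses to sentences reliably exceed responses to nonwords, then threshold. The trial counts, effect size, and threshold below are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_voxels, n_trials = 2000, 40

# Synthetic responses: a small set of "language" voxels respond more
# strongly to sentences than to nonword sequences; the rest do not.
is_language = np.zeros(n_voxels, dtype=bool)
is_language[:150] = True
sentences = rng.normal(0, 1, (n_voxels, n_trials))
sentences[is_language] += 1.0
nonwords = rng.normal(0, 1, (n_voxels, n_trials))

# Voxelwise sentences > nonwords contrast, thresholded to define this
# simulated subject's language-selective region.
t, p = stats.ttest_ind(sentences, nonwords, axis=1)
selected = (t > 0) & (p < 0.001)
print(f"voxels in the language-selective region: {selected.sum()}")
```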

The general-function region of Broca’s area appears to be part of a larger network sometimes called the multiple demand network, which is active when the brain is tackling a challenging task that requires a great deal of focus. This network is distributed across frontal and parietal lobes in both hemispheres of the brain, and all of its components appear to communicate with one another. The language-selective section of Broca’s area also appears to be part of a larger network devoted to language processing, spread throughout the brain’s left hemisphere.

Mapping functions

The findings provide evidence that Broca’s area should not be considered to have uniform functionality, says Peter Hagoort, a professor of cognitive neuroscience at Radboud University Nijmegen in the Netherlands. Hagoort, who was not involved in this study, adds that more work is needed to determine whether the language-selective areas might also be involved in any other aspects of cognitive function. “For instance, the language-selective region might play a role in the perception of music, which was not tested in the current study,” he says.

The researchers are now trying to determine how the components of the language network and the multiple demand network communicate internally, and how the two networks communicate with each other. They also hope to further investigate the functions of the two components of Broca’s area.

“In future studies, we should examine those subregions separately and try to characterize them in terms of their contribution to various language processes and other cognitive processes,” Fedorenko says.

The team is also working with scientists at Massachusetts General Hospital to study patients with a form of neurodegeneration that gradually causes loss of the ability to speak and understand language. This disorder, known as primary progressive aphasia, appears to selectively target the language-selective network, including the language component of Broca’s area.

The research was funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the Ellison Medical Foundation and the U.K. Medical Research Council.

Faces have a special place in the brain

Are you tempted to trade in last year’s digital camera for a newer model with even more megapixels? Researchers who make images of the human brain have the same obsession with increasing their pixel count, which increases the sharpness (or “spatial resolution”) of their images. And improvements in spatial resolution are happening as fast in brain imaging research as they are in digital camera technology.

Nancy Kanwisher, Rebecca Frye Schwarzlose and Christopher Baker at the McGovern Institute for Brain Research at MIT are now using their higher-resolution scans to produce much more detailed images of the brain than were possible just a couple of years ago. Just as “hi-def” TV shows clearer views of a football game, these finely grained images are providing new answers to some very old questions in brain research.

One such question hinges on whether the brain is composed of highly specialized parts, each optimized to perform a single, very specific function, or whether it is instead a general-purpose device that handles many tasks but specializes in none.

Using the higher-resolution scans, the Kanwisher team now provides some of the strongest evidence ever reported for extreme specialization. Their study appeared in the Nov. 23 issue of the Journal of Neuroscience.

The study focuses on face recognition, long considered an example of brain specialization. In the 1990s, researchers including Kanwisher identified a region known as the fusiform face area (FFA) as a potential brain center for face recognition. They pointed to evidence from brain-imaging experiments, and to the fact that people with damage to this brain region cannot recognize faces, even those of their family and closest friends.

However, more recent brain-imaging experiments have challenged this claimed specialization by showing that this region also responds strongly when people see images of bodies and body parts, not just faces. The new study now answers this challenge and supports the original specialization theory.

Schwarzlose suspected that the strong response of the face area to both faces and bodies might result from the blurring together of two distinct but neighboring brain regions that are too close together to distinguish at standard scanning resolutions.

To test this idea, Schwarzlose and her colleagues increased the resolution of their images (like increasing the megapixels on a digital camera) ten-fold to get sharper images of brain function. Indeed, at this higher resolution they could clearly distinguish two neighboring regions. One was primarily active when people saw faces (not bodies), and the other when people saw bodies (not faces).
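
The blurring argument is easy to simulate: place a face-selective strip next to a body-selective strip and average the underlying selectivity within voxels of different sizes. The strip widths, response values, and the particular resolutions below are made up for illustration.

```python
import numpy as np

STEP = 0.5                  # fine spatial grid spacing, in mm
x = np.arange(0, 12, STEP)  # 12 mm of cortex holding two 6 mm strips

# Illustrative selectivity: a face strip (0-6 mm) beside a body strip.
face_resp = np.where(x < 6, 1.0, 0.1)
body_resp = np.where(x < 6, 0.1, 1.0)

def voxelize(profile, voxel_mm):
    """Average the fine-grained profile within voxels of a given size."""
    per_voxel = int(voxel_mm / STEP)
    return profile.reshape(-1, per_voxel).mean(axis=1)

# One coarse voxel spanning both strips appears to respond to faces and
# bodies alike; fine voxels separate the two regions cleanly.
for mm in (12, 1):
    face, body = voxelize(face_resp, mm), voxelize(body_resp, mm)
    print(f"{mm:>2} mm voxels  face: {np.round(face, 2)}  body: {np.round(body, 2)}")
```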

This finding supports the original claim that the face area is in fact dedicated exclusively to face processing. The results further demonstrate a similar degree of specialization for the new “body region” next door.

The team’s new discovery highlights the importance of improved spatial resolution in studying the structure of the human brain. Just as a higher megapixel digital camera can show greater detail, new brain imaging methods are revealing the finer-grained structure of the human brain. Schwarzlose and her colleagues plan to use the new scanning methods to look for even finer levels of organization within the newly distinguished face and body areas. They also want to figure out how and why the brain regions for faces and bodies land next to each other in the first place.

Kanwisher is the Ellen Swallow Richards Professor of Cognitive Neuroscience. Her colleagues on this work are Schwarzlose, a graduate student in brain and cognitive sciences, and Baker, a postdoctoral researcher in the department.

The research was supported by the National Institutes of Health, the National Center for Research Resources, the Mind Institute, and the National Science Foundation’s Graduate Research Fellowship Program.