Face-specific brain area responds to faces even in people born blind

More than 20 years ago, neuroscientist Nancy Kanwisher and others discovered that a small section of the brain located near the base of the skull responds much more strongly to faces than to other objects we see. This area, known as the fusiform face area (FFA), is believed to be specialized for identifying faces.

Now, in a surprising new finding, Kanwisher and her colleagues have shown that this same region also becomes active in people who have been blind since birth, when they touch a three-dimensional model of a face with their hands. The finding suggests that this area does not require visual experience to develop a preference for faces.

“That doesn’t mean that visual input doesn’t play a role in sighted subjects — it probably does,” she says. “What we showed here is that visual input is not necessary to develop this particular patch, in the same location, with the same selectivity for faces. That was pretty astonishing.”

Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience and a member of MIT’s McGovern Institute for Brain Research, is the senior author of the study. N. Apurva Ratan Murty, an MIT postdoc, is the lead author of the study, which appears this week in the Proceedings of the National Academy of Sciences. Other authors of the paper include Santani Teng, a former MIT postdoc; Aude Oliva, a senior research scientist, co-director of the MIT Quest for Intelligence, and MIT director of the MIT-IBM Watson AI Lab; and David Beeler and Anna Mynick, both former lab technicians.

Selective for faces

Studying people who were born blind allowed the researchers to tackle longstanding questions regarding how specialization arises in the brain. In this case, they were specifically investigating face perception, but the same unanswered questions apply to many other aspects of human cognition, Kanwisher says.

“This is part of a broader question that scientists and philosophers have been asking themselves for hundreds of years, about where the structure of the mind and brain comes from,” she says. “To what extent are we products of experience, and to what extent do we have built-in structure? This is a version of that question asking about the particular role of visual experience in constructing the face area.”

The new work builds on a 2017 study from researchers in Belgium. In that study, congenitally blind subjects were scanned with functional magnetic resonance imaging (fMRI) as they listened to a variety of sounds, some related to faces (such as laughing or chewing), and others not. That study found higher responses in the vicinity of the FFA to face-related sounds than to sounds such as a ball bouncing or hands clapping.

In the new study, the MIT team wanted to use tactile experience to measure more directly how the brains of blind people respond to faces. They created a ring of 3D-printed objects that included faces, hands, chairs, and mazes, and rotated them so that the subject could handle each one while in the fMRI scanner.

They began with normally sighted subjects and found that when they handled the 3D objects, a small area that corresponded to the location of the FFA was preferentially active when the subjects touched the faces, compared to when they touched other objects. This activity, which was weaker than the signal produced when sighted subjects looked at faces, was not surprising to see, Kanwisher says.

“We know that people engage in visual imagery, and we know from prior studies that visual imagery can activate the FFA. So the fact that you see the response with touch in a sighted person is not shocking because they’re visually imagining what they’re feeling,” she says.

The researchers then performed the same experiments, using tactile input only, with 15 subjects who reported being blind since birth. To their surprise, they found that the brain showed face-specific activity in the same area as the sighted subjects, at levels similar to when sighted people handled the 3D-printed faces.

“When we saw it in the first few subjects, it was really shocking, because no one had seen individual face-specific activations in the fusiform gyrus in blind subjects previously,” Murty says.

Patterns of connection

The researchers also explored several hypotheses that have been put forward to explain why face-selectivity always seems to develop in the same region of the brain. One prominent hypothesis suggests that the FFA develops face-selectivity because it receives visual input from the fovea (the center of the retina), and we tend to focus on faces at the center of our visual field. However, since this region developed in blind people with no foveal input, the new findings do not support this idea.

Another hypothesis is that the FFA has a natural preference for curved shapes. To test that idea, the researchers performed another set of experiments in which they asked the blind subjects to handle a variety of 3D-printed shapes, including cubes, spheres, and eggs. They found that the FFA did not show any preference for the curved objects over the cube-shaped objects.

The researchers did find evidence for a third hypothesis, which is that face selectivity arises in the FFA because of its connections to other parts of the brain. They were able to measure the FFA’s “connectivity fingerprint” — a measure of the correlation between activity in the FFA and activity in other parts of the brain — in both blind and sighted subjects.

They then used the data from each group to train a computer model to predict the exact location of the brain’s selective response to faces based on the FFA connectivity fingerprint. They found that when the model was trained on data from sighted subjects, it could accurately predict the results in blind subjects, and vice versa. They also found evidence that connections to the frontal and parietal lobes of the brain, which are involved in high-level processing of sensory information, may be the most important in determining the role of the FFA.

“It’s suggestive of this very interesting story that the brain wires itself up in development not just by taking perceptual information and doing statistics on the input and allocating patches of brain, according to some kind of broadly agnostic statistical procedure,” Kanwisher says. “Rather, there are endogenous constraints in the brain present at birth, in this case, in the form of connections to higher-level brain regions, and these connections are perhaps playing a causal role in its development.”
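The fingerprint-based prediction described above can be sketched in a few lines. This is a hypothetical illustration, not the study’s actual pipeline: it assumes a simple linear (ridge) regression from each voxel’s connectivity fingerprint to its face selectivity, trained on one group and evaluated on the other, with synthetic data standing in for real fMRI measurements.

```python
# Hypothetical sketch of a "connectivity fingerprint" analysis: predict each
# voxel's face selectivity from its correlations with other brain regions,
# training on one group and testing on the other. All data are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_voxels, n_regions = 500, 40

# Assumed ground truth: a few long-range connections (e.g., to frontal and
# parietal regions) drive face selectivity.
true_w = np.zeros(n_regions)
true_w[:5] = rng.normal(size=5)

def simulate_group(n_voxels):
    # Each row is one voxel's fingerprint: its correlation with each region.
    fingerprints = rng.normal(size=(n_voxels, n_regions))
    selectivity = fingerprints @ true_w + 0.1 * rng.normal(size=n_voxels)
    return fingerprints, selectivity

X_sighted, y_sighted = simulate_group(n_voxels)
X_blind, y_blind = simulate_group(n_voxels)

# Train on the sighted group's data, predict the blind group's selectivity map.
model = Ridge(alpha=1.0).fit(X_sighted, y_sighted)
pred_blind = model.predict(X_blind)
print(f"cross-group R^2: {r2_score(y_blind, pred_blind):.2f}")
```

In the simulation, cross-group prediction works because both groups share the same underlying mapping from connectivity to selectivity, which mirrors the logic of the study’s sighted-to-blind and blind-to-sighted validation.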

The research was funded by the National Institutes of Health Shared Instrumentation Grant to the Athinoula Martinos Center at MIT, a National Eye Institute Training Grant, the Smith-Kettlewell Eye Research Institute’s Rehabilitation Engineering Research Center, an Office of Naval Research Vannevar Bush Faculty Fellowship, an NIH Pioneer Award, and a National Science Foundation Science and Technology Center Grant.

Full paper at PNAS

Learning from social isolation

“Livia Tomova, a postdoc in the Saxe Lab, recently completed a study about social isolation and its impact on the brain. Michelle Hung and I had a lot of exposure to her research in the lab. When “social distancing” measures hit MIT, we tried to process how the implementation of these policies would impact the landscape of our social lives.

We came up with some hypotheses and agreed that the coronavirus pandemic would fundamentally change life as we know it.

So we developed a survey to measure how the social behavior of MIT students, postdocs, and staff changes over the course of the pandemic. Our study is still in its very early stages, but it has been an incredibly fulfilling experience to be a part of Michelle’s development as a scientist.

Heather Kosakowski’s daughter in Woods Hole, Massachusetts. Photo: Heather Kosakowski

After the undergraduates left, graduate students were also strongly urged to leave graduate student housing. My daughter (age 11) and I live in a 28th-floor apartment and her school was canceled. One of my advisors, Nancy Kanwisher, had a vacant apartment in Woods Hole that she offered to let lab members stay in. As more and more resources for children were being shut down, I decided to take her up on the offer. Woods Hole is my daughter’s absolute favorite place and I feel extremely lucky to have such a generous option. My daughter has been coping really well with all of these changes.

While my research is at an exciting stage, I miss being on campus with the students from my cohort and my lab mates and my weekly in-person meetings with my advisors. One way I’ve been coping with this reality is by listening to stories of other people’s experiences. We are all human and we are all in the midst of a pandemic, but we are all experiencing the pandemic in different ways. I find the diversity of our experience intriguing. I have been fortunate to have friends write stories about their experiences, so that I can post them on my blog. I only have a handful of stories right now, but it has been really fun for me to listen, and humbling for me to share each individual’s unique experience.”


Heather Kosakowski is a graduate student in the labs of Rebecca Saxe and Nancy Kanwisher, where she studies the infant brain and the developmental origins of object recognition, language, and music. Heather is also a Marine Corps veteran and single mom who manages a blog that “ties together different aspects of my experience, past and present, with the hopes that it might make someone else out there feel less alone.”

#WeAreMcGovern

How dopamine drives brain activity

Using a specialized magnetic resonance imaging (MRI) sensor, MIT neuroscientists have discovered how dopamine released deep within the brain influences both nearby and distant brain regions.

Dopamine plays many roles in the brain, most notably related to movement, motivation, and reinforcement of behavior. However, until now it has been difficult to study precisely how a flood of dopamine affects neural activity throughout the brain. Using their new technique, the MIT team found that dopamine appears to exert significant effects in two regions of the brain’s cortex, including the motor cortex.

“There has been a lot of work on the immediate cellular consequences of dopamine release, but here what we’re looking at are the consequences of what dopamine is doing on a more brain-wide level,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering. Jasanoff is also an associate member of MIT’s McGovern Institute for Brain Research and the senior author of the study.

The MIT team found that in addition to the motor cortex, the remote brain area most affected by dopamine is the insular cortex. This region is critical for many cognitive functions related to perception of the body’s internal states, including physical and emotional states.

MIT postdoc Nan Li is the lead author of the study, which appears today in Nature.

Tracking dopamine

Like other neurotransmitters, dopamine helps neurons to communicate with each other over short distances. Dopamine holds particular interest for neuroscientists because of its role in motivation, addiction, and several neurodegenerative disorders, including Parkinson’s disease. Most of the brain’s dopamine is produced in the midbrain by neurons that connect to the striatum, where the dopamine is released.

For many years, Jasanoff’s lab has been developing tools to study how molecular phenomena such as neurotransmitter release affect brain-wide functions. At the molecular scale, existing techniques can reveal how dopamine affects individual cells, and at the scale of the entire brain, functional magnetic resonance imaging (fMRI) can reveal how active a particular brain region is. However, it has been difficult for neuroscientists to determine how single-cell activity and brain-wide function are linked.

“There have been very few brain-wide studies of dopaminergic function or really any neurochemical function, in large part because the tools aren’t there,” Jasanoff says. “We’re trying to fill in the gaps.”

About 10 years ago, his lab developed MRI sensors that consist of magnetic proteins that can bind to dopamine. When this binding occurs, the sensors’ magnetic interactions with surrounding tissue weaken, dimming the tissue’s MRI signal. This allows researchers to continuously monitor dopamine levels in a specific part of the brain.

In their new study, Li and Jasanoff set out to analyze how dopamine released in the striatum of rats influences neural function both locally and in other brain regions. First, they injected their dopamine sensors into the striatum, which is located deep within the brain and plays an important role in controlling movement. Then they electrically stimulated a part of the brain called the lateral hypothalamus, which is a common experimental technique for rewarding behavior and inducing the brain to produce dopamine.

Next, the researchers used their dopamine sensor to measure dopamine levels throughout the striatum. They also performed traditional fMRI to measure neural activity in each part of the striatum. To their surprise, they found that high dopamine concentrations did not make neurons more active. However, higher dopamine levels did make the neurons remain active for a longer period of time.

“When dopamine was released, there was a longer duration of activity, suggesting a longer response to the reward,” Jasanoff says. “That may have something to do with how dopamine promotes learning, which is one of its key functions.”

Long-range effects

After analyzing dopamine release in the striatum, the researchers set out to determine how this dopamine might affect more distant locations in the brain. To do that, they performed traditional fMRI imaging on the brain while also mapping dopamine release in the striatum. “By combining these techniques we could probe these phenomena in a way that hasn’t been done before,” Jasanoff says.

The regions that showed the biggest surges in activity in response to dopamine were the motor cortex and the insular cortex. If confirmed in additional studies, the findings could help researchers understand the effects of dopamine in the human brain, including its roles in addiction and learning.

“Our results could lead to biomarkers that could be seen in fMRI data, and these correlates of dopaminergic function could be useful for analyzing animal and human fMRI,” Jasanoff says.

The research was funded by the National Institutes of Health and a Stanley Fahn Research Fellowship from the Parkinson’s Disease Foundation.

Uncovering the functional architecture of a historic brain area

In 1840, a patient named Leborgne was admitted to a hospital near Paris: he was only able to repeat the word “Tan.” This loss of speech drew the attention of Paul Broca who, after Leborgne’s death, identified lesions in his frontal lobe in the left hemisphere. These results echoed earlier findings from French neurologist Marc Dax. Now known as “Broca’s area,” the roles of this brain region have been extended to mental functions far beyond speech articulation. So much so, that the underlying functional organization of Broca’s area has become a source of discussion and some confusion.

McGovern Investigator Ev Fedorenko is now calling, in a paper in Trends in Cognitive Sciences, for recognition that Broca’s area consists of functionally distinct, specialized regions, with one sub-region very much dedicated to language processing.

“Broca’s area is one of the first regions you learn about in introductory psychology and neuroscience classes, and arguably laid the foundation for human cognitive neuroscience,” explains Ev Fedorenko, who is also an assistant professor in MIT’s Department of Brain and Cognitive Sciences. “This patch of cortex and its connections with other brain areas and networks provides a microcosm for probing some core questions about the human brain.”

Broca’s area, shown in red. Image: Wikimedia

Language is a uniquely human capability, and thus the discovery of Broca’s area immediately captured the attention of researchers.

“Because language is universal across cultures, but unique to the human species, studying Broca’s area and constraining theories of language accordingly promises to provide a window into one of the central abilities that make humans so special,” explains co-author Idan Blank, a former postdoc at the McGovern Institute who is now an assistant professor of psychology at UCLA.

Function over form

Broca’s area is found in the posterior portion of the left inferior frontal gyrus (LIFG). Arguments and theories abound as to its function. Some consider the region as dedicated to language or syntactic processing, others argue that it processes multiple types of inputs, and still others argue it is working at a high level, implementing working memory and cognitive control. Is Broca’s area a highly specialized circuit, dedicated to the human-specific capacity for language and largely independent from the rest of high-level cognition, or is it a CPU-like region, overseeing diverse aspects of the mind and orchestrating their operations?

“Patient investigations and neuroimaging studies have now associated Broca’s region with many processes,” explains Blank. “On the one hand, its language-related functions have expanded far beyond articulation; on the other, non-linguistic functions within Broca’s area—fluid intelligence and problem solving, working memory, goal-directed behavior, inhibition, etc.—are fundamental to ‘all of cognition.’”

While brain anatomy is a common path to defining subregions in Broca’s area, Fedorenko and Blank argue that this approach can muddy the water. The cortical folds and visible landmarks that originally stood out to anatomists vary from individual to individual in how they align with the underlying functions of brain regions. While these variations might seem small, they can have a huge impact on conclusions about functional regions drawn with traditional analysis methods. This means that the same bit of anatomy (say, the posterior portion of a gyrus) could be doing different things in different brains.

“In both investigations of patients with brain damage and much of brain imaging work, a lot of confusion has stemmed from the use of macroanatomical areas (like the inferior frontal gyrus (IFG)) as ‘units of analysis’,” explains Fedorenko. “When some researchers found IFG activation for a syntactic manipulation, and others for a working memory manipulation, the field jumped to the conclusion that syntactic processing relies on working memory. But these effects might actually be arising in totally distinct parts of the IFG.”

The only way to circumvent this problem is to turn to functional data and aggregate information from functionally defined areas across individuals. Using this approach, across four lines of evidence from the last decade, Fedorenko and Blank came to a clear conclusion: Broca’s area is not a monolithic region with a single function, but contains distinct areas, one dedicated to language processing, and another that supports domain-general functions like working memory.

“We just have to stop referring to macroanatomical brain regions (like gyri and sulci, or their parts) when talking about the functional architecture of the brain,” explains Fedorenko. “I am delighted to see that more and more labs across the world are recognizing the inter-individual variability that characterizes the human brain. This shift is putting us on the right path to making fundamental discoveries about how our brain works.”

Indeed, accounting for distinct functional regions, within Broca’s area and elsewhere, seems essential going forward if we are to truly understand the complexity of the human brain.

Embracing neurodiversity to better understand autism

Researchers often approach autism spectrum disorder (ASD) through the lens of what might “break down.” While this approach has value, autism is an extremely heterogeneous condition, and diagnosed individuals have a broad range of abilities.

The Gabrieli lab is embracing this diversity and leveraging the strengths of diagnosed individuals by researching their specific “affinities.”

Affinities involve a strong passion for specific topics, ranging from insects to video game characters, and can include impressive feats of knowledge and focus.

The biological basis of these affinities and associated abilities remains unclear, which is intriguing to John Gabrieli and his lab.

“A striking aspect of autism is the great variation from individual to individual,” explains McGovern Investigator John Gabrieli. “Understanding what motivates an individual child may inform how to best help that child reach his or her communicative potential.”

Doug Tan is an artist on the autism spectrum who has a particular interest in Herbie, the fictional Volkswagen Beetle. Nearly all of Tan’s works include a visual reference to his “affinity” (shown here in black). Image: Doug Tan

Affinities have traditionally been seen as a distraction “interfering” with conventional teaching and learning. This mindset was upended by the 2014 book Life, Animated by Ron Suskind, whose autistic son Owen seemingly lost his ability to speak around age three. Despite this setback, Owen maintained a deep affinity for Disney movies and characters. Rather than extinguishing this passion, the Suskinds embraced it as a path to connection.

Reframing such affinities as a strength not a frustration, and a path to communication rather than a roadblock, caught the attention of Kristy Johnson, a PhD student at the MIT Media Lab, who also has a non-verbal child with autism.

“My interest is in empowering and understanding populations that have traditionally been hard to study, including those with non-verbal and minimally verbal autism,” explains Johnson. “One way to do that is through affinities.”

But even identifying affinities is difficult. An interest in “trains” might mean 19th-century smokestacks to one child, and the purple line of the MBTA commuter rail to another. Serendipitously, Johnson mentioned her interest to Gabrieli one day. He slammed his hands on the table, jumped up, and ran to find lab members Anila D’Mello and Halie Olson, who were gearing up to pursue the neural basis of affinities in autism. A collaboration was born.

Scientific challenge

What followed was six months of intense discussion. How can an affinity be accurately defined? How can individually tailored experiments be adequately controlled? What makes a robust comparison group? How can task-related performance differences between individuals with autism be accounted for?

The handful of studies that had used fMRI neuroimaging to examine affinities in autism had focused on the brain’s reward circuitry. D’Mello and Olson wanted to examine the language network of the brain — a well-defined network of brain regions whose activation can be measured by fMRI. Affinities trigger communication in some individuals with autism (Suskind’s family were using Disney characters to engage and communicate, not simply as a reward). Was the language network being engaged by affinities? Could these results point to a way of tailoring learning for all types of development?

“The language network involves lots of regions across the brain, including temporal, parietal, frontal, and subcortical areas, which play specific roles in different aspects of language processing,” explains Olson. “We were interested in a task that used affinities to tap the language network.”

fMRI reveals regions of the brain that show increased activity for stories related to affinities versus neutral stories; these include regions important for language processing. Image: Anila D’Mello

By studying this network, the team is testing whether affinities can elicit “typical” activation in regions of the brain that are sometimes assumed to not be engaged in autism. The approach may help develop better paradigms for studying other tasks with individuals with autism. Regardless of whether there are differences between the group diagnosed with autism and typically developing children, insight will likely be gained into how personalized special interests influence engagement of the language network.

The resulting study is task-free, removing the variable of differing motor or cognitive skill sets. Kids watch videos of their individual affinity in the fMRI scanner, and then listen to stories based on that affinity. They also watch and listen to “neutral” videos and stories about nature that are consistent across all children. Identifying affinities robustly, so that the right stimulus can be presented, is critical. Rather than an interest in bugs, affinities are often very specific (bugs that eat other bugs). But identifying and cross-checking affinities is something the group is becoming adept at. The results are still emerging, but the effects the team is seeing are significant, and preliminary data suggest that affinities engage networks beyond reward circuits.

“We have a small sample right now, but across the sample, there seems to be a difference in activation in the brain’s language network when listening to affinity stories compared to neutral stories,” explains D’Mello. “The biggest surprise is that the differences are evident in single subjects.”
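A single-subject comparison like the one D’Mello describes can be sketched as a paired test on responses to the two story types. This is a hypothetical illustration with synthetic values, not the study’s analysis pipeline; assume one response estimate per scanning run in a language-network region of interest.

```python
# Hypothetical sketch: within one child, compare language-network responses
# (e.g., per-run amplitude estimates) to affinity stories versus neutral
# stories. All values are synthetic and purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_runs = 8

# Simulated response amplitudes in a language-network region for each run,
# with an assumed stronger response to affinity stories.
affinity = 1.0 + 0.3 * rng.normal(size=n_runs)
neutral = 0.5 + 0.3 * rng.normal(size=n_runs)

# Paired t-test across runs, within this single subject.
t, p = stats.ttest_rel(affinity, neutral)
print(f"affinity vs. neutral: t={t:.2f}, p={p:.3f}")
```

Because the comparison is within-subject, an effect can be evaluated for each child individually, which is what makes the single-subject differences the team reports possible.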

Future forward

The work is already raising exciting new questions. Are there other brain regions engaged by affinities? How would such information inform education and intervention paradigms? In addition, the team is showing it’s possible to derive information from individualized, naturalistic experimental paradigms, a message for brain imaging and behavioral studies in general. The researchers also hope the results inspire parents, teachers, and psychologists to perceive and engage with an individual’s affinities in new ways.

“This could really help teach us to communicate with and motivate very young and non-verbal kids on the spectrum in a way that is interesting and meaningful to them,” D’Mello explains.

By studying the strengths of individuals with autism, these researchers are showing that, through embracing neurodiversity, we can enhance science, our understanding of the brain, and perhaps even our understanding of ourselves.

Learn about autism studies at MIT

Nancy Kanwisher to receive George A. Miller Prize in Cognitive Neuroscience

Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT, has been named this year’s winner of the George A. Miller Prize in Cognitive Neuroscience. The award, given annually by the Cognitive Neuroscience Society (CNS), recognizes individuals “whose distinguished research is at the cutting-edge of their discipline with realized or future potential, to revolutionize cognitive neuroscience.”

Kanwisher studies the functional organization of the human mind and, over the last 20 years, her lab has played a central role in the identification of several dozen regions of the cortex in humans that are engaged in particular components of perception and cognition. She is perhaps best known for identifying brain regions specialized for recognizing faces.

Kanwisher will deliver her prize lecture, “Functional imaging of the human brain: A window into the architecture of the mind” at the 2020 CNS annual meeting in Boston this March.

Brain biomarkers predict mood and attention symptoms

Mood and attentional disorders among teens are an increasing concern for parents, peers, and society. A recent Pew Research Center survey found conditions such as depression and anxiety to be the number one concern that young students had about their friends, ranking above drugs or bullying.

“We’re seeing an epidemic in teen anxiety and depression,” explains McGovern Research Affiliate Susan Whitfield-Gabrieli.

“Scientists are finding a huge increase in suicide ideation and attempts, something that hit home for me as a mother of teens. Emergency rooms in hospitals now have guards posted outside doors of these teenagers that attempted suicide—this is a pressing issue,” explains Whitfield-Gabrieli who is also director of the Northeastern University Biomedical Imaging Center and a member of the Poitras Center for Psychiatric Disorders Research.

Finding new methods for discovering early biomarkers for risk of psychiatric disorders would allow early interventions and avoid reaching points of crisis such as suicide ideation or attempts. In research published recently in JAMA Psychiatry, Whitfield-Gabrieli and colleagues found that signatures predicting future development of depression and attentional symptoms can be detected in children as young as seven years old.

Long-term view

While previous work had suggested that there may be biomarkers that predict development of mood and attentional disorders, identifying early biomarkers prior to an onset of illness requires following a cohort of pre-teens from a young age, and monitoring them across years. This effort to have a proactive, rather than reactive, approach to the development of symptoms associated with mental disorders is exactly the route Whitfield-Gabrieli and colleagues took.

“One of the exciting aspects of this study is that the cohort is not pre-selected for already having symptoms of psychiatric disorders themselves or even in their family,” explained Whitfield-Gabrieli. “It’s an unbiased cohort that we followed over time.”

McGovern research affiliate Susan Whitfield-Gabrieli has discovered early brain biomarkers linked to psychiatric disorders.

In some past studies, children were pre-selected, for example based on a major depressive disorder diagnosis in a parent. Whitfield-Gabrieli and her colleagues Silvia Bunge of Berkeley and Laurie Cutting of Vanderbilt instead recruited a range of children without preconditions, and examined them at age 7, then again four years later. The researchers measured resting state functional connectivity and compared it to scores on the Child Behavior Checklist (CBCL), allowing them to relate differences in the brain to a standardized assessment of behavior that can be linked to psychiatric disorders. The CBCL is used both in research and in the clinic and is highly predictive of disorders including ADHD, so changes in the brain could be related to changes in a widely used clinical scoring system.

“Over the four years, some people got worse, some got better, and some stayed the same according to the CBCL. We could relate this directly to differences in brain networks, and could identify at age 7 who would get worse,” explained Whitfield-Gabrieli.

Brain network changes

The authors analyzed differences in resting state network connectivity, the degree to which activity in regions across the brain rises and falls together, as visualized using fMRI. Reduced connectivity between these regions may reflect weakened “top-down” control of neural circuits. The dorsolateral prefrontal region is linked to executive function, external attention, and emotional control. Increased connectivity with the medial prefrontal cortex is known to be present in attention deficit hyperactivity disorder (ADHD), while reduced connectivity to a different brain region, the sgACC, is seen in major depressive disorder. The question remained whether these changes can be seen prior to the onset of diagnosable attentional or mood disorders.
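At its core, resting state functional connectivity is the correlation between two regions’ fMRI time courses. A minimal sketch with simulated signals (region names and numbers are purely illustrative, not taken from the study):

```python
# Minimal illustration of what "resting state functional connectivity"
# measures: the correlation between two regions' fMRI time courses.
# Signals are simulated; region names are placeholders.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints = 200

# Two coupled regions share a slow fluctuation; a third is unrelated.
shared = rng.normal(size=n_timepoints)
dlpfc = shared + 0.5 * rng.normal(size=n_timepoints)  # dorsolateral prefrontal
mpfc = shared + 0.5 * rng.normal(size=n_timepoints)   # medial prefrontal
sgacc = rng.normal(size=n_timepoints)                 # unrelated region

def connectivity(a, b):
    """Functional connectivity as the Pearson correlation of two time courses."""
    return np.corrcoef(a, b)[0, 1]

print(f"dlPFC-mPFC:  {connectivity(dlpfc, mpfc):.2f}")   # strongly coupled
print(f"dlPFC-sgACC: {connectivity(dlpfc, sgacc):.2f}")  # near zero
```

In the actual study, such pairwise correlations across many regions, rather than raw activity levels, were the features compared against later CBCL scores.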

Whitfield-Gabrieli and colleagues found that these resting state networks varied in the brains of children who would later develop anxiety/depression and ADHD symptoms. Weaker connectivity between the dorsolateral and medial prefrontal cortical regions tended to be seen in children whose attention scores went on to improve. Analysis of these resting state networks could differentiate those who would have typical attentional behavior by age 11 from those who went on to develop ADHD.

Whitfield-Gabrieli has replicated this finding in an independent sample of children, and she is continuing to expand the analysis and check the results, as well as to follow this cohort into the future. Should changes in resting state networks prove to be a consistent biomarker, the next step is to initiate interventions before the point of crisis.

“We’ve recently been able to use mindfulness interventions, and show these reduce self-perceived stress and amygdala activation in response to fear, and we are also testing the effect of exercise interventions,” explained Whitfield-Gabrieli. “The hope is that by using predictive biomarkers we can augment children’s lifestyles with healthy interventions that can prevent risk converting to a psychiatric disorder.”

Can fMRI reveal insights into addiction and treatments?

Many debilitating conditions like depression and addiction have biological signatures hidden in the brain well before symptoms appear. What if brain scans could be used to detect these hidden signatures and determine the optimal treatment for each individual? McGovern Investigator John Gabrieli is interested in this question and wrote about the use of imaging technologies as a predictive tool for brain disorders in a recent issue of Scientific American.

McGovern Investigator John Gabrieli pens a story for Scientific American about the potential for brain imaging to predict the onset of mental illness.

“Brain scans show promise in predicting who will benefit from a given therapy,” says Gabrieli, who is also the Grover Hermann Professor in Brain and Cognitive Sciences at MIT. “Differences in neural activity may one day tell clinicians which depression treatment will be most effective for an individual or which abstinent alcoholics will relapse.”

Gabrieli cites research showing that half of patients treated for alcohol abuse go back to drinking within a year of treatment, with similar relapse rates for stimulants such as cocaine. Failed treatments may be a source of further anxiety and stress, Gabrieli notes, so any information gleaned from the brain that could pinpoint helpful treatments or doses would be highly valuable.

Current treatments rely on little scientific evidence to support the length of time needed in a rehabilitation facility, he says, but “a number suggest that brain measures might foresee who will succeed in abstaining after treatment has ended.”

Further data are needed to support this idea, but Gabrieli’s Scientific American piece makes the case that such technology may be promising for a range of addiction treatments, including abuse of alcohol, nicotine, and illicit drugs.

Gabrieli also believes brain imaging has the potential to reshape education. For example, educational interventions targeting dyslexia might be more effective if personalized to specific differences in the brain that point to the source of the learning gap.

But for the prediction sciences to move forward in mental health and education, he concludes, the research community must design further rigorous studies to examine these important questions.

Controlling attention with brain waves

Having trouble paying attention? MIT neuroscientists may have a solution for you: Turn down your alpha brain waves. In a new study, the researchers found that people can enhance their attention by controlling their own alpha brain waves based on neurofeedback they receive as they perform a particular task.

The study found that when subjects learned to suppress alpha waves in one hemisphere of their parietal cortex, they were able to pay better attention to objects that appeared on the opposite side of their visual field. This is the first time that this cause-and-effect relationship has been seen, and it suggests that it may be possible for people to learn to improve their attention through neurofeedback.

Desimone lab study shows that people can boost attention by manipulating their own alpha brain waves with neurofeedback training.

“There’s a lot of interest in using neurofeedback to try to help people with various brain disorders and behavioral problems,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research. “It’s a completely noninvasive way of controlling and testing the role of different types of brain activity.”

It’s unknown how long these effects might last and whether this kind of control could be achieved with other types of brain waves, such as beta waves, which are linked to Parkinson’s disease. The researchers are now planning additional studies of whether this type of neurofeedback training might help people suffering from attentional or other neurological disorders.

Desimone is the senior author of the paper, which appears in Neuron on Dec. 4. McGovern Institute postdoc Yasaman Bagherzadeh is the lead author of the study. Daniel Baldauf, a former McGovern Institute research scientist, and Dimitrios Pantazis, a McGovern Institute principal research scientist, are also authors of the paper.

Alpha and attention

There are billions of neurons in the brain, and their combined electrical signals generate oscillations known as brain waves. Alpha waves, which oscillate at frequencies of 8 to 12 hertz, are believed to play a role in filtering out distracting sensory information.
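Alpha activity in a recorded signal is often quantified as the power falling within the 8-12 Hz band. A minimal sketch of that idea, using a simple FFT periodogram on synthetic data (the sampling rate and signals here are illustrative assumptions, not the study's actual MEG processing):

```python
import numpy as np

def alpha_power(signal, fs):
    """Estimate power in the 8-12 Hz alpha band of a 1-D signal
    sampled at fs Hz, using a simple FFT periodogram."""
    signal = np.asarray(signal, dtype=float)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 8.0) & (freqs <= 12.0)
    return float(spectrum[band].sum())

# Synthetic example: a 10 Hz oscillation buried in noise carries far
# more alpha-band power than noise alone.
fs = 250.0                      # assumed sampling rate in Hz
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(1)
alpha_signal = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
noise_only = 0.3 * rng.standard_normal(t.size)
print(alpha_power(alpha_signal, fs) > alpha_power(noise_only, fs))
```

Real analyses typically use windowed spectral estimates (e.g., Welch's method) rather than a raw periodogram, but the band-power concept is the same.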

Previous studies have shown a strong correlation between attention and alpha brain waves, particularly in the parietal cortex. In humans and in animal studies, a decrease in alpha waves has been linked to enhanced attention. However, it was unclear if alpha waves control attention or are just a byproduct of some other process that governs attention, Desimone says.

To test whether alpha waves actually regulate attention, the researchers designed an experiment in which people were given real-time feedback on their alpha waves as they performed a task. Subjects were asked to look at a grating pattern in the center of a screen, and told to use mental effort to increase the contrast of the pattern as they looked at it, making it more visible.

During the task, subjects were scanned using magnetoencephalography (MEG), which reveals brain activity with millisecond precision. The researchers measured alpha levels in both the left and right hemispheres of the parietal cortex and calculated the degree of asymmetry between the two levels. As the asymmetry between the two hemispheres grew, the grating pattern became more visible, offering the participants real-time feedback.
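The article does not give the exact feedback formula, but a common way to express hemispheric asymmetry is a normalized index of the difference between right and left alpha power. The sketch below is a hypothetical illustration of how such an index could drive the grating's contrast; the function names and the gain parameter are assumptions, not the study's implementation:

```python
def alpha_asymmetry(left_alpha, right_alpha):
    """Normalized hemispheric asymmetry index in [-1, 1]:
    (right - left) / (right + left). Zero means symmetric alpha power."""
    return (right_alpha - left_alpha) / (right_alpha + left_alpha)

def feedback_contrast(asym, gain=1.0):
    """Map the magnitude of asymmetry to a display contrast in [0, 1],
    so a larger hemispheric difference makes the grating more visible."""
    return max(0.0, min(1.0, gain * abs(asym)))

# Suppressing left-hemisphere alpha relative to the right yields a
# positive asymmetry and a higher-contrast grating.
print(feedback_contrast(alpha_asymmetry(left_alpha=2.0, right_alpha=6.0)))  # 0.5
```

In a closed neurofeedback loop, these two steps would run on each MEG update, so participants see the consequences of their own alpha modulation in real time.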

McGovern postdoc Yasaman Bagherzadeh sits in a magnetoencephalography (MEG) scanner. Photo: Justin Knight

Although subjects were not told anything about what was happening, after about 20 trials (which took about 10 minutes), they were able to increase the contrast of the pattern. The MEG results indicated they had done so by controlling the asymmetry of their alpha waves.

“After the experiment, the subjects said they knew that they were controlling the contrast, but they didn’t know how they did it,” Bagherzadeh says. “We think the basis is conditional learning — whenever you do a behavior and you receive a reward, you’re reinforcing that behavior. People usually don’t have any feedback on their brain activity, but when we provide it to them and reward them, they learn by practicing.”

Although the subjects were not consciously aware of how they were manipulating their brain waves, they were able to do it, and this success translated into enhanced attention on the opposite side of the visual field. As the subjects looked at the pattern in the center of the screen, the researchers flashed dots of light on either side of the screen. The participants had been told to ignore these flashes, but the researchers measured how their visual cortex responded to them.

One group of participants was trained to suppress alpha waves in the left side of the brain, while the other was trained to suppress the right side. In those who had reduced alpha on the left side, their visual cortex showed a larger response to flashes of light on the right side of the screen, while those with reduced alpha on the right side responded more to flashes seen on the left side.

“Alpha manipulation really was controlling people’s attention, even though they didn’t have any clear understanding of how they were doing it,” Desimone says.

Persistent effect

After the neurofeedback training session ended, the researchers asked subjects to perform two additional tasks involving attention, and found that the enhanced attention persisted. In one experiment, subjects were asked to watch for the appearance of a grating pattern similar to the one they had seen during the neurofeedback task. In some trials, they were told in advance to pay attention to one side of the visual field; in others, they were given no direction.

When the subjects were told to pay attention to one side, that instruction was the dominant factor in where they looked. But if they were not given any cue in advance, they tended to pay more attention to the side that had been favored during their neurofeedback training.

In another task, participants were asked to look at an image such as a natural outdoor scene, urban scene, or computer-generated fractal shape. By tracking subjects’ eye movements, the researchers found that people spent more time looking at the side that their alpha waves had trained them to pay attention to.

“It is promising that the effects did seem to persist afterwards,” says Desimone, though more study is needed to determine how long these effects might last.

The research was funded by the McGovern Institute.

Word Play

Ev Fedorenko uses the widely translated book “Alice in Wonderland” to test brain responses to different languages.

Language is a uniquely human ability that allows us to build vibrant pictures of non-existent places (think Wonderland or Westeros). How does the brain build mental worlds from words? Can machines do the same? Can we recover this ability after brain injury? These questions require an understanding of how the brain processes language, a fascination for Ev Fedorenko.

“I’ve always been interested in language. Early on, I wanted to found a company that teaches kids languages that share structure — Spanish, French, Italian — in one go,” says Fedorenko, an associate investigator at the McGovern Institute and an assistant professor in brain and cognitive sciences at MIT.

Her road to understanding how thoughts, ideas, emotions, and meaning can be delivered through sound and words became clear when she realized that language was accessible through cognitive neuroscience.

Early on, Fedorenko made a seminal finding that undermined dominant theories of the time. Scientists believed a single network was extracting meaning from all we experience: language, music, math, etc. Evolving separate networks for these functions seemed unlikely, as these capabilities arose recently in human evolution.

Language Regions
Ev Fedorenko has found that language regions of the brain (shown in teal) are sensitive to both word meaning and sentence structure. Image: Ev Fedorenko

But when Fedorenko examined brain activity in subjects while they read or heard sentences in an MRI scanner, she found a network of brain regions that is indeed specialized for language.

“A lot of brain areas, like motor and social systems, were already in place when language emerged during human evolution,” explains Fedorenko. “In some sense, the brain seemed fully occupied. But rather than co-opt these existing systems, the evolution of language in humans involved language carving out specific brain regions.”

Different aspects of language recruit brain regions across the left hemisphere, including Broca’s area and portions of the temporal lobe. Many believe that certain regions are involved in processing word meaning while others unpack the rules of language. Fedorenko and colleagues have however shown that the entire language network is selectively engaged in linguistic tasks, processing both the rules (syntax) and meaning (semantics) of language in the same brain areas.

Semantic Argument

Fedorenko’s lab even challenges the prevailing view that syntax is core to language processing. By gradually degrading sentence structure through local word swaps (see figure), they found that language regions still respond strongly to these degraded sentences, deciphering meaning from them, even as syntax, or combinatorial rules, disappear.

The Fedorenko lab has shown that the brain finds meaning in a sentence, even when “local” words are swapped (2, 3). But when clusters of neighboring words are scrambled (4), the brain struggles to find its meaning.

“A lot of focus in language research has been on structure-building, or building a type of hierarchical graph of the words in a sentence. But actually the language system seems optimized and driven to find rich, representational meaning in a string of words processed together,” explains Fedorenko.

Computing Language

When asked about emerging areas of research, Fedorenko points to the data structures and algorithms underlying linguistic processing. Modern computational models can perform sophisticated tasks, including translation, ever more effectively. Consider Google Translate: a decade ago, the system translated one word at a time, with laughable results. Now, by treating words as context for one another, the latest artificial translation systems perform far more accurately. Understanding how they resolve meaning could be very revealing.

“Maybe we can link these models to human neural data to both get insights about linguistic computations in the human brain, and maybe help improve artificial systems by making them more human-like,” says Fedorenko.

She is also trying to understand how the system breaks down, how it over-performs, and even more philosophical questions. Can a person who loses language abilities (with aphasia, for example) recover them? This is a very relevant question, given that the language-processing network occupies such specific brain regions. How are some people able to understand 10, 15, or even more languages? Do we need words to have thoughts?

Using a battery of approaches, Fedorenko seems poised to answer some of these questions.