National Academy of Sciences honors cognitive neuroscientist Nancy Kanwisher

MIT neuroscientist and McGovern Investigator Nancy Kanwisher. Photo: Jussi Puikkonen/KNAW

The National Academy of Sciences (NAS) announced today that Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience in MIT’s Department of Brain and Cognitive Sciences, has received the 2022 NAS Award in the Neurosciences for her “pioneering research into the functional organization of the human brain.” The $25,000 prize, established by the Fidia Research Foundation, is presented every three years to recognize “extraordinary contributions to the neuroscience fields.”

“I am deeply honored to receive this award from the NAS,” says Kanwisher, who is also an investigator in MIT’s McGovern Institute and a member of the Center for Brains, Minds and Machines. “It has been a profound privilege, and a total blast, to watch the human brain in action as these data began to reveal an initial picture of the organization of the human mind. But the biggest joy has been the opportunity to work with the incredible group of talented young scientists who actually did the work that this award recognizes.”

A window into the mind

Kanwisher is best known for her landmark insights into how humans recognize and process faces. Psychology had long suggested that recognizing a face might be distinct from general object recognition. But Kanwisher galvanized the field in 1997 with her seminal discovery that the human brain contains a small region specialized to respond only to faces. The region, which Kanwisher termed the fusiform face area (FFA), became activated when subjects viewed images of faces in an MRI scanner, but not when they looked at scrambled faces or control stimuli.

Since her 1997 discovery (now the most highly cited manuscript in its area), Kanwisher and her students have applied similar methods to find brain specializations for the recognition of scenes, the mental states of others, language, and music. Taken together, her research provides a compelling glimpse into the architecture of the brain, and, ultimately, what makes us human.

“Nancy’s work over the past two decades has argued that many aspects of human cognition are supported by specialized neural circuitry, a conclusion that stands in contrast to our subjective sense of a singular mental experience,” says McGovern Institute Director Robert Desimone. “She has made profound contributions to the psychological and cognitive sciences and I am delighted that the National Academy of Sciences has recognized her outstanding achievements.”

One-in-a-million mentor

Beyond the lab, Kanwisher has a reputation as a tireless communicator and mentor who is actively engaged in the policy implications of brain research. The statistics speak for themselves: her 2014 TED talk, “A neural portrait of the human mind,” has been viewed more than a million times online, and her introductory MIT OCW course on the human brain has generated more than nine million views on YouTube.

Nancy Kanwisher works with researchers from her lab in MIT’s Martinos Imaging Center. Photo: Kris Brewer

Kanwisher also has an exceptional track record in training women scientists who have gone on to successful independent research careers, in many cases becoming prominent figures in their own right.

“Nancy is the one-in-a-million mentor, who is always skeptical of your ideas and your arguments, but immensely confident of your worth,” says Rebecca Saxe, John W. Jarve (1978) Professor of Brain and Cognitive Sciences, investigator at the McGovern Institute, and associate dean of MIT’s School of Science. Saxe was a graduate student in Kanwisher’s lab where she earned her PhD in cognitive neuroscience in 2003. “She has such authentic curiosity,” Saxe adds. “It’s infectious and sustaining. Working with Nancy was a constant reminder of why I wanted to be a scientist.”

The NAS will present Kanwisher with the award during its annual meeting on May 1, 2022 in Washington, DC. The event will be webcast live. Kanwisher plans to direct her prize funds to Malengo, a nonprofit organization established by a former student that provides quality undergraduate education to individuals who would otherwise not be able to afford it.

A key brain region responds to faces similarly in infants and adults

Within the visual cortex of the adult brain, a small region is specialized to respond to faces, while nearby regions show strong preferences for bodies or for scenes such as landscapes.

Neuroscientists have long hypothesized that it takes many years of visual experience for these areas to develop in children. However, a new MIT study suggests that these regions form much earlier than previously thought. In a study of babies ranging in age from two to nine months, the researchers identified areas of the infant visual cortex that already show strong preferences for either faces, bodies, or scenes, just as they do in adults.

“These data push our picture of development, making babies’ brains look more similar to adults, in more ways, and earlier than we thought,” says Rebecca Saxe, the John W. Jarve Professor of Brain and Cognitive Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.

Using functional magnetic resonance imaging (fMRI), the researchers collected usable data from more than 50 infants, a far greater number than any research lab has been able to scan before. This allowed them to examine the infant visual cortex in a way that had not been possible until now.

“This is a result that’s going to make a lot of people have to really grapple with their understanding of the infant brain, the starting point of development, and development itself,” says Heather Kosakowski, an MIT graduate student and the lead author of the study, which appears today in Current Biology.

MIT graduate student Heather Kosakowski prepares an infant for an MRI scan at the Martinos Imaging Center. Photo: Caitlin Cunningham

Distinctive regions

More than 20 years ago, Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT, used fMRI to discover the fusiform face area: a small region of the visual cortex that responds much more strongly to faces than any other kind of visual input.

Since then, Kanwisher and her colleagues have also identified parts of the visual cortex that respond to bodies (the extrastriate body area, or EBA), and scenes (the parahippocampal place area, or PPA).

“There is this set of functionally very distinctive regions that are present in more or less the same place in pretty much every adult,” says Kanwisher, who is also a member of MIT’s Center for Brains, Minds, and Machines, and an author of the new study. “That raises all these questions about how these regions develop. How do they get there, and how do you build a brain that has such similar structure in each person?”

One way to try to answer those questions is to investigate when these highly selective regions first develop in the brain. A longstanding hypothesis is that it takes several years of visual experience for these regions to gradually become selective for their specific targets. Scientists who study the visual cortex have found similar selectivity patterns in children as young as 4 or 5 years old, but there have been few studies of children younger than that.

In 2017, Saxe and one of her graduate students, Ben Deen, reported the first successful use of fMRI to study the brains of awake infants. That study, which included data from nine babies, suggested that while infants did have areas that respond to faces and scenes, those regions were not yet highly selective. For example, the fusiform face area did not show a strong preference for human faces over every other kind of input, including human bodies or the faces of other animals.

However, that study was limited by the small number of subjects, and also by its reliance on an fMRI coil that the researchers had developed especially for babies, which did not offer as high-resolution imaging as the coils used for adults.

For the new study, the researchers wanted to try to get better data, from more babies. They built a new scanner that is more comfortable for babies and also more powerful, with resolution similar to that of fMRI scanners used to study the adult brain.

Once inside the specialized scanner, accompanied by a parent, the babies watched videos that showed either faces, body parts such as kicking feet or waving hands, objects such as toys, or natural scenes such as mountains.

The researchers recruited nearly 90 babies for the study and collected usable fMRI data from 52, half of whom contributed higher-resolution data collected with the new coil. Their analysis revealed that specific regions of the infant visual cortex show highly selective responses to faces, body parts, and natural scenes, in the same locations where those responses are seen in the adult brain. The selectivity for natural scenes, however, was not as strong as for faces or body parts.

The infant brain

The findings suggest that scientists’ conception of how the infant brain develops may need to be revised to accommodate the observation that these specialized regions start to resemble those of adults sooner than anyone had expected.

“The thing that is so exciting about these data is that they revolutionize the way we understand the infant brain,” Kosakowski says. “A lot of theories have grown up in the field of visual neuroscience to accommodate the view that you need years of development for these specialized regions to emerge. And what we’re saying is actually, no, you only really need a couple of months.”

Because their data on the area of the brain that responds to scenes was not as strong as for the other locations they looked at, the researchers now plan to pursue additional studies of that region, this time showing babies images on a much larger screen that will more closely mimic the experience of being within a scene. For that study, they plan to use near-infrared spectroscopy (NIRS), a non-invasive imaging technique that doesn’t require the participant to be inside a scanner.

“That will let us ask whether young babies have robust responses to visual scenes that we underestimated in this study because of the visual constraints of the experimental setup in the scanner,” Saxe says.

The researchers are now further analyzing the data they gathered for this study in hopes of learning more about how development of the fusiform face area progresses from the youngest babies they studied to the oldest. They also hope to perform new experiments examining other aspects of cognition, including how babies’ brains respond to language and music.

The research was funded by the National Science Foundation, the National Institutes of Health, the McGovern Institute, and the Center for Brains, Minds, and Machines.

Artificial intelligence sheds light on how the brain processes language

In the past few years, artificial intelligence models of language have become very good at certain tasks. Most notably, they excel at predicting the next word in a string of text; this technology helps search engines and texting apps predict the next word you are going to type.

The most recent generation of predictive language models also appears to learn something about the underlying meaning of language. These models can not only predict the word that comes next, but also perform tasks that seem to require some degree of genuine understanding, such as question answering, document summarization, and story completion.
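The models in the study are large neural networks, but the core task of next-word prediction can be illustrated with a deliberately simple sketch: a bigram model that counts which word tends to follow which, then predicts the most frequent follower. (This toy is only an illustration of the task, not of the models the researchers analyzed.)

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count word-to-next-word transitions in a corpus."""
    words = text.lower().split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def predict_next(table, word):
    """Return the most frequent word observed after `word`, or None."""
    followers = table.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

# Tiny illustrative corpus (hypothetical):
corpus = "the brain responds to faces and the brain responds to scenes"
model = train_bigram(corpus)
print(predict_next(model, "brain"))  # prints "responds"
```

Modern transformer models replace these raw counts with learned representations conditioned on hundreds of words of context, but the objective is the same: assign probabilities to what comes next.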

Such models were designed to optimize performance for the specific function of predicting text, without attempting to mimic anything about how the human brain performs this task or understands language. But a new study from MIT neuroscientists suggests the underlying function of these models resembles the function of language-processing centers in the human brain.

Computer models that perform well on other types of language tasks do not show this similarity to the human brain, offering evidence that the human brain may use next-word prediction to drive language processing.

“The better the model is at predicting the next word, the more closely it fits the human brain,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines (CBMM), and an author of the new study. “It’s amazing that the models fit so well, and it very indirectly suggests that maybe what the human language system is doing is predicting what’s going to happen next.”

Joshua Tenenbaum, a professor of computational cognitive science at MIT and a member of CBMM and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL); and Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience and a member of the McGovern Institute, are the senior authors of the study, which appears this week in the Proceedings of the National Academy of Sciences.

Martin Schrimpf, an MIT graduate student who works in CBMM, is the first author of the paper.

Making predictions

The new, high-performing next-word prediction models belong to a class of models called deep neural networks. These networks contain computational “nodes” that form connections of varying strength, organized into layers that pass information to one another in prescribed ways.

Over the past decade, scientists have used deep neural networks to create models of vision that can recognize objects as well as the primate brain does. Research at MIT has also shown that the underlying function of visual object recognition models matches the organization of the primate visual cortex, even though those computer models were not specifically designed to mimic the brain.

In the new study, the MIT team used a similar approach to compare language-processing centers in the human brain with language-processing models. The researchers analyzed 43 different language models, including several that are optimized for next-word prediction. These include a model called GPT-3 (Generative Pre-trained Transformer 3), which, given a prompt, can generate text similar to what a human would produce. Other models were designed to perform different language tasks, such as filling in a blank in a sentence.

As each model was presented with a string of words, the researchers measured the activity of the nodes that make up the network. They then compared these patterns to activity in the human brain, measured in subjects performing three language tasks: listening to stories, reading sentences one at a time, and reading sentences in which one word is revealed at a time. These human datasets included functional magnetic resonance imaging (fMRI) data and intracranial electrocorticographic measurements taken in people undergoing brain surgery for epilepsy.

They found that the best-performing next-word prediction models had activity patterns that very closely resembled those seen in the human brain. Activity in those same models was also highly correlated with human behavioral measures, such as how fast people were able to read the text.

“We found that the models that predict the neural responses well also tend to best predict human behavior responses, in the form of reading times. And then both of these are explained by the model performance on next-word prediction. This triangle really connects everything together,” Schrimpf says.

“A key takeaway from this work is that language processing is a highly constrained problem: The best solutions to it that AI engineers have created end up being similar, as this paper shows, to the solutions found by the evolutionary process that created the human brain. Since the AI network didn’t seek to mimic the brain directly — but does end up looking brain-like — this suggests that, in a sense, a kind of convergent evolution has occurred between AI and nature,” says Daniel Yamins, an assistant professor of psychology and computer science at Stanford University, who was not involved in the study.

Game changer

One of the key computational features of predictive models such as GPT-3 is an element known as a forward one-way predictive transformer. This kind of transformer is able to make predictions of what is going to come next, based on previous sequences. A significant feature of this transformer is that it can make predictions based on a very long prior context (hundreds of words), not just the last few words.

Scientists have not found any brain circuits or learning mechanisms that correspond to this type of processing, Tenenbaum says. However, the new findings are consistent with previously proposed hypotheses that prediction is one of the key functions in language processing, he says.

“One of the challenges of language processing is the real-time aspect of it,” he says. “Language comes in, and you have to keep up with it and be able to make sense of it in real time.”

The researchers now plan to build variants of these language processing models to see how small changes in their architecture affect their performance and their ability to fit human neural data.

“For me, this result has been a game changer,” Fedorenko says. “It’s totally transforming my research program, because I would not have predicted that in my lifetime we would get to these computationally explicit models that capture enough about the brain so that we can actually leverage them in understanding how the brain works.”

The researchers also plan to try to combine these high-performing language models with some computer models Tenenbaum’s lab has previously developed that can perform other kinds of tasks such as constructing perceptual representations of the physical world.

“If we’re able to understand what these language models do and how they can connect to models which do things that are more like perceiving and thinking, then that can give us more integrative models of how things work in the brain,” Tenenbaum says. “This could take us toward better artificial intelligence models, as well as giving us better models of how more of the brain works and how general intelligence emerges, than we’ve had in the past.”

The research was funded by a Takeda Fellowship; the MIT Shoemaker Fellowship; the Semiconductor Research Corporation; the MIT Media Lab Consortia; the MIT Singleton Fellowship; the MIT Presidential Graduate Fellowship; the Friends of the McGovern Institute Fellowship; the MIT Center for Brains, Minds, and Machines, through the National Science Foundation; the National Institutes of Health; MIT’s Department of Brain and Cognitive Sciences; and the McGovern Institute.

Other authors of the paper are Idan Blank PhD ’16 and graduate students Greta Tuckute, Carina Kauf, and Eghbal Hosseini.

Storytelling brings MIT neuroscience community together

When the coronavirus pandemic shut down offices, labs, and classrooms across the MIT campus last spring, many members of the MIT community found it challenging to remain connected to one another in meaningful ways. Motivated by a desire to bring the neuroscience community back together, the McGovern Institute hosted a virtual storytelling competition featuring a selection of postdocs, grad students, and staff from across the institute.

“This has been an unprecedented year for us all,” says McGovern Institute Director Robert Desimone. “It has been twenty years since Pat and Lore McGovern founded the McGovern Institute, and despite the challenges this anniversary year has brought to our community, I have been inspired by the strength and perseverance demonstrated by our faculty, postdocs, students and staff. The resilience of this neuroscience community – and MIT as a whole – is indeed something to celebrate.”

The McGovern Institute had initially planned to hold a large 20th anniversary celebration in the atrium of Building 46 in the fall of 2020, but the pandemic made a gathering of this size impossible. The institute instead held a series of virtual events, including the November 12 story slam on the theme of resilience.

Face-specific brain area responds to faces even in people born blind

More than 20 years ago, neuroscientist Nancy Kanwisher and others discovered that a small section of the brain located near the base of the skull responds much more strongly to faces than to other objects we see. This area, known as the fusiform face area, is believed to be specialized for identifying faces.

Now, in a surprising new finding, Kanwisher and her colleagues have shown that this same region also becomes active in people who have been blind since birth, when they touch a three-dimensional model of a face with their hands. The finding suggests that this area does not require visual experience to develop a preference for faces.

“That doesn’t mean that visual input doesn’t play a role in sighted subjects — it probably does,” she says. “What we showed here is that visual input is not necessary to develop this particular patch, in the same location, with the same selectivity for faces. That was pretty astonishing.”

Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience and a member of MIT’s McGovern Institute for Brain Research, is the senior author of the study. N. Apurva Ratan Murty, an MIT postdoc, is the lead author of the study, which appears this week in the Proceedings of the National Academy of Sciences. Other authors of the paper include Santani Teng, a former MIT postdoc; Aude Oliva, a senior research scientist, co-director of the MIT Quest for Intelligence, and MIT director of the MIT-IBM Watson AI Lab; and David Beeler and Anna Mynick, both former lab technicians.

Selective for faces

Studying people who were born blind allowed the researchers to tackle longstanding questions regarding how specialization arises in the brain. In this case, they were specifically investigating face perception, but the same unanswered questions apply to many other aspects of human cognition, Kanwisher says.

“This is part of a broader question that scientists and philosophers have been asking themselves for hundreds of years, about where the structure of the mind and brain comes from,” she says. “To what extent are we products of experience, and to what extent do we have built-in structure? This is a version of that question asking about the particular role of visual experience in constructing the face area.”

The new work builds on a 2017 study from researchers in Belgium. In that study, congenitally blind subjects were scanned with functional magnetic resonance imaging (fMRI) as they listened to a variety of sounds, some related to faces (such as laughing or chewing), and others not. That study found higher responses in the vicinity of the FFA to face-related sounds than to sounds such as a ball bouncing or hands clapping.

In the new study, the MIT team wanted to use tactile experience to measure more directly how the brains of blind people respond to faces. They created a ring of 3D-printed objects that included faces, hands, chairs, and mazes, and rotated them so that the subject could handle each one while in the fMRI scanner.

They began with normally sighted subjects and found that when they handled the 3D objects, a small area that corresponded to the location of the FFA was preferentially active when the subjects touched the faces, compared to when they touched other objects. This activity, which was weaker than the signal produced when sighted subjects looked at faces, was not surprising to see, Kanwisher says.

“We know that people engage in visual imagery, and we know from prior studies that visual imagery can activate the FFA. So the fact that you see the response with touch in a sighted person is not shocking because they’re visually imagining what they’re feeling,” she says.

The researchers then performed the same experiments, using tactile input only, with 15 subjects who reported being blind since birth. To their surprise, they found that the brain showed face-specific activity in the same area as the sighted subjects, at levels similar to when sighted people handled the 3D-printed faces.

“When we saw it in the first few subjects, it was really shocking, because no one had seen individual face-specific activations in the fusiform gyrus in blind subjects previously,” Murty says.

Patterns of connection

The researchers also explored several hypotheses that have been put forward to explain why face-selectivity always seems to develop in the same region of the brain. One prominent hypothesis suggests that the FFA develops face-selectivity because it receives visual input from the fovea (the center of the retina), and we tend to focus on faces at the center of our visual field. However, since this region developed in blind people with no foveal input, the new findings do not support this idea.

Another hypothesis is that the FFA has a natural preference for curved shapes. To test that idea, the researchers performed another set of experiments in which they asked the blind subjects to handle a variety of 3D-printed shapes, including cubes, spheres, and eggs. They found that the FFA did not show any preference for the curved objects over the cube-shaped objects.

The researchers did find evidence for a third hypothesis, which is that face selectivity arises in the FFA because of its connections to other parts of the brain. They were able to measure the FFA’s “connectivity fingerprint” — a measure of the correlation between activity in the FFA and activity in other parts of the brain — in both blind and sighted subjects.

They then used the data from each group to train a computer model to predict the exact location of the brain’s selective response to faces based on the FFA connectivity fingerprint. They found that when the model was trained on data from sighted subjects, it could accurately predict the results in blind subjects, and vice versa. They also found evidence that connections to the frontal and parietal lobes of the brain, which are involved in high-level processing of sensory information, may be the most important in determining the role of the FFA.
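The fingerprint-based prediction can be caricatured as a similarity search: describe each cortical patch by its correlations with a few distant regions, then assign a held-out patch the selectivity of the most similar training patch. Everything below is hypothetical (the patch names, the three-region fingerprints, the selectivity values), and the study's actual models were more sophisticated, but the sketch conveys how connectivity alone can carry the prediction.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two connectivity fingerprints."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Hypothetical training data (sighted group): each patch's correlation with
# (frontal, parietal, early-visual) regions, plus its face selectivity.
train = {
    "patch_A": ([0.8, 0.7, 0.1], 0.9),   # FFA-like connectivity, face-selective
    "patch_B": ([0.2, 0.1, 0.9], 0.1),   # early-visual-like, not selective
}

def predict_selectivity(fingerprint):
    """Assign the selectivity of the most similar training patch."""
    best = max(train.values(), key=lambda fv: cosine(fingerprint, fv[0]))
    return best[1]

# A held-out patch (e.g., from a blind subject) with FFA-like connectivity:
print(predict_selectivity([0.7, 0.8, 0.2]))  # prints 0.9
```

The cross-group result, where a model trained on sighted subjects predicts blind subjects and vice versa, is what suggests the fingerprints themselves, not visual experience, carry the relevant information.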

“It’s suggestive of this very interesting story that the brain wires itself up in development not just by taking perceptual information and doing statistics on the input and allocating patches of brain, according to some kind of broadly agnostic statistical procedure,” Kanwisher says. “Rather, there are endogenous constraints in the brain present at birth, in this case, in the form of connections to higher-level brain regions, and these connections are perhaps playing a causal role in its development.”

The research was funded by the National Institutes of Health Shared Instrumentation Grant to the Athinoula Martinos Center at MIT, a National Eye Institute Training Grant, the Smith-Kettlewell Eye Research Institute’s Rehabilitation Engineering Research Center, an Office of Naval Research Vannevar Bush Faculty Fellowship, an NIH Pioneer Award, and a National Science Foundation Science and Technology Center Grant.

Full paper at PNAS

Learning from social isolation

“Livia Tomova, a postdoc in the Saxe Lab, recently completed a study about social isolation and its impact on the brain. Michelle Hung and I had a lot of exposure to her research in the lab. When “social distancing” measures hit MIT, we tried to process how the implementation of these policies would impact the landscape of our social lives.

We came up with some hypotheses and agreed that the coronavirus pandemic would fundamentally change life as we know it.

So we developed a survey to measure how the social behavior of MIT students, postdocs, and staff changes over the course of the pandemic. Our study is still in its very early stages, but it has been an incredibly fulfilling experience to be a part of Michelle’s development as a scientist.

Heather Kosakowski’s daughter in Woods Hole, Massachusetts. Photo: Heather Kosakowski

After the undergraduates left, graduate students were also strongly urged to leave graduate student housing. My daughter (age 11) and I live in a 28th-floor apartment and her school was canceled. One of my advisors, Nancy Kanwisher, had a vacant apartment in Woods Hole that she offered to let lab members stay in. As more and more resources for children were being closed or shut down, I decided to take her up on the offer. Woods Hole is my daughter’s absolute favorite place and I feel extremely lucky to have such a generous option. My daughter has been coping really well with all of these changes.

While my research is at an exciting stage, I miss being on campus with the students from my cohort and my lab mates and my weekly in-person meetings with my advisors. One way I’ve been coping with this reality is by listening to stories of other people’s experiences. We are all human and we are all in the midst of a pandemic, but we are all experiencing the pandemic in different ways. I find the diversity of our experience intriguing. I have been fortunate to have friends write stories about their experiences, so that I can post them on my blog. I only have a handful of stories right now, but it has been really fun for me to listen, and humbling for me to share each individual’s unique experience.”

Heather Kosakowski is a graduate student in the labs of Rebecca Saxe and Nancy Kanwisher where she studies the infant brain and the developmental origins of object recognition, language, and music. Heather is also a Marine Corps veteran and single mom who manages a blog that “ties together different aspects of my experience, past and present, with the hopes that it might make someone else out there feel less alone.”


Nancy Kanwisher to receive George A. Miller Prize in Cognitive Neuroscience

Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT, has been named this year’s winner of the George A. Miller Prize in Cognitive Neuroscience. The award, given annually by the Cognitive Neuroscience Society (CNS), recognizes individuals “whose distinguished research is at the cutting-edge of their discipline with realized or future potential, to revolutionize cognitive neuroscience.”

Kanwisher studies the functional organization of the human mind and, over the last 20 years, her lab has played a central role in the identification of several dozen regions of the cortex in humans that are engaged in particular components of perception and cognition. She is perhaps best known for identifying brain regions specialized for recognizing faces.

Kanwisher will deliver her prize lecture, “Functional imaging of the human brain: A window into the architecture of the mind” at the 2020 CNS annual meeting in Boston this March.

Our brains appear uniquely tuned for musical pitch

In the eternal search for understanding what makes us human, scientists found that our brains are more sensitive to pitch, the harmonic quality of the sounds we hear in music, than those of our evolutionary relative, the macaque monkey. The study, funded in part by the National Institutes of Health, highlights the promise of Sound Health, a joint project between the NIH and the John F. Kennedy Center for the Performing Arts, in association with the National Endowment for the Arts, that aims to understand the role of music in health.

“We found that a certain region of our brains has a stronger preference for sounds with pitch than macaque monkey brains,” said Bevil Conway, Ph.D., investigator in the NIH’s Intramural Research Program and a senior author of the study published in Nature Neuroscience. “The results raise the possibility that these sounds, which are embedded in speech and music, may have shaped the basic organization of the human brain.”

The study started with a friendly bet between Dr. Conway and Sam Norman-Haignere, Ph.D., a post-doctoral fellow at Columbia University’s Zuckerman Institute for Mind, Brain, and Behavior and the first author of the paper.

At the time, both were working at the Massachusetts Institute of Technology (MIT). Dr. Conway’s team had been searching for differences between how human and monkey brains control vision only to discover that there are very few. Their brain mapping studies suggested that humans and monkeys see the world in very similar ways. But then, Dr. Conway heard about some studies on hearing being done by Dr. Norman-Haignere, who, at the time, was a post-doctoral fellow in the laboratory of Josh H. McDermott, Ph.D., associate professor at MIT.

“I told Bevil that we had a method for reliably identifying a region in the human brain that selectively responds to sounds with pitch,” said Dr. Norman-Haignere. That is when they got the idea to compare humans with monkeys. Based on his earlier studies, Dr. Conway bet that they would see no differences.

To test this, the researchers played a series of harmonic sounds, or tones, to healthy volunteers and monkeys. Meanwhile, functional magnetic resonance imaging (fMRI) was used to monitor brain activity in response to the sounds. The researchers also monitored brain activity in response to toneless noises designed to match the frequency range of each tone played.

At first glance, the scans looked similar and confirmed previous studies. Maps of the auditory cortex of human and monkey brains had similar hot spots of activity regardless of whether the sounds contained tones.

However, when the researchers looked more closely at the data, they found evidence suggesting the human brain was highly sensitive to tones. The human auditory cortex was much more responsive than the monkey cortex when they looked at the relative activity between tones and equivalent noisy sounds.
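The comparison described here, relative activity to tones versus frequency-matched noise, amounts to a simple contrast, or selectivity index. The sketch below is a hypothetical illustration of that idea in Python, not the study's actual analysis; the response values and the `tone_selectivity` helper are invented for demonstration.

```python
import numpy as np

# Hypothetical fMRI responses (percent signal change) in an auditory region,
# one value per stimulus presentation. All numbers are illustrative only.
tone_responses = np.array([1.8, 2.1, 1.9, 2.2])   # harmonic tones
noise_responses = np.array([1.0, 0.9, 1.1, 1.0])  # frequency-matched noise

def tone_selectivity(tone, noise):
    """Contrast index: (mean tone - mean noise) / (mean tone + mean noise).

    0 means no preference for tones over matched noise; values approaching 1
    mean the region responds much more strongly to tones.
    """
    t, n = np.mean(tone), np.mean(noise)
    return (t - n) / (t + n)

print(round(tone_selectivity(tone_responses, noise_responses), 3))  # -> 0.333
```

On a measure like this, the study's finding would correspond to the index being substantially higher in human auditory cortex than in the macaque's, even though the raw responses to each frequency band are similar.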

“We found that human and monkey brains had very similar responses to sounds in any given frequency range. It’s when we added tonal structure to the sounds that some of these same regions of the human brain became more responsive,” said Dr. Conway. “These results suggest the macaque monkey may experience music and other sounds differently. In contrast, the macaque’s experience of the visual world is probably very similar to our own. It makes one wonder what kind of sounds our evolutionary ancestors experienced.”

Further experiments supported these results. Slightly raising the volume of the tonal sounds had little effect on the tone sensitivity observed in the brains of two monkeys.

Finally, the researchers saw similar results when they used sounds that contained more natural harmonies for monkeys by playing recordings of macaque calls. Brain scans showed that the human auditory cortex was much more responsive than the monkey cortex when they compared relative activity between the calls and toneless, noisy versions of the calls.

“This finding suggests that speech and music may have fundamentally changed the way our brain processes pitch,” said Dr. Conway. “It may also help explain why it has been so hard for scientists to train monkeys to perform auditory tasks that humans find relatively effortless.”

Earlier this year, other scientists from around the U.S. applied for the first round of NIH Sound Health research grants. Some of these grants may eventually support scientists who plan to explore how music turns on the circuitry of the auditory cortex that makes our brains sensitive to musical pitch.

This study was supported by the NINDS, NEI, NIMH, and NIA Intramural Research Programs and grants from the NIH (EY13455; EY023322; EB015896; RR021110), the National Science Foundation (Grant 1353571; CCF-1231216), the McDonnell Foundation, and the Howard Hughes Medical Institute.

Can we think without language?

As part of our Ask the Brain series, Anna Ivanova, a graduate student who studies how the brain processes language in the labs of Nancy Kanwisher and Evelina Fedorenko, answers the question, “Can we think without language?”

Graduate student Anna Ivanova studies language processing in the brain.


Imagine a woman – let’s call her Sue. One day Sue suffers a stroke that destroys large areas of brain tissue in her left hemisphere. As a result, she develops a condition known as global aphasia, meaning she can no longer produce or understand phrases and sentences. The question is: to what extent are Sue’s thinking abilities preserved?

Many writers and philosophers have drawn a strong connection between language and thought. Oscar Wilde called language “the parent, and not the child, of thought.” Ludwig Wittgenstein claimed that “the limits of my language mean the limits of my world.” And Bertrand Russell stated that the role of language is “to make possible thoughts which could not exist without it.” Given this view, Sue should have irreparable damage to her cognitive abilities when she loses access to language. Do neuroscientists agree? Not quite.

Neuroimaging evidence has revealed a specialized set of regions within the human brain that respond strongly and selectively to language.

This language system seems to be distinct from regions that are linked to our ability to plan, remember, reminisce about the past and imagine the future, reason in social situations, experience empathy, make moral decisions, and construct our self-image. Thus, vast portions of our everyday cognitive experience appear to be unrelated to language per se.

But what about Sue? Can she really think the way we do?

While we cannot directly measure what it’s like to think like a neurotypical adult, we can probe Sue’s cognitive abilities by asking her to perform a variety of tasks. It turns out that patients with global aphasia can solve arithmetic problems, reason about the intentions of others, and engage in complex causal reasoning tasks. They can tell whether a drawing depicts a real-life event and laugh when it doesn’t. Some of them play chess in their spare time. Some even engage in creative tasks – the composer Vissarion Shebalin continued to write music even after a stroke that left him severely aphasic.

Some readers might find these results surprising, given that their own thoughts seem to be tied to language so closely. If you find yourself in that category, I have a surprise for you – research has established that not everybody has inner speech experiences. A bilingual friend of mine sometimes gets asked if she thinks in English or Polish, but she doesn’t quite get the question (“how can you think in a language?”). Another friend of mine claims that he “thinks in landscapes,” a sentiment that conveys the pictorial nature of some people’s thoughts. Therefore, even inner speech does not appear to be necessary for thought.

Have we solved the mystery then? Can we claim that language and thought are completely independent and Bertrand Russell was wrong? Only to some extent. We have shown that damage to the language system within an adult human brain leaves most other cognitive functions intact. However, when it comes to the language-thought link across the entire lifespan, the picture is far less clear. While available evidence is scarce, it does indicate that some of the cognitive functions discussed above are, at least to some extent, acquired through language.

Perhaps the clearest case is numbers. Certain tribes around the world speak languages that do not have number words – some have words only for quantities one through five (Munduruku), and some lack even those (Pirahã). Speakers of Pirahã have been shown to make mistakes on one-to-one matching tasks (“get as many sticks as there are balls”), suggesting that language plays an important role in bootstrapping exact number manipulations.

Another way to examine the influence of language on cognition over time is by studying cases when language access is delayed. Deaf children born into hearing families often do not get exposure to sign languages for the first few months or even years of life; such language deprivation has been shown to impair their ability to engage in social interactions and reason about the intentions of others. Thus, while the language system may not be directly involved in the process of thinking, it is crucial for acquiring enough information to properly set up various cognitive domains.

Even after her stroke, our patient Sue will have access to a wide range of cognitive abilities. She will be able to think by drawing on neural systems underlying many non-linguistic skills, such as numerical cognition, planning, and social reasoning. It is worth bearing in mind, however, that at least some of those systems might have relied on language back when Sue was a child. While the static view of the human mind suggests that language and thought are largely disconnected, the dynamic view hints at a rich nature of language-thought interactions across development.


Do you have a question for The Brain? Ask it here.

3Q: The interface between art and neuroscience

CBMM postdoc Sarah Schwettmann

Computational neuroscientist Sarah Schwettmann, who works in the Center for Brains, Minds, and Machines at the McGovern Institute, is one of three instructors behind the cross-disciplinary course 9.S52/9.S916 (Vision in Art and Neuroscience), which introduces students to core concepts in visual perception through the lenses of art and neuroscience.

Supported by a faculty grant from the Center for Art, Science and Technology at MIT (CAST) for the past two years, the class is led by Pawan Sinha, a professor of vision and computational neuroscience in the Department of Brain and Cognitive Sciences. They are joined in the course by Seth Riskin SM ’89, a light artist and the manager of the MIT Museum Studio and Compton Gallery, where the course is taught. Schwettman discussed the combination of art and science in an educational setting.

Q: How have the three of you approached this cross-disciplinary class in art and neuroscience?

A: Discussions around this intersection often consider what each field has to offer the other. We take a different approach, one I refer to as occupying the gap, or positioning ourselves between the two fields and asking what essential questions underlie them both. One question addresses the nature of the human relationship to the world. The course suggests one answer: This relationship is fundamentally creative, from the brain’s interpretation of incoming sensory data in perception, to the explicit construction of experiential worlds in art.

Neuroscience and art, therefore, each provide a set of tools for investigating different levels of the constructive process. Through neuroscience, we develop a specific understanding of the models of the world that the brain uses to make sense of incoming visual data. With articulation of those models, we can engineer types of inputs that interact with visual processing architecture in particularly exquisite ways, and do so reliably, giving artists a toolkit for remixing and modulating experience. In the studio component of the course, we experiment with this toolkit and collectively move it forward.

While designing the course, Pawan, Seth, and I found that we were each addressing a similar set of questions, the same that motivate the class, through our own research and practice. In parallel to computational vision research, Professor Sinha leads a humanitarian initiative called Project Prakash, which provides treatment to blind children in India and explores the development of vision following the restoration of sight. Where does structure in perception originate? As an artist in the MIT Museum Studio, Seth works with articulated light to sculpt structured visual worlds out of darkness. I also live on this interface where the brain meets the world — my research in the Department of Brain and Cognitive Sciences examines the neural basis of mental models for simulating physics. Linking our work in the course is an experiment in synthesis.

Q: What current research in vision, neuroscience, and art is being explored at MIT, and how does the class connect it to hands-on practice?

A: Our brains build a rich world of experience and expectation from limited and noisy sensory data with infinite potential interpretations. In perception research, we seek to discover how the brain finds more meaning in incoming data than is explained by the signal alone. Work being done at MIT around generative models addresses this, for instance in the labs of Josh Tenenbaum and Josh McDermott in the Department of Brain and Cognitive Sciences. Researchers present an ambiguous visual or auditory stimulus and by probing someone’s perceptual interpretation, they get a handle on the structures that the mind generates to interpret incoming data, and they can begin to build computational models of the process.

In Vision in Art and Neuroscience, we focus on the experiential as well as the experimental, probing the perceiver’s experience of the structure-generating process—perceiving perception itself.

As instructors, we face the pedagogical question: what exercises, in the studio, can evoke so striking an experience of students’ own perception that cutting-edge research takes on new meaning, understood in the immediacy of seeing? Later in the semester, students face a similar question as artists: How can one create visual environments where viewers experience their own perceptual processing at work? Done well, this experience becomes the artwork itself. Early in the course, students explore the Ganzfeld effect, popularized by artist James Turrell, in which the viewer is exposed to an unstructured visual field of uniform illumination. In this experience, one feels the mind struggling to fit models of the world to unstructured input, and attempting this over and over again — an interpretation process that often goes unnoticed when input structure is expected by visual processing architecture. The progression of the course modules follows the hierarchy of visual processing in the brain, which builds increasingly complex interpretations of visual inputs, from brightness and edges to depth, color, and recognizable form.

MIT students first encounter those concepts in the seminar component of the course at the beginning of each week. Later in the week, students translate findings into experimental approaches in the studio. We work with light directly, from introducing a single pinpoint of light into an otherwise completely dark room, to building intricate environments using programmable electronics. Students begin to take this work into their own hands, in small groups and individually, culminating in final projects for exhibition. These exhibitions are truly a highlight of the course. They’re often one of the first times that students have built and shown artworks. That’s been a gift to share with the broader MIT community, and a great learning experience for students and instructors alike.

Q: How has that approach been received by the MIT community?

A: What we’re doing has resonated across disciplines: In addition to neuroscience, we have students and researchers joining us from computer science, mechanical engineering, mathematics, the Media Lab, and ACT (the Program in Art, Culture, and Technology). The course is growing into something larger, a community of practice interested in applying the scientific methodology we develop to study the world, to probe experience, and to articulate models for its generation and replication.

With a mix of undergraduates, graduates, faculty, and artists, we’ve put together installations and symposia — including three on campus so far. The first of these, “Perceiving Perception,” also led to a weekly open studio night where students and collaborators convene for project work. Our second exhibition, “Dessert of the Real,” is on display this spring in the Compton Gallery. This April we’re organizing a symposium in the studio featuring neuroscientists, computer scientists, artists and researchers from MIT and Harvard. We’re reaching beyond campus as well, through off-site installations, collaborations with museums — including the Metropolitan Museum of Art and the Peabody Essex Museum — and a partnership with the ZERO Group in Germany.

We’re eager to involve a broad network of collaborators. It’s an exciting moment in the fields of neuroscience and computing; there is great energy to build technologies that perceive the world like humans do. We stress on the first day of class that perception is a fundamentally creative act. We see the potential for models of perception to themselves be tools for scaling and translating creativity across domains, and for building a deeply creative relationship to our environment.