What is the social brain?

As part of our Ask the Brain series, Anila D’Mello, a postdoctoral fellow in John Gabrieli’s lab, answers the question, “What is the social brain?”

_____

Anila D’Mello is the Simons Center for the Social Brain Postdoctoral Fellow in John Gabrieli’s lab at the McGovern Institute.

“Knock Knock.”
“Who’s there?”
“The Social Brain.”
“The Social Brain, who?”

Call and response jokes, like the “Knock Knock” joke above, leverage our common understanding of how a social interaction typically proceeds. Joke telling allows us to interact socially with others based on our shared experiences and understanding of the world. But where do these abilities “live” in the brain and how does the social brain develop?

Neuroimaging and lesion studies have identified a network of brain regions that support social interaction, including the ability to understand and partake in jokes – we refer to this as the “social brain.” This social brain network is made up of multiple regions throughout the brain that together support complex social interactions. Within this network, each region likely contributes to a specific type of social processing. The right temporo-parietal junction, for instance, is important for thinking about another person’s mental state, whereas the amygdala is important for the interpretation of emotional facial expressions and fear processing. Damage to these brain regions can have striking effects on social behaviors. One recent study even found that individuals with bigger amygdala volumes had larger and more complex social networks!

Though social interaction is such a fundamental human trait, we aren’t born with a prewired social brain.

Much of our social ability is grown and honed over time through repeated social interactions. Brain networks that support social interaction continue to specialize into adulthood. Neuroimaging work suggests that though newborn infants may have all the right brain parts to support social interaction, these regions may not yet be specialized or connected in the right way. This means that early experiences and environments can have large influences on the social brain. For instance, social neglect, especially very early in development, can have negative impacts on social behaviors and on how the social brain is wired. One prominent example is that of children raised in orphanages or institutions, who are sometimes faced with limited adult interaction or access to language. Children raised in these conditions are more likely to have social challenges including difficulties forming attachments. Prolonged lack of social stimulation also alters the social brain in these children resulting in changes in amygdala size and connections between social brain regions.

The social brain is not just a result of our environment. Genetics and biology also contribute to the social brain in ways we don’t yet fully understand. For example, autistic individuals may experience difficulties with social interaction and communication. This may include challenges with things like understanding the punchline of a joke. These challenges have led to the hypothesis that there may be differences in the social brain network in autism. However, despite documented behavioral differences in social tasks, there is conflicting brain imaging evidence as to whether the social brain network differs between people with and without autism.

Examples such as that of autism imply that the reality of the social brain is probably much more complex than the story painted here. It is likely that social interaction calls upon many different parts of the brain, even beyond those that we have termed the “social brain,” that must work in concert to support this highly complex set of behaviors. These include regions of the brain important for listening, seeing, speaking, and moving. In addition, it’s important to remember that the social brain and regions that make it up do not stand alone. Regions of the social brain also play an intimate role in language, humor, and other cognitive processes.

“Knock Knock”
“Who’s there?”
“The Social Brain”
“The Social Brain, who?”
“I just told you…didn’t you read what I wrote?”

Anila D’Mello earned her bachelor’s degree in psychology from Georgetown University in 2012, and went on to receive her PhD in Behavior, Cognition, and Neuroscience from American University in 2017. She joined the Gabrieli lab as a postdoc in 2017 and studies the neural correlates of social communication in autism.

_____

Do you have a question for The Brain? Ask it here.

Perception of musical pitch varies across cultures

People who are accustomed to listening to Western music, which is based on a system of notes organized in octaves, can usually perceive the similarity between notes that are the same but played in different registers — say, high C and middle C. However, a longstanding question is whether this is a universal phenomenon or one that has been ingrained by musical exposure.

This question has been hard to answer, in part because of the difficulty in finding people who have not been exposed to Western music. Now, a new study led by researchers from MIT and the Max Planck Institute for Empirical Aesthetics has found that unlike residents of the United States, people living in a remote area of the Bolivian rainforest usually do not perceive the similarities between two versions of the same note played at different registers (high or low).

“We’re finding that … there seems to be really striking variation in things that a lot of people would have presumed would be common across cultures and listeners,” says McDermott.

The findings suggest that although there is a natural mathematical relationship between the frequencies of every “C,” no matter what octave it’s played in, the brain only becomes attuned to those similarities after hearing music based on octaves, says Josh McDermott, an associate professor in MIT’s Department of Brain and Cognitive Sciences.

“It may well be that there is a biological predisposition to favor octave relationships, but it doesn’t seem to be realized unless you are exposed to music in an octave-based system,” says McDermott, who is also a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds and Machines.

The study also found that members of the Bolivian tribe, known as the Tsimane’, and Westerners do have a very similar upper limit on the frequency of notes that they can accurately distinguish, suggesting that that aspect of pitch perception may be independent of musical experience and biologically determined.

McDermott is the senior author of the study, which appears in the journal Current Biology on Sept. 19. Nori Jacoby, a former MIT postdoc who is now a group leader at the Max Planck Institute for Empirical Aesthetics, is the paper’s lead author. Other authors are Eduardo Undurraga, an assistant professor at the Pontifical Catholic University of Chile; Malinda McPherson, a graduate student in the Harvard/MIT Program in Speech and Hearing Bioscience and Technology; Joaquin Valdes, a graduate student at the Pontifical Catholic University of Chile; and Tomas Ossandon, an assistant professor at the Pontifical Catholic University of Chile.

Octaves apart

Cross-cultural studies of how music is perceived can shed light on the interplay between biological constraints and cultural influences that shape human perception. McDermott’s lab has performed several such studies with the participation of Tsimane’ tribe members, who live in relative isolation from Western culture and have had little exposure to Western music.

In a study published in 2016, McDermott and his colleagues found that Westerners and Tsimane’ had different aesthetic reactions to chords, or combinations of notes. To Western ears, the combination of C and F# is very grating, but Tsimane’ listeners rated this chord just as likeable as other chords that Westerners would interpret as more pleasant, such as C and G.
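One way to see the contrast between these two chords is through their frequency ratios. In equal temperament, each semitone multiplies frequency by the twelfth root of two, so C to G (seven semitones) lands almost exactly on the simple 3:2 ratio, while C to F# (six semitones) lands on the square root of two, far from any simple ratio. The following is an illustrative sketch of that arithmetic, not code from the study:

```python
# Equal temperament: each of the 12 semitones in an octave multiplies
# frequency by 2**(1/12), so an interval of n semitones has ratio 2**(n/12).
def interval_ratio(semitones):
    return 2 ** (semitones / 12)

print(round(interval_ratio(7), 3))  # C to G, a perfect fifth: ~1.498, nearly the simple ratio 3:2
print(round(interval_ratio(6), 3))  # C to F#, a tritone: ~1.414 (the square root of 2)
```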

Later, Jacoby and McDermott found that both Westerners and Tsimane’ are drawn to musical rhythms composed of simple integer ratios, but the ratios they favor are different, based on which rhythms are more common in the music they listen to.

In their new study, the researchers studied pitch perception using an experimental design in which they played a very simple tune, only two or three notes, and then asked the listener to sing it back. The notes that were played could come from any octave within the range of human hearing, but listeners sang their responses within their vocal range, usually restricted to a single octave.

Eduardo Undurraga, an assistant professor at the Pontifical Catholic University of Chile, runs a musical pitch perception experiment with a member of the Tsimane’ tribe of the Bolivian rainforest. Photo: Josh McDermott

Western listeners, especially those who were trained musicians, tended to reproduce the tune an exact number of octaves above or below what they heard, though they were not specifically instructed to do so. In Western music, the frequency of a note doubles with each ascending octave, so tones with frequencies of 27.5 hertz, 55 hertz, 110 hertz, 220 hertz, and so on, are all heard as the note A.
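This doubling rule is easy to verify numerically. The sketch below (illustrative only, not code from the study) lists the A’s of successive octaves and checks whether two frequencies are a whole number of octaves apart:

```python
import math

def a_frequencies(base_hz=27.5, octaves=5):
    """Frequencies heard as the note A: each ascending octave doubles the frequency."""
    return [base_hz * 2 ** k for k in range(octaves)]

def octaves_apart(f1_hz, f2_hz):
    """True if the two frequencies are an exact whole number of octaves apart."""
    n = math.log2(f2_hz / f1_hz)
    return abs(n - round(n)) < 1e-9

print(a_frequencies())          # [27.5, 55.0, 110.0, 220.0, 440.0]
print(octaves_apart(110, 440))  # True: two octaves apart
print(octaves_apart(110, 330))  # False: a ratio of 3 is not a power of 2
```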

Western listeners in the study, all of whom lived in New York or Boston, accurately reproduced sequences such as A-C-A, but in a different register, as though they heard the similarity of notes separated by octaves. However, the Tsimane’ did not.

“The relative pitch was preserved (between notes in the series), but the absolute pitch produced by the Tsimane’ didn’t have any relationship to the absolute pitch of the stimulus,” Jacoby says. “That’s consistent with the idea that perceptual similarity is something that we acquire from exposure to Western music, where the octave is structurally very important.”

The ability to reproduce the same note in different octaves may be honed by singing along with others whose natural registers are different, or singing along with an instrument being played in a different pitch range, Jacoby says.

Limits of perception

The study findings also shed light on the upper limits of pitch perception for humans. It has long been known that Western listeners cannot accurately distinguish pitches above about 4,000 hertz, although they can still hear frequencies up to nearly 20,000 hertz. On a traditional 88-key piano, the highest note is about 4,100 hertz.

People have speculated that the piano was designed to go only that high because of a fundamental limit on pitch perception, but McDermott thought it could be possible that the opposite was true: That is, the limit was culturally influenced by the fact that few musical instruments produce frequencies higher than 4,000 hertz.

The researchers found that although Tsimane’ musical instruments usually have upper limits much lower than 4,000 hertz, Tsimane’ listeners could distinguish pitches very well up to about 4,000 hertz, as evidenced by accurate sung reproductions of those pitch intervals. Above that threshold, their perceptions broke down, very similarly to Western listeners.

“It looks almost exactly the same across groups, so we have some evidence for biological constraints on the limits of pitch,” Jacoby says.

One possible explanation for this limit is that once frequencies reach about 4,000 hertz, the firing rates of the neurons of our inner ear can’t keep up and we lose a critical cue with which to distinguish different frequencies.

“The new study contributes to the age-old debate about the interplay between culture and biological constraints in music,” says Daniel Pressnitzer, a senior research scientist at Paris Descartes University, who was not involved in the research. “This unique, precious, and extensive dataset demonstrates both striking similarities and unexpected differences in how Tsimane’ and Western listeners perceive or conceive musical pitch.”

Jacoby and McDermott now hope to expand their cross-cultural studies to other groups who have had little exposure to Western music, and to perform more detailed studies of pitch perception among the Tsimane’.

Such studies have already shown the value of including research participants other than the Western-educated, relatively wealthy college undergraduates who are the subjects of most academic studies on perception, McDermott says. These broader studies allow researchers to tease out different elements of perception that cannot be seen when examining only a single, homogenous group.

“We’re finding that there are some cross-cultural similarities, but there also seems to be really striking variation in things that a lot of people would have presumed would be common across cultures and listeners,” McDermott says. “These differences in experience can lead to dissociations of different aspects of perception, giving you clues to what the parts of the perceptual system are.”

The research was funded by the James S. McDonnell Foundation, the National Institutes of Health, and the Presidential Scholar in Society and Neuroscience Program at Columbia University.

Can I rewire my brain?

As part of our Ask the Brain series, Halie Olson, a graduate student in the labs of John Gabrieli and Rebecca Saxe, pens her answer to the question, “Can I rewire my brain?”

_____

Yes, kind of, sometimes – it all depends on what you mean by “rewiring” the brain.

Halie Olson, a graduate student in the Gabrieli and Saxe labs.

If you’re asking whether you can remove all memories of your ex from your head, then no. (That’s probably for the best – just watch Eternal Sunshine of the Spotless Mind.) However, if you’re asking whether you can teach an old dog new tricks – ones that have a physical implementation in the brain – then yes.

To embrace the analogy that “rewiring” alludes to, let’s imagine you live in an old house with outlets in less-than-optimal locations. You really want your brand-new TV to be plugged in on the far side of the living room, but there is no outlet to be found. So you call up your electrician, she pops over, and moves some wires around in the living room wall to give you a new outlet. No sweat!

Local changes in neural connectivity happen throughout the lifespan. With over 100 billion neurons and 100 trillion connections – or synapses – between these neurons in the adult human brain, it is unsurprising that some pathways end up being more important than others. When we learn something new, the connections between relevant neurons communicating with each other are strengthened. To paraphrase Donald Hebb, one of the most influential psychologists of the twentieth century, “neurons that fire together, wire together” – by forming new synapses or more efficiently connecting the ones that are already there. This ability to rewire neural connections at a local level is a key feature of the brain, enabling us to tailor our neural infrastructure to our needs.
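Hebb’s principle is often illustrated with a toy update rule: a connection’s weight grows only when both neurons are active at the same time. The following is a deliberately simplified sketch of that idea, not a biological model:

```python
def hebbian_update(weight, pre, post, rate=0.1):
    """Simplest Hebbian rule: strengthen the synapse only when the
    presynaptic and postsynaptic neurons are active (1) together."""
    return weight + rate * pre * post

w = 0.5
w = hebbian_update(w, pre=1, post=1)  # both neurons fire: the connection strengthens
w = hebbian_update(w, pre=1, post=0)  # only one fires: the weight is unchanged
print(round(w, 1))  # 0.6
```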

Plasticity in our brain allows us to learn, adjust, and thrive in our environments.

We can also see this plasticity in the brain at a larger scale. My favorite example of “rewiring” in the brain is when children learn to read. Our brains did not evolve to enable us to read – there is no built-in “reading region” that magically comes online when a child enters school. However, if you stick a proficient reader in an MRI scanner, you will see a region in the left lateral occipitotemporal sulcus (that is, the back bottom left of your cortex) that is particularly active when you read written text. Before children learn to read, this region – known as the visual word form area – is not exceptionally interested in words, but as children get acquainted with written language and start connecting letters with sounds, it becomes selective for familiar written language – no matter the font, CaPItaLIZation, or size.

Now, let’s say that you wake up in the middle of the night with a desire to move your oven and stovetop from the kitchen into your swanky new living room with the TV. You call up your electrician – she tells you this is impossible, and to stop calling her in the middle of the night.

Similarly, your brain comes with a particular infrastructure – a floorplan, let’s call it – that cannot be easily adjusted when you are an adult. Large lesions tend to have large consequences. For instance, an adult who suffers a serious stroke in their left hemisphere will likely struggle with language, a condition called aphasia. Young children’s brains, on the other hand, can sometimes rewire in profound ways. An entire half of the brain can be damaged early on with minimal functional consequences. So if you’re going for a remodel? Better do it really early.

Plasticity in our brain allows us to learn, adjust, and thrive in our environments. It also gives neuroscientists like me something to study – since clearly I would fail as an electrician.

Halie Olson earned her bachelor’s degree in neurobiology from Harvard College in 2017. She is currently a graduate student in MIT’s Department of Brain and Cognitive Sciences working with John Gabrieli and Rebecca Saxe. She studies how early life experiences and environments impact brain development, particularly in the context of reading and language, and what this means for children’s educational outcomes.

_____

Do you have a question for The Brain? Ask it here.

Hearing through the clatter

In a busy coffee shop, our eardrums are inundated with sound waves – people chatting, the clatter of cups, music playing – yet our brains somehow manage to untangle relevant sounds, like a barista announcing that our “coffee is ready,” from insignificant noise. A new McGovern Institute study sheds light on how the brain accomplishes the task of extracting meaningful sounds from background noise – findings that could one day help to build artificial hearing systems and aid development of targeted hearing prosthetics.

“These findings reveal a neural correlate of our ability to listen in noise, and at the same time demonstrate functional differentiation between different stages of auditory processing in the cortex,” explains Josh McDermott, an associate professor of brain and cognitive sciences at MIT, a member of the McGovern Institute and the Center for Brains, Minds and Machines, and the senior author of the study.

The auditory cortex, a part of the brain that responds to sound, has long been known to have distinct anatomical subregions, but the role these areas play in auditory processing has remained a mystery. In their study published today in Nature Communications, McDermott and former graduate student Alex Kell discovered that these subregions respond differently to the presence of background noise, suggesting that auditory processing occurs in steps that progressively home in on and isolate a sound of interest.

Background check

Previous studies have shown that the primary and non-primary subregions of the auditory cortex respond to sound with different dynamics, but these studies were largely based on brain activity in response to speech or simple synthetic sounds (such as tones and clicks). Little was known about how these regions might work to subserve everyday auditory behavior.

To test these subregions under more realistic conditions, McDermott and Kell, who is now a postdoctoral researcher at Columbia University, assessed changes in human brain activity while subjects listened to natural sounds with and without background noise.

While lying in an MRI scanner, subjects listened to 30 different natural sounds, ranging from meowing cats to ringing phones, that were presented alone or embedded in real-world background noise such as heavy rain.

“When I started studying audition,” explains Kell, “I started just sitting around in my day-to-day life, just listening, and was astonished at the constant background noise that seemed to usually be filtered out by default. Most of these noises tended to be pretty stable over time, suggesting we could experimentally separate them. The project flowed from there.”

To their surprise, Kell and McDermott found that the primary and non-primary regions of the auditory cortex responded differently to natural sound depending upon whether background noise was present.

Primary auditory cortex (outlined in white) responses change (blue) when background noise is present, whereas non-primary activity is robust to background noise (yellow). Image: Alex Kell

They found that activity in the primary auditory cortex was altered when background noise was present, suggesting that this region had not yet differentiated between meaningful sounds and background noise. Non-primary regions, however, responded similarly to natural sounds irrespective of whether noise was present, suggesting that cortical signals generated by sound are transformed or “cleaned up” to remove background noise by the time they reach the non-primary auditory cortex.

“We were surprised by how big the difference was between primary and non-primary areas,” explained Kell, “so we ran a bunch more subjects but kept seeing the same thing. We had a ton of questions about what might be responsible for this difference, and that’s why we ended up running all these follow-up experiments.”

A general principle

Kell and McDermott went on to test whether these responses were specific to particular sounds, and discovered that the effect remained stable no matter the source or type of sound. Music, speech, or a squeaky toy all activated the non-primary cortical regions similarly, whether or not background noise was present.

The authors also tested whether attention is relevant. Even when the researchers sneakily distracted subjects with a visual task in the scanner, the cortical subregions responded to meaningful sound and background noise in the same way, showing that attention is not driving this aspect of sound processing. In other words, even when we are focused on reading a book, our brain is diligently sorting the sound of our meowing cat from the patter of heavy rain outside.

Future directions

The McDermott lab is now building computational models of the so-called “noise robustness” found in the Nature Communications study and Kell is pursuing a finer-grained understanding of sound processing in his postdoctoral work at Columbia, by exploring the neural circuit mechanisms underlying this phenomenon.

By gaining a deeper understanding of how the brain processes sound, the researchers hope their work will contribute to improved diagnosis and treatment of hearing dysfunction. Such research could help to reveal the origins of listening difficulties that accompany developmental disorders or age-related hearing loss. For instance, if hearing loss results from dysfunction in sensory processing, this could show up as abnormal noise robustness in the auditory cortex. Normal noise robustness might instead suggest that the impairment lies elsewhere in the brain, for example a breakdown in higher executive function.

“In the future,” McDermott says, “we hope these noninvasive measures of auditory function may become valuable tools for clinical assessment.”

Benefits of mindfulness for middle schoolers

Two new studies from investigators at the McGovern Institute at MIT suggest that mindfulness — the practice of focusing one’s awareness on the present moment — can enhance academic performance and mental health in middle schoolers. The researchers found that more mindfulness correlates with better academic performance, fewer suspensions from school, and less stress.

“By definition, mindfulness is the ability to focus attention on the present moment, as opposed to being distracted by external things or internal thoughts. If you’re focused on the teacher in front of you, or the homework in front of you, that should be good for learning,” says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

The researchers also showed, for the first time, that mindfulness training can alter brain activity in students. Sixth-graders who received mindfulness training not only reported feeling less stressed, but their brain scans revealed reduced activation of the amygdala, a brain region that processes fear and other emotions, when they viewed images of fearful faces.

“Mindfulness is like going to the gym. If you go for a month, that’s good, but if you stop going, the effects won’t last,” Gabrieli says. “It’s a form of mental exercise that needs to be sustained.”

Together, the findings suggest that offering mindfulness training in schools could benefit many students, says Gabrieli, who is the senior author of both studies.

“We think there is a reasonable possibility that mindfulness training would be beneficial for children as part of the daily curriculum in their classroom,” he says. “What’s also appealing about mindfulness is that there are pretty well-established ways of teaching it.”

In the moment

Both studies were performed at charter schools in Boston. In one of the papers, which appears today in the journal Behavioral Neuroscience, the MIT team studied about 100 sixth-graders. Half of the students received mindfulness training every day for eight weeks, while the other half took a coding class. The mindfulness exercises were designed to encourage students to pay attention to their breath, and to focus on the present moment rather than thoughts of the past or the future.

Students who received the mindfulness training reported that their stress levels went down after the training, while the students in the control group did not. Students in the mindfulness training group also reported fewer negative feelings, such as sadness or anger, after the training.

About 40 of the students also participated in brain imaging studies before and after the training. The researchers measured activity in the amygdala as the students looked at pictures of faces expressing different emotions.

At the beginning of the study, before any training, students who reported higher stress levels showed more amygdala activity when they saw fearful faces. This is consistent with previous research showing that the amygdala can be overactive in people who experience more stress, leading them to have stronger negative reactions to adverse events.

“There’s a lot of evidence that an overly strong amygdala response to negative things is associated with high stress in early childhood and risk for depression,” Gabrieli says.

After the mindfulness training, students showed a smaller amygdala response when they saw the fearful faces, consistent with their reports that they felt less stressed. This suggests that mindfulness training could potentially help prevent or mitigate mood disorders linked with higher stress levels, the researchers say.

Richard Davidson, a professor of psychology and psychiatry at the University of Wisconsin, says that the findings suggest there could be great benefit to implementing mindfulness training in middle schools.

“This is really one of the very first rigorous studies with children of that age to demonstrate behavioral and neural benefits of a simple mindfulness training,” says Davidson, who was not involved in the study.

Evaluating mindfulness

In the other paper, which appeared in the journal Mind, Brain, and Education in June, the researchers did not perform any mindfulness training but used a questionnaire to evaluate mindfulness in more than 2,000 students in grades 5-8. The questionnaire was based on the Mindful Attention Awareness Scale, which is often used in mindfulness studies on adults. Participants are asked to rate how strongly they agree with statements such as “I rush through activities without being really attentive to them.”

The researchers compared the questionnaire results with students’ grades, their scores on statewide standardized tests, their attendance rates, and the number of times they had been suspended from school. Students who showed more mindfulness tended to have better grades and test scores, as well as fewer absences and suspensions.

“People had not asked that question in any quantitative sense at all, as to whether a more mindful child is more likely to fare better in school,” Gabrieli says. “This is the first paper that says there is a relationship between the two.”

The researchers now plan to do a full school-year study, with a larger group of students across many schools, to examine the longer-term effects of mindfulness training. Shorter programs like the two-month training used in the Behavioral Neuroscience study would most likely not have a lasting impact, Gabrieli says.

“Mindfulness is like going to the gym. If you go for a month, that’s good, but if you stop going, the effects won’t last,” he says. “It’s a form of mental exercise that needs to be sustained.”

The research was funded by the Walton Family Foundation, the Poitras Center for Psychiatric Disorders Research at the McGovern Institute for Brain Research, and the National Council of Science and Technology of Mexico. Camila Caballero ’13, now a graduate student at Yale University, is the lead author of the Mind, Brain, and Education study. Caballero and MIT postdoc Clemens Bauer are lead authors of the Behavioral Neuroscience study. Additional collaborators were from the Harvard Graduate School of Education, Transforming Education, Boston Collegiate Charter School, and Calmer Choice.

Speaking many languages

Ev Fedorenko studies the cognitive processes and brain regions underlying language, a signature cognitive skill that is uniquely and universally human. She investigates both people with linguistic impairments and those with exceptional language skills: hyperpolyglots, people who are fluent in over a dozen languages. Indeed, she was recently interviewed for a BBC documentary about superlinguists, as well as for a New Yorker article covering people with exceptional language skills.

When Fedorenko, an associate investigator at the McGovern Institute and assistant professor in the Department of Brain and Cognitive Sciences at MIT, came to the field, neuroscientists were still debating whether high-level cognitive skills such as language are processed by multi-functional or dedicated brain regions. Using fMRI, Fedorenko and colleagues compared the engagement of brain regions when individuals performed linguistic tasks versus other high-level cognitive tasks, such as arithmetic or music. Their data revealed a clear distinction between language and other cognitive processes, showing that our brains have dedicated language regions.

Here is my basic question. How do I get a thought from my mind into yours?

In the time since this key study, Fedorenko has continued to unpack language in the brain. How does the brain process the overarching rules and structure of language (syntax), as opposed to the meanings of words? How do we construct complex meanings? What might underlie communicative difficulties in individuals diagnosed with autism? How does the aphasic brain recover language? Intriguingly, in contrast to individuals with linguistic difficulties, there are also individuals who stand out for their ability to master many languages, so-called hyperpolyglots.

In 2013, she came across a young adult who had mastered over 30 languages – a language prodigy. To facilitate her analysis of the processing of different languages, Fedorenko has collected dozens of translations of Alice in Wonderland for her ‘Alice in the language localizer Wonderland’ project. She has already found that hyperpolyglots tend to show less activity in linguistic processing regions when reading in, or listening to, their native language, compared to carefully matched controls, perhaps indexing more efficient processing mechanisms. Fedorenko continues to study hyperpolyglots, along with other exciting new avenues of research. Stay tuned for upcoming advances in our understanding of the brain and language.

Evelina Fedorenko

Exploring Language

Evelina (Ev) Fedorenko aims to understand how the language system works in the brain. Her lab is unpacking the internal architecture of the brain’s language system and exploring the relationship between language and various cognitive, perceptual, and motor systems. To do this, her lab employs a range of approaches – from brain imaging to computational modeling – and works with diverse populations, including polyglots and individuals with atypical brains. Language is a quintessential human ability, but the function that language serves has been debated for centuries. Fedorenko argues that language serves primarily as a tool for communication, contrary to a prominent view that language is essential for thinking.

Ultimately, this cutting-edge work is uncovering the computations and representations that fuel language processing in the brain.

How we tune out distractions

Imagine trying to focus on a friend’s voice at a noisy party, or blocking out the phone conversation of the person sitting next to you on the bus while you try to read. Both of these tasks require your brain to somehow suppress the distracting signal so you can focus on your chosen input.

MIT neuroscientists have now identified a brain circuit that helps us to do just that. The circuit they identified, which is controlled by the prefrontal cortex, filters out unwanted background noise or other distracting sensory stimuli. When this circuit is engaged, the prefrontal cortex selectively suppresses sensory input as it flows into the thalamus, the site where most sensory information enters the brain.

“This is a fundamental operation that cleans up all the signals that come in, in a goal-directed way,” says Michael Halassa, an assistant professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

The researchers are now exploring whether impairments of this circuit may be involved in the hypersensitivity to noise and other stimuli that is often seen in people with autism.

Miho Nakajima, an MIT postdoc, is the lead author of the paper, which appears in the June 12 issue of Neuron. Research scientist L. Ian Schmitt is also an author of the paper.

Shifting attention

Our brains are constantly bombarded with sensory information, and we are able to tune out much of it automatically, without even realizing it. Other distractions that are more intrusive, such as your seatmate’s phone conversation, require a conscious effort to suppress.

In a 2015 paper, Halassa and his colleagues explored how attention can be consciously shifted between different types of sensory input, by training mice to switch their focus between a visual and auditory cue. They found that during this task, mice suppress the competing sensory input, allowing them to focus on the cue that will earn them a reward.

This process appeared to originate in the prefrontal cortex (PFC), which is critical for complex cognitive behavior such as planning and decision-making. The researchers also found that a part of the thalamus that processes vision was inhibited when the animals were focusing on sound cues. However, there are no direct physical connections from the prefrontal cortex to the sensory thalamus, so it was unclear exactly how the PFC was exerting this control, Halassa says.

In the new study, the researchers again trained mice to switch their attention between visual and auditory stimuli, then mapped the brain connections that were involved. They first examined the outputs of the PFC that were essential for this task, by systematically inhibiting PFC projection terminals in every target. This allowed them to discover that the PFC connection to a brain region known as the striatum is necessary to suppress visual input when the animals are paying attention to the auditory cue.

Further mapping revealed that the striatum then sends input to a region called the globus pallidus, which is part of the basal ganglia. The basal ganglia then suppress activity in the part of the thalamus that processes visual information.

Using a similar experimental setup, the researchers also identified a parallel circuit that suppresses auditory input when animals pay attention to the visual cue. In that case, the circuit travels through parts of the striatum and thalamus that are associated with processing sound, rather than vision.
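In computational terms, the operation described here amounts to a gain modulation of the unattended channel before it reaches cortex. The sketch below is our cartoon of that computation, not a model of the circuit’s biophysics; the function name and suppression factor are invented for illustration:

```python
def gate(visual, auditory, attend="auditory", suppression=0.2):
    """Attenuate the unattended sensory channel, as the PFC-basal
    ganglia pathway is described to do at the level of the thalamus.
    `suppression` is a hypothetical gain factor in [0, 1]."""
    if attend == "auditory":
        return visual * suppression, auditory
    return visual, auditory * suppression

# Attending to sound: the visual signal is scaled down before it
# propagates onward; the auditory signal passes through unchanged.
v, a = gate(visual=1.0, auditory=1.0, attend="auditory")
```

The parallel circuits in the study correspond to the two branches of the conditional: whichever modality is task-irrelevant gets suppressed.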

The findings offer some of the first evidence that the basal ganglia, which are known to be critical for planning movement, also play a role in controlling attention, Halassa says.

“What we realized here is that the connection between PFC and sensory processing at this level is mediated through the basal ganglia, and in that sense, the basal ganglia influence control of sensory processing,” he says. “We now have a very clear idea of how the basal ganglia can be involved in purely attentional processes that have nothing to do with motor preparation.”

Noise sensitivity

The researchers also found that the same circuits are employed not only for switching between different types of sensory input such as visual and auditory stimuli, but also for suppressing distracting input within the same sense — for example, blocking out background noise while focusing on one person’s voice.

The team also showed that when the animals are alerted that the task is going to be noisy, their performance actually improves, as they use this circuit to focus their attention.

“This study uses a dazzling array of techniques for neural circuit dissection to identify a distributed pathway, linking the prefrontal cortex to the basal ganglia to the thalamic reticular nucleus, that allows the mouse brain to enhance relevant sensory features and suppress distractors at opportune moments,” says Daniel Polley, an associate professor of otolaryngology at Harvard Medical School, who was not involved in the research. “By paring down the complexities of the sensory stimulus only to its core relevant features in the thalamus — before it reaches the cortex — our cortex can more efficiently encode just the essential features of the sensory world.”

Halassa’s lab is now doing similar experiments in mice that are genetically engineered to develop symptoms similar to those of people with autism. One common feature of autism spectrum disorder is hypersensitivity to noise, which could be caused by impairments of this brain circuit, Halassa says. He is now studying whether boosting the activity of this circuit might reduce sensitivity to noise.

“Controlling noise is something that patients with autism have trouble with all the time,” he says. “Now there are multiple nodes in the pathway that we can start looking at to try to understand this.”

The research was funded by the National Institute of Mental Health, the National Institute of Neurological Disorders and Stroke, the Simons Foundation, the Alfred P. Sloan Foundation, the Esther A. and Joseph Klingenstein Fund, and the Human Frontier Science Program.

McGovern Institute postcard collection

A collection of 13 postcards arranged in columns.
The McGovern Institute postcard collection, 2023.

The McGovern Institute may be best known for its scientific breakthroughs, but a captivating series of brain-themed postcards developed by McGovern researchers and staff now reveals the institute’s artistic side.

What began in 2017 with a series of brain anatomy postcards inspired by the U.S. Works Progress Administration’s iconic national parks posters has grown into a collection of twelve different prints, each featuring a unique fusion of neuroscience and art.

More information about each series in the McGovern Institute postcard collection, including the color-your-own mindfulness postcards, can be found below.

Mindfulness Postcard Series, 2023

In winter 2023, the institute released its mindfulness postcard series, a collection of four different neuroscience-themed illustrations that can be colored in with pencils, markers, or paint. The postcard series was inspired by research conducted in John Gabrieli’s lab, which found that practicing mindfulness reduced children’s stress levels and negative emotions during the pandemic. These findings contribute to a growing body of evidence that practicing mindfulness — focusing awareness on the present, typically through meditation, but also through coloring — can change patterns of brain activity associated with emotions and mental health.

Download and color your own postcards.

Genes

The McGovern Institute is at the cutting edge of applications based on CRISPR, a genome editing tool pioneered by McGovern Investigator Feng Zhang. Hidden within this DNA-themed postcard are a clam, a virus, a bacteriophage, a snail, and the word CRISPR. Click the links to learn how these hidden elements relate to genetic engineering research at the McGovern Institute.


Line art showing strands of DNA and the McGovern Institute logo.
The McGovern Institute’s “mindfulness” postcard series includes this DNA-themed illustration containing five hidden design elements related to McGovern research. Image: Joseph Laney

Neurons

McGovern researchers probe the nanoscale and cellular processes that are critical to brain function, from the complex computations conducted in neurons to the synapses and neurotransmitters that facilitate messaging between cells. Find the mouse, worm, and microscope — three critical elements related to cellular and molecular neuroscience research at the McGovern Institute — in the postcard below.


Line art showing multiple neurons and the McGovern Institute logo.
The McGovern Institute’s “mindfulness” postcard series includes this neuron-themed illustration containing three hidden design elements related to McGovern research. Image: Joseph Laney

Human Brain

Cognitive neuroscientists at the McGovern Institute examine the brain processes that come together to inform our thoughts and understanding of the world. Find the musical note, speech bubbles, and human face in this postcard and click on the links to learn more about how these hidden elements relate to brain research at the McGovern Institute.


Line art of a human brain and the McGovern Institute logo.
The McGovern Institute’s “mindfulness” postcard series includes this brain-themed illustration containing three hidden design elements related to McGovern research. Image: Joseph Laney

Artificial Intelligence

McGovern researchers develop machine learning systems that mimic human processing of visual and auditory cues and construct algorithms to help us understand the complex computations made by the brain. Find the speech bubbles, DNA, and cochlea (spiral) in this postcard and click on the links to learn more about how these hidden elements relate to computational neuroscience research at the McGovern Institute.

Line art showing an artificial neural network in the shape of the human brain and the McGovern Institute logo.
The McGovern Institute’s “mindfulness” postcard series includes this AI-themed illustration containing three hidden design elements related to McGovern research. Image: Joseph Laney

Neuron Postcard Series, 2019

In 2019, the McGovern Institute released a second series of postcards based on the anatomy of a neuron. Each postcard includes text on the back side that describes McGovern research related to that specific part of the neuron. The descriptive text for each postcard is shown below.

Synapse

Snow melting off the branch of a bush at the water's edge creates a ripple effect in the pool of water below. Words at the bottom of the image say "It All Begins at the SYNAPSE"
Signals flow through the nervous system from one neuron to the next across synapses.

Synapses are exquisitely organized molecular machines that control the transmission of information.

McGovern researchers are studying how disruptions in synapse function can lead to brain disorders like autism.

Image: Joseph Laney

Axon

Illustration of three bears hunting for fish in a flowing river with the words: "Axon: Where Action Finds Potential"
The axon is the long, thin neural cable that carries electrical impulses called action potentials from the soma to synaptic terminals at downstream neurons.

Researchers at the McGovern Institute are developing and using tracers that label axons to reveal the elaborate circuit architecture of the brain.

Image: Joseph Laney

Soma

An elk stands on a rocky outcropping overlooking a large lake with an island in the center. Words at the top read: "Collect Your Thoughts at the Soma"
The soma, or cell body, is the control center of the neuron, where the nucleus is located.

It connects the dendrites to the axon, which sends information to other neurons.

At the McGovern Institute, neuroscientists are targeting the soma with proteins that can activate single neurons and map connections in the brain.

Image: Joseph Laney

Dendrites

A mountain lake at sunset with colorful fish and snow from a distant mountaintop melting into the lake. Words say "DENDRITIC ARBOR"
Long branching neuronal processes called dendrites receive synaptic inputs from thousands of other neurons and carry those signals to the cell body.

McGovern neuroscientists have discovered that human dendrites have different electrical properties from those of other species, which may contribute to the enhanced computing power of the human brain.

Image: Joseph Laney

Brain Anatomy Postcard Series, 2017

The original brain anatomy-themed postcard series, developed in 2017, was inspired by the U.S. Works Progress Administration’s iconic national parks posters created in the 1930s and 1940s. Each postcard includes text on the back side that describes McGovern research related to that specific part of the brain. The descriptive text for each postcard is shown below.

Sylvian Fissure

Illustration of explorer in cave labeled with temporal and parietal letters
The Sylvian fissure is a prominent groove on the side of the brain that separates the frontal and parietal lobes from the temporal lobe. McGovern researchers are studying a region near the right Sylvian fissure, called the rTPJ, which is involved in thinking about what another person is thinking.

Hippocampus

The hippocampus, named after its resemblance to the seahorse, plays an important role in memory. McGovern researchers are studying how changes in the strength of synapses (connections between neurons) in the hippocampus contribute to the formation and retention of memories.

Basal Ganglia

The basal ganglia are a group of deep brain structures best known for their control of movement. McGovern researchers are studying how the connections between the cerebral cortex and a part of the basal ganglia known as the striatum play a role in emotional decision making and motivation.


Arcuate Fasciculus

The arcuate fasciculus is a bundle of axons in the brain that connects Broca’s area, involved in speech production, and Wernicke’s area, involved in understanding language. McGovern researchers have found a correlation between the size of this structure and the risk of dyslexia in children.


Order and Share

To order your own McGovern brain postcards, contact our colleagues at the MIT Museum, where proceeds will support current and future exhibitions at the growing museum.

Please share a photo of yourself in your own lab (or natural habitat) with one of our cards on social media. Tell us what you’re studying and don’t forget to tag us @mcgovernmit using the hashtag #McGovernPostcards.

How we make complex decisions

When making a complex decision, we often break the problem down into a series of smaller decisions. For example, when deciding how to treat a patient, a doctor may go through a hierarchy of steps — choosing a diagnostic test, interpreting the results, and then prescribing a medication.

Making hierarchical decisions is straightforward when the sequence of choices leads to the desired outcome. But when the result is unfavorable, it can be tough to decipher what went wrong. For example, if a patient doesn’t improve after treatment, there are many possible reasons why: Maybe the diagnostic test is accurate only 75 percent of the time, or perhaps the medication works for only 50 percent of patients. To decide what to do next, the doctor must take these probabilities into account.
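That reasoning is just Bayes’ rule. Using the illustrative numbers above, plus one simplifying assumption of our own (that the drug never helps a misdiagnosed patient), the blame for a failed treatment splits as follows:

```python
# Toy model of reasoning about the cause of a failed treatment.
# Numbers from the example: the diagnostic test is right 75% of the
# time; the medication works for 50% of correctly diagnosed patients.
# Assumption (ours, for illustration): the drug never helps a
# misdiagnosed patient.
p_dx_correct = 0.75
p_cure_given_correct = 0.50
p_cure_given_wrong = 0.0  # assumed

# Total probability that the treatment fails
p_fail = (p_dx_correct * (1 - p_cure_given_correct)
          + (1 - p_dx_correct) * (1 - p_cure_given_wrong))

# Posterior probability that each step was the culprit, given failure
p_wrong_dx_given_fail = (1 - p_dx_correct) * (1 - p_cure_given_wrong) / p_fail
p_drug_failed_given_fail = p_dx_correct * (1 - p_cure_given_correct) / p_fail

print(f"P(misdiagnosis | failure)     = {p_wrong_dx_given_fail:.2f}")
print(f"P(drug ineffective | failure) = {p_drug_failed_given_fail:.2f}")
```

Under these assumptions the failed drug is the likelier culprit (0.60 vs. 0.40), so repeating the diagnostic test would be a less rational next step than trying a different medication.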

In a new study, MIT neuroscientists explored how the brain reasons about probable causes of failure after a hierarchy of decisions. They discovered that the brain performs two computations using a distributed network of areas in the frontal cortex. First, the brain computes confidence over the outcome of each decision to figure out the most likely cause of a failure, and second, when it is not easy to discern the cause, the brain makes additional attempts to gain more confidence.

“Creating a hierarchy in one’s mind and navigating that hierarchy while reasoning about outcomes is one of the exciting frontiers of cognitive neuroscience,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

MIT graduate student Morteza Sarafyzad is the lead author of the paper, which appears in Science on May 16.

Hierarchical reasoning

Previous studies of decision-making in animal models have focused on relatively simple tasks. One line of research has focused on how the brain makes rapid decisions by evaluating momentary evidence. For example, a large body of work has characterized the neural substrates and mechanisms that allow animals to categorize unreliable stimuli on a trial-by-trial basis. Other research has focused on how the brain chooses among multiple options by relying on previous outcomes across multiple trials.

“These have been very fruitful lines of work,” Jazayeri says. “However, they really are the tip of the iceberg of what humans do when they make decisions. As soon as you put yourself in any real decision-making situation, be it choosing a partner, choosing a car, deciding whether to take this drug or not, these become really complicated decisions. Oftentimes there are many factors that influence the decision, and those factors can operate at different timescales.”

The MIT team devised a behavioral task that allowed them to study how the brain processes information at multiple timescales to make decisions. The basic design was that animals would make one of two eye movements depending on whether the time interval between two flashes of light was shorter or longer than 850 milliseconds.

A twist required the animals to solve the task through hierarchical reasoning: The rule that determined which of the two eye movements had to be made switched covertly after 10 to 28 trials. Therefore, to receive reward, the animals had to choose the correct rule, and then make the correct eye movement depending on the rule and interval. However, because the animals were not instructed about the rule switches, they could not straightforwardly determine whether an error was caused because they chose the wrong rule or because they misjudged the interval.

The researchers used this experimental design to probe the computational principles and neural mechanisms that support hierarchical reasoning. Theory and behavioral experiments in humans suggest that reasoning about the potential causes of errors depends in large part on the brain’s ability to measure the degree of confidence in each step of the process. “One of the things that is thought to be critical for hierarchical reasoning is to have some level of confidence about how likely it is that different nodes [of a hierarchy] could have led to the negative outcome,” Jazayeri says.

The researchers were able to study the effect of confidence by adjusting the difficulty of the task. In some trials, the interval between the two flashes was much shorter or longer than 850 milliseconds. These trials were relatively easy and afforded a high degree of confidence. In other trials, the animals were less confident in their judgments because the interval was closer to the boundary and difficult to discriminate.

As they had hypothesized, the researchers found that the animals’ behavior was influenced by their confidence in their performance. When the interval was easy to judge, the animals were much quicker to switch to the other rule when they found out they were wrong. When the interval was harder to judge, the animals were less confident in their performance and applied the same rule a few more times before switching.

“They know that they’re not confident, and they know that if they’re not confident, it’s not necessarily the case that the rule has changed. They know they might have made a mistake [in their interval judgment],” Jazayeri says.
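The animals’ strategy can be caricatured in a few lines of Python. This is our illustrative sketch, not the study’s actual model; the noise level, confidence formula, and switch threshold are invented:

```python
import random

BOUNDARY = 850.0  # ms, the category boundary from the task

def judge(interval_ms, noise_sd=100.0):
    """Noisy interval judgment: returns (is_long, confidence).
    Confidence grows with distance from the boundary (invented formula)."""
    percept = interval_ms + random.gauss(0.0, noise_sd)
    confidence = min(abs(percept - BOUNDARY) / 300.0, 1.0)
    return percept > BOUNDARY, confidence

def update_rule(rule, error, confidence, threshold=0.5):
    """After an error, blame (and switch) the rule only if the
    interval judgment itself was made with high confidence."""
    if error and confidence > threshold:
        return 1 - rule  # switch between rule 0 and rule 1
    return rule
```

On easy trials (intervals far from 850 ms) confidence is high, so a single error triggers a rule switch; on hard trials the agent keeps the current rule a while longer, matching the behavior described above.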

Decision-making circuit

By recording neural activity in the frontal cortex just after each trial was finished, the researchers were able to identify two regions that are key to hierarchical decision-making. They found that both of these regions, known as the anterior cingulate cortex (ACC) and dorsomedial frontal cortex (DMFC), became active after the animals were informed about an incorrect response. When the researchers analyzed the neural activity in relation to the animals’ behavior, it became clear that neurons in both areas signaled the animals’ belief about a possible rule switch. Notably, the activity related to the animals’ belief was “louder” when animals made a mistake after an easy trial, and after consecutive mistakes.

The researchers also found that while these areas showed similar patterns of activity, it was activity in the ACC in particular that predicted when the animal would switch rules, suggesting that ACC plays a central role in switching decision strategies. Indeed, the researchers found that direct manipulation of neural activity in ACC was sufficient to interfere with the animals’ rational behavior.

“There exists a distributed circuit in the frontal cortex involving these two areas, and they seem to be hierarchically organized, just like the task would demand,” Jazayeri says.

Daeyeol Lee, a professor of neuroscience, psychology, and psychiatry at Yale School of Medicine, says the study overcomes what has been a major obstacle in studying this kind of decision-making, namely, a lack of animal models to study the dynamics of brain activity at single-neuron resolution.

“Sarafyazd and Jazayeri have developed an elegant decision-making task that required animals to evaluate multiple types of evidence, and identified how the two separate regions in the medial frontal cortex are critically involved in handling different sources of errors in decision making,” says Lee, who was not involved in the research. “This study is a tour de force in both rigor and creativity, and peels off another layer of mystery about the prefrontal cortex.”