McGovern lab manager creates art inspired by science

Michal De-Medonsa, technical associate and manager of the Jazayeri lab, created a large wood mosaic for her lab. We asked Michal to tell us a bit about the mosaic, her inspiration, and how in the world she found the time to create such an exquisitely detailed piece of art.

______

Jazayeri lab manager Michal De-Medonsa holds her wood mosaic entitled “JazLab.” Photo: Caitlin Cunningham

Describe this piece of art for us.

To make a piece this big (63″ x 15″), I needed several boards of padauk wood. I could have just etched each board as a whole unit and glued the 13 or so boards to each other, but I didn’t like the aesthetic. The grain and color within each board would look beautiful, but the lines between the boards would become obvious, segmented, and jarring when contrasted with the uniformity within each board. Instead, I cut about 18 separate squares out of each board, shuffled all 217 pieces around, and glued them to one another in a mosaic style with a larger pattern (inspired by my grandfather’s work in granite mosaics).

What does this mosaic mean to you?

Once every piece was shuffled, the lines between single squares were certainly visible, but as a feature they were far less salient than if the full boards had been glued to one another. As I was working on the piece, I was thinking about how the same concept holds true in society. Even if there is diversity within a larger piece (an institution, for example), there is a tendency for groups to form within it (like a full board), and the diversity becomes separated. This isn’t a criticism of any institution; it is human nature to form in-groups. It’s subconscious (so perhaps the criticism is that we, as a society, don’t give that behavior enough thought and try to ameliorate our reflex to group with those who are “like us”). The grain of the wood is uniform and oriented in the same direction, the two different cutting patterns create a larger pattern within the piece, and there are smaller patterns between and within single pieces. I love creating and finding patterns in my art (and life). Alfred North Whitehead wrote that “understanding is the apperception of pattern as such.” True, I believe, in science, art, and the humanities. What a great goal – to understand.

Tell us about the name of this piece.

Every large piece I make is inspired by the people I make it for, and is therefore named after them. This piece is called JazLab. Having lived around the world, and being a descendant of a nomadic people, I don’t consider any one place home, but am inspired by every place I’ve lived. In all of my work, you can see elements of my Jewish heritage, antiquity, the Middle East, Africa, and now MIT.

How has MIT influenced your art?

MIT has influenced me in the most obvious way MIT could influence anyone – technology. Before this series, I made very small versions of this type of work, designing everything on a piece of paper with a pencil and a ruler, and making every cut by hand. Each of those small squares would take ~2 hours (depending on the design), and I was limited to softer woods.

Since coming to MIT, I learned that I had access to the Hobby Shop, with its huge array of power tools and software. I began designing my patterns on the computer and using power tools to make the cuts. I actually struggled a lot with using the tech – not because it was hard (which, honestly, it is when you’re just starting out), but because it felt like I was somehow “cheating.” How is this still art? And although this is something I still think about often, I’ve tried to look at it this way: every generation, in its time, used the most advanced technology available. The beauty and value of a piece doesn’t come from how many bruises, cuts, and blisters your machinery gave you, or whether you scraped the wood out with your nails, but rather, once you were given a tool, what did you decide to do with it? My pieces still involve a huge amount of hand-on-material work, but I am working on accepting that using technology in no way devalues the work.

Given your busy schedule with the Jazayeri lab, how did you find the time to create this piece of art?

I took advantage of any free hour I could. Two days a week, the Hobby Shop is open until 9pm, and I would also go every Saturday. For the parts that didn’t require the shop (adjusting each piece individually with a carving knife, assembling them, even most of the gluing), I would just work at home – often very late into the night.

______

JazLab is on display in the Jazayeri lab in MIT Bldg 46.

Brain biomarkers predict mood and attention symptoms

Mood and attentional disorders among teens are an increasing concern for parents, peers, and society. A recent Pew Research Center survey found conditions such as depression and anxiety to be the number one concern young students had about their friends, ranking above drugs or bullying.

“We’re seeing an epidemic in teen anxiety and depression,” explains McGovern Research Affiliate Susan Whitfield-Gabrieli.

“Scientists are finding a huge increase in suicide ideation and attempts, something that hit home for me as a mother of teens. Emergency rooms in hospitals now have guards posted outside doors of these teenagers that attempted suicide—this is a pressing issue,” explains Whitfield-Gabrieli who is also director of the Northeastern University Biomedical Imaging Center and a member of the Poitras Center for Psychiatric Disorders Research.

New methods for discovering early biomarkers of risk for psychiatric disorders would allow earlier intervention, before points of crisis such as suicidal ideation or attempts are reached. In research published recently in JAMA Psychiatry, Whitfield-Gabrieli and colleagues found that signatures predicting the future development of depression and attentional symptoms can be detected in children as young as seven years old.

Long-term view

While previous work had suggested that there may be biomarkers that predict development of mood and attentional disorders, identifying early biomarkers prior to an onset of illness requires following a cohort of pre-teens from a young age, and monitoring them across years. This effort to have a proactive, rather than reactive, approach to the development of symptoms associated with mental disorders is exactly the route Whitfield-Gabrieli and colleagues took.

“One of the exciting aspects of this study is that the cohort is not pre-selected for already having symptoms of psychiatric disorders themselves or even in their family,” explained Whitfield-Gabrieli. “It’s an unbiased cohort that we followed over time.”

McGovern research affiliate Susan Whitfield-Gabrieli has discovered early brain biomarkers linked to psychiatric disorders.

In some past studies, children were pre-selected based on, for example, a major depressive disorder diagnosis in a parent. But Whitfield-Gabrieli and her colleagues Silvia Bunge from Berkeley and Laurie Cutting from Vanderbilt recruited a range of children without such preconditions and examined them at age 7, then again 4 years later. The researchers measured resting state functional connectivity and compared it to scores on the Child Behavior Checklist (CBCL), allowing them to relate differences in the brain to a standardized assessment of behavior that can be linked to psychiatric disorders. The CBCL is used both in research and in the clinic and is highly predictive of disorders including ADHD, so changes in the brain could be related to changes in a widely used clinical scoring system.
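
For readers curious about the mechanics, resting state functional connectivity is commonly quantified as the correlation between the fMRI time series of two brain regions, and that measure can then be related to behavioral scores such as the CBCL. The sketch below illustrates this general approach with simulated data; the cohort size, region names, and analysis choices are assumptions made for illustration, not the authors' actual pipeline.

```python
# Illustrative sketch only (not the study's pipeline): compute resting-state
# functional connectivity as the correlation between two regional fMRI time
# series, then relate it to CBCL symptom change. All data here are simulated.
import numpy as np

def functional_connectivity(ts_a, ts_b):
    """Pearson correlation between two regional time series."""
    return np.corrcoef(ts_a, ts_b)[0, 1]

rng = np.random.default_rng(0)
n_children, n_timepoints = 94, 200                          # hypothetical cohort size
ts_dlpfc = rng.standard_normal((n_children, n_timepoints))  # e.g., dorsolateral PFC
ts_mpfc = rng.standard_normal((n_children, n_timepoints))   # e.g., medial PFC
cbcl_change = rng.standard_normal(n_children)               # age-11 score minus age-7 score

connectivity = np.array([functional_connectivity(a, b)
                         for a, b in zip(ts_dlpfc, ts_mpfc)])

# How well does age-7 connectivity track later symptom change?
r = np.corrcoef(connectivity, cbcl_change)[0, 1]
print(f"brain-behavior correlation: r = {r:.2f}")
```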

“Over the four years, some people got worse, some got better, and some stayed the same according to the CBCL. We could relate this directly to differences in brain networks, and could identify at age 7 who would get worse,” explained Whitfield-Gabrieli.

Brain network changes

The authors analyzed differences in resting state network connectivity, the degree to which activity in regions across the brain rises and falls together, as visualized using fMRI. Reduced connectivity between these regions may reflect reduced “top-down” control of neural circuits. The dorsolateral prefrontal region is linked to executive function, external attention, and emotional control. Increased connectivity with the medial prefrontal cortex is known to be present in attention deficit hyperactivity disorder (ADHD), while a reduced connection to a different brain region, the subgenual anterior cingulate cortex (sgACC), is seen in major depressive disorder. The question remained as to whether these changes could be seen prior to the onset of diagnosable attentional or mood disorders.

Whitfield-Gabrieli and colleagues found that these resting state networks differed in the brains of children who would later develop anxiety/depression or ADHD symptoms. Weaker connectivity between the dorsolateral and medial prefrontal cortical regions tended to be seen in children whose attention scores went on to improve. Analysis of these resting state networks could differentiate children who would show typical attentional behavior by age 11 from those who went on to develop ADHD.

Whitfield-Gabrieli has replicated this finding in an independent sample of children and she is continuing to expand the analysis and check the results, as well as follow this cohort into the future. Should changes in resting state networks be a consistent biomarker, the next step is to initiate interventions prior to the point of crisis.

“We’ve recently been able to use mindfulness interventions, and show these reduce self-perceived stress and amygdala activation in response to fear, and we are also testing the effect of exercise interventions,” explained Whitfield-Gabrieli. “The hope is that by using predictive biomarkers we can augment children’s lifestyles with healthy interventions that can prevent risk converting to a psychiatric disorder.”

Can fMRI reveal insights into addiction and treatments?

Many debilitating conditions like depression and addiction have biological signatures hidden in the brain well before symptoms appear. What if brain scans could be used to detect these hidden signatures and determine the optimal treatment for each individual? McGovern Investigator John Gabrieli is interested in this question and wrote about the use of imaging technologies as a predictive tool for brain disorders in a recent issue of Scientific American.

McGovern Investigator John Gabrieli pens a story for Scientific American about the potential for brain imaging to predict the onset of mental illness.

“Brain scans show promise in predicting who will benefit from a given therapy,” says Gabrieli, who is also the Grover Hermann Professor in Brain and Cognitive Sciences at MIT. “Differences in neural activity may one day tell clinicians which depression treatment will be most effective for an individual or which abstinent alcoholics will relapse.”

Gabrieli cites research showing that half of patients treated for alcohol abuse go back to drinking within a year of treatment, and that similar relapse rates occur for stimulants such as cocaine. Failed treatments may be a source of further anxiety and stress, Gabrieli notes, so any information gleaned from the brain that could help pinpoint the treatments or doses most likely to help would be highly valuable.

Current treatments rely on little scientific evidence to support the length of time needed in a rehabilitation facility, he says, but “a number [of studies] suggest that brain measures might foresee who will succeed in abstaining after treatment has ended.”

Further data is needed to support this idea, but Gabrieli’s Scientific American piece makes the case that the use of such a technology may be promising for a range of addiction treatments including abuse of alcohol, nicotine, and illicit drugs.

Gabrieli also believes brain imaging has the potential to reshape education. For example, educational interventions targeting dyslexia might be more effective if personalized to specific differences in the brain that point to the source of the learning gap.

But for the prediction sciences to move forward in mental health and education, he concludes, the research community must design further rigorous studies to examine these important questions.

Controlling attention with brain waves

Having trouble paying attention? MIT neuroscientists may have a solution for you: Turn down your alpha brain waves. In a new study, the researchers found that people can enhance their attention by controlling their own alpha brain waves based on neurofeedback they receive as they perform a particular task.

The study found that when subjects learned to suppress alpha waves in one hemisphere of their parietal cortex, they were able to pay better attention to objects that appeared on the opposite side of their visual field. This is the first time that this cause-and-effect relationship has been seen, and it suggests that it may be possible for people to learn to improve their attention through neurofeedback.

Desimone lab study shows that people can boost attention by manipulating their own alpha brain waves with neurofeedback training.

“There’s a lot of interest in using neurofeedback to try to help people with various brain disorders and behavioral problems,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research. “It’s a completely noninvasive way of controlling and testing the role of different types of brain activity.”

It’s unknown how long these effects might last and whether this kind of control could be achieved with other types of brain waves, such as beta waves, which are linked to Parkinson’s disease. The researchers are now planning additional studies of whether this type of neurofeedback training might help people suffering from attentional or other neurological disorders.

Desimone is the senior author of the paper, which appears in Neuron on Dec. 4. McGovern Institute postdoc Yasaman Bagherzadeh is the lead author of the study. Daniel Baldauf, a former McGovern Institute research scientist, and Dimitrios Pantazis, a McGovern Institute principal research scientist, are also authors of the paper.

Alpha and attention

There are billions of neurons in the brain, and their combined electrical signals generate oscillations known as brain waves. Alpha waves, which oscillate in the frequency range of 8 to 12 hertz, are believed to play a role in filtering out distracting sensory information.

Previous studies have shown a strong correlation between attention and alpha brain waves, particularly in the parietal cortex. In humans and in animal studies, a decrease in alpha waves has been linked to enhanced attention. However, it was unclear if alpha waves control attention or are just a byproduct of some other process that governs attention, Desimone says.

To test whether alpha waves actually regulate attention, the researchers designed an experiment in which people were given real-time feedback on their alpha waves as they performed a task. Subjects were asked to look at a grating pattern in the center of a screen, and told to use mental effort to increase the contrast of the pattern as they looked at it, making it more visible.

During the task, subjects were scanned using magnetoencephalography (MEG), which reveals brain activity with millisecond precision. The researchers measured alpha levels in both the left and right hemispheres of the parietal cortex and calculated the degree of asymmetry between the two levels. As the asymmetry between the two hemispheres grew, the grating pattern became more visible, offering the participants real-time feedback.
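
As a rough illustration of the kind of signal that can drive such feedback, the sketch below estimates alpha-band (8 to 12 hertz) power in left and right parietal sensors and converts the difference into a normalized asymmetry index. The sampling rate, sensor counts, and function names are assumptions made for the example; this is not the study's actual analysis code.

```python
# Rough sketch with assumed parameters (not the study's code): estimate alpha
# power per hemisphere from MEG sensor data and compute an asymmetry index
# that could be mapped onto the contrast of the on-screen grating.
import numpy as np
from scipy.signal import welch

FS = 1000  # assumed sampling rate in Hz

def alpha_power(signal, fs=FS):
    """Mean power spectral density in the 8-12 Hz alpha band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].mean()

def alpha_asymmetry(left_sensors, right_sensors):
    """Normalized left/right difference in average alpha power."""
    left = np.mean([alpha_power(ch) for ch in left_sensors])
    right = np.mean([alpha_power(ch) for ch in right_sensors])
    return (left - right) / (left + right)

# One second of simulated data from five sensors per hemisphere.
rng = np.random.default_rng(1)
left_chunk = rng.standard_normal((5, FS))
right_chunk = rng.standard_normal((5, FS))
print(f"asymmetry index: {alpha_asymmetry(left_chunk, right_chunk):+.3f}")
```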

McGovern Institute postdoc Yasaman Bagherzadeh sits in a magnetoencephalography (MEG) scanner. Photo: Justin Knight

Although subjects were not told anything about what was happening, after about 20 trials (which took about 10 minutes), they were able to increase the contrast of the pattern. The MEG results indicated they had done so by controlling the asymmetry of their alpha waves.

“After the experiment, the subjects said they knew that they were controlling the contrast, but they didn’t know how they did it,” Bagherzadeh says. “We think the basis is conditional learning — whenever you do a behavior and you receive a reward, you’re reinforcing that behavior. People usually don’t have any feedback on their brain activity, but when we provide it to them and reward them, they learn by practicing.”

Although the subjects were not consciously aware of how they were manipulating their brain waves, they were able to do it, and this success translated into enhanced attention on the opposite side of the visual field. As the subjects looked at the pattern in the center of the screen, the researchers flashed dots of light on either side of the screen. The participants had been told to ignore these flashes, but the researchers measured how their visual cortex responded to them.

One group of participants was trained to suppress alpha waves in the left side of the brain, while the other was trained to suppress the right side. In those who had reduced alpha on the left side, their visual cortex showed a larger response to flashes of light on the right side of the screen, while those with reduced alpha on the right side responded more to flashes seen on the left side.

“Alpha manipulation really was controlling people’s attention, even though they didn’t have any clear understanding of how they were doing it,” Desimone says.

Persistent effect

After the neurofeedback training session ended, the researchers asked subjects to perform two additional tasks that involve attention, and found that the enhanced attention persisted. In one experiment, subjects were asked to watch for the appearance of a grating pattern similar to the one they had seen during the neurofeedback task. In some of the trials, they were told in advance to pay attention to one side of the visual field, but in others, they were not given any direction.

When the subjects were told to pay attention to one side, that instruction was the dominant factor in where they looked. But if they were not given any cue in advance, they tended to pay more attention to the side that had been favored during their neurofeedback training.

In another task, participants were asked to look at an image such as a natural outdoor scene, urban scene, or computer-generated fractal shape. By tracking subjects’ eye movements, the researchers found that people spent more time looking at the side that their alpha waves had trained them to pay attention to.

“It is promising that the effects did seem to persist afterwards,” says Desimone, though more study is needed to determine how long these effects might last.

The research was funded by the McGovern Institute.

Drug combination reverses hypersensitivity to noise

People with autism often experience hypersensitivity to noise and other sensory input. MIT neuroscientists have now identified two brain circuits that help tune out distracting sensory information, and they have found a way to reverse noise hypersensitivity in mice by boosting the activity of those circuits.

One of the circuits the researchers identified is involved in filtering noise, while the other exerts top-down control by allowing the brain to switch its attention between different sensory inputs.

The researchers showed that restoring the function of both circuits worked much better than treating either circuit alone. This demonstrates the benefits of mapping and targeting multiple circuits involved in neurological disorders, says Michael Halassa, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

“We think this work has the potential to transform how we think about neurological and psychiatric disorders, [so that we see them] as a combination of circuit deficits,” says Halassa, the senior author of the study. “The way we should approach these brain disorders is to map, to the best of our ability, what combination of deficits are there, and then go after that combination.”

MIT postdoc Miho Nakajima and research scientist L. Ian Schmitt are the lead authors of the paper, which appears in Neuron on Oct. 21. Guoping Feng, the James W. and Patricia Poitras Professor of Neuroscience and a member of the McGovern Institute, is also an author of the paper.

Hypersensitivity

Many gene variants have been linked with autism, but most patients have very few, if any, of those variants. One of those genes is ptchd1, which is mutated in about 1 percent of people with autism. In a 2016 study, Halassa and Feng found that during development this gene is primarily expressed in a part of the thalamus called the thalamic reticular nucleus (TRN).

That study revealed that neurons of the TRN help the brain to adjust to changes in sensory input, such as noise level or brightness. In mice with ptchd1 missing, TRN neurons fire too fast, and they can’t adjust when noise levels change. This prevents the TRN from performing its usual sensory filtering function, Halassa says.

“Neurons that are there to filter out noise, or adjust the overall level of activity, are not adapting. Without the ability to fine-tune the overall level of activity, you can get overwhelmed very easily,” he says.

In the 2016 study, the researchers also found that they could restore some of the mice’s noise filtering ability by treating them with a drug called EBIO that activates neurons’ potassium channels. EBIO has harmful cardiac side effects so likely could not be used in human patients, but other drugs that boost TRN activity may have a similar beneficial effect on hypersensitivity, Halassa says.

In the new Neuron paper, the researchers delved more deeply into the effects of ptchd1, which is also expressed in the prefrontal cortex. To explore whether the prefrontal cortex might play a role in the animals’ hypersensitivity, the researchers used a task in which mice have to distinguish between three different tones, presented with varying amounts of background noise.

Normal mice can learn to use a cue that alerts them whenever the noise level is going to be higher, improving their overall performance on the task. A similar phenomenon is seen in humans, who can adjust better to noisier environments when they have some advance warning, Halassa says. However, mice with the ptchd1 mutation were unable to use these cues to improve their performance, even when their TRN deficit was treated with EBIO.

This suggested that another brain circuit must be playing a role in the animals’ ability to filter out distracting noise. To test the possibility that this circuit is located in the prefrontal cortex, the researchers recorded from neurons in that region while mice lacking ptchd1 performed the task. They found that neuronal activity died out much faster in these mice than in the prefrontal cortex of normal mice. That led the researchers to test another drug, known as modafinil, which is FDA-approved to treat narcolepsy and is sometimes prescribed to improve memory and attention.

The researchers found that when they treated mice missing ptchd1 with both modafinil and EBIO, their hypersensitivity disappeared, and their performance on the task was the same as that of normal mice.

Targeting circuits

This successful reversal of symptoms suggests that the mice missing ptchd1 experience a combination of circuit deficits that each contribute differently to noise hypersensitivity. One circuit filters noise, while the other helps to control noise filtering based on external cues. Ptchd1 mutations affect both circuits, in different ways that can be treated with different drugs.

Both of those circuits could also be affected by other genetic mutations that have been linked to autism and other neurological disorders, Halassa says. Targeting those circuits, rather than specific genetic mutations, may offer a more effective way to treat such disorders, he says.

“These circuits are important for moving things around the brain — sensory information, cognitive information, working memory,” he says. “We’re trying to reverse-engineer circuit operations in the service of figuring out what to do about a real human disease.”

He now plans to study circuit-level disturbances that arise in schizophrenia. That disorder affects circuits involving cognitive processes such as inference — the ability to draw conclusions from available information.

The research was funded by the Simons Center for the Social Brain at MIT, the Stanley Center for Psychiatric Research at the Broad Institute, the McGovern Institute for Brain Research at MIT, the Pew Foundation, the Human Frontiers Science Program, the National Institutes of Health, the James and Patricia Poitras Center for Psychiatric Disorders Research at MIT, a Japan Society for the Promotion of Science Fellowship, and a National Alliance for the Research of Schizophrenia and Depression Young Investigator Award.

Word Play

Ev Fedorenko uses the widely translated book “Alice in Wonderland” to test brain responses to different languages.

Language is a uniquely human ability that allows us to build vibrant pictures of non-existent places (think Wonderland or Westeros). How does the brain build mental worlds from words? Can machines do the same? Can we recover this ability after brain injury? These questions require an understanding of how the brain processes language, a fascination for Ev Fedorenko.

“I’ve always been interested in language. Early on, I wanted to found a company that teaches kids languages that share structure — Spanish, French, Italian — in one go,” says Fedorenko, an associate investigator at the McGovern Institute and an assistant professor in brain and cognitive sciences at MIT.

Her road to understanding how thoughts, ideas, emotions, and meaning can be delivered through sound and words became clear when she realized that language was accessible through cognitive neuroscience.

Early on, Fedorenko made a seminal finding that undermined dominant theories of the time. Scientists believed a single network was extracting meaning from all we experience: language, music, math, etc. Evolving separate networks for these functions seemed unlikely, as these capabilities arose recently in human evolution.

Ev Fedorenko has found that language regions of the brain (shown in teal) are sensitive to both word meaning and sentence structure. Image: Ev Fedorenko

But when Fedorenko examined brain activity in subjects while they read or heard sentences in the MRI scanner, she found a network of brain regions that is indeed specialized for language.

“A lot of brain areas, like motor and social systems, were already in place when language emerged during human evolution,” explains Fedorenko. “In some sense, the brain seemed fully occupied. But rather than co-opt these existing systems, the evolution of language in humans involved language carving out specific brain regions.”

Different aspects of language recruit brain regions across the left hemisphere, including Broca’s area and portions of the temporal lobe. Many believe that certain regions are involved in processing word meaning while others unpack the rules of language. Fedorenko and colleagues have, however, shown that the entire language network is selectively engaged in linguistic tasks, processing both the rules (syntax) and the meaning (semantics) of language in the same brain areas.

Semantic Argument

Fedorenko’s lab even challenges the prevailing view that syntax is core to language processing. By gradually degrading sentence structure through local word swaps (see figure), they found that language regions still respond strongly to these degraded sentences, deciphering meaning from them, even as syntax, or combinatorial rules, disappear.

The Fedorenko lab has shown that the brain finds meaning in a sentence, even when “local” words are swapped (2, 3). But when clusters of neighboring words are scrambled (4), the brain struggles to find its meaning.
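
As a toy version of this kind of manipulation (an illustration, not the lab's actual stimuli), the snippet below swaps only neighboring words, which degrades syntax while leaving local word combinations, and hence much of the meaning, recoverable; a full scramble destroys both.

```python
# Toy illustration of locally degrading sentence structure (not the lab's
# actual stimuli): swap each word with a near neighbor, versus a full scramble.
import random

def local_swap(words, max_distance=1, seed=0):
    """Swap each word with a randomly chosen neighbor at most max_distance away."""
    rng = random.Random(seed)
    words = list(words)
    for i in range(len(words) - max_distance):
        j = i + rng.randint(0, max_distance)
        words[i], words[j] = words[j], words[i]
    return words

sentence = "the cat chased a small mouse across the kitchen floor".split()
print(" ".join(local_swap(sentence)))                               # mildly degraded
print(" ".join(random.Random(1).sample(sentence, len(sentence))))   # fully scrambled
```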

“A lot of focus in language research has been on structure-building, or building a type of hierarchical graph of the words in a sentence. But actually the language system seems optimized and driven to find rich, representational meaning in a string of words processed together,” explains Fedorenko.

Computing Language

When asked about emerging areas of research, Fedorenko points to the data structures and algorithms underlying linguistic processing. Modern computational models can perform sophisticated tasks, including translation, ever more effectively. Consider Google Translate: a decade ago, the system translated one word at a time, with laughable results. Now, by treating words as context for one another rather than translating each in isolation, the latest artificial translation systems perform far more accurately. Understanding how they resolve meaning could be very revealing.

“Maybe we can link these models to human neural data to both get insights about linguistic computations in the human brain, and maybe help improve artificial systems by making them more human-like,” says Fedorenko.

She is also trying to understand how the system breaks down, how it over-performs, and even more philosophical questions. Can a person who loses language abilities (with aphasia, for example) recover them? This is a very relevant question given that the language-processing network occupies such specific brain regions. How are some unique people able to understand 10, 15, or even more languages? Do we need words to have thoughts?

Using a battery of approaches, Fedorenko seems poised to answer some of these questions.

What is the social brain?

As part of our Ask the Brain series, Anila D’Mello, a postdoctoral fellow in John Gabrieli’s lab, answers the question, “What is the social brain?”

_____

Anila D’Mello is the Simons Center for the Social Brain Postdoctoral Fellow in John Gabrieli’s lab at the McGovern Institute.

“Knock Knock.”
“Who’s there?”
“The Social Brain.”
“The Social Brain, who?”

Call and response jokes, like the “Knock Knock” joke above, leverage our common understanding of how a social interaction typically proceeds. Joke telling allows us to interact socially with others based on our shared experiences and understanding of the world. But where do these abilities “live” in the brain and how does the social brain develop?

Neuroimaging and lesion studies have identified a network of brain regions that support social interaction, including the ability to understand and partake in jokes – we refer to this as the “social brain.” This social brain network is made up of multiple regions throughout the brain that together support complex social interactions. Within this network, each region likely contributes to a specific type of social processing. The right temporo-parietal junction, for instance, is important for thinking about another person’s mental state, whereas the amygdala is important for the interpretation of emotional facial expressions and fear processing. Damage to these brain regions can have striking effects on social behaviors. One recent study even found that individuals with bigger amygdala volumes had larger and more complex social networks!

Though social interaction is such a fundamental human trait, we aren’t born with a prewired social brain.

Much of our social ability is grown and honed over time through repeated social interactions. Brain networks that support social interaction continue to specialize into adulthood. Neuroimaging work suggests that though newborn infants may have all the right brain parts to support social interaction, these regions may not yet be specialized or connected in the right way. This means that early experiences and environments can have large influences on the social brain. For instance, social neglect, especially very early in development, can have negative impacts on social behaviors and on how the social brain is wired. One prominent example is that of children raised in orphanages or institutions, who are sometimes faced with limited adult interaction or access to language. Children raised in these conditions are more likely to have social challenges including difficulties forming attachments. Prolonged lack of social stimulation also alters the social brain in these children resulting in changes in amygdala size and connections between social brain regions.

The social brain is not just a result of our environment. Genetics and biology also contribute to the social brain in ways we don’t yet fully understand. For example, individuals with autism / autistic individuals may experience difficulties with social interaction and communication. This may include challenges with things like understanding the punchline of a joke. These challenges in autism have led to the hypothesis that there may be differences in the social brain network in autism. However, despite documented behavioral differences in social tasks, there is conflicting brain imaging evidence for whether differences exist between people with and without autism in the social brain network.

Examples such as that of autism imply that the reality of the social brain is probably much more complex than the story painted here. It is likely that social interaction calls upon many different parts of the brain, even beyond those that we have termed the “social brain,” that must work in concert to support this highly complex set of behaviors. These include regions of the brain important for listening, seeing, speaking, and moving. In addition, it’s important to remember that the social brain and regions that make it up do not stand alone. Regions of the social brain also play an intimate role in language, humor, and other cognitive processes.

“Knock Knock”
“Who’s there?”
“The Social Brain”
“The Social Brain, who?”
“I just told you…didn’t you read what I wrote?”

Anila D’Mello earned her bachelor’s degree in psychology from Georgetown University in 2012, and went on to receive her PhD in Behavior, Cognition, and Neuroscience from American University in 2017. She joined the Gabrieli lab as a postdoc in 2017 and studies the neural correlates of social communication in autism.

_____

Do you have a question for The Brain? Ask it here.

Perception of musical pitch varies across cultures

People who are accustomed to listening to Western music, which is based on a system of notes organized in octaves, can usually perceive the similarity between notes that are the same but played in different registers — say, high C and middle C. However, a longstanding question is whether this is a universal phenomenon or one that has been ingrained by musical exposure.

This question has been hard to answer, in part because of the difficulty in finding people who have not been exposed to Western music. Now, a new study led by researchers from MIT and the Max Planck Institute for Empirical Aesthetics has found that unlike residents of the United States, people living in a remote area of the Bolivian rainforest usually do not perceive the similarities between two versions of the same note played at different registers (high or low).

“We’re finding that … there seems to be really striking variation in things that a lot of people would have presumed would be common across cultures and listeners,” says McDermott.

The findings suggest that although there is a natural mathematical relationship between the frequencies of every “C,” no matter what octave it’s played in, the brain only becomes attuned to those similarities after hearing music based on octaves, says Josh McDermott, an associate professor in MIT’s Department of Brain and Cognitive Sciences.

“It may well be that there is a biological predisposition to favor octave relationships, but it doesn’t seem to be realized unless you are exposed to music in an octave-based system,” says McDermott, who is also a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds and Machines.

The study also found that members of the Bolivian tribe, known as the Tsimane’, and Westerners do have a very similar upper limit on the frequency of notes that they can accurately distinguish, suggesting that that aspect of pitch perception may be independent of musical experience and biologically determined.

McDermott is the senior author of the study, which appears in the journal Current Biology on Sept. 19. Nori Jacoby, a former MIT postdoc who is now a group leader at the Max Planck Institute for Empirical Aesthetics, is the paper’s lead author. Other authors are Eduardo Undurraga, an assistant professor at the Pontifical Catholic University of Chile; Malinda McPherson, a graduate student in the Harvard/MIT Program in Speech and Hearing Bioscience and Technology; Joaquin Valdes, a graduate student at the Pontifical Catholic University of Chile; and Tomas Ossandon, an assistant professor at the Pontifical Catholic University of Chile.

Octaves apart

Cross-cultural studies of how music is perceived can shed light on the interplay between biological constraints and cultural influences that shape human perception. McDermott’s lab has performed several such studies with the participation of Tsimane’ tribe members, who live in relative isolation from Western culture and have had little exposure to Western music.

In a study published in 2016, McDermott and his colleagues found that Westerners and Tsimane’ had different aesthetic reactions to chords, or combinations of notes. To Western ears, the combination of C and F# is very grating, but Tsimane’ listeners rated this chord just as likeable as other chords that Westerners would interpret as more pleasant, such as C and G.

Later, Jacoby and McDermott found that both Westerners and Tsimane’ are drawn to musical rhythms composed of simple integer ratios, but the ratios they favor are different, based on which rhythms are more common in the music they listen to.

In their new study, the researchers studied pitch perception using an experimental design in which they play a very simple tune, only two or three notes, and then ask the listener to sing it back. The notes that were played could come from any octave within the range of human hearing, but listeners sang their responses within their vocal range, usually restricted to a single octave.

Eduardo Undurraga, an assistant professor at the Pontifical Catholic University of Chile, runs a musical pitch perception experiment with a member of the Tsimane’ tribe of the Bolivian rainforest. Photo: Josh McDermott

Western listeners, especially those who were trained musicians, tended to reproduce the tune an exact number of octaves above or below what they heard, though they were not specifically instructed to do so. In Western music, the frequency of a note doubles with each ascending octave, so tones with frequencies of 27.5 hertz, 55 hertz, 110 hertz, 220 hertz, and so on, are all heard as the note A.
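
That octave relationship can be written as a simple formula: a note n octaves above a base frequency f0 has frequency f0 × 2^n. A quick arithmetic check, starting from the lowest A on a standard piano:

```python
# Quick arithmetic check of the octave relationship: doubling the frequency
# raises a note by one octave, so all of these frequencies are heard as "A".
a0 = 27.5  # frequency of A0, the lowest A on a standard 88-key piano, in hertz
a_frequencies = [a0 * 2**n for n in range(5)]
print(a_frequencies)  # [27.5, 55.0, 110.0, 220.0, 440.0]
```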

Western listeners in the study, all of whom lived in New York or Boston, accurately reproduced sequences such as A-C-A, but in a different register, as though they heard the similarity of notes separated by octaves. However, the Tsimane’ did not.

“The relative pitch was preserved (between notes in the series), but the absolute pitch produced by the Tsimane’ didn’t have any relationship to the absolute pitch of the stimulus,” Jacoby says. “That’s consistent with the idea that perceptual similarity is something that we acquire from exposure to Western music, where the octave is structurally very important.”

The ability to reproduce the same note in different octaves may be honed by singing along with others whose natural registers are different, or singing along with an instrument being played in a different pitch range, Jacoby says.

Limits of perception

The study findings also shed light on the upper limits of pitch perception for humans. It has been known for a long time that Western listeners cannot accurately distinguish pitches above about 4,000 hertz, although they can still hear frequencies up to nearly 20,000 hertz. In a traditional 88-key piano, the highest note is about 4,100 hertz.

People have speculated that the piano was designed to go only that high because of a fundamental limit on pitch perception, but McDermott thought it could be possible that the opposite was true: That is, the limit was culturally influenced by the fact that few musical instruments produce frequencies higher than 4,000 hertz.

The researchers found that although Tsimane’ musical instruments usually have upper limits much lower than 4,000 hertz, Tsimane’ listeners could distinguish pitches very well up to about 4,000 hertz, as evidenced by accurate sung reproductions of those pitch intervals. Above that threshold, their perceptions broke down, very similarly to Western listeners.

“It looks almost exactly the same across groups, so we have some evidence for biological constraints on the limits of pitch,” Jacoby says.

One possible explanation for this limit is that once frequencies reach about 4,000 hertz, the firing rates of the neurons of our inner ear can’t keep up and we lose a critical cue with which to distinguish different frequencies.

“The new study contributes to the age-old debate about the interplay between culture and biological constraints in music,” says Daniel Pressnitzer, a senior research scientist at Paris Descartes University, who was not involved in the research. “This unique, precious, and extensive dataset demonstrates both striking similarities and unexpected differences in how Tsimane’ and Western listeners perceive or conceive musical pitch.”

Jacoby and McDermott now hope to expand their cross-cultural studies to other groups who have had little exposure to Western music, and to perform more detailed studies of pitch perception among the Tsimane’.

Such studies have already shown the value of including research participants other than the Western-educated, relatively wealthy college undergraduates who are the subjects of most academic studies on perception, McDermott says. These broader studies allow researchers to tease out different elements of perception that cannot be seen when examining only a single, homogenous group.

“We’re finding that there are some cross-cultural similarities, but there also seems to be really striking variation in things that a lot of people would have presumed would be common across cultures and listeners,” McDermott says. “These differences in experience can lead to dissociations of different aspects of perception, giving you clues to what the parts of the perceptual system are.”

The research was funded by the James S. McDonnell Foundation, the National Institutes of Health, and the Presidential Scholar in Society and Neuroscience Program at Columbia University.

Can I rewire my brain?

As part of our Ask the Brain series, Halie Olson, a graduate student in the labs of John Gabrieli and Rebecca Saxe, pens her answer to the question, “Can I rewire my brain?”

_____

Yes, kind of, sometimes – it all depends on what you mean by “rewiring” the brain.

Halie Olson, a graduate student in the Gabrieli and Saxe labs.

If you’re asking whether you can remove all memories of your ex from your head, then no. (That’s probably for the best – just watch Eternal Sunshine of the Spotless Mind.) However, if you’re asking whether you can teach a dog new tricks – that have a physical implementation in the brain – then yes.

To embrace the analogy that “rewiring” alludes to, let’s imagine you live in an old house with outlets in less-than-optimal locations. You really want your brand-new TV to be plugged in on the far side of the living room, but there is no outlet to be found. So you call up your electrician, she pops over, and moves some wires around in the living room wall to give you a new outlet. No sweat!

Local changes in neural connectivity happen throughout the lifespan. With over 100 billion neurons and 100 trillion connections – or synapses – between these neurons in the adult human brain, it is unsurprising that some pathways end up being more important than others. When we learn something new, the connections between relevant neurons communicating with each other are strengthened. To paraphrase Donald Hebb, one of the most influential psychologists of the twentieth century, “neurons that fire together, wire together” – by forming new synapses or more efficiently connecting the ones that are already there. This ability to rewire neural connections at a local level is a key feature of the brain, enabling us to tailor our neural infrastructure to our needs.
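
For the curious, Hebb's principle can be captured in a few lines of toy code: a synaptic weight is nudged upward whenever the pre- and postsynaptic neurons are active at the same time. This is a schematic illustration of the idea, not a model of real neurons.

```python
# Toy illustration of the Hebbian principle ("fire together, wire together"):
# the weight of a synapse grows whenever pre- and postsynaptic activity coincide.
import numpy as np

rng = np.random.default_rng(2)
learning_rate = 0.01
w = 0.0  # strength of one synapse

for _ in range(1000):
    pre = rng.random() < 0.2                       # presynaptic neuron fires on ~20% of steps
    post = pre or (rng.random() < 0.05)            # postsynaptic neuron usually co-fires
    w += learning_rate * float(pre) * float(post)  # Hebbian update: dw = rate * pre * post

print(f"synaptic weight after correlated firing: {w:.2f}")
```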

Plasticity in our brain allows us to learn, adjust, and thrive in our environments.

We can also see this plasticity in the brain at a larger scale. My favorite example of “rewiring” in the brain is when children learn to read. Our brains did not evolve to enable us to read – there is no built-in “reading region” that magically comes online when a child enters school. However, if you stick a proficient reader in an MRI scanner, you will see a region in the left lateral occipitotemporal sulcus (that is, the back bottom left of your cortex) that is particularly active when you read written text. Before children learn to read, this region – known as the visual word form area – is not exceptionally interested in words, but as children get acquainted with written language and start connecting letters with sounds, it becomes selective for familiar written language – no matter the font, CaPItaLIZation, or size.

Now, let’s say that you wake up in the middle of the night with a desire to move your oven and stovetop from the kitchen into your swanky new living room with the TV. You call up your electrician – she tells you this is impossible, and to stop calling her in the middle of the night.

Similarly, your brain comes with a particular infrastructure – a floorplan, let’s call it – that cannot be easily adjusted when you are an adult. Large lesions tend to have large consequences. For instance, an adult who suffers a serious stroke in their left hemisphere will likely struggle with language, a condition called aphasia. Young children’s brains, on the other hand, can sometimes rewire in profound ways. An entire half of the brain can be damaged early on with minimal functional consequences. So if you’re going for a remodel? Better do it really early.

Plasticity in our brain allows us to learn, adjust, and thrive in our environments. It also gives neuroscientists like me something to study – since clearly I would fail as an electrician.

Halie Olson earned her bachelor’s degree in neurobiology from Harvard College in 2017. She is currently a graduate student in MIT’s Department of Brain and Cognitive Sciences working with John Gabrieli and Rebecca Saxe. She studies how early life experiences and environments impact brain development, particularly in the context of reading and language, and what this means for children’s educational outcomes.

_____

Do you have a question for The Brain? Ask it here.

Hearing through the clatter

In a busy coffee shop, our eardrums are inundated with sound waves – people chatting, the clatter of cups, music playing – yet our brains somehow manage to untangle relevant sounds, like a barista announcing that our “coffee is ready,” from insignificant noise. A new McGovern Institute study sheds light on how the brain accomplishes the task of extracting meaningful sounds from background noise – findings that could one day help to build artificial hearing systems and aid development of targeted hearing prosthetics.

“These findings reveal a neural correlate of our ability to listen in noise, and at the same time demonstrate functional differentiation between different stages of auditory processing in the cortex,” explains Josh McDermott, an associate professor of brain and cognitive sciences at MIT, a member of the McGovern Institute and the Center for Brains, Minds and Machines, and the senior author of the study.

The auditory cortex, a part of the brain that responds to sound, has long been known to have distinct anatomical subregions, but the role these areas play in auditory processing has remained a mystery. In their study published today in Nature Communications, McDermott and former graduate student Alex Kell discovered that these subregions respond differently to the presence of background noise, suggesting that auditory processing occurs in steps that progressively hone in on and isolate a sound of interest.

Background check

Previous studies have shown that the primary and non-primary subregions of the auditory cortex respond to sound with different dynamics, but these studies were largely based on brain activity in response to speech or simple synthetic sounds (such as tones and clicks). Little was known about how these regions might work to subserve everyday auditory behavior.

To test these subregions under more realistic conditions, McDermott and Kell, who is now a postdoctoral researcher at Columbia University, assessed changes in human brain activity while subjects listened to natural sounds with and without background noise.

While lying in an MRI scanner, subjects listened to 30 different natural sounds, ranging from meowing cats to ringing phones, that were presented alone or embedded in real-world background noise such as heavy rain.

“When I started studying audition,” explains Kell, “I started just sitting around in my day-to-day life, just listening, and was astonished at the constant background noise that seemed to usually be filtered out by default. Most of these noises tended to be pretty stable over time, suggesting we could experimentally separate them. The project flowed from there.”

To their surprise, Kell and McDermott found that the primary and non-primary regions of the auditory cortex responded differently to natural sound depending upon whether background noise was present.

Primary auditory cortex (outlined in white) responses change (blue) when background noise is present, whereas non-primary activity is robust to background noise (yellow). Image: Alex Kell

They found that activity in the primary auditory cortex was altered when background noise was present, suggesting that this region had not yet differentiated between meaningful sounds and background noise. Non-primary regions, however, responded similarly to natural sounds irrespective of whether noise was present, suggesting that cortical signals generated by sound are transformed or “cleaned up” to remove background noise by the time they reach the non-primary auditory cortex.
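
One simple way to quantify this kind of noise robustness (an illustration of the idea, with simulated data, rather than the study's actual analysis) is to correlate a region's responses to a set of natural sounds presented alone with its responses to the same sounds embedded in background noise; a correlation near 1 means the noise barely changes the response.

```python
# Illustrative sketch (simulated data, not the study's analysis): a region is
# "noise robust" if its responses to sounds alone and to the same sounds in
# background noise are highly correlated across the sound set.
import numpy as np

def noise_robustness(resp_clean, resp_noisy):
    """Correlation of responses across sounds; near 1.0 means noise has little effect."""
    return np.corrcoef(resp_clean, resp_noisy)[0, 1]

rng = np.random.default_rng(3)
n_sounds = 30
clean = rng.standard_normal(n_sounds)

# Simulate the qualitative pattern described above: primary responses shift
# with noise, non-primary responses change much less.
primary_noisy = 0.5 * clean + 0.9 * rng.standard_normal(n_sounds)
nonprimary_noisy = 0.95 * clean + 0.2 * rng.standard_normal(n_sounds)

print(f"primary robustness:     {noise_robustness(clean, primary_noisy):.2f}")
print(f"non-primary robustness: {noise_robustness(clean, nonprimary_noisy):.2f}")
```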

“We were surprised by how big the difference was between primary and non-primary areas,” explained Kell, “so we ran a bunch more subjects but kept seeing the same thing. We had a ton of questions about what might be responsible for this difference, and that’s why we ended up running all these follow-up experiments.”

A general principle

Kell and McDermott went on to test whether these responses were specific to particular sounds, and discovered that the effect remained stable no matter the source or type of sound. Music, speech, or a squeaky toy all activated the non-primary cortical regions similarly, whether or not background noise was present.

The authors also tested whether attention is relevant. Even when the researchers sneakily distracted subjects with a visual task in the scanner, the cortical subregions responded to meaningful sound and background noise in the same way, showing that attention is not driving this aspect of sound processing. In other words, even when we are focused on reading a book, our brain is diligently sorting the sound of our meowing cat from the patter of heavy rain outside.

Future directions

The McDermott lab is now building computational models of the so-called “noise robustness” found in the Nature Communications study and Kell is pursuing a finer-grained understanding of sound processing in his postdoctoral work at Columbia, by exploring the neural circuit mechanisms underlying this phenomenon.

By gaining a deeper understanding of how the brain processes sound, the researchers hope their work will contribute to improved diagnosis and treatment of hearing dysfunction. Such research could help reveal the origins of the listening difficulties that accompany developmental disorders or age-related hearing loss. For instance, if hearing loss results from dysfunction in sensory processing, it might be accompanied by abnormal noise robustness in the auditory cortex. Normal noise robustness might instead suggest that the impairment lies elsewhere in the brain, for example a breakdown in higher executive function.

“In the future,” McDermott says, “we hope these noninvasive measures of auditory function may become valuable tools for clinical assessment.”