How the brain responds to reward is linked to socioeconomic background

MIT neuroscientists have found that the brain’s sensitivity to rewarding experiences — a critical factor in motivation and attention — can be shaped by socioeconomic conditions.

In a study of 12- to 14-year-olds whose socioeconomic status (SES) varied widely, the researchers found that children from lower SES backgrounds showed less sensitivity to reward than those from more affluent backgrounds.

Using functional magnetic resonance imaging (fMRI), the research team measured brain activity as the children played a guessing game in which they earned extra money for each correct guess. When participants from higher SES backgrounds guessed correctly, a part of the brain called the striatum, which is linked to reward, lit up much more than in children from lower SES backgrounds.

The brain imaging results also coincided with behavioral differences in how participants from lower and higher SES backgrounds responded to correct guesses. The findings suggest that lower SES circumstances may prompt the brain to adapt to the environment by dampening its response to rewards, which are often scarcer in low SES environments.

“If you’re in a highly resourced environment, with many rewards available, your brain gets tuned in a certain way. If you’re in an environment in which rewards are more scarce, then your brain accommodates the environment in which you live. Instead of being overresponsive to rewards, it seems like these brains, on average, are less responsive, because probably their environment has been less consistent in the availability of rewards,” says John Gabrieli, the Grover Hermann Professor of Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

Gabrieli and Rachel Romeo, a former MIT postdoc who is now an assistant professor in the Department of Human Development and Quantitative Methodology at the University of Maryland, are the senior authors of the study. MIT postdoc Alexandra Decker is the lead author of the paper, which appears today in the Journal of Neuroscience.

Reward response

Previous research has shown that children from lower SES backgrounds tend to perform worse on tests of attention and memory, and they are more likely to experience depression and anxiety. However, until now, few studies have looked at the possible association between SES and reward sensitivity.

In the new study, the researchers focused on a part of the brain called the striatum, which plays a significant role in reward response and decision-making. Studies in people and animal models have shown that this region becomes highly active during rewarding experiences.

To investigate potential links between reward sensitivity, the striatum, and socioeconomic status, the researchers recruited more than 100 adolescents from a range of SES backgrounds, as measured by household income and how much education their parents received.

Each of the participants underwent fMRI scanning while they played a guessing game. The participants were shown a series of numbers between 1 and 9, and before each trial, they were asked to guess whether the next number would be greater than or less than 5. They were told that for each correct guess, they would earn an extra dollar, and for each incorrect guess, they would lose 50 cents.

Unbeknownst to the participants, the game was set up to control whether each guess would be scored as correct or incorrect. This allowed the researchers to ensure that every participant had a similar experience, including periods of abundant rewards and periods of few rewards. In the end, everyone won the same amount of money (in addition to a stipend each participant received for taking part in the study).
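The rigged trial structure described above can be sketched in a few lines of code. This is only an illustration: the block lengths, win probabilities, and trial counts below are made-up placeholders, not the study's actual design parameters.

```python
import random

def make_trial_schedule(n_trials=100, block_len=10, seed=0):
    """Build a predetermined win/loss schedule: alternating reward-rich
    and reward-poor blocks. Because the schedule is fixed in advance and
    identical for everyone, all participants finish with the same total.
    (All parameters here are illustrative guesses.)"""
    rng = random.Random(seed)
    outcomes = []
    rich = True
    for _start in range(0, n_trials, block_len):
        p_win = 0.8 if rich else 0.2   # abundant vs. scarce reward blocks
        outcomes += [rng.random() < p_win for _ in range(block_len)]
        rich = not rich
    return outcomes

def total_winnings(outcomes, win=1.00, loss=0.50):
    # $1 per "correct" guess, minus 50 cents per "incorrect" one
    return sum(win if o else -loss for o in outcomes)

schedule = make_trial_schedule()
print(total_winnings(schedule))
```

Because the pseudo-random generator is seeded, every run (and every simulated participant) sees the same outcome sequence, mirroring how the experimenters equated participants' experiences.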

Previous work has shown that the brain appears to track the rate of rewards available. When rewards are abundant, people or animals tend to respond more quickly because they don’t want to miss out on the many available rewards. The researchers saw that in this study as well: When participants were in a period when most of their responses were correct, they tended to respond more quickly.

“If your brain is telling you there’s a really high chance that you’re going to receive a reward in this environment, it’s going to motivate you to collect rewards, because if you don’t act, you’re missing out on a lot of rewards,” Decker says.
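One simple way to model this kind of reward-rate tracking is an exponentially weighted running average whose estimate speeds up responding as it climbs. The learning rate and response-time constants below are arbitrary choices for illustration, not values fitted to the study's data.

```python
def update_reward_rate(rate, outcome, alpha=0.1):
    """Exponentially weighted running estimate of the local reward rate,
    a simple stand-in for the reward-rate tracking described above
    (alpha is an illustrative learning rate, not a fitted value)."""
    return rate + alpha * (outcome - rate)

def response_time(rate, base_rt=600.0, gain=200.0):
    # Faster responses (shorter reaction time, in ms) when the estimated
    # reward rate is high: you don't want to miss abundant rewards.
    return base_rt - gain * rate

rate = 0.5
for outcome in [1, 1, 1, 0, 0]:   # a reward-rich run, then a drought
    rate = update_reward_rate(rate, outcome)
print(round(rate, 3), round(response_time(rate), 1))
```

A blunted version of this mechanism, with a smaller effective gain on the reward-rate signal, would produce exactly the flatter behavioral and striatal responses the study reports in lower-SES participants.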

Brain scans showed that the degree of activation in the striatum appeared to track fluctuations in the rate of rewards across time, which the researchers think could act as a motivational signal that there are many rewards to collect. The striatum lit up more during periods in which rewards were abundant and less during periods in which rewards were scarce. However, this effect was less pronounced in the children from lower SES backgrounds, suggesting their brains were less attuned to fluctuations in the rate of reward over time.

The researchers also found that during periods of scarce rewards, participants tended to take longer to respond after a correct guess, another phenomenon that has been shown before. It’s unknown exactly why this happens, but two possible explanations are that people are savoring their reward or that they are pausing to update the reward rate. However, once again, this effect was less pronounced in the children from lower SES backgrounds — that is, they did not pause as long after a correct guess during the scarce-reward periods.

“There was a reduced response to reward, which is really striking. It may be that if you’re from a lower SES environment, you’re not as hopeful that the next response will gain similar benefits, because you may have a less reliable environment for earning rewards,” Gabrieli says. “It just points out the power of the environment. In these adolescents, it’s shaping their psychological and brain response to reward opportunity.”

Environmental effects

The fMRI scans performed during the study also revealed that children from lower SES backgrounds showed less activation in the striatum when they guessed correctly, suggesting that their brains have a dampened response to reward.

The researchers hypothesize that these differences in reward sensitivity may have developed over time, in response to the children’s environments.

“Socioeconomic status is associated with the degree to which you experience rewards over the course of your lifetime,” Decker says. “So, it’s possible that receiving a lot of rewards perhaps reinforces behaviors that make you receive more rewards, and somehow this tunes the brain to be more responsive to rewards. Whereas if you are in an environment where you receive fewer rewards, your brain might become, over time, less attuned to them.”

The study also points out the value of recruiting study subjects from a range of SES backgrounds, which takes more effort but yields important results, the researchers say.

“Historically, many studies have involved the easiest people to recruit, who tend to be people who come from advantaged environments. If we don’t make efforts to recruit diverse pools of participants, we almost always end up with children and adults who come from high-income, high-education environments,” Gabrieli says. “Until recently, we did not realize that principles of brain development vary in relation to the environment in which one grows up, and there was very little evidence about the influence of SES.”

The research was funded by the William and Flora Hewlett Foundation and a Natural Sciences and Engineering Research Council of Canada Postdoctoral Fellowship.

Study reveals a universal pattern of brain wave frequencies

Throughout the brain’s cortex, neurons are arranged in six distinctive layers, which can be readily seen with a microscope. A team of MIT and Vanderbilt University neuroscientists has now found that these layers also show distinct patterns of electrical activity, which are consistent over many brain regions and across several animal species, including humans.

The researchers found that in the topmost layers, neuron activity is dominated by rapid oscillations known as gamma waves. In the deeper layers, slower oscillations called alpha and beta waves predominate. The universality of these patterns suggests that these oscillations are likely playing an important role across the brain, the researchers say.

“When you see something that consistent and ubiquitous across cortex, it’s playing a very fundamental role in what the cortex does,” says Earl Miller, the Picower Professor of Neuroscience, a member of MIT’s Picower Institute for Learning and Memory, and one of the senior authors of the new study.

Imbalances in how these oscillations interact with each other may be involved in brain disorders such as attention deficit hyperactivity disorder, the researchers say.

“Overly synchronous neural activity is known to play a role in epilepsy, and now we suspect that different pathologies of synchrony may contribute to many brain disorders, including disorders of perception, attention, memory, and motor control. In an orchestra, one instrument played out of synchrony with the rest can disrupt the coherence of the entire piece of music,” says Robert Desimone, director of MIT’s McGovern Institute for Brain Research and one of the senior authors of the study.

André Bastos, an assistant professor of psychology at Vanderbilt University, is also a senior author of the open-access paper, which appears today in Nature Neuroscience. The lead authors of the paper are MIT research scientist Diego Mendoza-Halliday and MIT postdoc Alex Major.

Layers of activity

The human brain contains billions of neurons, each of which has its own electrical firing patterns. Together, groups of neurons with similar patterns generate oscillations of electrical activity, or brain waves, which can have different frequencies. Miller’s lab has previously shown that high-frequency gamma rhythms are associated with encoding and retrieving sensory information, while low-frequency beta rhythms act as a control mechanism that determines which information is read out from working memory.

His lab has also found that in certain parts of the prefrontal cortex, different brain layers show distinctive patterns of oscillation: faster oscillation at the surface and slower oscillation in the deep layers. One study, led by Bastos when he was a postdoc in Miller’s lab, showed that as animals performed working memory tasks, lower-frequency rhythms generated in deeper layers regulated the higher-frequency gamma rhythms generated in the superficial layers.

In addition to working memory, the brain’s cortex is also the seat of thought, planning, and high-level processing of emotion and sensory information. Throughout the regions involved in these functions, neurons are arranged in six layers, and each layer has its own distinctive combination of cell types and connections with other brain areas.

“The cortex is organized anatomically into six layers, no matter whether you look at mice or humans or any mammalian species, and this pattern is present in all cortical areas within each species,” Mendoza-Halliday says. “Unfortunately, a lot of studies of brain activity have been ignoring those layers because when you record the activity of neurons, it’s been difficult to understand where they are in the context of those layers.”

In the new paper, the researchers wanted to explore whether the layered oscillation pattern they had seen in the prefrontal cortex is more widespread, occurring across different parts of the cortex and across species.

Using a combination of data acquired in Miller’s lab, Desimone’s lab, and labs from collaborators at Vanderbilt, the Netherlands Institute for Neuroscience, and the University of Western Ontario, the researchers were able to analyze 14 different areas of the cortex, from four mammalian species. This data included recordings of electrical activity from three human patients who had electrodes inserted in the brain as part of a surgical procedure they were undergoing.

Recording from individual cortical layers has been difficult in the past, because each layer is less than a millimeter thick, making it hard to know which layer an electrode is recording from. For this study, electrical activity was recorded using special electrodes that record from all of the layers at once; the data were then fed into a new computational algorithm the authors designed, termed FLIP (frequency-based layer identification procedure), which determines which layer each signal came from.

“More recent technology allows recording of all layers of cortex simultaneously. This paints a broader perspective of microcircuitry and allowed us to observe this layered pattern,” Major says. “This work is exciting because it is both informative of a fundamental microcircuit pattern and provides a robust new technique for studying the brain. The pattern can be observed whether the brain is performing a task or at rest, and in as little as five to 10 seconds.”
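While the published FLIP procedure is surely more sophisticated than anything shown here, its core idea can be illustrated with synthetic data: superficial channels carry relatively more gamma power and deep channels relatively more alpha/beta power, so the superficial/deep boundary can be estimated as the depth at which the two power profiles cross. The power profiles below are made up for illustration and are not taken from the paper.

```python
def layer_crossover(gamma_power, beta_power):
    """Toy sketch of the principle behind frequency-based layer
    identification: given per-channel band power on a laminar probe
    (ordered surface to deep), return the index of the first channel
    where alpha/beta power overtakes gamma power."""
    for i, (g, b) in enumerate(zip(gamma_power, beta_power)):
        if g < b:
            return i
    return None

# Synthetic power profiles for a 16-channel probe, surface to deep
depth = [i / 15 for i in range(16)]
gamma = [1.0 - 0.8 * d for d in depth]   # strongest superficially
beta = [0.2 + 0.8 * d for d in depth]    # strongest in deep layers
print(layer_crossover(gamma, beta))
```

In practice the real algorithm must also handle noisy spectra and probes that do not span all six layers, which is part of what makes a robust automated procedure valuable.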

Across all species, in each region studied, the researchers found the same layered activity pattern.

“We did a mass analysis of all the data to see if we could find the same pattern in all areas of the cortex, and voilà, it was everywhere. That was a real indication that what had previously been seen in a couple of areas was representing a fundamental mechanism across the cortex,” Mendoza-Halliday says.

Maintaining balance

The findings support a model that Miller’s lab has previously put forth, which proposes that the brain’s spatial organization helps it to incorporate new information, which is carried by high-frequency oscillations, into existing memories and brain processes, which are maintained by low-frequency oscillations. As information passes from layer to layer, input can be incorporated as needed to help the brain perform particular tasks such as baking cookies from a new recipe or remembering a phone number.

“The consequence of a laminar separation of these frequencies, as we observed, may be to allow superficial layers to represent external sensory information with faster frequencies, and for deep layers to represent internal cognitive states with slower frequencies,” Bastos says. “The high-level implication is that the cortex has multiple mechanisms involving both anatomy and oscillations to separate ‘external’ from ‘internal’ information.”

Under this theory, imbalances between high- and low-frequency oscillations can lead either to attention deficits such as ADHD, when the higher frequencies dominate and too much sensory information gets in, or to delusional disorders such as schizophrenia, when the low-frequency oscillations are too strong and not enough sensory information gets in.

“The proper balance between the top-down control signals and the bottom-up sensory signals is important for everything the cortex does,” Miller says. “When the balance goes awry, you get a wide variety of neuropsychiatric disorders.”

The researchers are now exploring whether measuring these oscillations could help to diagnose these types of disorders. They are also investigating whether rebalancing the oscillations could alter behavior — an approach that could one day be used to treat attention deficits or other neurological disorders, the researchers say.

The researchers also hope to work with other labs to characterize the layered oscillation patterns in more detail across different brain regions.

“Our hope is that with enough of that standardized reporting, we will start to see common patterns of activity across different areas or functions that might reveal a common mechanism for computation that can be used for motor outputs, for vision, for memory and attention, et cetera,” Mendoza-Halliday says.

The research was funded by the U.S. Office of Naval Research, the U.S. National Institutes of Health, the U.S. National Eye Institute, the U.S. National Institute of Mental Health, the Picower Institute, a Simons Center for the Social Brain Postdoctoral Fellowship, and a Canadian Institutes of Health Postdoctoral Fellowship.

Calling neurons to attention

The world assaults our senses, exposing us to more noise and color and scents and sensations than we can fully comprehend. Our brains keep us tuned in to what’s important, letting less relevant sights and sounds fade into the background while we focus on the most salient features of our surroundings. Now, scientists at MIT’s McGovern Institute have a better understanding of how the brain manages this critical task of directing our attention.

In the January 15, 2023, issue of the journal Neuron, a team led by Diego Mendoza-Halliday, a research scientist in McGovern Institute Director Robert Desimone’s lab, reports on a group of neurons in the brain’s prefrontal cortex that are critical for directing an animal’s visual attention. Their findings not only demonstrate this brain region’s important role in guiding attention, but also help establish attention as a function that is distinct from other cognitive functions, such as short-term memory, in the brain.

Attention and working memory

Mendoza-Halliday, who is now an assistant professor at the University of Pittsburgh, explains that attention has a close relationship to working memory, which the brain uses to temporarily store information after our senses take it in. The two brain functions strongly influence one another: We’re more likely to remember something if we pay attention to it, and paying attention to certain features of our environment may involve representing those features in our working memory. For example, he explains, both attention and working memory are called on when searching for a triangular red keychain on a cluttered desk: “What my brain does is it remembers that my keyholder is red and it’s a triangle, and then builds a working memory representation and uses it as a search template. So now everything that is red and everything that is a triangle receives preferential processing, or is attended to.”

Working memory and attention are so closely associated that some neuroscientists have proposed that the brain calls on the same neural mechanisms to create them. “This has led to the belief that maybe attention and working memory are just two sides of the same coin—that they’re basically the same function in different modes,” Mendoza-Halliday says. His team’s findings, however, say otherwise.

Circuit manipulation

To study the origins of attention in the brain, Mendoza-Halliday and colleagues trained monkeys to focus their attention on a visual feature that matches a cue they have seen before. After seeing a set of dots move across the screen, they must call on their working memory to remember the direction of that movement for a few seconds while the screen goes blank. Then the experimenters present the animals with more moving dots, this time traveling in multiple directions. By focusing on the dots moving in the same direction as the first set they saw, the monkeys are able to recognize when those dots briefly accelerate. Reporting on the speed change earns the animals a reward.

While the monkeys performed this task, the researchers monitored cells in several brain regions, including the prefrontal cortex, which Desimone’s team has proposed plays a role in directing attention. The activity patterns they recorded suggested that distinct groups of cells participated in the attention and working memory aspects of the task.

To better understand those cells’ roles, the researchers manipulated their activity. They used optogenetics, an approach in which a light-sensitive protein is introduced into neurons so that they can be switched on or off with a pulse of light. Desimone’s lab, in collaboration with Edward Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT and a member of the McGovern Institute, pioneered the use of optogenetics in primates. “Optogenetics allows us to distinguish between correlation and causality in neural circuits,” says Desimone, the Doris and Don Berkey Professor of Neuroscience and a professor of brain and cognitive sciences at MIT. “If we turn off a circuit using optogenetics, and the animal can no longer perform the task, that is good evidence for a causal role of the circuit.”

Using this optogenetic method, they switched off neurons in a specific portion of the brain’s lateral prefrontal cortex for a few hundred milliseconds at a time as the monkeys performed their dot-tracking task. The researchers found that they could switch off signaling from the lateral prefrontal cortex early, when the monkeys needed their working memory but had no dots to attend to, without interfering with the animals’ ability to complete the task. But when they blocked signaling when the monkeys needed to focus their attention, the animals performed poorly.

The team also monitored activity in the brain’s visual cortex during the moving-dot task. When the lateral prefrontal cortex was shut off, neurons in connected visual areas showed less heightened reactivity to movement in the direction the monkey was attending to. Mendoza-Halliday says this suggests that cells in the lateral prefrontal cortex are important for telling sensory-processing circuits what visual features to pay attention to.

The discovery that at least part of the brain’s lateral prefrontal cortex is critical for attention but not for working memory offers a new view of the relationship between the two. “It is a physiological demonstration that working memory and attention cannot be the same function, since they rely on partially separate neuronal populations and neural mechanisms,” Mendoza-Halliday says.

Mapping healthy cells’ connections in the brain

McGovern Institute Principal Research Scientist Ian Wickersham. Photo: Caitlin Cunningham

A new tool developed by researchers at MIT’s McGovern Institute gives neuroscientists the power to find connected neurons within the brain’s tangled network of cells, and then follow or manipulate those neurons over a prolonged period. Its development, led by Principal Research Scientist Ian Wickersham, transforms a powerful tool for exploring the anatomy of the brain into a sophisticated system for studying brain function.

Wickersham and colleagues have designed their system to enable long-term analysis and experiments on groups of neurons that reach through the brain to signal to select groups of cells. It is described in the January 11, 2024, issue of the journal Nature Neuroscience. “This second-generation system will allow imaging, recording, and control of identified networks of synaptically-connected neurons in the context of behavioral studies and other experimental designs lasting weeks, months, or years,” Wickersham says.

The system builds on an approach to anatomical tracing that Wickersham developed in 2007, as a graduate student in Edward Callaway’s lab at the Salk Institute for Biological Studies. Its key is a modified version of a rabies virus, whose natural—and deadly—life cycle involves traveling through the brain’s neural network.

Viral tracing

The rabies virus is useful for tracing neuronal connections because once it has infected the nervous system, it spreads through the neural network by co-opting the very junctions that neurons use to communicate with one another. Hopping across those junctions, or synapses, the virus can pass from cell to cell. Traveling in the opposite direction of neuronal signals, it reaches the brain, where it continues to spread.

To use the rabies virus to identify specific connections within the brain, Wickersham modified it to limit its spread. His original tracing system uses a rabies virus that lacks an essential gene. When researchers deliver the modified virus to the neurons whose connections they want to map, they also instruct those neurons to make the protein encoded by the virus’s missing gene. That allows the virus to replicate and travel across the synapses that link an infected cell to others in the network. Once it is inside a new cell, the virus is deprived of the critical protein and can go no farther.

Under a microscope, a fluorescent protein delivered by the modified virus lights up, exposing infected cells: those to which the virus was originally delivered as well as any neurons that send it direct inputs. Because the virus crosses only one synapse after leaving the cell it originally infected, the technique is known as monosynaptic tracing.

Labs around the world now use this method to identify which brain cells send signals to a particular set of neurons. But while the virus used in the original system can’t spread through the brain like a natural rabies virus, it still sickens the cells it does infect. Infected cells usually die in about two weeks, and that has limited scientists’ ability to conduct further studies of the cells whose connections they trace. “If you want to then go on to manipulate those connected populations of cells, you have a very short time window,” Wickersham says.

Reducing toxicity

To keep cells healthy after monosynaptic tracing, Wickersham, postdoctoral researcher Lei Jin, and colleagues devised a new approach. They began by deleting a second gene from the modified virus they use to label cells. That gene encodes an enzyme the rabies virus needs to produce the proteins encoded in its own genome. As with the original system, neurons are instructed to create the virus’s missing proteins, equipping the virus to replicate inside those cells. In this case, this is done in mice that have been genetically modified to produce the second deleted viral gene in specific sets of neurons.

The initially-infected “starter cells” at the injection site in the substantia nigra, pars compacta. Blue: tyrosine hydroxylase immunostaining, showing dopaminergic cells; green: enhanced green fluorescent protein showing neurons able to be initially infected with the rabies virus; red: the red fluorescent protein tdTomato, reporting the presence of the second-generation rabies virus. Image: Ian Wickersham, Lei Jin

To limit toxicity, Wickersham and his team built in a control that allows researchers to switch off cells’ production of viral proteins once the virus has had time to replicate and begin its spread to connected neurons. With those proteins no longer available to support the viral life cycle, the tracing tool is rendered virtually harmless. After following mice for up to 10 weeks, the researchers detected minimal toxicity in neurons where monosynaptic tracing was initiated. And, Wickersham says, “as far as we can tell, the trans-synaptically labeled cells are completely unscathed.”

Transsynaptically labeled cells in the striatum, which provides input to the dopaminergic cells of the substantia nigra. These cells show no morphological abnormalities or any other indication of toxicity five weeks after the rabies virus injection. Image: Ian Wickersham, Lei Jin

That means neuroscientists can now pair monosynaptic tracing with many of neuroscience’s most powerful tools for functional studies. To facilitate those experiments, Wickersham’s team encoded enzymes called recombinases into their connection-tracing rabies virus, which enables the introduction of genetically encoded research tools to targeted cells. After tracing cells’ connections, researchers will be able to manipulate those neurons, follow their activity, and explore their contributions to animal behavior. Such experiments will deepen scientists’ understanding of the inputs select groups of neurons receive from elsewhere in the brain, as well as the cells that are sending those signals.

Jin, who is now a principal investigator at Lingang Laboratory in Shanghai, says colleagues are already eager to begin working with the new non-toxic tracing system. Meanwhile, Wickersham’s group has already started experimenting with a third-generation system, which they hope will improve efficiency and be even more powerful.

The promise of gene therapy

McGovern Institute Director Robert Desimone. Photo: Steph Stevens

As we start 2024, I hope you can join me in celebrating a historic recent advance: the FDA approval of Casgevy, a bold new treatment for devastating sickle cell disease and the world’s first approved CRISPR gene therapy.

We are proud to share that this pioneering therapy, developed by Vertex Pharmaceuticals and CRISPR Therapeutics, licenses the CRISPR discoveries of McGovern scientist and Poitras Professor of Neuroscience Feng Zhang.

It is amazing to think that Feng’s breakthrough work adapting CRISPR-Cas9 for genome editing in eukaryotic cells was published only 11 years ago today in Science.

Incredibly, CRISPR-Cas9 rapidly transitioned from proof-of-concept experiments to an approved treatment in just over a decade.

McGovern scientists are determined to maintain the momentum!

Our labs are creating new gene therapies that are already in clinical trials or preparing to enroll patients in trials. For instance, Feng Zhang’s team has developed therapies currently in clinical trials for lymphoblastic leukemia and beta thalassemia, while another McGovern researcher, Guoping Feng, the Poitras Professor of Brain and Cognitive Sciences at MIT, has made advancements that lay the groundwork for a new gene therapy to treat a severe form of autism spectrum disorder. It is expected to enter clinical trials later this year. Moreover, McGovern fellows Omar Abudayyeh and Jonathan Gootenberg created programmable genomic tools that are now licensed for use in monogenic liver diseases and autoimmune disorders.

These exciting innovations stem from your steadfast support of our high-risk, high-reward research. Your generosity is enabling our scientists to pursue basic research in other areas with potential therapeutic applications in the future, such as mechanisms of pain, addiction, the connections between the brain and gut, the workings of memory and attention, and the bi-directional influence of artificial intelligence on brain research. All of this fundamental research is being fueled by major new advances in technology, many of them developed here.

As we enter a new year filled with anticipation following our inaugural gene therapy, I want to express my heartfelt gratitude for your invaluable support in advancing our research programs. Your role in pushing our research to new heights is valued by all faculty, students, and researchers at the McGovern Institute. We can’t wait to share our continued progress with you.

Thank you again for partnering with us to make great scientific achievements possible.

With appreciation and best wishes,

Robert Desimone, PhD
Director, McGovern Institute
Doris and Don Berkey Professor of Neuroscience, MIT

Complex, unfamiliar sentences make the brain’s language network work harder

With help from an artificial language network, MIT neuroscientists have discovered what kind of sentences are most likely to fire up the brain’s key language processing centers.

The new study reveals that sentences that are more complex, either because of unusual grammar or unexpected meaning, generate stronger responses in these language processing centers. Sentences that are very straightforward barely engage these regions, and nonsensical sequences of words don’t do much for them either.

For example, the researchers found this brain network was most active when reading unusual sentences such as “Buy sell signals remains a particular,” taken from a publicly available language dataset called C4. However, it went quiet when reading something very straightforward, such as “We were sitting on the couch.”

“The input has to be language-like enough to engage the system,” says Evelina Fedorenko, an associate professor of neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research. “And then within that space, if things are really easy to process, then you don’t have much of a response. But if things get difficult, or surprising, if there’s an unusual construction or an unusual set of words that you’re maybe not very familiar with, then the network has to work harder.”

Fedorenko is the senior author of the study, which appears today in Nature Human Behaviour. MIT graduate student Greta Tuckute is the lead author of the paper.

Processing language

In this study, the researchers focused on language-processing regions in the brain’s left hemisphere, including Broca’s area as well as other parts of the left frontal and temporal lobes.

“This language network is highly selective to language, but it’s been harder to actually figure out what is going on in these language regions,” Tuckute says. “We wanted to discover what kinds of sentences, what kinds of linguistic input, drive the left hemisphere language network.”

The researchers began by compiling a set of 1,000 sentences taken from a wide variety of sources — fiction, transcriptions of spoken words, web text, and scientific articles, among many others.

Five human participants read each of the sentences while the researchers measured their language network activity using functional magnetic resonance imaging (fMRI). The researchers then fed those same 1,000 sentences into a large language model — a model similar to ChatGPT, which learns to generate and understand language by predicting the next word in huge amounts of text — and measured the activation patterns of the model in response to each sentence.

Once they had all of those data, the researchers trained a mapping model, known as an “encoding model,” which relates the activation patterns seen in the human brain with those observed in the artificial language model. Once trained, the model could predict how the human language network would respond to any new sentence based on how the artificial language network responded to that sentence.

The researchers then used the encoding model to identify 500 new sentences that would generate maximal activity in the human brain (the “drive” sentences), as well as sentences that would elicit minimal activity in the brain’s language network (the “suppress” sentences).
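This train-then-select loop can be sketched in a few lines. Everything below is an illustrative assumption, not the study’s actual code: the “activations” and “brain responses” are synthetic arrays, and the ridge penalty, array sizes, and candidate-pool size are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the real data: language-model activations for
# the 1,000 training sentences, and the measured fMRI response of the
# language network to each one (here, a noisy linear function of them).
n_train, n_units = 1000, 128
X = rng.standard_normal((n_train, n_units))
w_true = rng.standard_normal(n_units)
y = X @ w_true + 0.1 * rng.standard_normal(n_train)

# Encoding model: ridge regression from model activations to brain response,
# solved in closed form: w = (X'X + alpha*I)^{-1} X'y.
alpha = 1.0
w = np.linalg.solve(X.T @ X + alpha * np.eye(n_units), X.T @ y)

# Score a pool of candidate sentences (random activations here) and keep
# the predicted "drive" (highest) and "suppress" (lowest) sets of 500.
candidates = rng.standard_normal((5000, n_units))
predicted = candidates @ w
order = np.argsort(predicted)
suppress_idx, drive_idx = order[:500], order[-500:]
```

In the real study, the predictors are a large language model’s activations for each sentence and the targets are measured fMRI responses; the ranking step then selects the sentences predicted to maximally drive or suppress the language network.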

In a group of three new human participants, the researchers found these new sentences did indeed drive and suppress brain activity as predicted.

“This ‘closed-loop’ modulation of brain activity during language processing is novel,” Tuckute says. “Our study shows that the model we’re using (that maps between language-model activations and brain responses) is accurate enough to do this. This is the first demonstration of this approach in brain areas implicated in higher-level cognition, such as the language network.”

Linguistic complexity

To figure out what made certain sentences drive activity more than others, the researchers analyzed the sentences based on 11 different linguistic properties, including grammaticality, plausibility, emotional valence (positive or negative), and how easy it is to visualize the sentence content.

For each of those properties, the researchers asked participants from crowd-sourcing platforms to rate the sentences. They also used a computational technique to quantify each sentence’s “surprisal,” or how uncommon it is compared to other sentences.
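The idea behind “surprisal” can be shown with a toy language model: score a sentence by the average negative log-probability of each word given its context. The study used a much larger trained model; the tiny corpus, bigram model, and add-one smoothing below are all illustrative assumptions.

```python
import math
from collections import Counter

# Fit a tiny bigram model on a made-up corpus.
corpus = ("we were sitting on the couch . "
          "we were sitting on the floor . "
          "the couch was soft . ").split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)
vocab = len(unigrams)

def bigram_logprob(prev, word):
    # Add-one smoothing so unseen bigrams get a small, nonzero probability.
    return math.log((bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab))

def surprisal(sentence):
    # Average negative log-probability per bigram: higher = more unexpected.
    words = sentence.split()
    pairs = list(zip(words, words[1:]))
    return -sum(bigram_logprob(p, w) for p, w in pairs) / len(pairs)

common = surprisal("we were sitting on the couch")  # seen in the corpus
odd = surprisal("couch the on sitting were we")     # scrambled, higher surprisal
```

The scrambled sentence receives a higher surprisal than the familiar one, mirroring the finding that less predictable sentences drive stronger brain responses.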

This analysis revealed that sentences with higher surprisal generate higher responses in the brain. This is consistent with previous studies showing people have more difficulty processing sentences with higher surprisal, the researchers say.

Another linguistic property that correlated with the language network’s responses was linguistic complexity, which is measured by how much a sentence adheres to the rules of English grammar and how plausible it is, meaning how much sense the content makes, apart from the grammar.

Sentences at either end of the spectrum — either extremely simple, or so complex that they make no sense at all — evoked very little activation in the language network. The largest responses came from sentences that make some sense but require work to figure them out, such as “Jiffy Lube of — of therapies, yes,” which came from the Corpus of Contemporary American English dataset.

“We found that the sentences that elicit the highest brain response have a weird grammatical thing and/or a weird meaning,” Fedorenko says. “There’s something slightly unusual about these sentences.”

The researchers now plan to see if they can extend these findings to speakers of languages other than English. They also hope to explore what type of stimuli may activate language-processing regions in the brain’s right hemisphere.

The research was funded by an Amazon Fellowship from the Science Hub, an International Doctoral Fellowship from the American Association of University Women, the MIT-IBM Watson AI Lab, the National Institutes of Health, the McGovern Institute, the Simons Center for the Social Brain, and MIT’s Department of Brain and Cognitive Sciences.

Deep neural networks show promise as models of human hearing

Computational models that mimic the structure and function of the human auditory system could help researchers design better hearing aids, cochlear implants, and brain-machine interfaces. A new study from MIT has found that modern computational models derived from machine learning are moving closer to this goal.

In the largest study yet of deep neural networks that have been trained to perform auditory tasks, the MIT team showed that most of these models generate internal representations that share properties of representations seen in the human brain when people are listening to the same sounds.

The study also offers insight into how to best train this type of model: The researchers found that models trained on auditory input including background noise more closely mimic the activation patterns of the human auditory cortex.

“What sets this study apart is it is the most comprehensive comparison of these kinds of models to the auditory system so far. The study suggests that models that are derived from machine learning are a step in the right direction, and it gives us some clues as to what tends to make them better models of the brain,” says Josh McDermott, an associate professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study.

MIT graduate student Greta Tuckute and Jenelle Feather PhD ’22 are the lead authors of the open-access paper, which appears today in PLOS Biology.

Models of hearing

Deep neural networks are computational models that consist of many layers of information-processing units that can be trained on huge volumes of data to perform specific tasks. This type of model has become widely used in many applications, and neuroscientists have begun to explore the possibility that these systems can also be used to describe how the human brain performs certain tasks.

“These models that are built with machine learning are able to mediate behaviors on a scale that really wasn’t possible with previous types of models, and that has led to interest in whether or not the representations in the models might capture things that are happening in the brain,” Tuckute says.

When a neural network is performing a task, its processing units generate activation patterns in response to each audio input it receives, such as a word or other type of sound. Those model representations of the input can be compared to the activation patterns seen in fMRI brain scans of people listening to the same input.
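One standard way to make such comparisons is representational similarity analysis: build, for both model and brain, a matrix of how dissimilar the responses to each pair of sounds are, then correlate the two matrices. The sketch below uses synthetic data; the array sizes and the linear “readout” that stands in for the brain are illustrative assumptions, not the study’s pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: "brain" responses are a noisy linear readout of the
# "model" activations, so the two should share representational geometry.
n_sounds, n_model_units, n_voxels = 50, 64, 200
model_acts = rng.standard_normal((n_sounds, n_model_units))
readout = rng.standard_normal((n_model_units, n_voxels))
brain_acts = model_acts @ readout + 0.2 * rng.standard_normal((n_sounds, n_voxels))

def rdm(acts):
    # Representational dissimilarity matrix: 1 - correlation between the
    # response patterns evoked by each pair of sounds.
    return 1.0 - np.corrcoef(acts)

def rsa_score(rdm_a, rdm_b):
    # Similarity of two RDMs: correlate their upper triangles.
    iu = np.triu_indices(rdm_a.shape[0], k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

score = rsa_score(rdm(model_acts), rdm(brain_acts))
# Shuffling sound labels destroys the shared geometry, giving a baseline.
baseline = rsa_score(rdm(model_acts), rdm(rng.permutation(brain_acts)))
```

Because the synthetic brain responses are derived from the model activations, the matched score comes out well above the label-shuffled baseline.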

In 2018, McDermott and then-graduate student Alexander Kell reported that when they trained a neural network to perform auditory tasks (such as recognizing words from an audio signal), the internal representations generated by the model showed similarity to those seen in fMRI scans of people listening to the same sounds.

Since then, these types of models have become widely used, so McDermott’s research group set out to evaluate a larger set of models, to see if the ability to approximate the neural representations seen in the human brain is a general trait of these models.

For this study, the researchers analyzed nine publicly available deep neural network models that had been trained to perform auditory tasks, and they also created 14 models of their own, based on two different architectures. Most of these models were trained to perform a single task — recognizing words, identifying the speaker, recognizing environmental sounds, and identifying musical genre — while two of them were trained to perform multiple tasks.

When the researchers presented these models with natural sounds that had been used as stimuli in human fMRI experiments, they found that the internal model representations tended to exhibit similarity with those generated by the human brain. The models whose representations were most similar to those seen in the brain were models that had been trained on more than one task and had been trained on auditory input that included background noise.

“If you train models in noise, they give better brain predictions than if you don’t, which is intuitively reasonable because a lot of real-world hearing involves hearing in noise, and that’s plausibly something the auditory system is adapted to,” Feather says.

Hierarchical processing

The new study also supports the idea that the human auditory cortex has some degree of hierarchical organization, in which processing is divided into stages that support distinct computational functions. As in the 2018 study, the researchers found that representations generated in earlier stages of the model most closely resemble those seen in the primary auditory cortex, while representations generated in later model stages more closely resemble those generated in brain regions beyond the primary cortex.

Additionally, the researchers found that models that had been trained on different tasks were better at replicating different aspects of audition. For example, models trained on a speech-related task more closely resembled speech-selective areas.

“Even though the model has seen the exact same training data and the architecture is the same, when you optimize for one particular task, you can see that it selectively explains specific tuning properties in the brain,” Tuckute says.

McDermott’s lab now plans to make use of their findings to try to develop models that are even more successful at reproducing human brain responses. In addition to helping scientists learn more about how the brain may be organized, such models could also be used to help develop better hearing aids, cochlear implants, and brain-machine interfaces.

“A goal of our field is to end up with a computer model that can predict brain responses and behavior. We think that if we are successful in reaching that goal, it will open a lot of doors,” McDermott says.

The research was funded by the National Institutes of Health, an Amazon Fellowship from the Science Hub, an International Doctoral Fellowship from the American Association of University Women, an MIT Friends of McGovern Institute Fellowship, a fellowship from the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT, and a Department of Energy Computational Science Graduate Fellowship.

Season’s Greetings from the McGovern Institute

This year’s holiday greeting (video above) was inspired by research conducted in John Gabrieli’s lab, which found that practicing mindfulness reduced children’s stress levels and negative emotions during the pandemic. These findings contribute to a growing body of evidence that practicing mindfulness can change patterns of brain activity associated with emotions and mental health.

Coloring is one form of mindfulness, or focusing awareness on the present. Visit our postcard collection to download and color your own brain-themed postcards, and may the spirit of mindfulness bring you peace in the year ahead!

Video credits:
Joseph Laney (illustration)
JR Narrows, Space Lute (sound design)
Jacob Pryor (animation)

A mindful McGovern community

Mindfulness is the practice of maintaining a state of complete awareness of one’s thoughts, emotions, or experiences on a moment-to-moment basis. McGovern researchers have shown that practicing mindfulness reduces anxiety and supports emotional resilience.

In a survey distributed to the McGovern Institute community, 57% of the 74 researchers, faculty, and staff who responded said that they practice mindfulness as a way to reduce anxiety and stress.

Here are a few of their stories.

Fernanda De La Torre

MIT graduate student Fernanda De La Torre. Photo: Steph Stevens

Fernanda De La Torre is a graduate student in MIT’s Department of Brain and Cognitive Sciences, where she is advised by Josh McDermott.

Originally from Mexico, De La Torre took an unconventional path to her education in the United States, where she completed her undergraduate studies in computer science and math at Kansas State University. In 2019, she came to MIT as a postbaccalaureate student in the lab of Tomaso Poggio, where she began working on deep-learning theory, an area of machine learning focused on how artificial neural networks modeled on the brain learn to recognize patterns.

A recent recipient of the prestigious Paul and Daisy Soros Fellowship for New Americans, De La Torre now studies multisensory integration during speech perception using deep learning models in Josh McDermott’s lab.

What kind of mindfulness do you practice, how often, and why?

Metta meditation is the type of meditation I come back to the most. I practice 2-3 times per week, sometimes by joining Nikki Mirghafori’s Zoom calls or listening to her and other teachers’ recordings on AudioDharma. I practice because when I observe the patterns of my thoughts, I remember the importance of compassion, including self-compassion. In my experience, metta meditation is a wonderful way to cultivate the two: observation and compassion.

When and why did you start practicing mindfulness?

My first meditation practice was as a first-year post-baccalaureate student here at BCS. Gal Raz (also pictured above) carried a lot of peace and attributed it to meditation; this sparked my curiosity. I started practicing more frequently last summer, after realizing my mental health was not in a good place.

How does mindfulness benefit your research at MIT?

This is hard to answer because I think the benefits of meditation are hard to measure. I find that meditation helps me stay centered and healthy, which can indirectly help the research I do. More directly, some of my initial grad school pursuits were fueled by thoughts during meditation, but I ended up feeling that a lot of these concepts are hard to explore using non-philosophical approaches. So I think meditation is mainly a practice that helps my health, my relationships with others, and my relationship with work (this last one I find most challenging and personally unresolved).

Adam Eisen

MIT graduate student Adam Eisen.

Adam Eisen is a graduate student in MIT’s Department of Brain and Cognitive Sciences, where he is co-advised by Ila Fiete (McGovern Institute) and Earl Miller (Picower Institute).

Eisen completed his undergraduate degree in Applied Mathematics & Computer Engineering at Queen’s University in Kingston, Ontario, Canada. Prior to joining MIT, Eisen built computer vision algorithms at the solar aerial inspection company Heliolytics and worked on developing machine learning tools to predict disease outcomes from genetics at The Hospital for Sick Children.

Today, in the Fiete and Miller labs, Eisen develops tools for analyzing the flow of neural activity, and applies them to understand changes in neural states (such as from consciousness to anesthetic-induced unconsciousness).

What kind of mindfulness do you practice, how often, and why?

I mostly practice simple sitting meditation centered on awareness of senses and breathing. On a good week, I meditate about 3-5 times. The reason I practice is the benefit to my general experience of living. Whenever I’m in a prolonged period of consistent meditation, I’m shocked by how much more awareness I have of the thoughts, feelings, and sensations arising in my mind throughout the day. I’m also amazed by how much easier it is to watch my mind and body react to the context around me, without slipping into the usual patterns and habits. I also find mindful benefits in doing yoga, running, and playing music, but the core is really centered on meditation practice.

When and why did you start practicing mindfulness?

I’ve been interested in mindfulness and meditation since undergrad as a path to investigating the nature of mind and thought – an interest which also led me into my PhD. I started practicing meditation more seriously at the start of the pandemic to get more firsthand experience with what I had been learning about. I find meditation is one of those things where knowledge and theory can support the practice, but without the experiential component it’s very hard to really build an understanding of the core concepts at play.

How does mindfulness benefit your research at MIT?

Mindfulness has definitely informed the kinds of things I’m interested in studying and the questions I’d like to ask – largely in relation to the nature of conscious awareness and the flow of thoughts. Outside of that, I’d like to think that mindfulness benefits my general well-being and spiritual balance, which enables me to do better research.

 

Sugandha Sharma

MIT graduate student Sugandha Sharma. Photo: Steph Stevens

Sugandha (Su) Sharma is a graduate student in MIT’s Department of Brain and Cognitive Sciences (BCS), where she is co-advised by Ila Fiete (McGovern Institute) and Josh Tenenbaum (BCS).

Prior to joining MIT, she studied theoretical neuroscience at the University of Waterloo, where she built neural models of context-dependent decision making in the prefrontal cortex and spiking neuron models of Bayesian inference, based on online learning of priors from life experience.

Today, in the Fiete and Tenenbaum labs, she studies the computational and theoretical principles underlying cognition and intelligence in the human brain. She is currently exploring the coding principles of the hippocampal circuits implicated in spatial navigation, and their role in cognitive computations like structure learning and relational reasoning.

When did you start practicing mindfulness?

When I first learned to meditate, I was challenged to practice it every day for at least 3 months in a row. I took up the challenge, and by the end of it, the results were profound. My whole perspective towards life changed. It made me more empathetic — I could step into other people’s shoes and be mindful of their situations and feelings; my focus shifted from myself to the big picture — it made me realize how insignificant my life was on the grand scale of the universe, and how worthless it was to be caught up in the small things I was usually worrying about. It somehow also brought selflessness to me. This experience hooked me on meditation and mindfulness for life!

What kind of mindfulness do you practice and why?

I practice mindfulness because it brings awareness. It helps me to be aware of myself, my thoughts, my actions, and my surroundings at each moment in my life, thus helping me stay in and enjoy the present moment. Awareness is of utmost importance since an aware mind always does the right thing. Imagine that you are angry; in that moment, you have lost awareness of yourself. The moment you become aware of yourself, the anger goes away. This is why counting sometimes helps to combat anger: if you start counting, that gives you time to think and become aware of yourself and your actions.

Meditating — sitting with my eyes closed and just observing (being aware of) my thoughts — is a yogic technique that helps me clear the noise in my mind and calm it down, making it easier for me to be mindful not only while meditating, but also in general after I am done meditating. Over time, the thoughts vanish, and the mind becomes blank (noiseless). For this reason, practicing meditation regularly makes it easier for me to be mindful all the time.

An added advantage of yoga and meditation is that it helps combat stress by relaxing the mind and body. Many people don’t know what to do when they are stressed, but I am grateful to have this toolkit of yoga and meditation to deal with stressful situations in my life. They help me calm my mind in stressful situations and ensure that instead of reacting to a situation, I instead act mindfully and appropriately to make it right.

K. Lisa Yang Postbaccalaureate Program names new scholars

Funded by philanthropist Lisa Yang, the K. Lisa Yang Postbaccalaureate Scholar Program provides two years of paid laboratory experience, mentorship, and education to recent college graduates from backgrounds underrepresented in neuroscience. This year, two young researchers in McGovern Institute labs, Joseph Itiat and Sam Merrow, are the program’s newest scholars.

Itiat moved to the United States from Nigeria in 2019 to pursue a degree in psychology and cognitive neuroscience at Temple University. Today, he is a Yang postbac in John Gabrieli’s lab studying the relationship between learning and value processes and their influence on future-oriented decision-making. Ultimately, Itiat hopes to develop models that map the underlying mechanisms driving these processes.

“Being African, with limited research experience and little representation in the domain of neuroscience research,” Itiat says, “I chose to pursue a postbaccalaureate research program to prepare me for a top graduate school and a career in cognitive neuroscience.”

Merrow first fell in love with science while working at the Barrow Neurological Institute in Arizona during high school. After graduating from Simmons University in Boston, Massachusetts, Merrow joined Guoping Feng’s lab as a Yang postbac to pursue research on glial cells and brain disorders. “As a queer, nonbinary, LatinX person, I have not met anyone like me in my field, nor have I had role models that hold a similar identity to myself,” says Merrow.

“My dream is to one day become a professor, where I will be able to show others that science is for anyone.”

Previous Yang postbacs include Alex Negron, Zoe Pearce, Ajani Stewart, and Maya Taliaferro.