Whether speaking Turkish or Norwegian, the brain’s language network looks the same

Over several decades, neuroscientists have created a well-defined map of the brain’s “language network,” or the regions of the brain that are specialized for processing language. Found primarily in the left hemisphere, this network includes regions within Broca’s area, as well as in other parts of the frontal and temporal lobes.

However, the vast majority of those mapping studies have been done in English speakers as they listened to or read English texts. MIT neuroscientists have now performed brain imaging studies of speakers of 45 different languages. The results show that the speakers’ language networks appear to be essentially the same as those of native English speakers.

The findings, while not surprising, establish that the location and key properties of the language network appear to be universal. The work also lays the groundwork for future studies of linguistic elements that would be difficult or impossible to study in English speakers because English doesn’t have those features.

“This study is very foundational, extending some findings from English to a broad range of languages,” says Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research. “The hope is that now that we see that the basic properties seem to be general across languages, we can ask about potential differences between languages and language families in how they are implemented in the brain, and we can study phenomena that don’t really exist in English.”

Fedorenko is the senior author of the study, which appears today in Nature Neuroscience. Saima Malik-Moraleda, a PhD student in the Speech and Hearing Bioscience and Technology program at Harvard University, and Dima Ayyash, a former research assistant, are the lead authors of the paper.

Mapping language networks

The precise locations and shapes of language areas differ across individuals, so to find the language network, researchers ask each person to perform a language task while their brain is scanned with functional magnetic resonance imaging (fMRI). Listening to or reading sentences in one’s native language should activate the language network. To distinguish this network from other brain regions, researchers also ask participants to perform tasks that should not activate it, such as listening to an unfamiliar language or solving math problems.
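The logic of such a localizer contrast can be sketched in a few lines. This is a schematic illustration with simulated numbers, not the study's actual analysis pipeline: each voxel gets a mean response per condition, and language-selective voxels are those responding more to intact sentences than to a control condition.

```python
import numpy as np

# Simulated per-voxel mean responses (e.g., percent signal change) for two
# conditions of a hypothetical localizer scan.
rng = np.random.default_rng(0)
n_voxels = 1000
sentences = rng.normal(1.0, 0.5, n_voxels)  # native-language sentences
control = rng.normal(0.2, 0.5, n_voxels)    # unfamiliar language or math task

# The contrast "sentences > control" picks out language-responsive voxels;
# here we keep the top 10% as a toy selection criterion.
contrast = sentences - control
threshold = np.percentile(contrast, 90)
language_mask = contrast > threshold

print(f"{language_mask.sum()} of {n_voxels} voxels flagged as language-selective")
```

In real analyses the comparison is done with a statistical model per voxel rather than a fixed percentile cutoff, but the core idea, subtracting a control condition from a language condition, is the same.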

Several years ago, Fedorenko began designing these “localizer” tasks for speakers of languages other than English. While most studies of the language network have used English speakers as subjects, English does not include many features commonly seen in other languages. For example, in English, word order tends to be fixed, while in other languages there is more flexibility in how words are ordered. Many of those languages instead use the addition of morphemes, or segments of words, to convey additional meaning and relationships between words.

“There has been growing awareness for many years of the need to look at more languages, if you want to make claims about how language works, as opposed to how English works,” Fedorenko says. “We thought it would be useful to develop tools to allow people to rigorously study language processing in the brain in other parts of the world. There’s now access to brain imaging technologies in many countries, but the basic paradigms that you would need to find the language-responsive areas in a person are just not there.”

For the new study, the researchers performed brain imaging of two speakers of each of 45 different languages, representing 12 different language families. Their goal was to see if key properties of the language network, such as location, left lateralization, and selectivity, were the same in those participants as in people whose native language is English.

The researchers decided to use “Alice in Wonderland” as the text that everyone would listen to, because it is one of the most widely translated works of fiction in the world. They selected 24 short passages and three long passages, each of which was recorded by a native speaker of the language. Each participant also heard nonsensical passages, which should not activate the language network, and was asked to do a variety of other cognitive tasks that should not activate it.

The team found that the language networks of the participants in this study were located in approximately the same brain regions, and had the same selectivity, as those of native English speakers.

“Language areas are selective,” Malik-Moraleda says. “They shouldn’t be responding during other tasks such as a spatial working memory task, and that was what we found across the speakers of 45 languages that we tested.”

Additionally, language regions that are typically activated together in English speakers, such as the frontal language areas and temporal language areas, were similarly synchronized in speakers of other languages.

The researchers also showed that among all of the subjects, the small amount of variation they saw between individuals who speak different languages was the same as the amount of variation that would typically be seen between native English speakers.

Similarities and differences

While the findings suggest that the overall architecture of the language network is similar across speakers of different languages, that doesn’t mean that there are no differences at all, Fedorenko says. As one example, researchers could now look for differences in speakers of languages that predominantly use morphemes, rather than word order, to help determine the meaning of a sentence.

“There are all sorts of interesting questions you can ask about morphological processing that don’t really make sense to ask in English, because it has much less morphology,” Fedorenko says.

Another possibility is studying whether speakers of languages that use differences in tone to convey different word meanings would have a language network with stronger links to auditory brain regions that encode pitch.

Right now, Fedorenko’s lab is working on a study in which they are comparing the “temporal receptive fields” of speakers of six typologically different languages, including Turkish, Mandarin, and Finnish. The temporal receptive field is a measure of how many words the language processing system can handle at a time, and for English, it has been shown to be six to eight words long.

“The language system seems to be working on chunks just a few words long, and we’re trying to see if this constraint is universal across these other languages that we’re testing,” Fedorenko says.
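The idea of a bounded temporal receptive field can be illustrated with a toy text-processing loop, processing input in windows of at most eight words, the upper end of the six-to-eight-word range reported for English. This is purely schematic; the actual measure comes from fMRI responses to scrambled text, not from a chunking function.

```python
def chunk_words(text, window=8):
    """Split text into consecutive chunks of at most `window` words."""
    words = text.split()
    return [words[i:i + window] for i in range(0, len(words), window)]

# Opening words of "Alice in Wonderland", the text used in the study.
sentence = ("Alice was beginning to get very tired of sitting by her "
            "sister on the bank and of having nothing to do")
for chunk in chunk_words(sentence):
    print(" ".join(chunk))
```

A system with a limited temporal receptive field would, roughly speaking, integrate information within each such window but not across much longer spans at once.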

The researchers are also working on creating language localizer tasks and finding study participants representing additional languages beyond the 45 from this study.

The research was funded by the National Institutes of Health and research funds from MIT’s Department of Brain and Cognitive Sciences, the McGovern Institute, and the Simons Center for the Social Brain. Malik-Moraleda was funded by a la Caixa Fellowship and a Friends of McGovern fellowship.

Three distinct brain circuits in the thalamus contribute to Parkinson’s symptoms

Parkinson’s disease is best-known as a disorder of movement. Patients often experience tremors, loss of balance, and difficulty initiating movement. The disease also has lesser-known symptoms that are nonmotor, including depression.

In a study of a small region of the thalamus, MIT neuroscientists have now identified three distinct circuits that influence the development of both motor and nonmotor symptoms of Parkinson’s. Furthermore, they found that by manipulating these circuits, they could reverse Parkinson’s symptoms in mice.

The findings suggest that those circuits could be good targets for new drugs that could help combat many of the symptoms of Parkinson’s disease, the researchers say.

“We know that the thalamus is important in Parkinson’s disease, but a key question is how you can put together a circuit that can explain many different things happening in Parkinson’s disease. Understanding different symptoms at a circuit level can help guide us in the development of better therapeutics,” says Guoping Feng, the James W. and Patricia T. Poitras Professor in Brain and Cognitive Sciences at MIT, a member of the Broad Institute of Harvard and MIT, and the associate director of the McGovern Institute for Brain Research at MIT.

Feng is the senior author of the study, which appears today in Nature. Ying Zhang, a J. Douglas Tan Postdoctoral Fellow at the McGovern Institute, and Dheeraj Roy, an NIH K99 awardee and a McGovern Fellow at the Broad Institute, are the lead authors of the paper.

Tracing circuits

The thalamus consists of several different regions that perform a variety of functions. Many of these, including the parafascicular (PF) thalamus, help to control movement. Degeneration of these structures is often seen in patients with Parkinson’s disease, which is thought to contribute to their motor symptoms.

In this study, the MIT team set out to try to trace how the PF thalamus is connected to other brain regions, in hopes of learning more about its functions. They found that neurons of the PF thalamus project to three different parts of the basal ganglia, a cluster of structures involved in motor control and other functions: the caudate putamen (CPu), the subthalamic nucleus (STN), and the nucleus accumbens (NAc).

“We started with showing these different circuits, and we demonstrated that they’re mostly nonoverlapping, which strongly suggests that they have distinct functions,” Roy says.

Further studies revealed those functions. The circuit that projects to the CPu appears to be involved in general locomotion, and functions to dampen movement. When the researchers inhibited this circuit, mice spent more time moving around the cage they were in.

The circuit that extends into the STN, on the other hand, is important for motor learning — the ability to learn a new motor skill through practice. The researchers found that this circuit is necessary for a task in which the mice learn to balance on a rod that spins with increasing speed.

Lastly, the researchers found that, unlike the others, the circuit that connects the PF thalamus to the NAc is not involved in motor activity. Instead, it appears to be linked to motivation. Inhibiting this circuit generates depression-like behaviors in healthy mice, which then no longer seek a reward such as sugar water.

Druggable targets

Once the researchers established the functions of these three circuits, they decided to explore how they might be affected in Parkinson’s disease. To do that, they used a mouse model of Parkinson’s, in which dopamine-producing neurons in the midbrain are lost.

They found that in this Parkinson’s model, the connection between the PF thalamus and the CPu was enhanced, and that this led to a decrease in overall movement. Additionally, the connections from the PF thalamus to the STN were weakened, which made it more difficult for the mice to learn the accelerating rod task.

Lastly, the researchers showed that in the Parkinson’s model, connections from the PF thalamus to the NAc were also interrupted, and that this led to depression-like symptoms in the mice, including loss of motivation.

Using chemogenetics or optogenetics, which allows them to control neuronal activity with a drug or light, the researchers found that they could manipulate each of these three circuits and in doing so, reverse each set of Parkinson’s symptoms. Then, they decided to look for molecular targets that might be “druggable,” and found that each of the three PF thalamus regions have cells that express different types of cholinergic receptors, which are activated by the neurotransmitter acetylcholine. By blocking or activating those receptors, depending on the circuit, they were also able to reverse the Parkinson’s symptoms.

“We found three distinct cholinergic receptors that can be expressed in these three different PF circuits, and if we use antagonists or agonists to modulate these three different PF populations, we can rescue movement, motor learning, and also depression-like behavior in PD mice,” Zhang says.

Parkinson’s patients are usually treated with L-dopa, a precursor of dopamine. While this drug helps patients regain motor control, it doesn’t help with motor learning or any nonmotor symptoms, and over time, patients become resistant to it.

The researchers hope that the circuits they characterized in this study could be targets for new Parkinson’s therapies. The types of neurons that they identified in the circuits of the mouse brain are also found in the nonhuman primate brain, and the researchers are now using RNA sequencing to find genes that are expressed specifically in those cells.

“RNA-sequencing technology will allow us to do a much more detailed molecular analysis in a cell-type specific way,” Feng says. “There may be better druggable targets in these cells, and once you know the specific cell types you want to modulate, you can identify all kinds of potential targets in them.”

The research was funded, in part, by the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience at MIT, the Stanley Center for Psychiatric Research at the Broad Institute, the James and Patricia Poitras Center for Psychiatric Disorders Research at MIT, the National Institutes of Health BRAIN Initiative, and the National Institute of Mental Health.

New research center focused on brain-body relationship established at MIT

The inextricable link between our brains and our bodies has been gaining increasing recognition among researchers and clinicians over recent years. Studies have shown that the brain-body pathway is bidirectional — meaning that our mental state can influence our physical health and vice versa. But exactly how the two interact is less clear.

A new research center at MIT, funded by a $38 million gift to the McGovern Institute for Brain Research from philanthropist K. Lisa Yang, aims to unlock this mystery by creating and applying novel tools to explore the multidirectional, multilevel interplay between the brain and other body organ systems. This gift expands Yang’s exceptional philanthropic support of human health and basic science research at MIT over the past five years.

“Lisa Yang’s visionary gift enables MIT scientists and engineers to pioneer revolutionary technologies and undertake rigorous investigations into the brain’s complex relationship with other organ systems,” says MIT President L. Rafael Reif.  “Lisa’s tremendous generosity empowers MIT scientists to make pivotal breakthroughs in brain and biomedical research and, collectively, improve human health on a grand scale.”

The K. Lisa Yang Brain-Body Center will be directed by Polina Anikeeva, professor of materials science and engineering and brain and cognitive sciences at MIT and an associate investigator at the McGovern Institute. The center will harness the power of MIT’s collaborative, interdisciplinary life sciences research and engineering community to focus on complex conditions and diseases affecting both the body and brain, with a goal of unearthing knowledge of biological mechanisms that will lead to promising therapeutic options.

“Under Professor Anikeeva’s brilliant leadership, this wellspring of resources will encourage the very best work of MIT faculty, graduate fellows, and researchers — and ultimately make a real impact on the lives of many,” Reif adds.

Mouse small intestine stained to reveal cell nuclei (blue) and peripheral nerve fibers (red).
Image: Polina Anikeeva, Marie Manthey, Kareena Villalobos

Center goals  

Initial projects in the center will focus on four major lines of research:

  • Gut-Brain: Anikeeva’s group will expand a toolbox of new technologies and apply these tools to examine major neurobiological questions about gut-brain pathways and connections in the context of autism spectrum disorders, Parkinson’s disease, and affective disorders.
  • Aging: CRISPR pioneer Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT and investigator at the McGovern Institute, will lead a group in developing molecular tools for precision epigenomic editing and erasing accumulated “errors” of time, injury, or disease in various types of cells and tissues.
  • Pain: The lab of Fan Wang, investigator at the McGovern Institute and professor of brain and cognitive sciences, will design new tools and imaging methods to study autonomic responses, sympathetic-parasympathetic system balance, and brain-autonomic nervous system interactions, including how pain influences these interactions.
  • Acupuncture: Wang will also collaborate with Hilda (“Scooter”) Holcombe, a veterinarian in MIT’s Division of Comparative Medicine, to advance techniques for documenting changes in brain and peripheral tissues induced by acupuncture in mouse models. If successful, these techniques could lay the groundwork for deeper understandings of the mechanisms of acupuncture, specifically how the treatment stimulates the nervous system and restores function.

A key component of the K. Lisa Yang Brain-Body Center will be a focus on educating and training the brightest young minds who aspire to make true breakthroughs for individuals living with complex and often devastating diseases. A portion of center funding will endow the new K. Lisa Yang Brain-Body Fellows Program, which will support four annual fellowships for MIT graduate students and postdocs working to advance understanding of conditions that affect both the body and brain.

Mens sana in corpore sano

“A phrase I remember reading in secondary school has always stuck with me: ‘mens sana in corpore sano,’ or ‘a healthy mind in a healthy body,’” says Lisa Yang, a former investment banker committed to advocacy for individuals with visible and invisible disabilities. “When we look at how stress, nutrition, pain, immunity, and other complex factors impact our health, we truly see how inextricably linked our brains and bodies are. I am eager to help MIT scientists and engineers decode these links and make real headway in creating therapeutic strategies that result in longer, healthier lives.”

“This center marks a once-in-a-lifetime opportunity for labs like mine to conduct bold and risky studies into the complexities of brain-body connections,” says Anikeeva, who works at the intersection of materials science, electronics, and neurobiology. “The K. Lisa Yang Brain-Body Center will offer a pathbreaking, holistic approach that bridges multiple fields of study. I have no doubt that the center will result in revolutionary strides in our understanding of the inextricable bonds between the brain and the body’s peripheral organ systems, and a bold new way of thinking in how we approach human health overall.”

Lindsay Case and Guangyu Robert Yang named 2022 Searle Scholars

MIT cell biologist Lindsay Case and computational neuroscientist Guangyu Robert Yang have been named 2022 Searle Scholars, an award given annually to 15 outstanding U.S. assistant professors who have high potential for ongoing innovative research contributions in medicine, chemistry, or the biological sciences.

Case is an assistant professor of biology, while Yang is an assistant professor of brain and cognitive sciences and electrical engineering and computer science, and an associate investigator at the McGovern Institute for Brain Research. They will each receive $300,000 in flexible funding to support their high-risk, high-reward work over the next three years.

Lindsay Case

Case arrived at MIT in 2021, after completing a postdoc at the University of Texas Southwestern Medical Center in the lab of Michael Rosen. Prior to that, she earned her PhD from the University of North Carolina at Chapel Hill, working in the lab of Clare Waterman at the National Heart, Lung, and Blood Institute.

Situated in MIT’s Building 68, Case’s lab studies how molecules within cells organize themselves, and how such organization begets cellular function. Oftentimes, molecules will assemble at the cell’s plasma membrane — a complex signaling platform where hundreds of receptors sense information from outside the cell and initiate cellular changes in response. Through her experiments, Case has found that molecules at the plasma membrane can undergo a process known as phase separation, condensing to form liquid-like droplets.

As a Searle Scholar, Case is investigating the role that phase separation plays in regulating a specific class of signaling molecules called kinases. Her team will take a multidisciplinary approach to probe what happens when kinases phase separate into signaling clusters, and what cellular changes occur as a result. Because phase separation is emerging as a promising new target for small molecule therapies, this work will help identify kinases that are strong candidates for new therapeutic interventions to treat diseases such as cancer.

“I am honored to be recognized by the Searle Scholars Program, and thrilled to join such an incredible community of scientists,” Case says. “This support will enable my group to broaden our research efforts and take our preliminary findings in exciting new directions. I look forward to better understanding how phase separation impacts cellular function.”

Guangyu Robert Yang

Before coming to MIT in 2021, Yang trained in physics at Peking University, obtained a PhD in computational neuroscience at New York University with Xiao-Jing Wang, and further trained as a postdoc at the Center for Theoretical Neuroscience of Columbia University, as an intern at Google Brain, and as a junior fellow at the Simons Society of Fellows.

His research team at MIT, the MetaConscious Group, develops models of mental functions by incorporating multiple interacting modules. They are designing pipelines to process and compare large-scale experimental datasets that span modalities ranging from behavioral data to neural activity data to molecular data. These datasets are then integrated to train individual computational modules on the experimental tasks that were evaluated, such as vision, memory, or movement.

Ultimately, Yang seeks to combine these modules into a “network of networks” that models higher-level brain functions such as the ability to flexibly and rapidly learn a variety of tasks. Such integrative models are rare because, until recently, it was not possible to acquire data that spans modalities and brain regions in real time as animals perform tasks. The time is finally right for integrative network models. Computational models that incorporate such multisystem, multilevel datasets will allow scientists to make new predictions about the neural basis of cognition and open a window to a mathematical understanding of the mind.

“This is a new research direction for me, and I think for the field too. It comes with many exciting opportunities as well as challenges. Having this recognition from the Searle Scholars program really gives me extra courage to take on the uncertainties and challenges,” says Yang.

Since 1981, 647 scientists have been named Searle Scholars. Including this year, the program has awarded more than $147 million. Eighty-five Searle Scholars have been inducted into the National Academy of Sciences. Twenty scholars have been recognized with a MacArthur Fellowship, known as the “genius grant,” and two Searle Scholars have been awarded the Nobel Prize in Chemistry. The Searle Scholars Program is funded through the Searle Funds at The Chicago Community Trust and administered by Kinship Foundation.

A brain circuit in the thalamus helps us hold information in mind

As people age, their working memory often declines, making it more difficult to perform everyday tasks. One key brain region linked to this type of memory is the anterior thalamus, which is primarily involved in spatial memory — memory of our surroundings and how to navigate them.

In a study of mice, MIT researchers have identified a circuit in the anterior thalamus that is necessary for remembering how to navigate a maze. The researchers also found that this circuit is weakened in older mice, but enhancing its activity greatly improves their ability to run the maze correctly.

This region could offer a promising target for treatments that could help reverse memory loss in older people, without affecting other parts of the brain, the researchers say.

“By understanding how the thalamus controls cortical output, hopefully we could find more specific and druggable targets in this area, instead of generally modulating the prefrontal cortex, which has many different functions,” says Guoping Feng, the James W. and Patricia T. Poitras Professor in Brain and Cognitive Sciences at MIT, a member of the Broad Institute of Harvard and MIT, and the associate director of the McGovern Institute for Brain Research at MIT.

Feng is the senior author of the study, which appears today in the Proceedings of the National Academy of Sciences. Dheeraj Roy, an NIH K99 awardee and a McGovern Fellow at the Broad Institute, and Ying Zhang, a J. Douglas Tan Postdoctoral Fellow at the McGovern Institute, are the lead authors of the paper.

Spatial memory

The thalamus, a small structure located near the center of the brain, contributes to working memory and many other executive functions, such as planning and attention. Feng’s lab has recently been investigating a region of the thalamus known as the anterior thalamus, which has important roles in memory and spatial navigation.

Previous studies in mice have shown that damage to the anterior thalamus leads to impairments in spatial working memory. In humans, studies have revealed age-related decline in anterior thalamus activity, which is correlated with lower performance on spatial memory tasks.

The anterior thalamus is divided into three sections: ventral, dorsal, and medial. In a study published last year, Feng, Roy and Zhang studied the role of the anterodorsal (AD) thalamus and anteroventral (AV) thalamus in memory formation. They found that the AD thalamus is involved in creating mental maps of physical spaces, while the AV thalamus helps the brain to distinguish these memories from other memories of similar spaces.

In their new study, the researchers wanted to look more deeply at the AV thalamus, exploring its role in a spatial working memory task. To do that, they trained mice to run a simple T-shaped maze. At the beginning of each trial, the mice ran until they reached the T. One arm was blocked off, forcing them to run down the other arm. Then, the mice were placed in the maze again, with both arms open. The mice were rewarded if they chose the opposite arm from the first run. This meant that in order to make the correct decision, they had to remember which way they had turned on the previous run.

As the mice performed the task, the researchers used optogenetics to inhibit activity of either AV or AD neurons during three different parts of the task: the sample phase, which occurs during the first run; the delay phase, while they are waiting for the second run to begin; and the choice phase, when the mice decide which way to turn during the second run.
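The task structure described above is a delayed non-match rule: the rewarded choice on the second run is the arm opposite the forced turn on the first run. A minimal sketch of that logic (hypothetical code, not the lab's actual behavioral software):

```python
import random

def run_trial(choose):
    """One T-maze trial. `choose(sample)` returns the mouse's choice arm."""
    sample = random.choice(["left", "right"])          # forced turn (sample phase)
    # ... the ~10-second delay phase would occur here ...
    choice = choose(sample)                            # choice phase (both arms open)
    correct = "right" if sample == "left" else "left"  # non-match rule: go opposite
    return choice == correct

# A mouse holding the sample in working memory always picks the opposite arm.
intact = lambda sample: "right" if sample == "left" else "left"
# A mouse whose delay-phase memory is disrupted can only guess.
impaired = lambda sample: random.choice(["left", "right"])

random.seed(0)
trials = 1000
print("intact:  ", sum(run_trial(intact) for _ in range(trials)) / trials)
print("impaired:", sum(run_trial(impaired) for _ in range(trials)) / trials)
```

An intact strategy scores perfectly, while guessing hovers around chance (50 percent), which is roughly the contrast the optogenetic delay-phase inhibition produced.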

The researchers found that inhibiting AV neurons during the sample or choice phases had no effect on the mice’s performance, but when they suppressed AV activity during the delay phase, which lasted 10 seconds or longer, the mice performed much worse on the task.

This suggests that the AV neurons are most important for keeping information in mind while it is needed for a task. In contrast, inhibiting the AD neurons disrupted performance during the sample phase but had little effect during the delay phase. This finding was consistent with the research team’s earlier study showing that AD neurons are involved in forming memories of a physical space.

“The anterior thalamus in general is a spatial learning region, but the ventral neurons seem to be needed in this maintenance period, during this short delay,” Roy says. “Now we have two subdivisions within the anterior thalamus: one that seems to help with contextual learning and the other that actually helps with holding this information.”

Age-related decline

The researchers then tested the effects of age on this circuit. They found that older mice (14 months) performed worse on the T-maze task and their AV neurons were less excitable. However, when the researchers artificially stimulated those neurons, the mice’s performance on the task dramatically improved.

Another way to enhance performance in this memory task is to stimulate the prefrontal cortex, which also undergoes age-related decline. However, activating the prefrontal cortex also increases measures of anxiety in the mice, the researchers found.

“If we directly activate neurons in medial prefrontal cortex, it will also elicit anxiety-related behavior, but this will not happen during AV activation,” Zhang says. “That is an advantage of activating AV compared to prefrontal cortex.”

If a noninvasive or minimally invasive technology could be used to stimulate those neurons in the human brain, it could offer a way to help prevent age-related memory decline, the researchers say. They are now planning to perform single-cell RNA sequencing of neurons of the anterior thalamus to find genetic signatures that could be used to identify cells that would make the best targets.

The research was funded, in part, by the Stanley Center for Psychiatric Research at the Broad Institute, the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, and the James and Patricia Poitras Center for Psychiatric Disorders Research at MIT.

Circuit that focuses attention brings in wide array of inputs

In a new brain-wide circuit tracing study, scientists at MIT’s Picower Institute for Learning and Memory focused selective attention on a circuit that governs, fittingly enough, selective attention. The comprehensive maps they produced illustrate how broadly the mammalian brain incorporates and integrates information to focus its sensory resources on its goals.

Working in mice, the team traced thousands of inputs into the circuit, a communication loop between the anterior cingulate cortex (ACC) and the lateral posterior (LP) thalamus. In primates the LP is called the pulvinar. Studies in humans and nonhuman primates have indicated that the interplay of these two regions is critical for brain functions like being able to focus on an object of interest in a crowded scene, says study co-lead author Yi Ning Leow, a graduate student in the lab of senior author Mriganka Sur, the Newton Professor in MIT’s Department of Brain and Cognitive Sciences. Research has implicated dysfunction in the circuit in attention-affecting disorders such as autism and attention deficit/hyperactivity disorder.

The new study in the Journal of Comparative Neurology extends what’s known about the circuit by detailing it in mice, Leow says, importantly showing that the mouse circuit is closely analogous to the primate version even if the LP is proportionately smaller and less evolved than the pulvinar.

“In these rodent models we were able to find very similar circuits,” Leow says. “So we can possibly study these higher-order functions in mice as well. We have a lot more genetic tools in mice so we are better able to look at this circuit.”

The study, also co-led by former MIT undergraduate Blake Zhou, therefore provides a detailed roadmap in the experimentally accessible mouse model for understanding how the ACC and LP cooperate to produce selective attention. For instance, now that Leow and Zhou have located all the inputs that are wired into the circuit, Leow is tapping into those feeds to eavesdrop on the information they are carrying. Meanwhile, she is correlating that information flow with behavior.

“This study lays the groundwork for understanding one of the most important, yet most elusive, components of brain function, namely our ability to selectively attend to one thing out of several, as well as switch attention,” Sur says.

Using virally mediated circuit-tracing techniques pioneered by co-author Ian Wickersham, principal research scientist in brain and cognitive sciences and the McGovern Institute for Brain Research at MIT, the team found distinct sources of input for the ACC and the LP. Generally speaking, the detailed study finds that the majority of inputs to the ACC were from frontal cortex areas that typically govern goal-directed planning, and from higher visual areas. The bulk of inputs to the LP, meanwhile, were from deeper regions capable of providing context such as the mouse’s needs, location and spatial cues, information about movement, and general information from a mix of senses.

So even though focusing attention might seem like a matter of controlling the senses, Leow says, the circuit pulls in a lot of other information as well.

“We’re seeing that it’s not just sensory — there are so many inputs that are coming from non-sensory areas as well, both sub-cortically and cortically,” she says. “It seems to be integrating a lot of different aspects that might relate to the behavioral state of the animal at a given time. It provides a way to supply a lot of internal and spatial context for that sensory information.”

Given the distinct sets of inputs to each region, the ACC may be tasked with focusing attention on a desired object, while the LP modulates how the ACC goes about making those computations, accounting for what’s going on both inside and outside the animal. Decoding just what that incoming contextual information is, and what the LP tells the ACC, are the key next steps, Leow says. Another clear question the study raises is what the circuit’s outputs are. In other words, after it integrates all this information, what does it do with it?

The paper’s other authors are Heather Sullivan and Alexandria Barlowe.

A National Science Scholarship, the National Institutes of Health, and the JPB Foundation provided support for the study.

Approaching human cognition from many angles

In January, as the Charles River was starting to freeze over, Keith Murray and the other members of MIT’s men’s heavyweight crew team took to erging on the indoor rowing machine. For 80 minutes at a time, Murray endured one of the most grueling workouts of his college experience. To distract himself from the pain, he would talk with his teammates, covering everything from great philosophical ideas to personal coffee preferences.

For Murray, virtually any conversation is an opportunity to explore how people think and why they think in certain ways. Currently a senior double majoring in computation and cognition, and linguistics and philosophy, Murray tries to understand the human experience based on knowledge from all of these fields.

“I’m trying to blend different approaches together to understand the complexities of human cognition,” he says. “For example, from a physiological perspective, the brain is just billions of neurons firing all at once, but this hardly scratches the surface of cognition.”

Murray grew up in Corydon, Indiana, where he attended the Indiana Academy for Science, Mathematics, and Humanities during his junior year of high school. He was exposed to philosophy there, learning the ideas of Plato, Socrates, and Thomas Aquinas, to name a few. When looking at colleges, Murray became interested in MIT because he wanted to learn about human thought processes from different perspectives. “Coming to MIT, I knew I wanted to do something philosophical. But I wanted to also be on the more technical side of things,” he says.

Once on campus, Murray immediately pursued an opportunity through the Undergraduate Research Opportunity Program (UROP) in the Digital Humanities Lab. There he worked with language-processing technology to analyze gendered language in various novels, with the end goal of displaying the data for an online audience. He learned the basic mathematical models used to analyze and present data online, and to study the social implications of linguistic phrases and expressions.

Murray also joined the Concourse learning community, which brought together different perspectives from the humanities, sciences, and math in a weekly seminar. “I was exposed to some excellent examples of how to do interdisciplinary work,” he recalls.

In the summer before his sophomore year, Murray took a position as a researcher in the Harnett Lab, where instead of working with novels, he was working with mice. Alongside postdoc Lucas Fisher, Murray trained mice to do navigational tasks using virtual reality equipment. His goal was to explore neural encoding in navigation, understanding why the mice behaved in certain ways after being shown certain stimuli on the screens. Spending time in the lab, Murray became increasingly interested in neuroscience and the biological components behind human thought processes.

He sought out other neuroscience-related research experiences, which led him to explore a SuperUROP project in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Working under Professor Nancy Lynch, he designed theoretical models of the retina using machine learning. Murray was excited to apply the techniques he learned in 9.40 (Introduction to Neural Computation) to address complex neurological problems. He considers this one of his most challenging research experiences, as the project was conducted entirely online.

“It was during the pandemic, so I had to learn a lot on my own; I couldn’t exactly do research in a lab. It was a big challenge, but at the end, I learned a lot and ended up getting a publication out of it,” he reflects.

This past semester, Murray has worked in the lab of Professor Ila Fiete in the McGovern Institute for Brain Research, constructing deep-learning models of animals performing navigational tasks. Through this UROP, which builds on his final project from Fiete’s class 9.49 (Neural Circuits for Cognition), Murray has been working to incorporate existing theoretical models of the hippocampus to investigate the intersection between artificial intelligence and neuroscience.

Reflecting on his varied research experiences, Murray says they have shown him new ways to explore the human brain from multiple perspectives, something he finds helpful as he tries to understand the complexity of human behavior.

Outside of his academic pursuits, Murray has continued to row with the crew team, where he walked on his first year. He sees rowing as a way to build up his strength, both physically and mentally. “When I’m doing my class work or I’m thinking about projects, I am using the same mental toughness that I developed during rowing,” he says. “That’s something I learned at MIT, to cultivate the dedication you put toward something. It’s all the same mental toughness whether you apply it to physical activities like rowing, or research projects.”

Looking ahead, Murray hopes to pursue a PhD in neuroscience, looking to find ways to incorporate his love of philosophy and human thought into his cognitive research. “I think there’s a lot more to do with neuroscience, especially with artificial intelligence. There are so many new technological developments happening right now,” he says.

Aging Brain Initiative awards fund five new ideas to study, fight neurodegeneration

Neurodegenerative diseases are defined by an increasingly widespread and debilitating death of nervous system cells, but they also share other grim characteristics: Their cause is rarely discernible and they have all eluded cures. To spur fresh, promising approaches and to encourage new experts and expertise to join the field, MIT’s Aging Brain Initiative (ABI) this month awarded five seed grants after a competition among labs across the Institute.

Founded in 2015 by nine MIT faculty members, the ABI promotes research, symposia, and related activities to advance fundamental insights that can lead to clinical progress against neurodegenerative conditions, such as Alzheimer’s disease, with an age-related onset. With an emphasis on spurring research at an early stage before it is established enough to earn more traditional funding, the ABI derives support from philanthropic gifts.

“Solving the mysteries of how health declines in the aging brain and turning that knowledge into effective tools, treatments, and technologies is of the utmost urgency given the millions of people around the world who suffer with no meaningful treatment options,” says ABI director and co-founder Li-Huei Tsai, the Picower Professor of Neuroscience in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences. “We were very pleased that many groups across MIT were eager to contribute their expertise and creativity to that goal. From here, five teams will be able to begin testing their innovative ideas and the impact they could have.”

To address the clinical challenge of accurately assessing cognitive decline during Alzheimer’s disease progression and healthy aging, a team led by Thomas Heldt, associate professor of electrical and biomedical engineering in the Department of Electrical Engineering and Computer Science (EECS) and the Institute for Medical Engineering and Science, proposes to use artificial intelligence tools to bring diagnostics based on eye movements during cognitive tasks to everyday consumer electronics such as smartphones and tablets. By moving these capabilities to common at-home platforms, the team, which also includes EECS Associate Professor Vivian Sze, hopes to increase monitoring beyond what can only be intermittently achieved with high-end specialized equipment and dedicated staffing in specialists’ offices. The team will pilot their technology in a small study at Boston Medical Center in collaboration with neurosurgeon James Holsapple.

Institute Professor Ann Graybiel’s lab in the Department of Brain and Cognitive Sciences (BCS) and the McGovern Institute for Brain Research will test the hypothesis that mutations on a specific gene may lead to the early emergence of Alzheimer’s disease (AD) pathology in the striatum. That’s a brain region crucial for motivation and movement that is directly and severely impacted by other neurodegenerative disorders including Parkinson’s and Huntington’s diseases, but that has largely been unstudied in Alzheimer’s. By editing the mutations into normal and AD-modeling mice, Research Scientist Ayano Matsushima and Graybiel hope to determine whether and how pathology, such as the accumulation of amyloid proteins, may result. Determining that could provide new insight into the progression of disease and introduce a new biomarker in a region that virtually all other studies have overlooked.

Numerous recent studies have highlighted a potential role for immune inflammation in Alzheimer’s disease. A team led by Gloria Choi, the Mark Hyman Jr. Associate Professor in BCS and The Picower Institute for Learning and Memory, will track one potential source of such activity by determining whether the brain’s meninges, which envelop the brain, become a route for immune cells activated by gut bacteria to circulate near the brain, where they may release signaling molecules that promote Alzheimer’s pathology. Working in mice, Choi’s lab will test whether such activity is prone to increase in Alzheimer’s and whether it contributes to disease.

A collaboration led by Peter Dedon, the Singapore Professor in MIT’s Department of Biological Engineering, will explore whether Alzheimer’s pathology is driven by dysregulation of transfer RNAs (tRNAs) and the dozens of natural tRNA modifications in the epitranscriptome, which play a key role in the process by which proteins are assembled based on genetic instructions. With Benjamin Wolozin of Boston University, Sherif Rashad of Tohoku University in Japan, and Thomas Begley of the State University of New York at Albany, Dedon will assess how the tRNA pool and epitranscriptome may differ in Alzheimer’s model mice and whether genetic instructions mistranslated because of tRNA dysregulation play a role in Alzheimer’s disease.

With her seed grant, Ritu Raman, the d’Arbeloff Assistant Professor of Mechanical Engineering, is launching an investigation of possible disruption of intercellular messages in amyotrophic lateral sclerosis (ALS), a terminal condition in which motor neuron degeneration causes loss of muscle control. Equipped with a new tool to finely sample interstitial fluid within tissues, Raman’s team will be able to monitor and compare cell-cell signaling in models of the junction between nerve and muscle. These models will be engineered from stem cells derived from patients with ALS. By studying biochemical signaling at the junction, the lab hopes to discover new targets that could be therapeutically modified.

Major support for the seed grants, which provide each lab with $100,000, came from generous gifts by David Emmes SM ’76; Kathleen SM ’77, PhD ’86 and Miguel Octavio; the Estate of Margaret A. Ridge-Pappis, wife of the late James Pappis ScD ’59; the Marc Haas Foundation; and the family of former MIT President Paul Gray ’54, SM ’55, ScD ’60, with additional funding from many annual fund donors to the Aging Brain Initiative Fund.

Study finds neurons that encode the outcomes of actions

When we make complex decisions, we have to take many factors into account. Some choices have a high payoff but carry potential risks; others are lower risk but may have a lower reward associated with them.

A new study from MIT sheds light on the part of the brain that helps us make these types of decisions. The research team found a group of neurons in the brain’s striatum that encodes information about the potential outcomes of different decisions. These cells become particularly active when a behavior leads to a different outcome than what was expected, which the researchers believe helps the brain adapt to changing circumstances.

“A lot of this brain activity deals with surprising outcomes, because if an outcome is expected, there’s really nothing to be learned. What we see is that there’s a strong encoding of both unexpected rewards and unexpected negative outcomes,” says Bernard Bloem, a former MIT postdoc and one of the lead authors of the new study.

Impairments in this kind of decision-making are a hallmark of many neuropsychiatric disorders, especially anxiety and depression. The new findings suggest that slight disturbances in the activity of these striatal neurons could swing the brain into making impulsive decisions or becoming paralyzed with indecision, the researchers say.

Rafiq Huda, a former MIT postdoc, is also a lead author of the paper, which appears in Nature Communications. Ann Graybiel, an MIT Institute Professor and member of MIT’s McGovern Institute for Brain Research, is the senior author of the study.

Learning from experience

The striatum, located deep within the brain, is known to play a key role in making decisions that require evaluating outcomes of a particular action. In this study, the researchers wanted to learn more about the neural basis of how the brain makes cost-benefit decisions, in which a behavior can have a mixture of positive and negative outcomes.

Striosomes (red) appear and then disappear as the view moves deeper into the striatum. Video courtesy of the researchers

To study this kind of decision-making, the researchers trained mice to spin a wheel to the left or the right. With each turn, they would receive a combination of reward (sugary water) and negative outcome (a small puff of air). As the mice performed the task, they learned to maximize the delivery of rewards and to minimize the delivery of air puffs. However, over hundreds of trials, the researchers frequently changed the probabilities of getting the reward or the puff of air, so the mice would need to adjust their behavior.

As the mice learned to make these adjustments, the researchers recorded the activity of neurons in the striatum. They had expected to find neuronal activity that reflects which actions are good and need to be repeated, or bad and need to be avoided. While some neurons did this, the researchers also found, to their surprise, that many neurons encoded details about the relationship between the actions and both types of outcomes.

The researchers found that these neurons responded more strongly when a behavior resulted in an unexpected outcome, that is, when turning the wheel in one direction produced the opposite outcome as it had in previous trials. These “error signals” for reward and penalty seem to help the brain figure out that it’s time to change tactics.
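These “error signals” echo the reward-prediction-error update at the heart of reinforcement learning. A minimal sketch of that idea in Python — an illustration of the general principle, not the authors’ model or analysis (the function name and learning rate are invented for the example):

```python
# Minimal reward-prediction-error sketch (illustrative only; not the
# study's model). An agent keeps a running estimate of the expected
# outcome of an action and updates it whenever it is surprised.

def update_estimate(expected, observed, learning_rate=0.2):
    """Move the expected outcome toward the observed one.

    The prediction error (observed - expected) is largest when the
    outcome is unexpected -- the situation in which the striosomal
    neurons in the study responded most strongly.
    """
    error = observed - expected
    return expected + learning_rate * error, error

# Example: turning the wheel one way used to yield reward (+1), but the
# contingency flips to an air puff (-1); the estimate tracks the change
# and the error shrinks as the new contingency is learned.
expected = 1.0
for trial in range(3):
    expected, error = update_estimate(expected, -1.0)
    print(f"trial {trial}: error = {error:.2f}, new estimate = {expected:.2f}")
```

The large initial error is exactly the “time to change tactics” signal described above; once the estimate settles, nothing surprising remains and the error fades.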

Most of the neurons that encode these error signals are found in the striosomes — clusters of neurons located in the striatum. Previous work has shown that striosomes send information to many other parts of the brain, including dopamine-producing regions and regions involved in planning movement.

“The striosomes seem to mostly keep track of what the actual outcomes are,” Bloem says. “The decision whether to do an action or not, which essentially requires integrating multiple outcomes, probably happens somewhere downstream in the brain.”

Making judgments

The findings could be relevant not only to mice learning a task, but also to many decisions that people have to make every day as they weigh the risks and benefits of each choice. Eating a big bowl of ice cream after dinner leads to immediate gratification, but it might contribute to weight gain or poor health. Deciding to have carrots instead will make you feel healthier, but you’ll miss out on the enjoyment of the sweet treat.

“From a value perspective, these can be considered equally good,” Bloem says. “What we find is that the striatum also knows why these are good, and it knows what are the benefits and the cost of each. In a way, the activity there reflects much more about the potential outcome than just how likely you are to choose it.”

This type of complex decision-making is often impaired in people with a variety of neuropsychiatric disorders, including anxiety, depression, schizophrenia, obsessive-compulsive disorder, and posttraumatic stress disorder. Drug abuse can also lead to impaired judgment and impulsivity.

“You can imagine that if things are set up this way, it wouldn’t be all that difficult to get mixed up about what is good and what is bad, because there are some neurons that fire when an outcome is good and they also fire when the outcome is bad,” Graybiel says. “Our ability to make our movements or our thoughts in what we call a normal way depends on those distinctions, and if they get blurred, it’s real trouble.”

The new findings suggest that behavioral therapy targeting the stage at which information about potential outcomes is encoded in the brain may help people who suffer from those disorders, the researchers say.

The research was funded by the National Institutes of Health/National Institute of Mental Health, the Saks Kavanaugh Foundation, the William N. and Bernice E. Bumpus Foundation, the Simons Foundation, the Nancy Lurie Marks Family Foundation, the National Eye Institute, the National Institute of Neurological Disease and Stroke, the National Science Foundation, the Simons Foundation Autism Research Initiative, and JSPS KAKENHI.

Setting carbon management in stone

Keeping global temperatures within limits deemed safe by the Intergovernmental Panel on Climate Change means doing more than slashing carbon emissions. It means reversing them.

“If we want to be anywhere near those limits [of 1.5 or 2 C], then we have to be carbon neutral by 2050, and then carbon negative after that,” says Matěj Peč, a geoscientist and the Victor P. Starr Career Development Assistant Professor in the Department of Earth, Atmospheric, and Planetary Sciences (EAPS).

Going negative will require finding ways to radically increase the world’s capacity to capture carbon from the atmosphere and put it somewhere where it will not leak back out. Carbon capture and storage projects already suck in tens of millions of metric tons of carbon each year. But putting a dent in emissions will mean capturing many billions of metric tons more. Today, people emit around 40 billion tons of carbon each year globally, mainly by burning fossil fuels.
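The gap between those two figures is worth making concrete. A back-of-envelope comparison in Python, taking “tens of millions” to be roughly 40 million tons per year (an assumed round number, not a figure from the article):

```python
# Back-of-envelope scale comparison using the article's figures:
# ~40 billion metric tons of carbon emitted per year, versus
# "tens of millions" of tons captured per year (assumed ~40 million).
emissions_per_year = 40e9   # metric tons/year, from the article
captured_per_year = 40e6    # metric tons/year, assumed round number

fraction_captured = captured_per_year / emissions_per_year
print(f"fraction of emissions captured: {fraction_captured:.2%}")
```

Under that assumption, current capture handles on the order of a tenth of a percent of annual emissions — a roughly thousand-fold scale-up would be needed just to break even, before going negative.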

Because of the need for new ideas when it comes to carbon storage, Peč has created a proposal for the MIT Climate Grand Challenges competition — a bold and sweeping effort by the Institute to support paradigm-shifting research and innovation to address the climate crisis. Called the Advanced Carbon Mineralization Initiative, his team’s proposal aims to bring geologists, chemists, and biologists together to make permanently storing carbon underground workable under different geological conditions. That means finding ways to speed up the process by which carbon pumped underground is turned into rock, or mineralized.

“That’s what the geology has to offer,” says Peč, who is a lead on the project, along with Ed Boyden, the Y. Eva Tan Professor of Neurotechnology and Howard Hughes Medical Institute investigator at the McGovern Institute for Brain Research, and Yogesh Surendranath, the Paul M. Cook Career Development Associate Professor of Chemistry. “You look for the places where you can safely and permanently store these huge volumes of CO2.”

Peč’s proposal is one of 27 finalists selected from a pool of almost 100 Climate Grand Challenge proposals submitted by collaborators from across the Institute. Each finalist team received $100,000 to further develop their research proposals. A subset of finalists will be announced in April, making up a portfolio of multiyear “flagship” projects receiving additional funding and support.

Building industries capable of going carbon negative presents huge technological, economic, environmental, and political challenges. For one, it’s expensive and energy-intensive to capture carbon from the air with existing technologies, which are “hellishly complicated,” says Peč. Much of the carbon capture underway today focuses on more concentrated sources like coal- or gas-burning power plants.

It’s also difficult to find geologically suitable sites for storage. To keep it in the ground after it has been captured, carbon must either be trapped in airtight reservoirs or turned to stone.

One of the best places for carbon capture and storage (CCS) is Iceland, where a number of CCS projects are up and running. The island’s volcanic geology helps speed up the mineralization process, as carbon pumped underground interacts with basalt rock at high temperatures. In that ideal setting, says Peč, 95 percent of carbon injected underground is mineralized after just two years — a geological flash.
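If we treat mineralization as a simple first-order process — our simplifying assumption, not a claim from the article — the Iceland figure of 95 percent in two years implies a strikingly fast effective rate:

```python
import math

# Back-of-envelope rate estimate from the article's Iceland figures,
# assuming first-order kinetics (an assumption for illustration only):
# 5% of injected CO2 remains unmineralized after 2 years.
remaining = 0.05   # fraction unmineralized (article: 95% mineralized)
years = 2.0        # elapsed time (article's figure)

k = -math.log(remaining) / years   # effective rate constant, per year
half_life = math.log(2) / k        # years for half the CO2 to mineralize
print(f"k = {k:.2f}/yr, half-life = {half_life:.2f} yr")
```

Under that assumption, half the injected carbon turns to stone in roughly half a year — which underlines why the article calls two years a geological flash, and why slower mineralization elsewhere is the core obstacle the proposal targets.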

But Iceland’s geology is unusual. Elsewhere, reaching suitable rocks at suitable temperatures requires deeper drilling, which adds cost to already expensive projects. Further, says Peč, there’s not a complete understanding of how different factors influence the speed of mineralization.

Peč’s Climate Grand Challenge proposal would study how carbon mineralizes under different conditions, as well as explore ways to make mineralization happen more rapidly by mixing the carbon dioxide with different fluids before injecting it underground. Another idea — and the reason why there are biologists on the team — is to learn from various organisms adept at turning carbon into calcite shells, the same stuff that makes up limestone.

Two other carbon management proposals, led by EAPS Cecil and Ida Green Professor Bradford Hager, were also selected as Climate Grand Challenge finalists. They focus on both the technologies necessary for capturing and storing gigatons of carbon as well as the logistical challenges involved in such an enormous undertaking.

That involves everything from choosing suitable sites for storage, to regulatory and environmental issues, to bringing disparate technologies together to improve the whole pipeline. The proposals emphasize CCS systems that can be powered by renewable sources, and can respond dynamically to the needs of different hard-to-decarbonize industries, like concrete and steel production.

“We need to have an industry that is on the scale of the current oil industry that will not be doing anything but pumping CO2 into storage reservoirs,” says Peč.

For a problem that involves capturing enormous amounts of gas from the atmosphere and storing it underground, it’s no surprise EAPS researchers are so involved. The Earth sciences have “everything” to offer, says Peč, including the good news that the Earth has more than enough places where carbon might be stored.

“Basically, the Earth is really, really large,” says Peč. “The reasonably accessible places, which are close to the continents, store somewhere on the order of tens of thousands to hundreds of thousands of gigatons of carbon. That’s orders of magnitude more than we need to put back in.”