New research center focused on brain-body relationship established at MIT

The inextricable link between our brains and our bodies has been gaining increasing recognition among researchers and clinicians over recent years. Studies have shown that the brain-body pathway is bidirectional — meaning that our mental state can influence our physical health and vice versa. But exactly how the two interact is less clear.

A new research center at MIT, funded by a $38 million gift to the McGovern Institute for Brain Research from philanthropist K. Lisa Yang, aims to unlock this mystery by creating and applying novel tools to explore the multidirectional, multilevel interplay between the brain and other body organ systems. This gift expands Yang’s exceptional philanthropic support of human health and basic science research at MIT over the past five years.

“Lisa Yang’s visionary gift enables MIT scientists and engineers to pioneer revolutionary technologies and undertake rigorous investigations into the brain’s complex relationship with other organ systems,” says MIT President L. Rafael Reif.  “Lisa’s tremendous generosity empowers MIT scientists to make pivotal breakthroughs in brain and biomedical research and, collectively, improve human health on a grand scale.”

The K. Lisa Yang Brain-Body Center will be directed by Polina Anikeeva, professor of materials science and engineering and brain and cognitive sciences at MIT and an associate investigator at the McGovern Institute. The center will harness the power of MIT’s collaborative, interdisciplinary life sciences research and engineering community to focus on complex conditions and diseases affecting both the body and brain, with a goal of unearthing knowledge of biological mechanisms that will lead to promising therapeutic options.

“Under Professor Anikeeva’s brilliant leadership, this wellspring of resources will encourage the very best work of MIT faculty, graduate fellows, and researchers — and ultimately make a real impact on the lives of many,” Reif adds.

Mouse small intestine stained to reveal cell nuclei (blue) and peripheral nerve fibers (red).
Image: Polina Anikeeva, Marie Manthey, Kareena Villalobos

Center goals  

Initial projects in the center will focus on four major lines of research:

  • Gut-Brain: Anikeeva’s group will expand a toolbox of new technologies and apply these tools to examine major neurobiological questions about gut-brain pathways and connections in the context of autism spectrum disorders, Parkinson’s disease, and affective disorders.
  • Aging: CRISPR pioneer Feng Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT and investigator at the McGovern Institute, will lead a group in developing molecular tools for precision epigenomic editing and erasing accumulated “errors” of time, injury, or disease in various types of cells and tissues.
  • Pain: The lab of Fan Wang, investigator at the McGovern Institute and professor of brain and cognitive sciences, will design new tools and imaging methods to study autonomic responses, sympathetic-parasympathetic system balance, and brain-autonomic nervous system interactions, including how pain influences these interactions.
  • Acupuncture: Wang will also collaborate with Hilda (“Scooter”) Holcombe, a veterinarian in MIT’s Division of Comparative Medicine, to advance techniques for documenting changes in brain and peripheral tissues induced by acupuncture in mouse models. If successful, these techniques could lay the groundwork for deeper understandings of the mechanisms of acupuncture, specifically how the treatment stimulates the nervous system and restores function.

A key component of the K. Lisa Yang Brain-Body Center will be a focus on educating and training the brightest young minds who aspire to make true breakthroughs for individuals living with complex and often devastating diseases. A portion of center funding will endow the new K. Lisa Yang Brain-Body Fellows Program, which will support four annual fellowships for MIT graduate students and postdocs working to advance understanding of conditions that affect both the body and brain.

Mens sana in corpore sano

“A phrase I remember reading in secondary school has always stuck with me: ‘mens sana in corpore sano,’ or ‘a healthy mind in a healthy body,’” says Lisa Yang, a former investment banker committed to advocacy for individuals with visible and invisible disabilities. “When we look at how stress, nutrition, pain, immunity, and other complex factors impact our health, we truly see how inextricably linked our brains and bodies are. I am eager to help MIT scientists and engineers decode these links and make real headway in creating therapeutic strategies that result in longer, healthier lives.”

“This center marks a once-in-a-lifetime opportunity for labs like mine to conduct bold and risky studies into the complexities of brain-body connections,” says Anikeeva, who works at the intersection of materials science, electronics, and neurobiology. “The K. Lisa Yang Brain-Body Center will offer a pathbreaking, holistic approach that bridges multiple fields of study. I have no doubt that the center will result in revolutionary strides in our understanding of the inextricable bonds between the brain and the body’s peripheral organ systems, and a bold new way of thinking in how we approach human health overall.”

Lindsay Case and Guangyu Robert Yang named 2022 Searle Scholars

MIT cell biologist Lindsay Case and computational neuroscientist Guangyu Robert Yang have been named 2022 Searle Scholars, an award given annually to 15 outstanding U.S. assistant professors who have high potential for ongoing innovative research contributions in medicine, chemistry, or the biological sciences.

Case is an assistant professor of biology, while Yang is an assistant professor of brain and cognitive sciences and electrical engineering and computer science, and an associate investigator at the McGovern Institute for Brain Research. They will each receive $300,000 in flexible funding to support their high-risk, high-reward work over the next three years.

Lindsay Case

Case arrived at MIT in 2021, after completing a postdoc at the University of Texas Southwestern Medical Center in the lab of Michael Rosen. Prior to that, she earned her PhD from the University of North Carolina at Chapel Hill, working in the lab of Clare Waterman at the National Heart, Lung, and Blood Institute.

Situated in MIT’s Building 68, Case’s lab studies how molecules within cells organize themselves, and how such organization begets cellular function. Oftentimes, molecules will assemble at the cell’s plasma membrane — a complex signaling platform where hundreds of receptors sense information from outside the cell and initiate cellular changes in response. Through her experiments, Case has found that molecules at the plasma membrane can undergo a process known as phase separation, condensing to form liquid-like droplets.
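Phase separation of this kind is often described with a simple saturation picture: below a critical concentration, molecules stay dissolved; above it, the excess partitions into liquid-like droplets. A toy sketch of that idea (illustrative only, not a model from Case's work; the function and parameter names are hypothetical):

```python
def fraction_condensed(concentration, c_sat):
    """Toy saturation model of phase separation.

    Below the saturation concentration c_sat, everything stays dissolved.
    Above it, the excess molecules partition into droplets, so the
    condensed fraction grows toward 1 as concentration increases.
    """
    if concentration <= c_sat:
        return 0.0
    return (concentration - c_sat) / concentration

# At twice the saturation concentration, half the molecules are in droplets.
print(fraction_condensed(2.0, 1.0))  # → 0.5
```

This fixed-saturation behavior is one hallmark experimentalists use to distinguish phase separation from other forms of molecular clustering.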

As a Searle Scholar, Case is investigating the role that phase separation plays in regulating a specific class of signaling molecules called kinases. Her team will take a multidisciplinary approach to probe what happens when kinases phase separate into signaling clusters, and what cellular changes occur as a result. Because phase separation is emerging as a promising new target for small molecule therapies, this work will help identify kinases that are strong candidates for new therapeutic interventions to treat diseases such as cancer.

“I am honored to be recognized by the Searle Scholars Program, and thrilled to join such an incredible community of scientists,” Case says. “This support will enable my group to broaden our research efforts and take our preliminary findings in exciting new directions. I look forward to better understanding how phase separation impacts cellular function.”

Guangyu Robert Yang

Before coming to MIT in 2021, Yang trained in physics at Peking University, obtained a PhD in computational neuroscience at New York University with Xiao-Jing Wang, and further trained as a postdoc at the Center for Theoretical Neuroscience of Columbia University, as an intern at Google Brain, and as a junior fellow at the Simons Society of Fellows.

His research team at MIT, the MetaConscious Group, develops models of mental functions by incorporating multiple interacting modules. They are designing pipelines to process and compare large-scale experimental datasets that span modalities ranging from behavioral data to neural activity data to molecular data. These datasets are then integrated to train individual computational modules based on the experimental tasks that were evaluated, such as vision, memory, or movement.

Ultimately, Yang seeks to combine these modules into a “network of networks” that models higher-level brain functions such as the ability to flexibly and rapidly learn a variety of tasks. Such integrative models are rare because, until recently, it was not possible to acquire data that spans modalities and brain regions in real time as animals perform tasks. The time is finally right for integrative network models. Computational models that incorporate such multisystem, multilevel datasets will allow scientists to make new predictions about the neural basis of cognition and open a window to a mathematical understanding of the mind.
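The "network of networks" idea can be illustrated by composing separately trained modules so that one module's output becomes another's input (a minimal sketch in plain Python; the module names and interfaces are hypothetical, not the MetaConscious Group's actual code):

```python
class Module:
    """Stand-in for a trained network specializing in one mental function."""
    def __init__(self, name, transform):
        self.name = name
        self.transform = transform

    def __call__(self, x):
        return self.transform(x)

# Hypothetical modules, each trained on its own experimental task.
vision = Module("vision", lambda stimulus: {"features": stimulus.upper()})
memory = Module("memory", lambda rep: {**rep, "stored": True})
motor = Module("motor", lambda rep: f"act_on:{rep['features']}")

def network_of_networks(stimulus):
    """Compose the modules: perception feeds memory, which drives action."""
    rep = vision(stimulus)
    rep = memory(rep)
    return motor(rep)

print(network_of_networks("cue"))  # → act_on:CUE
```

The payoff of such composition is that each module can be trained and validated against its own dataset (behavioral, neural, or molecular) before being wired into the larger system.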

“This is a new research direction for me, and I think for the field too. It comes with many exciting opportunities as well as challenges. Having this recognition from the Searle Scholars program really gives me extra courage to take on the uncertainties and challenges,” says Yang.

Since 1981, 647 scientists have been named Searle Scholars. Including this year, the program has awarded more than $147 million. Eighty-five Searle Scholars have been inducted into the National Academy of Sciences. Twenty scholars have been recognized with a MacArthur Fellowship, known as the “genius grant,” and two Searle Scholars have been awarded the Nobel Prize in Chemistry. The Searle Scholars Program is funded through the Searle Funds at The Chicago Community Trust and administered by Kinship Foundation.

A brain circuit in the thalamus helps us hold information in mind

As people age, their working memory often declines, making it more difficult to perform everyday tasks. One key brain region linked to this type of memory is the anterior thalamus, which is primarily involved in spatial memory — memory of our surroundings and how to navigate them.

In a study of mice, MIT researchers have identified a circuit in the anterior thalamus that is necessary for remembering how to navigate a maze. The researchers also found that this circuit is weakened in older mice, but that enhancing its activity greatly improves their ability to run the maze correctly.

This region could offer a promising target for treatments that could help reverse memory loss in older people, without affecting other parts of the brain, the researchers say.

“By understanding how the thalamus controls cortical output, hopefully we could find more specific and druggable targets in this area, instead of generally modulating the prefrontal cortex, which has many different functions,” says Guoping Feng, the James W. and Patricia T. Poitras Professor in Brain and Cognitive Sciences at MIT, a member of the Broad Institute of MIT and Harvard, and the associate director of the McGovern Institute for Brain Research at MIT.

Feng is the senior author of the study, which appears today in the Proceedings of the National Academy of Sciences. Dheeraj Roy, an NIH K99 awardee and a McGovern Fellow at the Broad Institute, and Ying Zhang, a J. Douglas Tan Postdoctoral Fellow at the McGovern Institute, are the lead authors of the paper.

Spatial memory

The thalamus, a small structure located near the center of the brain, contributes to working memory and many other executive functions, such as planning and attention. Feng’s lab has recently been investigating a region of the thalamus known as the anterior thalamus, which has important roles in memory and spatial navigation.

Previous studies in mice have shown that damage to the anterior thalamus leads to impairments in spatial working memory. In humans, studies have revealed age-related decline in anterior thalamus activity, which is correlated with lower performance on spatial memory tasks.

The anterior thalamus is divided into three sections: ventral, dorsal, and medial. In a study published last year, Feng, Roy and Zhang studied the role of the anterodorsal (AD) thalamus and anteroventral (AV) thalamus in memory formation. They found that the AD thalamus is involved in creating mental maps of physical spaces, while the AV thalamus helps the brain to distinguish these memories from other memories of similar spaces.

In their new study, the researchers wanted to look more deeply at the AV thalamus, exploring its role in a spatial working memory task. To do that, they trained mice to run a simple T-shaped maze. At the beginning of each trial, the mice ran until they reached the T. One arm was blocked off, forcing them to run down the other arm. Then, the mice were placed in the maze again, with both arms open. The mice were rewarded if they chose the opposite arm from the first run. This meant that in order to make the correct decision, they had to remember which way they had turned on the previous run.
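The task's reward rule is a delayed non-match-to-sample design: the correct choice is always the arm opposite the one the mouse was forced down. A short sketch of one simulated trial (hypothetical code, not from the study; names are illustrative):

```python
import random

def run_trial():
    """Simulate one trial of the T-maze spatial working memory task."""
    # Sample phase: one arm is blocked, forcing the mouse down the other.
    forced_arm = random.choice(["left", "right"])

    # Delay phase: the mouse must hold the forced arm in working memory.
    # (In the study, inhibiting AV neurons during this window, which lasted
    # 10 seconds or longer, is what impaired performance.)

    # Choice phase: both arms are open; the mouse is rewarded only for
    # choosing the arm opposite the one it was forced down.
    chosen_arm = random.choice(["left", "right"])  # stand-in for the decision
    rewarded = chosen_arm != forced_arm
    return forced_arm, chosen_arm, rewarded

forced, chosen, rewarded = run_trial()
print(forced, chosen, rewarded)
```

A mouse choosing at random, as in this stand-in, succeeds on about half of trials; performance well above 50 percent is the behavioral signature that working memory is intact.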

As the mice performed the task, the researchers used optogenetics to inhibit activity of either AV or AD neurons during three different parts of the task: the sample phase, which occurs during the first run; the delay phase, while they are waiting for the second run to begin; and the choice phase, when the mice decide which way to turn during the second run.

The researchers found that inhibiting AV neurons during the sample or choice phases had no effect on the mice’s performance, but when they suppressed AV activity during the delay phase, which lasted 10 seconds or longer, the mice performed much worse on the task.

This suggests that the AV neurons are most important for keeping information in mind while it is needed for a task. In contrast, inhibiting the AD neurons disrupted performance during the sample phase but had little effect during the delay phase. This finding was consistent with the research team’s earlier study showing that AD neurons are involved in forming memories of a physical space.

“The anterior thalamus in general is a spatial learning region, but the ventral neurons seem to be needed in this maintenance period, during this short delay,” Roy says. “Now we have two subdivisions within the anterior thalamus: one that seems to help with contextual learning and the other that actually helps with holding this information.”

Age-related decline

The researchers then tested the effects of age on this circuit. They found that older mice (14 months) performed worse on the T-maze task and their AV neurons were less excitable. However, when the researchers artificially stimulated those neurons, the mice’s performance on the task dramatically improved.

Another way to enhance performance in this memory task is to stimulate the prefrontal cortex, which also undergoes age-related decline. However, activating the prefrontal cortex also increases measures of anxiety in the mice, the researchers found.

“If we directly activate neurons in medial prefrontal cortex, it will also elicit anxiety-related behavior, but this will not happen during AV activation,” Zhang says. “That is an advantage of activating AV compared to prefrontal cortex.”

If a noninvasive or minimally invasive technology could be used to stimulate those neurons in the human brain, it could offer a way to help prevent age-related memory decline, the researchers say. They are now planning to perform single-cell RNA sequencing of neurons of the anterior thalamus to find genetic signatures that could be used to identify cells that would make the best targets.

The research was funded, in part, by the Stanley Center for Psychiatric Research at the Broad Institute, the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, and the James and Patricia Poitras Center for Psychiatric Disorders Research at MIT.

Circuit that focuses attention brings in wide array of inputs

In a new brain-wide circuit tracing study, scientists at MIT’s Picower Institute for Learning and Memory focused selective attention on a circuit that governs, fittingly enough, selective attention. The comprehensive maps they produced illustrate how broadly the mammalian brain incorporates and integrates information to focus its sensory resources on its goals.

Working in mice, the team traced thousands of inputs into the circuit, a communication loop between the anterior cingulate cortex (ACC) and the lateral posterior (LP) thalamus. In primates the LP is called the pulvinar. Studies in humans and nonhuman primates have indicated that the byplay of these two regions is critical for brain functions like being able to focus on an object of interest in a crowded scene, says study co-lead author Yi Ning Leow, a graduate student in the lab of senior author Mriganka Sur, the Newton Professor in MIT’s Department of Brain and Cognitive Sciences. Research has implicated dysfunction in the circuit in attention-affecting disorders such as autism and attention deficit/hyperactivity disorder.

The new study in the Journal of Comparative Neurology extends what’s known about the circuit by detailing it in mice, Leow says, importantly showing that the mouse circuit is closely analogous to the primate version even if the LP is proportionately smaller and less evolved than the pulvinar.

“In these rodent models we were able to find very similar circuits,” Leow says. “So we can possibly study these higher-order functions in mice as well. We have a lot more genetic tools in mice so we are better able to look at this circuit.”

The study, also co-led by former MIT undergraduate Blake Zhou, therefore provides a detailed roadmap in the experimentally accessible mouse model for understanding how the ACC and LP cooperate to produce selective attention. For instance, now that Leow and Zhou have located all the inputs that are wired into the circuit, Leow is tapping into those feeds to eavesdrop on the information they are carrying. Meanwhile, she is correlating that information flow with behavior.

“This study lays the groundwork for understanding one of the most important, yet most elusive, components of brain function, namely our ability to selectively attend to one thing out of several, as well as switch attention,” Sur says.

Using virally mediated circuit-tracing techniques pioneered by co-author Ian Wickersham, principal research scientist in brain and cognitive sciences and the McGovern Institute for Brain Research at MIT, the team found distinct sources of input for the ACC and the LP. Generally speaking, the detailed study finds that the majority of inputs to the ACC were from frontal cortex areas that typically govern goal-directed planning, and from higher visual areas. The bulk of inputs to the LP, meanwhile, were from deeper regions capable of providing context such as the mouse’s needs, location and spatial cues, information about movement, and general information from a mix of senses.

So even though focusing attention might seem like a matter of controlling the senses, Leow says, the circuit pulls in a lot of other information as well.

“We’re seeing that it’s not just sensory — there are so many inputs that are coming from non-sensory areas as well, both sub-cortically and cortically,” she says. “It seems to be integrating a lot of different aspects that might relate to the behavioral state of the animal at a given time. It provides a way to provide a lot of internal and spatial context for that sensory information.”

Given the distinct sets of inputs to each region, the ACC may be tasked with focusing attention on a desired object, while the LP is modulating how the ACC goes about making those computations, accounting for what’s going on both inside and outside the animal. Decoding just what that incoming contextual information is, and what the LP tells the ACC, are the key next steps, Leow says. Another clear question the study raises is what the circuit’s outputs are. In other words, after it integrates all this information, what does it do with it?

The paper’s other authors are Heather Sullivan and Alexandria Barlowe.

A National Science Scholarship, the National Institutes of Health, and the JPB Foundation provided support for the study.

Approaching human cognition from many angles

In January, as the Charles River was starting to freeze over, Keith Murray and the other members of MIT’s men’s heavyweight crew team took to erging on the indoor rowing machine. For 80 minutes at a time, Murray endured one of the most grueling workouts of his college experience. To distract himself from the pain, he would talk with his teammates, covering everything from great philosophical ideas to personal coffee preferences.

For Murray, virtually any conversation is an opportunity to explore how people think and why they think in certain ways. Currently a senior double majoring in computation and cognition, and linguistics and philosophy, Murray tries to understand the human experience based on knowledge from all of these fields.

“I’m trying to blend different approaches together to understand the complexities of human cognition,” he says. “For example, from a physiological perspective, the brain is just billions of neurons firing all at once, but this hardly scratches the surface of cognition.”

Murray grew up in Corydon, Indiana, where he attended the Indiana Academy for Science, Mathematics, and Humanities during his junior year of high school. He was exposed to philosophy there, learning the ideas of Plato, Socrates, and Thomas Aquinas, to name a few. When looking at colleges, Murray became interested in MIT because he wanted to learn about human thought processes from different perspectives. “Coming to MIT, I knew I wanted to do something philosophical. But I wanted to also be on the more technical side of things,” he says.

Once on campus, Murray immediately pursued an opportunity through the Undergraduate Research Opportunity Program (UROP) in the Digital Humanities Lab. There he worked with language-processing technology to analyze gendered language in various novels, with the end goal of displaying the data for an online audience. He learned about the basic mathematical models used to analyze and present data online, and to study the social implications of linguistic phrases and expressions.

Murray also joined the Concourse learning community, which brought together different perspectives from the humanities, sciences, and math in a weekly seminar. “I was exposed to some excellent examples of how to do interdisciplinary work,” he recalls.

In the summer before his sophomore year, Murray took a position as a researcher in the Harnett Lab, where instead of working with novels, he was working with mice. Alongside postdoc Lucas Fisher, Murray trained mice to do navigational tasks using virtual reality equipment. His goal was to explore neural encoding in navigation, understanding why the mice behaved in certain ways after being shown certain stimuli on the screens. Spending time in the lab, Murray became increasingly interested in neuroscience and the biological components behind human thought processes.

He sought out other neuroscience-related research experiences, which led him to explore a SuperUROP project in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Working under Professor Nancy Lynch, he designed theoretical models of the retina using machine learning. Murray was excited to apply the techniques he learned in 9.40 (Introduction to Neural Computation) to address complex neurological problems. Murray considers this one of his most challenging research experiences, as it took place entirely online.

“It was during the pandemic, so I had to learn a lot on my own; I couldn’t exactly do research in a lab. It was a big challenge, but at the end, I learned a lot and ended up getting a publication out of it,” he reflects.

This past semester, Murray has worked in the lab of Professor Ila Fiete in the McGovern Institute for Brain Research, constructing deep-learning models of animals performing navigational tasks. Through this UROP, which builds on his final project from Fiete’s class 9.49 (Neural Circuits for Cognition), Murray has been working to incorporate existing theoretical models of the hippocampus to investigate the intersection between artificial intelligence and neuroscience.

Reflecting on his varied research experiences, Murray says they have shown him new ways to explore the human brain from multiple perspectives, something he finds helpful as he tries to understand the complexity of human behavior.

Outside of his academic pursuits, Murray has continued to row with the crew team, where he walked on his first year. He sees rowing as a way to build up his strength, both physically and mentally. “When I’m doing my class work or I’m thinking about projects, I am using the same mental toughness that I developed during rowing,” he says. “That’s something I learned at MIT, to cultivate the dedication you put toward something. It’s all the same mental toughness whether you apply it to physical activities like rowing, or research projects.”

Looking ahead, Murray hopes to pursue a PhD in neuroscience, looking to find ways to incorporate his love of philosophy and human thought into his cognitive research. “I think there’s a lot more to do with neuroscience, especially with artificial intelligence. There are so many new technological developments happening right now,” he says.

Aging Brain Initiative awards fund five new ideas to study, fight neurodegeneration

Neurodegenerative diseases are defined by an increasingly widespread and debilitating death of nervous system cells, but they also share other grim characteristics: Their cause is rarely discernible and they have all eluded cures. To spur fresh, promising approaches and to encourage new experts and expertise to join the field, MIT’s Aging Brain Initiative (ABI) this month awarded five seed grants after a competition among labs across the Institute.

Founded in 2015 by nine MIT faculty members, the ABI promotes research, symposia, and related activities to advance fundamental insights that can lead to clinical progress against neurodegenerative conditions, such as Alzheimer’s disease, with an age-related onset. With an emphasis on spurring research at an early stage before it is established enough to earn more traditional funding, the ABI derives support from philanthropic gifts.

“Solving the mysteries of how health declines in the aging brain and turning that knowledge into effective tools, treatments, and technologies is of the utmost urgency given the millions of people around the world who suffer with no meaningful treatment options,” says ABI director and co-founder Li-Huei Tsai, the Picower Professor of Neuroscience in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences. “We were very pleased that many groups across MIT were eager to contribute their expertise and creativity to that goal. From here, five teams will be able to begin testing their innovative ideas and the impact they could have.”

To address the clinical challenge of accurately assessing cognitive decline during Alzheimer’s disease progression and healthy aging, a team led by Thomas Heldt, associate professor of electrical and biomedical engineering in the Department of Electrical Engineering and Computer Science (EECS) and the Institute for Medical Engineering and Science, proposes to use artificial intelligence tools to bring diagnostics based on eye movements during cognitive tasks to everyday consumer electronics such as smartphones and tablets. By moving these capabilities to common at-home platforms, the team, which also includes EECS Associate Professor Vivian Sze, hopes to increase monitoring beyond what can only be intermittently achieved with high-end specialized equipment and dedicated staffing in specialists’ offices. The team will pilot their technology in a small study at Boston Medical Center in collaboration with neurosurgeon James Holsapple.

Institute Professor Ann Graybiel’s lab in the Department of Brain and Cognitive Sciences (BCS) and the McGovern Institute for Brain Research will test the hypothesis that mutations on a specific gene may lead to the early emergence of Alzheimer’s disease (AD) pathology in the striatum. That’s a brain region crucial for motivation and movement that is directly and severely impacted by other neurodegenerative disorders, including Parkinson’s and Huntington’s diseases, but that has largely been unstudied in Alzheimer’s. By editing the mutations into normal and AD-modeling mice, Research Scientist Ayano Matsushima and Graybiel hope to determine whether and how pathology, such as the accumulation of amyloid proteins, may result. Determining that could provide new insight into the progression of disease and introduce a new biomarker in a region that virtually all other studies have overlooked.

Numerous recent studies have highlighted a potential role for immune inflammation in Alzheimer’s disease. A team led by Gloria Choi, the Mark Hyman Jr. Associate Professor in BCS and The Picower Institute for Learning and Memory, will track one potential source of such activity by determining whether the meninges, the membranes that envelop the brain, become a means for immune cells activated by gut bacteria to circulate near the brain, where they may release signaling molecules that promote Alzheimer’s pathology. Working in mice, Choi’s lab will test whether such activity is prone to increase in Alzheimer’s and whether it contributes to disease.

A collaboration led by Peter Dedon, the Singapore Professor in MIT’s Department of Biological Engineering, will explore whether Alzheimer’s pathology is driven by dysregulation of transfer RNAs (tRNAs) and the dozens of natural tRNA modifications in the epitranscriptome, which play a key role in the process by which proteins are assembled based on genetic instructions. With Benjamin Wolozin of Boston University, Sherif Rashad of Tohoku University in Japan, and Thomas Begley of the State University of New York at Albany, Dedon will assess how the tRNA pool and epitranscriptome may differ in Alzheimer’s model mice and whether genetic instructions mistranslated because of tRNA dysregulation play a role in Alzheimer’s disease.

With her seed grant, Ritu Raman, the d’Arbeloff Assistant Professor of Mechanical Engineering, is launching an investigation of possible disruption of intercellular messages in amyotrophic lateral sclerosis (ALS), a terminal condition in which the death of motor neurons causes loss of muscle control. Equipped with a new tool to finely sample interstitial fluid within tissues, Raman’s team will be able to monitor and compare cell-cell signaling in models of the junction between nerve and muscle. These models will be engineered from stem cells derived from patients with ALS. By studying biochemical signaling at the junction, the lab hopes to discover new targets that could be therapeutically modified.

Major support for the seed grants, which provide each lab with $100,000, came from generous gifts by David Emmes SM ’76; Kathleen SM ’77, PhD ’86 and Miguel Octavio; the Estate of Margaret A. Ridge-Pappis, wife of the late James Pappis ScD ’59; the Marc Haas Foundation; and the family of former MIT President Paul Gray ’54, SM ’55, ScD ’60, with additional funding from many annual fund donors to the Aging Brain Initiative Fund.

Study finds neurons that encode the outcomes of actions

When we make complex decisions, we have to take many factors into account. Some choices have a high payoff but carry potential risks; others are lower risk but may have a lower reward associated with them.

A new study from MIT sheds light on the part of the brain that helps us make these types of decisions. The research team found a group of neurons in the brain’s striatum that encodes information about the potential outcomes of different decisions. These cells become particularly active when a behavior leads to a different outcome than expected, which the researchers believe helps the brain adapt to changing circumstances.

“A lot of this brain activity deals with surprising outcomes, because if an outcome is expected, there’s really nothing to be learned. What we see is that there’s a strong encoding of both unexpected rewards and unexpected negative outcomes,” says Bernard Bloem, a former MIT postdoc and one of the lead authors of the new study.

Impairments in this kind of decision-making are a hallmark of many neuropsychiatric disorders, especially anxiety and depression. The new findings suggest that slight disturbances in the activity of these striatal neurons could swing the brain into making impulsive decisions or becoming paralyzed with indecision, the researchers say.

Rafiq Huda, a former MIT postdoc, is also a lead author of the paper, which appears in Nature Communications. Ann Graybiel, an MIT Institute Professor and member of MIT’s McGovern Institute for Brain Research, is the senior author of the study.

Learning from experience

The striatum, located deep within the brain, is known to play a key role in making decisions that require evaluating outcomes of a particular action. In this study, the researchers wanted to learn more about the neural basis of how the brain makes cost-benefit decisions, in which a behavior can have a mixture of positive and negative outcomes.

Striosomes (red) appear and then disappear as the view moves deeper into the striatum. Video courtesy of the researchers

To study this kind of decision-making, the researchers trained mice to spin a wheel to the left or the right. With each turn, they would receive a combination of reward (sugary water) and negative outcome (a small puff of air). As the mice performed the task, they learned to maximize the delivery of rewards and to minimize the delivery of air puffs. However, over hundreds of trials, the researchers frequently changed the probabilities of getting the reward or the puff of air, so the mice would need to adjust their behavior.

As the mice learned to make these adjustments, the researchers recorded the activity of neurons in the striatum. They had expected to find neuronal activity that reflects which actions are good and need to be repeated, or bad and need to be avoided. While some neurons did this, the researchers also found, to their surprise, that many neurons encoded details about the relationship between the actions and both types of outcomes.

The researchers found that these neurons responded more strongly when a behavior resulted in an unexpected outcome, that is, when turning the wheel in one direction produced the opposite outcome as it had in previous trials. These “error signals” for reward and penalty seem to help the brain figure out that it’s time to change tactics.
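The logic of such error-driven learning can be illustrated with a toy simulation. The sketch below is a minimal Rescorla-Wagner-style learner, not the study’s actual analysis, and all parameters (learning rate, reward probabilities, exploration rate) are illustrative assumptions: it tracks the expected reward of each wheel turn and updates only on prediction error, so “surprise” spikes when the contingencies reverse and decays as the learner re-adapts.

```python
import random

def simulate(trials=400, lr=0.2, eps=0.1, seed=0):
    """Toy Rescorla-Wagner-style learner (an illustration, not the study's model).

    Tracks the expected reward of turning a wheel left vs. right and updates
    only on the prediction error -- the 'surprise' when an outcome differs
    from what was expected."""
    rng = random.Random(seed)
    value = {"left": 0.0, "right": 0.0}
    p_reward = {"left": 0.8, "right": 0.2}
    abs_errors = []
    for t in range(trials):
        if t == trials // 2:                      # experimenter flips the contingencies
            p_reward = {"left": 0.2, "right": 0.8}
        if rng.random() < eps:                    # occasional exploration
            action = rng.choice(["left", "right"])
        else:                                     # otherwise pick the better-valued turn
            action = max(value, key=value.get)
        outcome = 1.0 if rng.random() < p_reward[action] else 0.0
        error = outcome - value[action]           # prediction error ("surprise")
        value[action] += lr * error               # learning is driven by the error alone
        abs_errors.append(abs(error))
    return abs_errors

# Average over many runs: surprise is large right after the reversal,
# then shrinks as the learner re-adapts to the new contingencies.
runs = [simulate(seed=s) for s in range(20)]
avg = [sum(r[t] for r in runs) / len(runs) for t in range(400)]
early = sum(avg[200:240]) / 40   # just after the reversal
late = sum(avg[360:400]) / 40    # after re-adaptation
```

In this caricature, the prediction error plays the role of the striosomal “error signals” described above: it is near zero when outcomes match expectations and large immediately after the experimenter changes the rules.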

Most of the neurons that encode these error signals are found in the striosomes — clusters of neurons located in the striatum. Previous work has shown that striosomes send information to many other parts of the brain, including dopamine-producing regions and regions involved in planning movement.

“The striosomes seem to mostly keep track of what the actual outcomes are,” Bloem says. “The decision whether to do an action or not, which essentially requires integrating multiple outcomes, probably happens somewhere downstream in the brain.”

Making judgments

The findings could be relevant not only to mice learning a task, but also to many decisions that people have to make every day as they weigh the risks and benefits of each choice. Eating a big bowl of ice cream after dinner leads to immediate gratification, but it might contribute to weight gain or poor health. Deciding to have carrots instead will make you feel healthier, but you’ll miss out on the enjoyment of the sweet treat.

“From a value perspective, these can be considered equally good,” Bloem says. “What we find is that the striatum also knows why these are good, and it knows what are the benefits and the cost of each. In a way, the activity there reflects much more about the potential outcome than just how likely you are to choose it.”

This type of complex decision-making is often impaired in people with a variety of neuropsychiatric disorders, including anxiety, depression, schizophrenia, obsessive-compulsive disorder, and posttraumatic stress disorder. Drug abuse can also lead to impaired judgment and impulsivity.

“You can imagine that if things are set up this way, it wouldn’t be all that difficult to get mixed up about what is good and what is bad, because there are some neurons that fire when an outcome is good and they also fire when the outcome is bad,” Graybiel says. “Our ability to make our movements or our thoughts in what we call a normal way depends on those distinctions, and if they get blurred, it’s real trouble.”

The new findings suggest that behavioral therapy targeting the stage at which information about potential outcomes is encoded in the brain may help people who suffer from those disorders, the researchers say.

The research was funded by the National Institutes of Health/National Institute of Mental Health, the Saks Kavanaugh Foundation, the William N. and Bernice E. Bumpus Foundation, the Simons Foundation, the Nancy Lurie Marks Family Foundation, the National Eye Institute, the National Institute of Neurological Disease and Stroke, the National Science Foundation, the Simons Foundation Autism Research Initiative, and JSPS KAKENHI.

Setting carbon management in stone

Keeping global temperatures within limits deemed safe by the Intergovernmental Panel on Climate Change means doing more than slashing carbon emissions. It means reversing them.

“If we want to be anywhere near those limits [of 1.5 or 2 C], then we have to be carbon neutral by 2050, and then carbon negative after that,” says Matěj Peč, a geoscientist and the Victor P. Starr Career Development Assistant Professor in the Department of Earth, Atmospheric, and Planetary Sciences (EAPS).

Going negative will require finding ways to radically increase the world’s capacity to capture carbon from the atmosphere and put it somewhere it will not leak back out. Carbon capture and storage projects already suck in tens of millions of metric tons of carbon each year. But putting a dent in emissions will mean capturing many billions of metric tons more. Today, people emit around 40 billion tons of carbon each year globally, mainly by burning fossil fuels.
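For a rough sense of that gap, the article’s round figures can be compared directly. Taking “tens of millions” of tons captured as roughly 40 million is an assumption made here purely for illustration:

```python
# Back-of-envelope scale check using the article's round figures.
# "Tens of millions" of tons captured is taken here as ~40 million (an assumption).
captured_per_year = 40e6    # metric tons captured and stored annually
emitted_per_year = 40e9     # metric tons emitted annually

fraction = captured_per_year / emitted_per_year
print(f"Current capture covers about {fraction:.1%} of annual emissions")
```

Under these assumed numbers, current capture covers about a thousandth of annual emissions, which is why the proposal frames the problem as one of scaling by orders of magnitude rather than incremental improvement.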

Because of the need for new ideas when it comes to carbon storage, Peč has created a proposal for the MIT Climate Grand Challenges competition — a bold and sweeping effort by the Institute to support paradigm-shifting research and innovation to address the climate crisis. Called the Advanced Carbon Mineralization Initiative, his team’s proposal aims to bring geologists, chemists, and biologists together to make permanently storing carbon underground workable under different geological conditions. That means finding ways to speed up the process by which carbon pumped underground is turned into rock, or mineralized.

“That’s what the geology has to offer,” says Peč, who is a lead on the project, along with Ed Boyden, the Y. Eva Tan Professor in Neurotechnology and Howard Hughes Medical Institute investigator at the McGovern Institute for Brain Research, and Yogesh Surendranath, the Paul M. Cook Career Development Associate Professor of Chemistry. “You look for the places where you can safely and permanently store these huge volumes of CO2.”

Peč’s proposal is one of 27 finalists selected from a pool of almost 100 Climate Grand Challenge proposals submitted by collaborators from across the Institute. Each finalist team received $100,000 to further develop their research proposals. A subset of finalists will be announced in April, making up a portfolio of multiyear “flagship” projects receiving additional funding and support.

Building industries capable of going carbon negative presents huge technological, economic, environmental, and political challenges. For one, it’s expensive and energy-intensive to capture carbon from the air with existing technologies, which are “hellishly complicated,” says Peč. Much of the carbon capture underway today focuses on more concentrated sources like coal- or gas-burning power plants.

It’s also difficult to find geologically suitable sites for storage. To keep it in the ground after it has been captured, carbon must either be trapped in airtight reservoirs or turned to stone.

One of the best places for carbon capture and storage (CCS) is Iceland, where a number of CCS projects are up and running. The island’s volcanic geology helps speed up the mineralization process, as carbon pumped underground interacts with basalt rock at high temperatures. In that ideal setting, says Peč, 95 percent of carbon injected underground is mineralized after just two years — a geological flash.
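The quoted 95-percent-in-two-years figure implies a strikingly fast reaction. Assuming simple first-order kinetics, an idealization used here only for illustration and not the study’s model, the implied rate constant and half-life work out as follows:

```python
import math

# Assume first-order kinetics (an idealization, not the study's model):
# remaining fraction f(t) = exp(-k * t), with 5% of the CO2 remaining at t = 2 years.
k = -math.log(0.05) / 2.0          # implied rate constant, per year
half_life = math.log(2) / k        # time for half the injected CO2 to mineralize

print(f"k ≈ {k:.2f} per year, half-life ≈ {half_life:.2f} years")
```

Under that idealization, half the injected CO2 would mineralize in under six months — fast enough, on geological timescales, to justify the “geological flash” description.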

But Iceland’s geology is unusual. Elsewhere, reaching suitable rocks at suitable temperatures requires deeper drilling, which adds cost to already expensive projects. Further, says Peč, there is not yet a complete understanding of how different factors influence the speed of mineralization.

Peč’s Climate Grand Challenge proposal would study how carbon mineralizes under different conditions, as well as explore ways to make mineralization happen more rapidly by mixing the carbon dioxide with different fluids before injecting it underground. Another idea — and the reason why there are biologists on the team — is to learn from various organisms adept at turning carbon into calcite shells, the same stuff that makes up limestone.

Two other carbon management proposals, led by EAPS Cecil and Ida Green Professor Bradford Hager, were also selected as Climate Grand Challenge finalists. They focus on both the technologies necessary for capturing and storing gigatons of carbon as well as the logistical challenges involved in such an enormous undertaking.

That involves everything from choosing suitable sites for storage, to navigating regulatory and environmental issues, to bringing disparate technologies together to improve the whole pipeline. The proposals emphasize CCS systems that can be powered by renewable sources and can respond dynamically to the needs of different hard-to-decarbonize industries, like concrete and steel production.

“We need to have an industry that is on the scale of the current oil industry that will not be doing anything but pumping CO2 into storage reservoirs,” says Peč.

For a problem that involves capturing enormous amounts of gas from the atmosphere and storing it underground, it’s no surprise EAPS researchers are so involved. The Earth sciences have “everything” to offer, says Peč, including the good news that the Earth has more than enough places where carbon might be stored.

“Basically, the Earth is really, really large,” says Peč. “The reasonably accessible places, which are close to the continents, store somewhere on the order of tens of thousands to hundreds of thousands of gigatons of carbon. That’s orders of magnitude more than we need to put back in.”

Q&A: Climate Grand Challenges finalists on accelerating reductions in global greenhouse gas emissions

This is the second article in a four-part interview series highlighting the work of the 27 MIT Climate Grand Challenges finalists, which received a total of $2.7 million in startup funding to advance their projects. In April, the Institute will name a subset of the finalists as multiyear flagship projects.

Last month, the Intergovernmental Panel on Climate Change (IPCC), an expert body of the United Nations representing 195 governments, released its latest scientific report on the growing threats posed by climate change, and called for drastic reductions in greenhouse gas emissions to avert the most catastrophic outcomes for humanity and natural ecosystems.

Bringing the global economy to net-zero carbon dioxide emissions by midcentury is complex and demands new ideas and novel approaches. The first-ever MIT Climate Grand Challenges competition focuses on four problem areas including removing greenhouse gases from the atmosphere and identifying effective, economic solutions for managing and storing these gases. The other Climate Grand Challenges research themes address using data and science to forecast climate-related risk, decarbonizing complex industries and processes, and building equity and fairness into climate solutions.

In the following conversations prepared for MIT News, faculty from three of the teams working to solve “Removing, managing, and storing greenhouse gases” explain how they are drawing upon geological, biological, chemical, and oceanic processes to develop game-changing techniques for carbon removal, management, and storage. Their responses have been edited for length and clarity.

Directed evolution of biological carbon fixation

Agricultural demand is estimated to increase by 50 percent in the coming decades, while climate change is simultaneously projected to drastically reduce crop yield and predictability, driving a dramatic acceleration of land clearing. Without immediate intervention, this will have dire impacts on wild habitat, rob hundreds of millions of subsistence farmers of their livelihoods, and create hundreds of gigatons of new emissions. Matthew Shoulders, associate professor in the Department of Chemistry, talks about the working group he is leading in partnership with Ed Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT, investigator at the Howard Hughes Medical Institute and the McGovern Institute for Brain Research, that aims to massively reduce carbon emissions from agriculture by relieving core biochemical bottlenecks in the photosynthetic process using the most sophisticated synthetic biology available to science.

Q: Describe the two pathways you have identified for improving agricultural productivity and climate resiliency.

A: First, cyanobacteria grow millions of times faster than plants and dozens of times faster than microalgae. Engineering these cyanobacteria as a source of key food products using synthetic biology will enable food production using less land, in a fundamentally more climate-resilient manner. Second, carbon fixation, or the process by which carbon dioxide is incorporated into organic compounds, is the rate-limiting step of photosynthesis and becomes even less efficient under rising temperatures. Enhancements to Rubisco, the enzyme mediating this central process, will both improve crop yields and provide climate resilience to crops needed by 2050. Our team, led by Robbie Wilson and Max Schubert, has created new directed evolution methods tailored for both strategies, and we have already uncovered promising early results. Applying directed evolution to photosynthesis, carbon fixation, and food production has the potential to usher in a second green revolution.

Q: What partners will you need to accelerate the development of your solutions?

A: We have already partnered with leading agriculture institutes with deep experience in plant transformation and field trial capacity, enabling the integration of our improved carbon-dioxide-fixing enzymes into a wide range of crop plants. At the deployment stage, we will be positioned to partner with multiple industry groups to achieve improved agriculture at scale. Partnerships with major seed companies around the world will be key to leverage distribution channels in manufacturing supply chains and networks of farmers, agronomists, and licensed retailers. Support from local governments will also be critical where subsidies for seeds are necessary for farmers to earn a living, such as smallholder and subsistence farming communities. Additionally, our research provides an accessible platform that is capable of enabling and enhancing carbon dioxide sequestration in diverse organisms, extending our sphere of partnership to a wide range of companies interested in industrial microbial applications, including algal and cyanobacterial, and in carbon capture and storage.

Strategies to reduce atmospheric methane

One of the most potent greenhouse gases, methane is emitted by a range of human activities and natural processes that include agriculture and waste management, fossil fuel production, and changing land use practices — with no single dominant source. Together with a diverse group of faculty and researchers from the schools of Humanities, Arts, and Social Sciences; Architecture and Planning; Engineering; and Science; plus the MIT Schwarzman College of Computing, Desiree Plata, associate professor in the Department of Civil and Environmental Engineering, is spearheading the MIT Methane Network, an integrated approach to formulating scalable new technologies, business models, and policy solutions for driving down levels of atmospheric methane.

Q: What is the problem you are trying to solve and why is it a “grand challenge”?

A: Removing methane from the atmosphere, or stopping it from getting there in the first place, could change the rates of global warming in our lifetimes, saving as much as half a degree of warming by 2050. Methane sources are distributed in space and time and tend to be very dilute, making the removal of methane a challenge that pushes the boundaries of contemporary science and engineering capabilities. Because the primary sources of atmospheric methane are linked to our economy and culture — from clearing wetlands for cultivation to natural gas extraction and dairy and meat production — the social and economic implications of a fundamentally changed methane management system are far-reaching. Nevertheless, these problems are tractable and could significantly reduce the effects of climate change in the near term.

Q: What is known about the rapid rise in atmospheric methane and what questions remain unanswered?

A: Tracking atmospheric methane is a challenge in and of itself, but it has become clear that emissions are large, accelerated by human activity, and cause damage right away. While some progress has been made in satellite-based measurements of methane emissions, there is a need to translate that data into actionable solutions. Several key questions remain around improving sensor accuracy and sensor network design to optimize placement, improve response time, and stop leaks with autonomous controls on the ground. Additional questions involve deploying low-level methane oxidation systems and novel catalytic materials at coal mines, dairy barns, and other enriched sources; evaluating the policy strategies and the socioeconomic impacts of new technologies with an eye toward decarbonization pathways; and scaling technology with viable business models that stimulate the economy while reducing greenhouse gas emissions.

Deploying versatile carbon capture technologies and storage at scale

There is growing consensus that simply capturing current carbon dioxide emissions is no longer sufficient — it is equally important to target distributed sources such as the oceans and air where carbon dioxide has accumulated from past emissions. Betar Gallant, the American Bureau of Shipping Career Development Associate Professor of Mechanical Engineering, discusses her work with Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in the Department of Earth, Atmospheric and Planetary Sciences, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering and director of the School of Chemical Engineering Practice, to dramatically advance the portfolio of technologies available for carbon capture and permanent storage at scale. (A team led by Assistant Professor Matěj Peč of EAPS is also addressing carbon capture and storage.)

Q: Carbon capture and storage processes have been around for several decades. What advances are you seeking to make through this project?

A: Today’s capture paradigms are costly, inefficient, and complex. We seek to address this challenge by developing a new generation of capture technologies that operate using renewable energy inputs, are sufficiently versatile to accommodate emerging industrial demands, are adaptive and responsive to varied societal needs, and can be readily deployed to a wider landscape.

New approaches will require the redesign of the entire capture process, necessitating basic science and engineering efforts that are broadly interdisciplinary in nature. At the same time, incumbent technologies have been optimized largely for integration with coal- or natural gas-burning power plants. Future applications must shift away from legacy emitters in the power sector towards hard-to-mitigate sectors such as cement, iron and steel, chemical, and hydrogen production. It will become equally important to develop and optimize systems targeted for much lower concentrations of carbon dioxide, such as in oceans or air. Our effort will expand basic science studies as well as examine the human impacts of storage, including how public engagement and education can shift attitudes toward greater acceptance of carbon dioxide geologic storage.

Q: What are the expected impacts of your proposed solution, both positive and negative?

A: Renewable energy cannot be deployed rapidly enough everywhere, nor can it supplant all emissions sources, nor can it account for past emissions. Carbon capture and storage (CCS) provides a demonstrated method to address emissions that will undoubtedly occur before the transition to low-carbon energy is completed. CCS can succeed even if other strategies fail. It also allows for developing nations, which may need to adopt renewables over longer timescales, to see equitable economic development while avoiding the most harmful climate impacts. And, CCS enables the future viability of many core industries and transportation modes, many of which do not have clear alternatives before 2050, let alone 2040 or 2030.

The perceived risks of potential leakage and earthquakes associated with geologic storage can be minimized by choosing suitable geologic formations for storage. Despite CCS providing a well-understood pathway for removing enough of the carbon dioxide already emitted into the atmosphere, some environmentalists vigorously oppose it, fearing that CCS rewards oil companies and disincentivizes the transition away from fossil fuels. We believe that it is more important to keep in mind the necessity of meeting key climate targets for the sake of the planet, and welcome those who can help.

New MRI probe can reveal more of the brain’s inner workings

Using a novel probe for functional magnetic resonance imaging (fMRI), MIT biological engineers have devised a way to monitor individual populations of neurons and reveal how they interact with each other.

Similar to how the gears of a clock interact in specific ways to turn the clock’s hands, different parts of the brain interact to perform a variety of tasks, such as generating behavior or interpreting the world around us. The new MRI probe could potentially allow scientists to map those networks of interactions.

“With regular fMRI, we see the action of all the gears at once. But with our new technique, we can pick up individual gears that are defined by their relationship to the other gears, and that’s critical for building up a picture of the mechanism of the brain,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering.

Using this technique, which involves genetically targeting the MRI probe to specific populations of cells in animal models, the researchers were able to identify neural populations involved in a circuit that responds to rewarding stimuli. The new MRI probe could also enable studies of many other brain circuits, the researchers say.

Jasanoff, who is also an associate investigator at the McGovern Institute, is the senior author of the study, which appears today in Nature Neuroscience. The lead authors of the paper are recent MIT PhD recipient Souparno Ghosh and former MIT research scientist Nan Li.

Tracing connections

Traditional fMRI measures changes in blood flow in the brain as a proxy for neural activity. When neurons receive signals from other neurons, this triggers an influx of calcium, which causes a diffusible gas called nitric oxide to be released. Nitric oxide acts in part as a vasodilator that increases blood flow to the area.

Imaging calcium directly can offer a more precise picture of brain activity, but that type of imaging usually requires fluorescent chemicals and invasive procedures. The MIT team wanted to develop a method that could work across the brain without that type of invasiveness.

“If we want to figure out how brain-wide networks of cells and brain-wide mechanisms function, we need something that can be detected deep in tissue and preferably across the entire brain at once,” Jasanoff says. “The way that we chose to do that in this study was to essentially hijack the molecular basis of fMRI itself.”

The researchers created a genetic probe, delivered by viruses, that codes for a protein that sends out a signal whenever the neuron is active. This protein, which the researchers called NOSTIC (nitric oxide synthase for targeting image contrast), is an engineered form of an enzyme called nitric oxide synthase. The NOSTIC protein can detect elevated calcium levels that arise during neural activity; it then generates nitric oxide, leading to an artificial fMRI signal that arises only from cells that contain NOSTIC.

The probe is delivered by a virus that is injected into a particular site, after which it travels along axons of neurons that connect to that site. That way, the researchers can label every neural population that feeds into a particular location.

“When we use this virus to deliver our probe in this way, it causes the probe to be expressed in the cells that provide input to the location where we put the virus,” Jasanoff says. “Then, by performing functional imaging of those cells, we can start to measure what makes input to that region take place, or what types of input arrive at that region.”

Turning the gears

In the new study, the researchers used their probe to label populations of neurons that project to the striatum, a region that is involved in planning movement and responding to reward. In rats, they were able to determine which neural populations send input to the striatum during or immediately following a rewarding stimulus — in this case, deep brain stimulation of the lateral hypothalamus, a brain center that is involved in appetite and motivation, among other functions.

One question that researchers have had about deep brain stimulation of the lateral hypothalamus is how wide-ranging the effects are. In this study, the MIT team showed that several neural populations, located in regions including the motor cortex and the entorhinal cortex, which is involved in memory, send input into the striatum following deep brain stimulation.

“It’s not simply input from the site of the deep brain stimulation or from the cells that carry dopamine. There are these other components, both distally and locally, that shape the response, and we can put our finger on them because of the use of this probe,” Jasanoff says.

During these experiments, neurons also generate regular fMRI signals, so in order to distinguish the signals that are coming specifically from the genetically altered neurons, the researchers perform each experiment twice: once with the probe on, and once following treatment with a drug that inhibits the probe. By measuring the difference in fMRI activity between these two conditions, they can determine how much activity is present in probe-containing cells specifically.
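The two-condition subtraction described here can be sketched as simple array arithmetic. Everything below is hypothetical — the voxel counts, signal amplitudes, and threshold are invented for illustration, and the idealized version omits the run-to-run noise that real paired scans would contain:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 1000

# Hypothetical signal amplitudes: all neurons produce a regular fMRI signal,
# while only probe-expressing cells add extra, probe-driven contrast.
background = rng.normal(1.0, 0.05, size=n_voxels)  # activity of unlabeled cells
probe_contrast = np.zeros(n_voxels)
probe_contrast[:100] = 0.5                          # first 100 voxels express the probe

probe_on = background + probe_contrast   # scan with the probe active
probe_inhibited = background             # matched scan after the inhibitory drug

# Subtracting the two conditions cancels the shared background and
# isolates the activity attributable to probe-containing cells.
difference = probe_on - probe_inhibited
labeled = np.flatnonzero(difference > 0.25)
```

In practice the two runs would not share an identical background, so the subtraction would be done on averaged or statistically modeled responses rather than a single pair of scans; the sketch only captures the differencing logic.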

The researchers now hope to use this approach, which they call hemogenetics, to study other networks in the brain, beginning with an effort to identify some of the regions that receive input from the striatum following deep brain stimulation.

“One of the things that’s exciting about the approach that we’re introducing is that you can imagine applying the same tool at many sites in the brain and piecing together a network of interlocking gears, which consist of these input and output relationships,” Jasanoff says. “This can lead to a broad perspective on how the brain works as an integrated whole, at the level of neural populations.”

The research was funded by the National Institutes of Health and the MIT Simons Center for the Social Brain.