A brain circuit in the thalamus helps us hold information in mind

As people age, their working memory often declines, making it more difficult to perform everyday tasks. One key brain region linked to this type of memory is the anterior thalamus, which is primarily involved in spatial memory — memory of our surroundings and how to navigate them.

In a study of mice, MIT researchers have identified a circuit in the anterior thalamus that is necessary for remembering how to navigate a maze. The researchers also found that this circuit is weakened in older mice, but enhancing its activity greatly improves their ability to run the maze correctly.

This region could offer a promising target for treatments that could help reverse memory loss in older people, without affecting other parts of the brain, the researchers say.

“By understanding how the thalamus controls cortical output, hopefully we could find more specific and druggable targets in this area, instead of generally modulating the prefrontal cortex, which has many different functions,” says Guoping Feng, the James W. and Patricia T. Poitras Professor in Brain and Cognitive Sciences at MIT, a member of the Broad Institute of Harvard and MIT, and the associate director of the McGovern Institute for Brain Research at MIT.

Feng is the senior author of the study, which appears today in the Proceedings of the National Academy of Sciences. Dheeraj Roy, an NIH K99 awardee and a McGovern Fellow at the Broad Institute, and Ying Zhang, a J. Douglas Tan Postdoctoral Fellow at the McGovern Institute, are the lead authors of the paper.

Spatial memory

The thalamus, a small structure located near the center of the brain, contributes to working memory and many other executive functions, such as planning and attention. Feng’s lab has recently been investigating a region of the thalamus known as the anterior thalamus, which has important roles in memory and spatial navigation.

Previous studies in mice have shown that damage to the anterior thalamus leads to impairments in spatial working memory. In humans, studies have revealed age-related decline in anterior thalamus activity, which is correlated with lower performance on spatial memory tasks.

The anterior thalamus is divided into three sections: ventral, dorsal, and medial. In a study published last year, Feng, Roy and Zhang studied the role of the anterodorsal (AD) thalamus and anteroventral (AV) thalamus in memory formation. They found that the AD thalamus is involved in creating mental maps of physical spaces, while the AV thalamus helps the brain to distinguish these memories from other memories of similar spaces.

In their new study, the researchers wanted to look more deeply at the AV thalamus, exploring its role in a spatial working memory task. To do that, they trained mice to run a simple T-shaped maze. At the beginning of each trial, the mice ran until they reached the T. One arm was blocked off, forcing them to run down the other arm. Then, the mice were placed in the maze again, with both arms open. The mice were rewarded if they chose the opposite arm from the first run. This meant that in order to make the correct decision, they had to remember which way they had turned on the previous run.
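The task's logic, a delayed non-match-to-sample rule rewarding the arm opposite the remembered forced turn, can be sketched as a toy simulation. The code below is purely illustrative (all names are hypothetical, and this is not the study's analysis): an intact memory of the first run yields perfect performance, while a disrupted memory drops performance to chance.

```python
import random

def run_trial(memory_intact=True):
    """One trial of a simplified delayed non-match-to-sample T-maze.

    Sample run: one arm is blocked, forcing a turn.
    Choice run: the mouse is rewarded for choosing the *opposite* arm.
    If the memory of the sample turn is lost (e.g., disrupted during
    the delay), the choice becomes a coin flip.
    """
    forced_turn = random.choice(["left", "right"])       # sample phase
    if memory_intact:                                    # delay phase holds the trace
        choice = "right" if forced_turn == "left" else "left"
    else:
        choice = random.choice(["left", "right"])        # trace lost: guess
    return choice != forced_turn                         # True = rewarded

def performance(n_trials, memory_intact):
    """Fraction of rewarded trials."""
    return sum(run_trial(memory_intact) for _ in range(n_trials)) / n_trials

random.seed(0)
print(performance(10_000, memory_intact=True))    # 1.0 (always correct)
print(performance(10_000, memory_intact=False))   # ~0.5 (chance)
```

This mirrors why delay-phase inhibition is so diagnostic in the experiment: only the memory held across the delay links the first run to the correct second-run choice.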

As the mice performed the task, the researchers used optogenetics to inhibit activity of either AV or AD neurons during three different parts of the task: the sample phase, which occurs during the first run; the delay phase, while the mice wait for the second run to begin; and the choice phase, when the mice decide which way to turn during the second run.

The researchers found that inhibiting AV neurons during the sample or choice phases had no effect on the mice’s performance, but when they suppressed AV activity during the delay phase, which lasted 10 seconds or longer, the mice performed much worse on the task.

This suggests that the AV neurons are most important for keeping information in mind while it is needed for a task. In contrast, inhibiting the AD neurons disrupted performance during the sample phase but had little effect during the delay phase. This finding was consistent with the research team’s earlier study showing that AD neurons are involved in forming memories of a physical space.

“The anterior thalamus in general is a spatial learning region, but the ventral neurons seem to be needed in this maintenance period, during this short delay,” Roy says. “Now we have two subdivisions within the anterior thalamus: one that seems to help with contextual learning and the other that actually helps with holding this information.”

Age-related decline

The researchers then tested the effects of age on this circuit. They found that older mice (14 months) performed worse on the T-maze task and their AV neurons were less excitable. However, when the researchers artificially stimulated those neurons, the mice’s performance on the task dramatically improved.

Another way to enhance performance in this memory task is to stimulate the prefrontal cortex, which also undergoes age-related decline. However, activating the prefrontal cortex also increases measures of anxiety in the mice, the researchers found.

“If we directly activate neurons in medial prefrontal cortex, it will also elicit anxiety-related behavior, but this will not happen during AV activation,” Zhang says. “That is an advantage of activating AV compared to prefrontal cortex.”

If a noninvasive or minimally invasive technology could be used to stimulate those neurons in the human brain, it could offer a way to help prevent age-related memory decline, the researchers say. They are now planning to perform single-cell RNA sequencing of neurons of the anterior thalamus to find genetic signatures that could be used to identify cells that would make the best targets.

The research was funded, in part, by the Stanley Center for Psychiatric Research at the Broad Institute, the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, and the James and Patricia Poitras Center for Psychiatric Disorders Research at MIT.

Circuit that focuses attention brings in wide array of inputs

In a new brain-wide circuit tracing study, scientists at MIT’s Picower Institute for Learning and Memory focused selective attention on a circuit that governs, fittingly enough, selective attention. The comprehensive maps they produced illustrate how broadly the mammalian brain incorporates and integrates information to focus its sensory resources on its goals.

Working in mice, the team traced thousands of inputs into the circuit, a communication loop between the anterior cingulate cortex (ACC) and the lateral posterior (LP) thalamus. In primates, the LP is called the pulvinar. Studies in humans and nonhuman primates have indicated that the interplay of these two regions is critical for brain functions like being able to focus on an object of interest in a crowded scene, says study co-lead author Yi Ning Leow, a graduate student in the lab of senior author Mriganka Sur, the Newton Professor in MIT’s Department of Brain and Cognitive Sciences. Research has implicated dysfunction in the circuit in attention-affecting disorders such as autism and attention deficit/hyperactivity disorder.

The new study in the Journal of Comparative Neurology extends what’s known about the circuit by detailing it in mice, Leow says, importantly showing that the mouse circuit is closely analogous to the primate version even if the LP is proportionately smaller and less evolved than the pulvinar.

“In these rodent models we were able to find very similar circuits,” Leow says. “So we can possibly study these higher-order functions in mice as well. We have a lot more genetic tools in mice so we are better able to look at this circuit.”

The study, also co-led by former MIT undergraduate Blake Zhou, therefore provides a detailed roadmap in the experimentally accessible mouse model for understanding how the ACC and LP cooperate to produce selective attention. For instance, now that Leow and Zhou have located all the inputs that are wired into the circuit, Leow is tapping into those feeds to eavesdrop on the information they are carrying. Meanwhile, she is correlating that information flow with behavior.

“This study lays the groundwork for understanding one of the most important, yet most elusive, components of brain function, namely our ability to selectively attend to one thing out of several, as well as switch attention,” Sur says.

Using virally mediated circuit-tracing techniques pioneered by co-author Ian Wickersham, principal research scientist in brain and cognitive sciences and the McGovern Institute for Brain Research at MIT, the team found distinct sources of input for the ACC and the LP. Generally speaking, the detailed study finds that the majority of inputs to the ACC were from frontal cortex areas that typically govern goal-directed planning, and from higher visual areas. The bulk of inputs to the LP, meanwhile, were from deeper regions capable of providing context such as the mouse’s needs, location and spatial cues, information about movement, and general information from a mix of senses.

So even though focusing attention might seem like a matter of controlling the senses, Leow says, the circuit pulls in a lot of other information as well.

“We’re seeing that it’s not just sensory — there are so many inputs that are coming from non-sensory areas as well, both sub-cortically and cortically,” she says. “It seems to be integrating a lot of different aspects that might relate to the behavioral state of the animal at a given time. It provides a way to provide a lot of internal and spatial context for that sensory information.”

Given the distinct sets of inputs to each region, the ACC may be tasked with focusing attention on a desired object, while the LP modulates how the ACC goes about making those computations, accounting for what’s going on both inside and outside the animal. Decoding just what that incoming contextual information is, and what the LP tells the ACC, are the key next steps, Leow says. Another clear question the study raises is what the circuit’s outputs are: after it integrates all this information, what does it do with it?

The paper’s other authors are Heather Sullivan and Alexandria Barlowe.

A National Science Scholarship, the National Institutes of Health, and the JPB Foundation provided support for the study.

Approaching human cognition from many angles

In January, as the Charles River was starting to freeze over, Keith Murray and the other members of MIT’s men’s heavyweight crew team took to erging on the indoor rowing machine. For 80 minutes at a time, Murray endured one of the most grueling workouts of his college experience. To distract himself from the pain, he would talk with his teammates, covering everything from great philosophical ideas to personal coffee preferences.

For Murray, virtually any conversation is an opportunity to explore how people think and why they think in certain ways. Currently a senior double majoring in computation and cognition, and linguistics and philosophy, Murray tries to understand the human experience based on knowledge from all of these fields.

“I’m trying to blend different approaches together to understand the complexities of human cognition,” he says. “For example, from a physiological perspective, the brain is just billions of neurons firing all at once, but this hardly scratches the surface of cognition.”

Murray grew up in Corydon, Indiana, where he attended the Indiana Academy for Science, Mathematics, and Humanities during his junior year of high school. He was exposed to philosophy there, learning the ideas of Plato, Socrates, and Thomas Aquinas, to name a few. When looking at colleges, Murray became interested in MIT because he wanted to learn about human thought processes from different perspectives. “Coming to MIT, I knew I wanted to do something philosophical. But I wanted to also be on the more technical side of things,” he says.

Once on campus, Murray immediately pursued an opportunity through the Undergraduate Research Opportunity Program (UROP) in the Digital Humanities Lab. There he worked with language-processing technology to analyze gendered language in various novels, with the end goal of displaying the data for an online audience. He learned the basic mathematical models used to analyze data and present it online, and to study the social implications of linguistic phrases and expressions.

Murray also joined the Concourse learning community, which brought together different perspectives from the humanities, sciences, and math in a weekly seminar. “I was exposed to some excellent examples of how to do interdisciplinary work,” he recalls.

In the summer before his sophomore year, Murray took a position as a researcher in the Harnett Lab, where instead of working with novels, he was working with mice. Alongside postdoc Lucas Fisher, Murray trained mice to do navigational tasks using virtual reality equipment. His goal was to explore neural encoding in navigation, understanding why the mice behaved in certain ways after being shown certain stimuli on the screens. Spending time in the lab, Murray became increasingly interested in neuroscience and the biological components behind human thought processes.

He sought out other neuroscience-related research experiences, which led him to explore a SuperUROP project in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Working under Professor Nancy Lynch, he designed theoretical models of the retina using machine learning. Murray was excited to apply the techniques he learned in 9.40 (Introduction to Neural Computation) to address complex neurological problems. He considers this one of his most challenging research experiences, since the work was conducted entirely online.

“It was during the pandemic, so I had to learn a lot on my own; I couldn’t exactly do research in a lab. It was a big challenge, but at the end, I learned a lot and ended up getting a publication out of it,” he reflects.

This past semester, Murray has worked in the lab of Professor Ila Fiete in the McGovern Institute for Brain Research, constructing deep-learning models of animals performing navigational tasks. Through this UROP, which builds on his final project from Fiete’s class 9.49 (Neural Circuits for Cognition), Murray has been working to incorporate existing theoretical models of the hippocampus to investigate the intersection between artificial intelligence and neuroscience.

Reflecting on his varied research experiences, Murray says they have shown him new ways to explore the human brain from multiple perspectives, something he finds helpful as he tries to understand the complexity of human behavior.

Outside of his academic pursuits, Murray has continued to row with the crew team, where he walked on his first year. He sees rowing as a way to build up his strength, both physically and mentally. “When I’m doing my class work or I’m thinking about projects, I am using the same mental toughness that I developed during rowing,” he says. “That’s something I learned at MIT, to cultivate the dedication you put toward something. It’s all the same mental toughness whether you apply it to physical activities like rowing, or research projects.”

Looking ahead, Murray hopes to pursue a PhD in neuroscience, looking to find ways to incorporate his love of philosophy and human thought into his cognitive research. “I think there’s a lot more to do with neuroscience, especially with artificial intelligence. There are so many new technological developments happening right now,” he says.

Aging Brain Initiative awards fund five new ideas to study, fight neurodegeneration

Neurodegenerative diseases are defined by an increasingly widespread and debilitating death of nervous system cells, but they also share other grim characteristics: Their cause is rarely discernible and they have all eluded cures. To spur fresh, promising approaches and to encourage new experts and expertise to join the field, MIT’s Aging Brain Initiative (ABI) this month awarded five seed grants after a competition among labs across the Institute.

Founded in 2015 by nine MIT faculty members, the ABI promotes research, symposia, and related activities to advance fundamental insights that can lead to clinical progress against neurodegenerative conditions, such as Alzheimer’s disease, with an age-related onset. With an emphasis on spurring research at an early stage before it is established enough to earn more traditional funding, the ABI derives support from philanthropic gifts.

“Solving the mysteries of how health declines in the aging brain and turning that knowledge into effective tools, treatments, and technologies is of the utmost urgency given the millions of people around the world who suffer with no meaningful treatment options,” says ABI director and co-founder Li-Huei Tsai, the Picower Professor of Neuroscience in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences. “We were very pleased that many groups across MIT were eager to contribute their expertise and creativity to that goal. From here, five teams will be able to begin testing their innovative ideas and the impact they could have.”

To address the clinical challenge of accurately assessing cognitive decline during Alzheimer’s disease progression and healthy aging, a team led by Thomas Heldt, associate professor of electrical and biomedical engineering in the Department of Electrical Engineering and Computer Science (EECS) and the Institute for Medical Engineering and Science, proposes to use artificial intelligence tools to bring diagnostics based on eye movements during cognitive tasks to everyday consumer electronics such as smartphones and tablets. By moving these capabilities to common at-home platforms, the team, which also includes EECS Associate Professor Vivian Sze, hopes to increase monitoring beyond what can only be intermittently achieved with high-end specialized equipment and dedicated staffing in specialists’ offices. The team will pilot their technology in a small study at Boston Medical Center in collaboration with neurosurgeon James Holsapple.

Institute Professor Ann Graybiel’s lab in the Department of Brain and Cognitive Sciences (BCS) and the McGovern Institute for Brain Research will test the hypothesis that mutations on a specific gene may lead to the early emergence of Alzheimer’s disease (AD) pathology in the striatum. That’s a brain region crucial for motivation and movement that is directly and severely impacted by other neurodegenerative disorders, including Parkinson’s and Huntington’s diseases, but that has largely been unstudied in Alzheimer’s. By editing the mutations into normal and AD-modeling mice, Research Scientist Ayano Matsushima and Graybiel hope to determine whether and how pathology, such as the accumulation of amyloid proteins, may result. Determining that could provide new insight into the progression of disease and introduce a new biomarker in a region that virtually all other studies have overlooked.

Numerous recent studies have highlighted a potential role for immune inflammation in Alzheimer’s disease. A team led by Gloria Choi, the Mark Hyman Jr. Associate Professor in BCS and The Picower Institute for Learning and Memory, will track one potential source of such activity by determining whether the meninges, the membranes that envelop the brain, become a conduit for immune cells activated by gut bacteria to circulate near the brain, where they may release signaling molecules that promote Alzheimer’s pathology. Working in mice, Choi’s lab will test whether such activity increases in Alzheimer’s and whether it contributes to disease.

A collaboration led by Peter Dedon, the Singapore Professor in MIT’s Department of Biological Engineering, will explore whether Alzheimer’s pathology is driven by dysregulation of transfer RNAs (tRNAs) and the dozens of natural tRNA modifications in the epitranscriptome, which play a key role in the process by which proteins are assembled based on genetic instructions. With Benjamin Wolozin of Boston University, Sherif Rashad of Tohoku University in Japan, and Thomas Begley of the State University of New York at Albany, Dedon will assess how the tRNA pool and epitranscriptome may differ in Alzheimer’s model mice and whether genetic instructions mistranslated because of tRNA dysregulation play a role in Alzheimer’s disease.

With her seed grant, Ritu Raman, the d’Arbeloff Assistant Professor of Mechanical Engineering, is launching an investigation of possible disruption of intercellular messages in amyotrophic lateral sclerosis (ALS), a terminal condition in which the death of motor neurons causes loss of muscle control. Equipped with a new tool to finely sample interstitial fluid within tissues, Raman’s team will be able to monitor and compare cell-cell signaling in models of the junction between nerve and muscle. These models will be engineered from stem cells derived from patients with ALS. By studying biochemical signaling at the junction, the lab hopes to discover new targets that could be therapeutically modified.

Major support for the seed grants, which provide each lab with $100,000, came from generous gifts by David Emmes SM ’76; Kathleen SM ’77, PhD ’86 and Miguel Octavio; the Estate of Margaret A. Ridge-Pappis, wife of the late James Pappis ScD ’59; the Marc Haas Foundation; and the family of former MIT President Paul Gray ’54, SM ’55, ScD ’60, with additional funding from many annual fund donors to the Aging Brain Initiative Fund.

Study finds neurons that encode the outcomes of actions

When we make complex decisions, we have to take many factors into account. Some choices have a high payoff but carry potential risks; others are lower risk but may have a lower reward associated with them.

A new study from MIT sheds light on the part of the brain that helps us make these types of decisions. The research team found a group of neurons in the brain’s striatum that encodes information about the potential outcomes of different decisions. These cells become particularly active when a behavior leads to a different outcome than what was expected, which the researchers believe helps the brain adapt to changing circumstances.

“A lot of this brain activity deals with surprising outcomes, because if an outcome is expected, there’s really nothing to be learned. What we see is that there’s a strong encoding of both unexpected rewards and unexpected negative outcomes,” says Bernard Bloem, a former MIT postdoc and one of the lead authors of the new study.

Impairments in this kind of decision-making are a hallmark of many neuropsychiatric disorders, especially anxiety and depression. The new findings suggest that slight disturbances in the activity of these striatal neurons could swing the brain into making impulsive decisions or becoming paralyzed with indecision, the researchers say.

Rafiq Huda, a former MIT postdoc, is also a lead author of the paper, which appears in Nature Communications. Ann Graybiel, an MIT Institute Professor and member of MIT’s McGovern Institute for Brain Research, is the senior author of the study.

Learning from experience

The striatum, located deep within the brain, is known to play a key role in making decisions that require evaluating outcomes of a particular action. In this study, the researchers wanted to learn more about the neural basis of how the brain makes cost-benefit decisions, in which a behavior can have a mixture of positive and negative outcomes.

Striosomes (red) appear and then disappear as the view moves deeper into the striatum. Video courtesy of the researchers

To study this kind of decision-making, the researchers trained mice to spin a wheel to the left or the right. With each turn, they would receive a combination of reward (sugary water) and negative outcome (a small puff of air). As the mice performed the task, they learned to maximize the delivery of rewards and to minimize the delivery of air puffs. However, over hundreds of trials, the researchers frequently changed the probabilities of getting the reward or the puff of air, so the mice would need to adjust their behavior.

As the mice learned to make these adjustments, the researchers recorded the activity of neurons in the striatum. They had expected to find neuronal activity that reflects which actions are good and need to be repeated, or bad and need to be avoided. While some neurons did this, the researchers also found, to their surprise, that many neurons encoded details about the relationship between the actions and both types of outcomes.

The researchers found that these neurons responded more strongly when a behavior resulted in an unexpected outcome, that is, when turning the wheel in one direction produced the opposite outcome as it had in previous trials. These “error signals” for reward and penalty seem to help the brain figure out that it’s time to change tactics.
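These “error signals” are reminiscent of the prediction-error term in textbook reinforcement-learning models. As a hedged illustration only (a standard delta rule, not the study’s own analysis), the sketch below shows why the signal is largest precisely when an outcome is unexpected:

```python
def update_value(value, outcome, lr=0.1):
    """Delta-rule update: prediction error = outcome - expectation.

    The error term is largest when the outcome deviates most from what
    was expected -- the same regime in which the striosomal neurons in
    the study fired most strongly. (Illustrative model, not the paper's.)
    """
    error = outcome - value
    return value + lr * error, error

# The reward probabilities suddenly reverse mid-session, as in the task:
value = 0.9                      # learned expectation of reward for this action
value, err = update_value(value, outcome=0.0)   # reward unexpectedly absent
print(round(err, 2))             # -0.9: a large negative surprise
print(round(value, 2))           # 0.81: expectation revised downward
```

An expected outcome (value close to the outcome) produces an error near zero, matching the quote above: if an outcome is expected, there is nothing to learn.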

Most of the neurons that encode these error signals are found in the striosomes — clusters of neurons located in the striatum. Previous work has shown that striosomes send information to many other parts of the brain, including dopamine-producing regions and regions involved in planning movement.

“The striosomes seem to mostly keep track of what the actual outcomes are,” Bloem says. “The decision whether to do an action or not, which essentially requires integrating multiple outcomes, probably happens somewhere downstream in the brain.”

Making judgments

The findings could be relevant not only to mice learning a task, but also to many decisions that people have to make every day as they weigh the risks and benefits of each choice. Eating a big bowl of ice cream after dinner leads to immediate gratification, but it might contribute to weight gain or poor health. Deciding to have carrots instead will make you feel healthier, but you’ll miss out on the enjoyment of the sweet treat.

“From a value perspective, these can be considered equally good,” Bloem says. “What we find is that the striatum also knows why these are good, and it knows what are the benefits and the cost of each. In a way, the activity there reflects much more about the potential outcome than just how likely you are to choose it.”

This type of complex decision-making is often impaired in people with a variety of neuropsychiatric disorders, including anxiety, depression, schizophrenia, obsessive-compulsive disorder, and posttraumatic stress disorder. Drug abuse can also lead to impaired judgment and impulsivity.

“You can imagine that if things are set up this way, it wouldn’t be all that difficult to get mixed up about what is good and what is bad, because there are some neurons that fire when an outcome is good and they also fire when the outcome is bad,” Graybiel says. “Our ability to make our movements or our thoughts in what we call a normal way depends on those distinctions, and if they get blurred, it’s real trouble.”

The new findings suggest that behavioral therapy targeting the stage at which information about potential outcomes is encoded in the brain may help people who suffer from those disorders, the researchers say.

The research was funded by the National Institutes of Health/National Institute of Mental Health, the Saks Kavanaugh Foundation, the William N. and Bernice E. Bumpus Foundation, the Simons Foundation, the Nancy Lurie Marks Family Foundation, the National Eye Institute, the National Institute of Neurological Disorders and Stroke, the National Science Foundation, the Simons Foundation Autism Research Initiative, and JSPS KAKENHI.

Setting carbon management in stone

Keeping global temperatures within limits deemed safe by the Intergovernmental Panel on Climate Change means doing more than slashing carbon emissions. It means reversing them.

“If we want to be anywhere near those limits [of 1.5 or 2 C], then we have to be carbon neutral by 2050, and then carbon negative after that,” says Matěj Peč, a geoscientist and the Victor P. Starr Career Development Assistant Professor in the Department of Earth, Atmospheric, and Planetary Sciences (EAPS).

Going negative will require finding ways to radically increase the world’s capacity to capture carbon from the atmosphere and put it somewhere it will not leak back out. Carbon capture and storage projects already suck in tens of millions of metric tons of carbon each year. But putting a dent in emissions will mean capturing many billions of metric tons more. Today, people emit around 40 billion metric tons of carbon dioxide each year globally, mainly by burning fossil fuels.
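A back-of-envelope comparison makes the scale of that gap concrete. The figures below are order-of-magnitude values taken from the paragraph above, with “tens of millions” rounded to roughly 40 million tonnes purely for illustration:

```python
# Order-of-magnitude figures (assumption: "tens of millions" captured
# per year is taken as ~40 million tonnes for a round comparison):
captured_per_year = 40e6    # metric tons of CO2 captured and stored today
emitted_per_year = 40e9     # ~40 billion metric tons of CO2 emitted annually

gap = emitted_per_year / captured_per_year
print(f"capture capacity would need to scale up roughly {gap:,.0f}x")
```

In other words, capture capacity would need to grow by about three orders of magnitude just to offset current emissions, before going net negative at all.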

Because of the need for new ideas when it comes to carbon storage, Peč has created a proposal for the MIT Climate Grand Challenges competition — a bold and sweeping effort by the Institute to support paradigm-shifting research and innovation to address the climate crisis. Called the Advanced Carbon Mineralization Initiative, his team’s proposal aims to bring geologists, chemists, and biologists together to make permanently storing carbon underground workable under different geological conditions. That means finding ways to speed up the process by which carbon pumped underground is turned into rock, or mineralized.

“That’s what the geology has to offer,” says Peč, who is a lead on the project, along with Ed Boyden, the Y. Eva Tan Professor of Neurotechnology and Howard Hughes Medical Institute investigator at the McGovern Institute for Brain Research, and Yogesh Surendranath, the Paul M. Cook Career Development Associate Professor of Chemistry. “You look for the places where you can safely and permanently store these huge volumes of CO2.”

Peč’s proposal is one of 27 finalists selected from a pool of almost 100 Climate Grand Challenge proposals submitted by collaborators from across the Institute. Each finalist team received $100,000 to further develop their research proposals. A subset of finalists will be announced in April, making up a portfolio of multiyear “flagship” projects receiving additional funding and support.

Building industries capable of going carbon negative presents huge technological, economic, environmental, and political challenges. For one, it’s expensive and energy-intensive to capture carbon from the air with existing technologies, which are “hellishly complicated,” says Peč. Much of the carbon capture underway today focuses on more concentrated sources like coal- or gas-burning power plants.

It’s also difficult to find geologically suitable sites for storage. To keep it in the ground after it has been captured, carbon must either be trapped in airtight reservoirs or turned to stone.

One of the best places for carbon capture and storage (CCS) is Iceland, where a number of CCS projects are up and running. The island’s volcanic geology helps speed up the mineralization process, as carbon pumped underground interacts with basalt rock at high temperatures. In that ideal setting, says Peč, 95 percent of carbon injected underground is mineralized after just two years — a geological flash.
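For a sense of how fast that is, assume mineralization follows simple first-order (exponential) decay. That assumption is purely illustrative, not a model stated by Peč, but under it the 95-percent-in-two-years figure implies a half-life of under six months for injected CO2:

```python
import math

# Illustrative assumption: first-order kinetics for mineralization.
# Reported figure: 95% of injected CO2 mineralized after 2 years.
fraction_remaining = 0.05
years = 2.0

k = -math.log(fraction_remaining) / years   # implied rate constant, per year
half_life = math.log(2) / k                 # years until half is mineralized

print(f"k ≈ {k:.2f} per year")                    # ≈ 1.50 per year
print(f"half-life ≈ {half_life * 12:.1f} months") # ≈ 5.6 months
```

Against geological timescales, where natural weathering mineralizes carbon over thousands of years, a half-life measured in months is indeed “a geological flash.”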

But Iceland’s geology is unusual. Elsewhere, reaching suitable rocks at suitable temperatures requires deeper drilling, which adds cost to already expensive projects. Further, says Peč, there is not yet a complete understanding of how different factors influence the speed of mineralization.

Peč’s Climate Grand Challenge proposal would study how carbon mineralizes under different conditions, as well as explore ways to make mineralization happen more rapidly by mixing the carbon dioxide with different fluids before injecting it underground. Another idea — and the reason why there are biologists on the team — is to learn from various organisms adept at turning carbon into calcite shells, the same stuff that makes up limestone.

Two other carbon management proposals, led by EAPS Cecil and Ida Green Professor Bradford Hager, were also selected as Climate Grand Challenge finalists. They focus both on the technologies necessary for capturing and storing gigatons of carbon and on the logistical challenges involved in such an enormous undertaking.

That involves everything from choosing suitable storage sites to navigating regulatory and environmental issues to bringing disparate technologies together to improve the whole pipeline. The proposals emphasize CCS systems that can be powered by renewable sources and can respond dynamically to the needs of different hard-to-decarbonize industries, like concrete and steel production.

“We need to have an industry that is on the scale of the current oil industry that will not be doing anything but pumping CO2 into storage reservoirs,” says Peč.

For a problem that involves capturing enormous amounts of gases from the atmosphere and storing them underground, it’s no surprise EAPS researchers are so involved. The Earth sciences have “everything” to offer, says Peč, including the good news that the Earth has more than enough places where carbon might be stored.

“Basically, the Earth is really, really large,” says Peč. “The reasonably accessible places, which are close to the continents, store somewhere on the order of tens of thousands to hundreds of thousands of gigatons of carbon. That’s orders of magnitude more than we need to put back in.”

Q&A: Climate Grand Challenges finalists on accelerating reductions in global greenhouse gas emissions

This is the second article in a four-part interview series highlighting the work of the 27 MIT Climate Grand Challenges finalists, which received a total of $2.7 million in startup funding to advance their projects. In April, the Institute will name a subset of the finalists as multiyear flagship projects.

Last month, the Intergovernmental Panel on Climate Change (IPCC), an expert body of the United Nations representing 195 governments, released its latest scientific report on the growing threats posed by climate change, and called for drastic reductions in greenhouse gas emissions to avert the most catastrophic outcomes for humanity and natural ecosystems.

Bringing the global economy to net-zero carbon dioxide emissions by midcentury is complex and demands new ideas and novel approaches. The first-ever MIT Climate Grand Challenges competition focuses on four problem areas including removing greenhouse gases from the atmosphere and identifying effective, economic solutions for managing and storing these gases. The other Climate Grand Challenges research themes address using data and science to forecast climate-related risk, decarbonizing complex industries and processes, and building equity and fairness into climate solutions.

In the following conversations prepared for MIT News, faculty from three of the teams working to solve “Removing, managing, and storing greenhouse gases” explain how they are drawing upon geological, biological, chemical, and oceanic processes to develop game-changing techniques for carbon removal, management, and storage. Their responses have been edited for length and clarity.

Directed evolution of biological carbon fixation

Agricultural demand is estimated to increase by 50 percent in the coming decades, while climate change is simultaneously projected to drastically reduce crop yield and predictability, requiring a dramatic acceleration of land clearing. Without immediate intervention, this will have dire impacts on wild habitat, threaten the livelihoods of hundreds of millions of subsistence farmers, and create hundreds of gigatons of new emissions. Matthew Shoulders, associate professor in the Department of Chemistry, talks about the working group he is leading in partnership with Ed Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT and an investigator at the Howard Hughes Medical Institute and the McGovern Institute for Brain Research, that aims to massively reduce carbon emissions from agriculture by relieving core biochemical bottlenecks in the photosynthetic process using the most sophisticated synthetic biology available to science.

Q: Describe the two pathways you have identified for improving agricultural productivity and climate resiliency.

A: First, cyanobacteria grow millions of times faster than plants and dozens of times faster than microalgae. Engineering these cyanobacteria as a source of key food products using synthetic biology will enable food production using less land, in a fundamentally more climate-resilient manner. Second, carbon fixation, or the process by which carbon dioxide is incorporated into organic compounds, is the rate-limiting step of photosynthesis and becomes even less efficient under rising temperatures. Enhancements to Rubisco, the enzyme mediating this central process, will both improve crop yields and provide climate resilience to crops needed by 2050. Our team, led by Robbie Wilson and Max Schubert, has created new directed evolution methods tailored for both strategies, and we have already uncovered promising early results. Applying directed evolution to photosynthesis, carbon fixation, and food production has the potential to usher in a second green revolution.

Q: What partners will you need to accelerate the development of your solutions?

A: We have already partnered with leading agriculture institutes with deep experience in plant transformation and field trial capacity, enabling the integration of our improved carbon-dioxide-fixing enzymes into a wide range of crop plants. At the deployment stage, we will be positioned to partner with multiple industry groups to achieve improved agriculture at scale. Partnerships with major seed companies around the world will be key to leverage distribution channels in manufacturing supply chains and networks of farmers, agronomists, and licensed retailers. Support from local governments will also be critical where subsidies for seeds are necessary for farmers to earn a living, such as smallholder and subsistence farming communities. Additionally, our research provides an accessible platform that is capable of enabling and enhancing carbon dioxide sequestration in diverse organisms, extending our sphere of partnership to a wide range of companies interested in industrial microbial applications, including algal and cyanobacterial, and in carbon capture and storage.

Strategies to reduce atmospheric methane

One of the most potent greenhouse gases, methane is emitted by a range of human activities and natural processes that include agriculture and waste management, fossil fuel production, and changing land use practices — with no single dominant source. Together with a diverse group of faculty and researchers from the schools of Humanities, Arts, and Social Sciences; Architecture and Planning; Engineering; and Science; plus the MIT Schwarzman College of Computing, Desiree Plata, associate professor in the Department of Civil and Environmental Engineering, is spearheading the MIT Methane Network, an integrated approach to formulating scalable new technologies, business models, and policy solutions for driving down levels of atmospheric methane.

Q: What is the problem you are trying to solve and why is it a “grand challenge”?

A: Removing methane from the atmosphere, or stopping it from getting there in the first place, could change the rates of global warming in our lifetimes, saving as much as half a degree of warming by 2050. Methane sources are distributed in space and time and tend to be very dilute, making the removal of methane a challenge that pushes the boundaries of contemporary science and engineering capabilities. Because the primary sources of atmospheric methane are linked to our economy and culture — from clearing wetlands for cultivation to natural gas extraction and dairy and meat production — the social and economic implications of a fundamentally changed methane management system are far-reaching. Nevertheless, these problems are tractable and could significantly reduce the effects of climate change in the near term.

Q: What is known about the rapid rise in atmospheric methane and what questions remain unanswered?

A: Tracking atmospheric methane is a challenge in and of itself, but it has become clear that emissions are large, accelerated by human activity, and cause damage right away. While some progress has been made in satellite-based measurements of methane emissions, there is a need to translate that data into actionable solutions. Several key questions remain around improving sensor accuracy and sensor network design to optimize placement, improve response time, and stop leaks with autonomous controls on the ground. Additional questions involve deploying low-level methane oxidation systems and novel catalytic materials at coal mines, dairy barns, and other enriched sources; evaluating the policy strategies and the socioeconomic impacts of new technologies with an eye toward decarbonization pathways; and scaling technology with viable business models that stimulate the economy while reducing greenhouse gas emissions.

Deploying versatile carbon capture technologies and storage at scale

There is growing consensus that simply capturing current carbon dioxide emissions is no longer sufficient — it is equally important to target distributed sources such as the oceans and air where carbon dioxide has accumulated from past emissions. Betar Gallant, the American Bureau of Shipping Career Development Associate Professor of Mechanical Engineering, discusses her work with Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in the Department of Earth, Atmospheric and Planetary Sciences, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering and director of the School of Chemical Engineering Practice, to dramatically advance the portfolio of technologies available for carbon capture and permanent storage at scale. (A team led by Assistant Professor Matěj Peč of EAPS is also addressing carbon capture and storage.)

Q: Carbon capture and storage processes have been around for several decades. What advances are you seeking to make through this project?

A: Today’s capture paradigms are costly, inefficient, and complex. We seek to address this challenge by developing a new generation of capture technologies that operate using renewable energy inputs, are sufficiently versatile to accommodate emerging industrial demands, are adaptive and responsive to varied societal needs, and can be readily deployed to a wider landscape.

New approaches will require the redesign of the entire capture process, necessitating basic science and engineering efforts that are broadly interdisciplinary in nature. At the same time, incumbent technologies have been optimized largely for integration with coal- or natural gas-burning power plants. Future applications must shift away from legacy emitters in the power sector toward hard-to-mitigate sectors such as cement, iron and steel, chemical, and hydrogen production. It will become equally important to develop and optimize systems targeted for much lower concentrations of carbon dioxide, such as in oceans or air. Our effort will expand basic science studies as well as studies of the human impacts of storage, including how public engagement and education can alter attitudes toward greater acceptance of carbon dioxide geologic storage.

Q: What are the expected impacts of your proposed solution, both positive and negative?

A: Renewable energy cannot be deployed rapidly enough everywhere, nor can it supplant all emissions sources, nor can it account for past emissions. Carbon capture and storage (CCS) provides a demonstrated method to address emissions that will undoubtedly occur before the transition to low-carbon energy is completed. CCS can succeed even if other strategies fail. It also allows developing nations, which may need to adopt renewables over longer timescales, to see equitable economic development while avoiding the most harmful climate impacts. And CCS enables the future viability of many core industries and transportation modes, many of which do not have clear alternatives before 2050, let alone 2040 or 2030.

The perceived risks of potential leakage and earthquakes associated with geologic storage can be minimized by choosing suitable geologic formations for storage. Despite CCS providing a well-understood pathway for removing enough of the carbon dioxide already emitted into the atmosphere, some environmentalists vigorously oppose it, fearing that CCS rewards oil companies and disincentivizes the transition away from fossil fuels. We believe that it is more important to keep in mind the necessity of meeting key climate targets for the sake of the planet, and welcome those who can help.

New MRI probe can reveal more of the brain’s inner workings

Using a novel probe for functional magnetic resonance imaging (fMRI), MIT biological engineers have devised a way to monitor individual populations of neurons and reveal how they interact with each other.

Similar to how the gears of a clock interact in specific ways to turn the clock’s hands, different parts of the brain interact to perform a variety of tasks, such as generating behavior or interpreting the world around us. The new MRI probe could potentially allow scientists to map those networks of interactions.

“With regular fMRI, we see the action of all the gears at once. But with our new technique, we can pick up individual gears that are defined by their relationship to the other gears, and that’s critical for building up a picture of the mechanism of the brain,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering.

Using this technique, which involves genetically targeting the MRI probe to specific populations of cells in animal models, the researchers were able to identify neural populations involved in a circuit that responds to rewarding stimuli. The new MRI probe could also enable studies of many other brain circuits, the researchers say.

Jasanoff, who is also an associate investigator at the McGovern Institute, is the senior author of the study, which appears today in Nature Neuroscience. The lead authors of the paper are recent MIT PhD recipient Souparno Ghosh and former MIT research scientist Nan Li.

Tracing connections

Traditional fMRI measures changes in blood flow in the brain as a proxy for neural activity. When neurons receive signals from other neurons, calcium flows into the cell, triggering the release of a diffusible gas called nitric oxide. Nitric oxide acts in part as a vasodilator that increases blood flow to the area.

Imaging calcium directly can offer a more precise picture of brain activity, but that type of imaging usually requires fluorescent chemicals and invasive procedures. The MIT team wanted to develop a method that could work across the brain without that type of invasiveness.

“If we want to figure out how brain-wide networks of cells and brain-wide mechanisms function, we need something that can be detected deep in tissue and preferably across the entire brain at once,” Jasanoff says. “The way that we chose to do that in this study was to essentially hijack the molecular basis of fMRI itself.”

The researchers created a genetic probe, delivered by viruses, that codes for a protein that sends out a signal whenever the neuron is active. This protein, which the researchers called NOSTIC (nitric oxide synthase for targeting image contrast), is an engineered form of an enzyme called nitric oxide synthase. The NOSTIC protein can detect elevated calcium levels that arise during neural activity; it then generates nitric oxide, leading to an artificial fMRI signal that arises only from cells that contain NOSTIC.

The probe is delivered by a virus that is injected into a particular site, after which it travels along axons of neurons that connect to that site. That way, the researchers can label every neural population that feeds into a particular location.

“When we use this virus to deliver our probe in this way, it causes the probe to be expressed in the cells that provide input to the location where we put the virus,” Jasanoff says. “Then, by performing functional imaging of those cells, we can start to measure what makes input to that region take place, or what types of input arrive at that region.”

Turning the gears

In the new study, the researchers used their probe to label populations of neurons that project to the striatum, a region that is involved in planning movement and responding to reward. In rats, they were able to determine which neural populations send input to the striatum during or immediately following a rewarding stimulus — in this case, deep brain stimulation of the lateral hypothalamus, a brain center that is involved in appetite and motivation, among other functions.

One question that researchers have had about deep brain stimulation of the lateral hypothalamus is how wide-ranging the effects are. In this study, the MIT team showed that several neural populations, located in regions including the motor cortex and the entorhinal cortex, which is involved in memory, send input into the striatum following deep brain stimulation.

“It’s not simply input from the site of the deep brain stimulation or from the cells that carry dopamine. There are these other components, both distally and locally, that shape the response, and we can put our finger on them because of the use of this probe,” Jasanoff says.

During these experiments, neurons also generate regular fMRI signals, so in order to distinguish the signals that are coming specifically from the genetically altered neurons, the researchers perform each experiment twice: once with the probe on, and once following treatment with a drug that inhibits the probe. By measuring the difference in fMRI activity between these two conditions, they can determine how much activity is present in probe-containing cells specifically.
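The probe-on versus probe-inhibited comparison amounts to a simple subtraction. The sketch below is a minimal illustration of that logic, not the authors' analysis pipeline; the function and variable names are hypothetical, and a real analysis would average over repeated runs and account for noise:

```python
def probe_specific_signal(probe_on, probe_inhibited):
    """Estimate the fMRI signal attributable to NOSTIC-expressing cells
    by subtracting the probe-inhibited run from the probe-active run,
    timepoint by timepoint (hypothetical names, toy illustration)."""
    return [on - off for on, off in zip(probe_on, probe_inhibited)]

# Toy time series: total signal = background hemodynamics + probe contribution
background = [1.0, 1.2, 0.9, 1.1]
probe_component = [0.3, 0.5, 0.0, 0.2]
probe_on_run = [b + p for b, p in zip(background, probe_component)]

# Subtracting the drug-inhibited condition recovers the probe contribution
estimate = probe_specific_signal(probe_on_run, background)
```

The drug-inhibited run plays the role of `background` here: whatever survives inhibition is conventional fMRI signal, so the difference isolates activity in probe-containing cells.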

The researchers now hope to use this approach, which they call hemogenetics, to study other networks in the brain, beginning with an effort to identify some of the regions that receive input from the striatum following deep brain stimulation.

“One of the things that’s exciting about the approach that we’re introducing is that you can imagine applying the same tool at many sites in the brain and piecing together a network of interlocking gears, which consist of these input and output relationships,” Jasanoff says. “This can lead to a broad perspective on how the brain works as an integrated whole, at the level of neural populations.”

The research was funded by the National Institutes of Health and the MIT Simons Center for the Social Brain.

Singing in the brain

For the first time, MIT neuroscientists have identified a population of neurons in the human brain that lights up when we hear singing, but not other types of music.

These neurons, found in the auditory cortex, appear to respond to the specific combination of voice and music, but not to either regular speech or instrumental music. Exactly what they are doing is unknown and will require more work to uncover, the researchers say.

“The work provides evidence for relatively fine-grained segregation of function within the auditory cortex, in a way that aligns with an intuitive distinction within music,” says Sam Norman-Haignere, a former MIT postdoc who is now an assistant professor of neuroscience at the University of Rochester Medical Center.

The work builds on a 2015 study in which the same research team used functional magnetic resonance imaging (fMRI) to identify a population of neurons in the brain’s auditory cortex that responds specifically to music. In the new work, the researchers used recordings of electrical activity taken at the surface of the brain, which gave them much more precise information than fMRI.

“There’s one population of neurons that responds to singing, and then very nearby is another population of neurons that responds broadly to lots of music. At the scale of fMRI, they’re so close that you can’t disentangle them, but with intracranial recordings, we get additional resolution, and that’s what we believe allowed us to pick them apart,” says Norman-Haignere.

Norman-Haignere is the lead author of the study, which appears today in the journal Current Biology. Josh McDermott, an associate professor of brain and cognitive sciences, and Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience, both members of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds and Machines (CBMM), are the senior authors of the study.

Neural recordings

In their 2015 study, the researchers used fMRI to scan the brains of participants as they listened to a collection of 165 sounds, including different types of speech and music, as well as everyday sounds such as finger tapping or a dog barking. For that study, the researchers devised a novel method of analyzing the fMRI data, which allowed them to identify six neural populations with different response patterns, including the music-selective population and another population that responds selectively to speech.
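The core idea of that analysis — decomposing a sound-by-voxel response matrix into a small number of component response profiles — can be illustrated generically. The sketch below uses plain non-negative matrix factorization with multiplicative updates as a stand-in; it is not the study's actual statistical method, and the array names, dimensions, and toy data are hypothetical:

```python
import numpy as np

def factorize(responses, n_components, n_iter=300, seed=0):
    """Toy decomposition of a (sounds x voxels) response matrix into
    component response profiles and per-voxel weights, via Lee-Seung
    multiplicative-update NMF. A generic stand-in for the study's
    actual voxel decomposition method."""
    rng = np.random.default_rng(seed)
    n_sounds, n_voxels = responses.shape
    # How each component responds to each sound
    profiles = rng.random((n_sounds, n_components))
    # How strongly each voxel expresses each component
    weights = rng.random((n_components, n_voxels))
    eps = 1e-9  # avoid division by zero
    for _ in range(n_iter):
        weights *= (profiles.T @ responses) / (profiles.T @ profiles @ weights + eps)
        profiles *= (responses @ weights.T) / (profiles @ weights @ weights.T + eps)
    return profiles, weights

# Hypothetical data: 165 sounds x 50 voxels built from 2 ground-truth components
rng = np.random.default_rng(1)
data = rng.random((165, 2)) @ rng.random((2, 50))

profiles, weights = factorize(data, n_components=2)
recon_error = np.linalg.norm(data - profiles @ weights) / np.linalg.norm(data)
```

The point is only that distinct response profiles (say, a music-selective and a speech-selective component) can be recovered from mixed voxel responses; in the actual study the number and form of the components were inferred from the data rather than fixed in advance.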

In the new study, the researchers hoped to obtain higher-resolution data using a technique known as electrocorticography (ECoG), which allows electrical activity to be recorded by electrodes placed inside the skull. This offers a much more precise picture of electrical activity in the brain compared to fMRI, which measures blood flow in the brain as a proxy of neuron activity.

“With most of the methods in human cognitive neuroscience, you can’t see the neural representations,” Kanwisher says. “Most of the kind of data we can collect can tell us that here’s a piece of brain that does something, but that’s pretty limited. We want to know what’s represented in there.”

Electrocorticography typically cannot be performed in humans because it is an invasive procedure, but it is often used to monitor patients with epilepsy who are about to undergo surgery to treat their seizures. Patients are monitored over several days so that doctors can determine where their seizures originate before operating. During that time, if patients agree, they can participate in studies that involve measuring their brain activity while performing certain tasks. For this study, the MIT team was able to gather data from 15 participants over several years.

For those participants, the researchers played the same set of 165 sounds that they used in the earlier fMRI study. The location of each patient’s electrodes was determined by their surgeons, so some did not pick up any responses to auditory input, but many did. Using a novel statistical analysis that they developed, the researchers were able to infer the types of neural populations that produced the data that were recorded by each electrode.

“When we applied this method to this data set, this neural response pattern popped out that only responded to singing,” Norman-Haignere says. “This was a finding we really didn’t expect, so it very much justifies the whole point of the approach, which is to reveal potentially novel things you might not think to look for.”

That song-specific population of neurons had very weak responses to either speech or instrumental music, and therefore is distinct from the music- and speech-selective populations identified in their 2015 study.

Music in the brain

In the second part of their study, the researchers devised a mathematical method to combine the data from the intracranial recordings with the fMRI data from their 2015 study. Because fMRI can cover a much larger portion of the brain, this allowed them to determine more precisely the locations of the neural populations that respond to singing.

“This way of combining ECoG and fMRI is a significant methodological advance,” McDermott says. “A lot of people have been doing ECoG over the past 10 or 15 years, but it’s always been limited by this issue of the sparsity of the recordings. Sam is really the first person who figured out how to combine the improved resolution of the electrode recordings with fMRI data to get better localization of the overall responses.”

The song-specific hotspot that they found is located at the top of the temporal lobe, near regions that are selective for language and music. That location suggests that the song-specific population may be responding to features such as the perceived pitch, or the interaction between words and perceived pitch, before sending information to other parts of the brain for further processing, the researchers say.

The researchers now hope to learn more about what aspects of singing drive the responses of these neurons. They are also working with MIT Professor Rebecca Saxe’s lab to study whether infants have music-selective areas, in hopes of learning more about when and how these brain regions develop.

The research was funded by the National Institutes of Health, the U.S. Army Research Office, the National Science Foundation, the NSF Science and Technology Center for Brains, Minds, and Machines, the Fondazione Neurone, the Howard Hughes Medical Institute, and the Kristin R. Pressman and Jessica J. Pourian ’13 Fund at MIT.

On a mission to alleviate chronic pain

About 50 million Americans suffer from chronic pain, which interferes with their daily life, social interactions, and ability to work. MIT Professor Fan Wang wants to develop new ways to help relieve that pain, by studying and potentially modifying the brain’s own pain control mechanisms.

Her recent work has identified an “off switch” for pain, located in the brain’s amygdala. She hopes that finding ways to control this switch could lead to new treatments for chronic pain.

“Chronic pain is a major societal issue,” Wang says. “By studying pain-suppression neurons in the brain’s central amygdala, I hope to create a new therapeutic approach for alleviating pain.”

Wang, who joined the MIT faculty in January 2021, is also the leader of a new initiative at the McGovern Institute for Brain Research that is studying drug addiction, with the goal of developing more effective treatments for addiction.

“Opioid prescription for chronic pain is a major contributor to the opioid epidemic. With the Covid pandemic, I think addiction and overdose are becoming worse. People are more anxious, and they seek drugs to alleviate such mental pain,” Wang says. “As scientists, it’s our duty to tackle this problem.”

Sensory circuits

Wang, who grew up in Beijing, describes herself as “a nerdy child” who loved books and math. In high school, she took part in science competitions, then went on to study biology at Tsinghua University. She arrived in the United States in 1993 to begin her PhD at Columbia University. There, she worked on tracing the connection patterns of olfactory receptor neurons in the lab of Richard Axel, who later won the Nobel Prize for his discoveries of odorant receptors and how the olfactory system is organized.

After finishing her PhD, Wang decided to switch gears. As a postdoc at the University of California at San Francisco and then Stanford University, she began studying how the brain perceives touch.

In 2003, Wang joined the faculty at Duke University School of Medicine. There, she began developing techniques to study the brain circuits that underlie the sense of touch, tracing circuits that carry sensory information from the whiskers of mice to the brain. She also studied how the brain integrates movements of touch organs with signals of sensory stimuli to generate perception (such as using stretching movements to sense elasticity).

As she pursued her sensory perception studies, Wang became interested in studying pain perception, but she felt she needed to develop new techniques to tackle it. While at Duke, she invented a technique called CANE (capturing activated neural ensembles), which can identify networks of neurons that are activated by a particular stimulus.

Using this approach in mice, she identified neurons that become active in response to pain, but so many neurons across the brain were activated that it didn’t offer much useful information. As a way to indirectly get at how the brain controls pain, she decided to use CANE to explore the effects of drugs used for general anesthesia. During general anesthesia, drugs render a patient unconscious, but Wang hypothesized that the drugs might also shut off pain perception.

“At that time, it was just a wild idea,” Wang recalls. “I thought there may be other mechanisms — that instead of just a loss of consciousness, anesthetics may do something to the brain that actually turns pain off.”

Support for the existence of an “off switch” for pain came from the observation that wounded soldiers on a battlefield can continue to fight, essentially blocking out pain despite their injuries.

In a study of mice treated with anesthesia drugs, Wang discovered that the brain does have this kind of switch, in an unexpected location: the amygdala, which is involved in regulating emotion. She showed that this cluster of neurons can turn off pain when activated, and when it is suppressed, mice become highly sensitive to ordinary gentle touch.

“There’s a baseline level of activity that makes the animals feel normal, and when you activate these neurons, they’ll feel less pain. When you silence them, they’ll feel more pain,” Wang says.

Turning off pain

That finding, which Wang reported in 2020, raised the possibility of somehow modulating that switch in humans to try to treat chronic pain. This is a long-term goal of Wang’s, but more work is required to achieve it, she says. Currently her lab is working on analyzing the RNA expression patterns of the neurons in the cluster she identified. They also are measuring the neurons’ electrical activity and how they interact with other neurons in the brain, in hopes of identifying circuits that could be targeted to tamp down the perception of pain.

One way of modulating these circuits could be to use deep brain stimulation, which involves implanting electrodes in certain areas of the brain. Focused ultrasound, which is still in early stages of development and does not require surgery, could be a less invasive alternative.

Another approach Wang is interested in exploring is pairing brain stimulation with a context such as looking at a smartphone app. This kind of pairing could help train the brain to shut off pain using the app, without the need for the original stimulation (deep brain stimulation or ultrasound).

“Maybe you don’t need to constantly stimulate the brain. You may just need to reactivate it with a context,” Wang says. “After a while you would probably need to be restimulated, or reconditioned, but at least you have a longer window where you don’t need to go to the hospital for stimulation, and you just need to use a context.”

Wang, who was drawn to MIT in part by its focus on fostering interdisciplinary collaborations, is now working with several other McGovern Institute members who are taking different angles to try to figure out how the brain generates the state of craving that occurs in drug addiction, including opioid addiction.

“We’re going to focus on trying to understand this craving state: how it’s created in the brain and how can we sort of erase that trace in the brain, or at least control it. And then you can neuromodulate it in real time, for example, and give people a chance to get back their control,” she says.