Dendrites may help neurons perform complicated calculations

Within the human brain, neurons perform complex calculations on information they receive. Researchers at MIT have now demonstrated how dendrites — branch-like extensions that protrude from neurons — help to perform those computations.

The researchers found that within a single neuron, different types of dendrites receive input from distinct parts of the brain, and process it in different ways. These differences may help neurons to integrate a variety of inputs and generate an appropriate response, the researchers say.

In the neurons that the researchers examined in this study, it appears that this dendritic processing helps cells to take in visual information and combine it with motor feedback, in a circuit that is involved in navigation and planning movement.

“Our hypothesis is that these neurons have the ability to pick out specific features and landmarks in the visual environment, and combine them with information about running speed, where I’m going, and when I’m going to start, to move toward a goal position,” says Mark Harnett, an associate professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Mathieu Lafourcade, a former MIT postdoc, is the lead author of the paper, which appears today in Neuron.

Complex calculations

Any given neuron can have dozens of dendrites, which receive synaptic input from other neurons. Neuroscientists have hypothesized that these dendrites can act as compartments that perform their own computations on incoming information before sending the results to the body of the neuron, which integrates all these signals to generate an output.

Previous research has shown that dendrites can amplify incoming signals using specialized proteins called NMDA receptors. These are voltage-sensitive neurotransmitter receptors that are dependent on the activity of other receptors called AMPA receptors. When a dendrite receives many incoming signals through AMPA receptors at the same time, the threshold to activate nearby NMDA receptors is reached, creating an extra burst of current.

This phenomenon, known as supralinearity, is believed to help neurons distinguish between inputs that arrive close together or farther apart in time or space, Harnett says.
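
To make the distinction concrete, here is a minimal, purely illustrative sketch (not from the study) of how a threshold-style NMDA boost makes a dendrite respond disproportionately to clustered input; the threshold and boost values below are invented.

```python
# Toy model, not from the paper: linear vs. NMDA-style supralinear integration.
# All numbers are invented for illustration.

def linear_dendrite(ampa_inputs):
    """A linear integrator simply reports the summed input."""
    return sum(ampa_inputs)

def supralinear_dendrite(ampa_inputs, nmda_threshold=4.0, nmda_boost=3.0):
    """When coincident AMPA drive crosses a threshold, NMDA receptors open
    and add an extra burst of current, amplifying clustered input."""
    total = sum(ampa_inputs)
    if total >= nmda_threshold:
        total += nmda_boost
    return total

dispersed = [1.0, 1.0, 1.0]             # inputs arriving far apart in time or space
clustered = [1.0, 1.0, 1.0, 1.0, 1.0]   # many coincident inputs

for label, inputs in [("dispersed", dispersed), ("clustered", clustered)]:
    print(label, "linear:", linear_dendrite(inputs),
          "supralinear:", supralinear_dendrite(inputs))
# Only the supralinear dendrite treats clustered input differently from
# dispersed input, which is how it can distinguish the two patterns.
```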

In the new study, the MIT researchers wanted to determine whether different types of inputs are targeted specifically to different types of dendrites, and if so, how that would affect the computations performed by those neurons. They focused on a population of neurons called pyramidal cells, the principal output neurons of the cortex, which have several different types of dendrites. Basal dendrites extend below the body of the neuron, apical oblique dendrites extend from a trunk that travels up from the body, and tuft dendrites are located at the top of the trunk.

Harnett and his colleagues chose a part of the brain called the retrosplenial cortex (RSC) for their studies because it is a good model for association cortex — the type of brain cortex used for complex functions such as planning, communication, and social cognition. The RSC integrates information from many parts of the brain to guide navigation, and pyramidal neurons play a key role in that function.

In a study of mice, the researchers first showed that three different types of input come into pyramidal neurons of the RSC: from the visual cortex into basal dendrites, from the motor cortex into apical oblique dendrites, and from the lateral nuclei of the thalamus, a visual processing area, into tuft dendrites.

“Until now, there hasn’t been much mapping of what inputs are going to those dendrites,” Harnett says. “We found that there are some sophisticated wiring rules here, with different inputs going to different dendrites.”

A range of responses

The researchers then measured electrical activity in each of those compartments. They expected that NMDA receptors would show supralinear activity, because this behavior has been demonstrated before in dendrites of pyramidal neurons in both the primary sensory cortex and the hippocampus.

In the basal dendrites, the researchers saw just what they expected: Input coming from the visual cortex provoked supralinear electrical spikes, generated by NMDA receptors. However, just 50 microns away, in the apical oblique dendrites of the same cells, the researchers found no signs of supralinear activity. Instead, input to those dendrites drove a steady linear response. Those dendrites also have a much lower density of NMDA receptors.

“That was shocking, because no one’s ever reported that before,” Harnett says. “What that means is the apical obliques don’t care about the pattern of input. Inputs can be separated in time, or together in time, and it doesn’t matter. It’s just a linear integrator that’s telling the cell how much input it’s getting, without doing any computation on it.”

Those linear inputs likely represent information such as running speed or destination, Harnett says, while the visual information coming into the basal dendrites represents landmarks or other features of the environment. The supralinearity of the basal dendrites allows them to perform more sophisticated types of computation on that visual input, which the researchers hypothesize allows the RSC to flexibly adapt to changes in the visual environment.

In the tuft dendrites, which receive input from the thalamus, it appears that NMDA spikes can be generated, but not very easily. Like the apical oblique dendrites, the tuft dendrites have a low density of NMDA receptors. Harnett’s lab is now studying what happens in all of these different types of dendrites as mice perform navigation tasks.

The research was funded by a Boehringer Ingelheim Fonds PhD Fellowship, the National Institutes of Health, the James W. and Patricia T. Poitras Fund, the Klingenstein-Simons Fellowship Program, a Vallee Scholar Award, and a McKnight Scholar Award.

A new approach to curbing cocaine use

Cocaine, opioids, and other drugs of abuse disrupt the brain’s reward system, often shifting users’ priorities to obtaining more drug above all else. For people battling addiction, this persistent craving is notoriously difficult to overcome—but new research from scientists at MIT’s McGovern Institute and collaborators points toward a therapeutic strategy that could help.

Researchers in MIT Institute Professor Ann Graybiel’s lab and collaborators at the University of Copenhagen and Vanderbilt University report in a study published online January 25, 2022, in the journal Addiction Biology that activating a signaling molecule in the brain known as muscarinic receptor 4 (M4) causes rodents to reduce cocaine self-administration and simultaneously choose a food treat over cocaine.

M4 receptors are found on the surface of neurons in the brain, where they alter signaling in response to the neurotransmitter acetylcholine. They are plentiful in the striatum, a brain region that Graybiel’s lab has shown is deeply involved in habit formation. They are of interest to addiction researchers because, along with a related receptor called M1, which is also abundant in the striatum, they often seem to act in opposition to the neurotransmitter dopamine.

Drugs of abuse stimulate the brain’s habit circuits by allowing dopamine to build up in the brain. With chronic use, that circuitry can become less sensitive to dopamine, so experiences that were once rewarding become less pleasurable and users are driven to seek higher doses of their drug. Attempts to directly block the dopamine system have not been found to be an effective way of treating addiction and can have unpleasant or dangerous side-effects, so researchers are seeking an alternative strategy to restore balance within the brain’s reward circuitry. “Another way to tweak that system is to activate these muscarinic receptors,” explains Jill Crittenden, a research scientist in the Graybiel lab.

New pathways to treatment

At the University of Copenhagen, neuroscientist Morgane Thomsen has found that activating the M1 receptor causes rodents to choose a food treat over cocaine. In the new work, she showed that a drug that selectively activates the M4 receptor has a similar effect.

When rats that have been trained to self-administer cocaine are given an M4-activating compound, they immediately reduce their drug use, actively choosing food instead. Thomsen found that this effect grew stronger over a seven-day course of treatment, with cocaine use declining day by day. When the M4-activating treatment was stopped, rats quickly resumed their prior cocaine-seeking behavior.

While Thomsen’s experiments have now shown that animals’ cocaine use can be reduced by activating either M1 or M4, it’s clear that the two muscarinic receptors don’t modulate cocaine use in the same way. M1 activation works on a different time scale, taking some time to kick in, but leaving some lasting effects even after the treatment has been discontinued.

Experiments with genetically modified mice developed in Graybiel’s lab confirm that the two receptors influence drug-seeking behavior via different molecular pathways. Previously, the team discovered that activating M1 has no effect on cocaine-seeking in mice that lack a signaling molecule called CalDAG-GEFI. M4 activation, however, reduces cocaine consumption regardless of whether CalDAG-GEFI is present. “The CalDAG-GEFI is completely essential for the M1 effect to happen, but doesn’t appear to play any role in the M4 effect,” Thomsen says. “So that really separates the pathways. In both the behavior and the neurobiology, it’s two different ways that we can modulate the cocaine effects.” The findings suggest that activating M4 could help people with substance abuse disorders overcome their addiction, and that such a strategy might be even more effective if combined with activation of the M1 receptor.

Graybiel’s lab first became interested in CalDAG-GEFI in the late 1990s, when they discovered that it was unusually abundant in the main compartment of the brain’s striatum. Their research revealed the protein to be important for controlling movement and even uncovered an essential role in blood clotting—but CalDAG-GEFI’s impacts on behavior remained elusive for a long time. Graybiel says it’s gratifying that this long-standing interest has now shed light on a potential therapeutic strategy for substance abuse disorder. Her lab will continue investigating the molecular pathways that underlie addiction as part of the McGovern Institute’s new addiction initiative.

Five MIT faculty elected 2021 AAAS Fellows

Five MIT faculty members have been elected as fellows of the American Association for the Advancement of Science (AAAS).

The 2021 class of AAAS Fellows includes 564 scientists, engineers, and innovators spanning 24 scientific disciplines who are being recognized for their scientifically and socially distinguished achievements.

Mircea Dincă is the W. M. Keck Professor of Energy in the Department of Chemistry. His group’s research focuses on addressing challenges related to the storage and consumption of energy, and global environmental concerns. Central to these efforts are the synthesis of novel organic-inorganic hybrid materials and the manipulation of their electrochemical and photophysical properties, with a current emphasis on porous materials and extended one-dimensional van der Waals materials.

Guoping Feng is the James W. and Patricia T. Poitras Professor of Neuroscience in the Department of Brain and Cognitive Sciences, associate director of MIT’s McGovern Institute for Brain Research, director of Model Systems and Neurobiology at the Stanley Center for Psychiatric Research, and an institute member of the Broad Institute of MIT and Harvard. His research is devoted to understanding the development and function of synapses in the brain and how synaptic dysfunction may contribute to neurodevelopmental and psychiatric disorders. By understanding the molecular, cellular, and circuitry mechanisms of these disorders, Feng hopes his work will eventually lead to the development of new and effective treatments for the millions of people suffering from these devastating diseases.

David Shoemaker is a senior research scientist with the MIT Kavli Institute for Astrophysics and Space Research. His work is focused on gravitational-wave observation and includes developing technologies for the detectors (LIGO, LISA), developing proposals for new instruments (Cosmic Explorer), managing the teams to build them and the consortia which exploit the data (LIGO Scientific Collaboration, LISA Consortium), and supporting the overall growth of the field (Gravitational-Wave International Committee).

Ian Hunter is the Hatsopoulos Professor of Mechanical Engineering and runs the Bioinstrumentation Lab at MIT. His main areas of research are instrumentation, microrobotics, medical devices, and biomimetic materials. Over the years he and his students have developed many instruments and devices including: confocal laser microscopes, scanning tunneling electron microscopes, miniature mass spectrometers, new forms of Raman spectroscopy, needle-free drug delivery technologies, nano- and micro-robots, microsurgical robots, robotic endoscopes, high-performance Lorentz force motors, and microarray technologies for massively parallel chemical and biological assays.

Evelyn N. Wang is the Ford Professor of Engineering and head of the Department of Mechanical Engineering. Her research program combines fundamental studies of micro/nanoscale heat and mass transport processes with the development of novel engineered structures to create innovative solutions in thermal management, energy, and water harvesting systems. Her work in thermophotovoltaics was named to Technology Review’s lists of Biggest Clean Energy Advances in 2016 and Ten Breakthrough Technologies in 2017, and to the Department of Energy Frontiers Research Center’s Ten of Ten awards. Her work extracting water from air has won her the title of 2017 Foreign Policy’s Global ReThinker and the 2018 Eighth Prince Sultan bin Abdulaziz International Prize for Water.

The craving state

This story originally appeared in the Winter 2022 issue of BrainScan.

***

For people struggling with substance use disorders — and there are about 35 million of them worldwide — treatment options are limited. Even among those who seek help, relapse is common. In the United States, an epidemic of opioid addiction has been declared a public health emergency.

A 2019 survey found that 1.6 million people nationwide had an opioid use disorder, and the crisis has surged since the start of the COVID-19 pandemic. The Centers for Disease Control and Prevention estimates that more than 100,000 people died of drug overdose between April 2020 and April 2021 — nearly 30 percent more overdose deaths than occurred during the same period the previous year.

A deeper understanding of what addiction does to the brain and body is urgently needed to pave the way to interventions that reliably release affected individuals from its grip. At the McGovern Institute, researchers are turning their attention to addiction’s driving force: the deep, recurring craving that makes people prioritize drug use over all other wants and needs.

McGovern Institute co-founder Lore Harp McGovern.

“When you are in that state, then it seems nothing else matters,” says McGovern Investigator Fan Wang. “At that moment, you can discard everything: your relationship, your house, your job, everything. You only want the drug.”

With a new addiction initiative catalyzed by generous gifts from Institute co-founder Lore Harp McGovern and others, McGovern scientists with diverse expertise have come together to begin clarifying the neurobiology that underlies the craving state. They plan to dissect the neural transformations associated with craving at every level — from the drug-induced chemical changes that alter neuronal connections and activity to how these modifications impact signaling brain-wide. Ultimately, the McGovern team hopes not just to understand the craving state, but to find a way to relieve it — for good.

“If we can understand the craving state and correct it, or at least relieve a little bit of the pressure,” explains Wang, who will help lead the addiction initiative, “then maybe we can at least give people a chance to use their top-down control to not take the drug.”

The craving cycle

For individuals suffering from substance use disorders, craving fuels a cyclical pattern of escalating drug use. Following the euphoria induced by a drug like heroin or cocaine, depression sets in, accompanied by a drug craving motivated by the desire to relieve that suffering. And as addiction progresses, the peaks and valleys of this cycle dip lower: the pleasant feelings evoked by the drug become weaker, while the negative effects a person experiences in its absence worsen. The craving remains, and increasing amounts of the drug are required to relieve it.

By the time addiction sets in, the brain has been altered in ways that go beyond a drug’s immediate effects on neural signaling.

These insidious changes leave individuals susceptible to craving — and the vulnerable state endures. Long after the physical effects of withdrawal have subsided, people with substance use disorders can find their craving returns, triggered by exposure to a small amount of the drug, physical or social cues associated with previous drug use, or stress. So researchers will need to determine not only how different parts of the brain interact with one another during craving and how individual cells and the molecules within them are affected by the craving state — but also how things change as addiction develops and progresses.

Circuits, chemistry and connectivity

One clear starting point is the circuitry the brain uses to control motivation. Thanks in part to decades of research in the lab of McGovern Investigator Ann Graybiel, neuroscientists know a great deal about how these circuits learn which actions lead to pleasure and which lead to pain, and how they use that information to establish habits and evaluate the costs and benefits of complex decisions.

Graybiel’s work has shown that drugs of abuse strongly activate dopamine-responsive neurons in a part of the brain called the striatum, whose signals promote habit formation. By increasing the amount of dopamine that neurons release, these drugs motivate users to prioritize repeated drug use over other kinds of rewards, and to choose the drug in spite of pain or other negative effects. Her group continues to investigate the naturally occurring molecules that control these circuits, as well as how they are hijacked by drugs of abuse.

Distribution of opioid receptors targeted by morphine (shown in blue) in two regions in the dorsal striatum and nucleus accumbens of the mouse brain. Image: Ann Graybiel

In Fan Wang’s lab, work investigating the neural circuits that mediate the perception of physical pain has led her team to question the role of emotional pain in craving. As they investigated the source of pain sensations in the brain, they identified neurons in an emotion-regulating center called the central amygdala that appear to suppress physical pain in animals. Now, Wang wants to know whether it might be possible to modulate neurons involved in emotional pain to ameliorate the negative state that provokes drug craving.

These animal studies will be key to identifying the cellular and molecular changes that set the brain up for recurring cravings. And as McGovern scientists begin to investigate what happens in the brains of rodents that have been trained to self-administer addictive drugs like fentanyl or cocaine, they expect to encounter tremendous complexity.

McGovern Associate Investigator Polina Anikeeva, whose lab has pioneered new technologies that will help the team investigate the full spectrum of changes that underlie craving, says it will be important to consider impacts on the brain’s chemistry, firing patterns, and connectivity. To that end, multifunctional research probes developed in her lab will be critical to monitoring and manipulating neural circuits in animal models.

Imaging technology developed by investigator Ed Boyden will also enable nanoscale protein visualization brain-wide. An important goal will be to identify a neural signature of the craving state. With such a signal, researchers can begin to explore how to shut off that craving — possibly by directly modulating neural signaling.

Targeted treatments

“One of the reasons to study craving is because it’s a natural treatment point,” says McGovern Associate Investigator Alan Jasanoff. “And the dominant kind of approaches that people in our team think about are approaches that relate to neural circuits — to the specific connections between brain regions and how those could be changed.” The hope, he explains, is that it might be possible to identify a brain region whose activity is disrupted during the craving state, then use clinical brain stimulation methods to restore normal signaling — within that region, as well as in other connected parts of the brain.

To identify the right targets for such a treatment, it will be crucial to understand how the biology uncovered in laboratory animals reflects what happens in people with substance use disorders. Functional imaging in John Gabrieli’s lab can help bridge the gap between clinical and animal research by revealing patterns of brain activity associated with the craving state in both humans and rodents. A new technique developed in Jasanoff’s lab makes it possible to focus on the activity between specific regions of an animal’s brain. “By doing that, we hope to build up integrated models of how information passes around the brain in craving states, and of course also in control states where we’re not experiencing craving,” he explains.

In delving into the biology of the craving state, McGovern scientists are embarking on largely unexplored territory — and they do so with both optimism and urgency. “It’s hard to not appreciate just the size of the problem, and just how devastating addiction is,” says Anikeeva. “At this point, it just seems almost irresponsible to not work on it, especially when we do have the tools and we are interested in the general brain regions that are important for that problem. I would say that there’s almost a civic duty.”

Study finds a striking difference between neurons of humans and other mammals

McGovern Institute Investigator Mark Harnett. Photo: Justin Knight

Neurons communicate with each other via electrical impulses, which are produced by ion channels that control the flow of ions such as potassium and sodium. In a surprising new finding, MIT neuroscientists have shown that human neurons have a much smaller number of these channels than expected, compared to the neurons of other mammals.

The researchers hypothesize that this reduction in channel density may have helped the human brain evolve to operate more efficiently, allowing it to divert resources to other energy-intensive processes that are required to perform complex cognitive tasks.

“If the brain can save energy by reducing the density of ion channels, it can spend that energy on other neuronal or circuit processes,” says Mark Harnett, an associate professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Harnett and his colleagues analyzed neurons from 10 different mammals in the most extensive electrophysiological study of its kind, and identified a “building plan” that holds true for every species they looked at — except for humans. They found that as the size of neurons increases, the density of channels found in the neurons also increases.

However, human neurons proved to be a striking exception to this rule.

“Previous comparative studies established that the human brain is built like other mammalian brains, so we were surprised to find strong evidence that human neurons are special,” says former MIT graduate student Lou Beaulieu-Laroche.

Beaulieu-Laroche is the lead author of the study, which appears today in Nature.

A building plan

Neurons in the mammalian brain can receive electrical signals from thousands of other cells, and that input determines whether or not they will fire an electrical impulse called an action potential. In 2018, Harnett and Beaulieu-Laroche discovered that human and rat neurons differ in some of their electrical properties, primarily in parts of the neuron called dendrites — tree-like antennas that receive and process input from other cells.

One of the findings from that study was that human neurons had a lower density of ion channels than neurons in the rat brain. The researchers were surprised by this observation, as ion channel density was generally assumed to be constant across species. In their new study, Harnett and Beaulieu-Laroche decided to compare neurons from several different mammalian species to see if they could find any patterns that governed the expression of ion channels. They studied two types of voltage-gated potassium channels and the HCN channel, which conducts both potassium and sodium, in layer 5 pyramidal neurons, a type of excitatory neuron found in the brain’s cortex.

Former McGovern Institute graduate student Lou Beaulieu-Laroche is the lead author of the 2021 Nature paper.

They were able to obtain brain tissue from 10 mammalian species: Etruscan shrews (one of the smallest known mammals), gerbils, mice, rats, guinea pigs, ferrets, rabbits, marmosets, and macaques, as well as human tissue removed from patients with epilepsy during brain surgery. This variety allowed the researchers to cover a range of cortical thicknesses and neuron sizes across the mammalian kingdom.

The researchers found that in nearly every mammalian species they looked at, the density of ion channels increased as the size of the neurons went up. The one exception to this pattern was in human neurons, which had a much lower density of ion channels than expected.

The increase in channel density across species was surprising, Harnett says, because the more channels there are, the more energy is required to pump ions in and out of the cell. However, it started to make sense once the researchers began thinking about the number of channels in the overall volume of the cortex, he says.

In the tiny brain of the Etruscan shrew, which is packed with very small neurons, there are more neurons in a given volume of tissue than in the same volume of tissue from the rabbit brain, which has much larger neurons. But because the rabbit neurons have a higher density of ion channels, the density of channels in a given volume of tissue is the same in both species, or any of the nonhuman species the researchers analyzed.

“This building plan is consistent across nine different mammalian species,” Harnett says. “What it looks like the cortex is trying to do is keep the numbers of ion channels per unit volume the same across all the species. This means that for a given volume of cortex, the energetic cost is the same, at least for ion channels.”
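
The arithmetic behind that statement can be sketched with invented numbers (the actual counts in the paper differ); the point is only that fewer, larger neurons with more channels each can balance out many small neurons with fewer channels each.

```python
# Illustrative arithmetic only; these counts are hypothetical, not measured values.
# The "building plan": larger neurons carry more ion channels per cell, but fewer
# of them fit in a given volume, so channels per unit volume stay roughly constant.

shrew  = {"neurons_per_mm3": 400_000, "channels_per_neuron": 10}
rabbit = {"neurons_per_mm3": 100_000, "channels_per_neuron": 40}

for name, species in [("shrew", shrew), ("rabbit", rabbit)]:
    per_volume = species["neurons_per_mm3"] * species["channels_per_neuron"]
    print(f"{name}: {per_volume:,} channels per cubic millimeter")
# Both species come out the same (4,000,000 in this toy example), so the
# energetic cost per volume of cortex is matched. Human neurons break the
# pattern by carrying far fewer channels than a neuron of their size "should."
```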

Energy efficiency

The human brain represents a striking deviation from this building plan, however. Instead of an increased density of ion channels, the researchers found an ion channel density dramatically below what the building plan predicts for a given volume of brain tissue.

The researchers believe this lower density may have evolved as a way to expend less energy on pumping ions, which allows the brain to use that energy for something else, like creating more complicated synaptic connections between neurons or firing action potentials at a higher rate.

“We think that humans have evolved out of this building plan that was previously restricting the size of cortex, and they figured out a way to become more energetically efficient, so you spend less ATP per volume compared to other species,” Harnett says.

He now hopes to study where that extra energy might be going, and whether there are specific gene mutations that help neurons of the human cortex achieve this high efficiency. The researchers are also interested in exploring whether primate species that are more closely related to humans show similar decreases in ion channel density.

The research was funded by the Natural Sciences and Engineering Research Council of Canada, a Friends of the McGovern Institute Fellowship, the National Institute of General Medical Sciences, the Paul and Daisy Soros Fellows Program, the Dana Foundation David Mahoney Neuroimaging Grant Program, the National Institutes of Health, the Harvard-MIT Joint Research Grants Program in Basic Neuroscience, and Susan Haar.

Other authors of the paper include Norma Brown, an MIT technical associate; Marissa Hansen, a former post-baccalaureate scholar; Enrique Toloza, a graduate student at MIT and Harvard Medical School; Jitendra Sharma, an MIT research scientist; Ziv Williams, an associate professor of neurosurgery at Harvard Medical School; Matthew Frosch, an associate professor of pathology and health sciences and technology at Harvard Medical School; Garth Rees Cosgrove, director of epilepsy and functional neurosurgery at Brigham and Women’s Hospital; and Sydney Cash, an assistant professor of neurology at Harvard Medical School and Massachusetts General Hospital.

Dealing with uncertainty

As we interact with the world, we are constantly presented with information that is unreliable or incomplete – from jumbled voices in a crowded room to solicitous strangers with unknown motivations. Fortunately, our brains are well equipped to evaluate the quality of the evidence we use to make decisions, usually allowing us to act deliberately, without jumping to conclusions.

Now, neuroscientists at MIT’s McGovern Institute have homed in on key brain circuits that help guide decision-making under conditions of uncertainty. By studying how mice interpret ambiguous sensory cues, they’ve found neurons that stop the brain from using unreliable information.

The findings, published October 6, 2021, in the journal Nature, could help researchers develop treatments for schizophrenia and related conditions, whose symptoms may be at least partly due to affected individuals’ inability to effectively gauge uncertainty.

Decoding ambiguity

“A lot of cognition is really about handling different types of uncertainty,” says McGovern Associate Investigator Michael Halassa, explaining that we all must use ambiguous information to make inferences about what’s happening in the world. Part of dealing with this ambiguity involves recognizing how confident we can be in our conclusions. And when this process fails, it can dramatically skew our interpretation of the world around us.

“In my mind, schizophrenia spectrum disorders are really disorders of appropriately inferring the causes of events in the world and what other people think,” says Halassa, who is a practicing psychiatrist. Patients with these disorders often develop strong beliefs based on events or signals most people would dismiss as meaningless or irrelevant, he says. They may assume hidden messages are embedded in a garbled audio recording, or worry that laughing strangers are plotting against them. Such things are not impossible—but delusions arise when patients fail to recognize that they are highly unlikely.

Halassa and postdoctoral researcher Arghya Mukherjee wanted to know how healthy brains handle uncertainty, and recent research from other labs provided some clues. Functional brain imaging had shown that when people are asked to study a scene but they aren’t sure what to pay attention to, a part of the brain called the mediodorsal thalamus becomes active. The less guidance people are given for this task, the harder the mediodorsal thalamus works.

The thalamus is a sort of crossroads within the brain, made up of cells that connect distant brain regions to one another. Its mediodorsal region sends signals to the prefrontal cortex, where sensory information is integrated with our goals, desires, and knowledge to guide behavior. Previous work in the Halassa lab showed that the mediodorsal thalamus helps the prefrontal cortex tune in to the right signals during decision-making, adjusting signaling as needed when circumstances change. Intriguingly, this brain region has been found to be less active in people with schizophrenia than it is in others.

Study authors (from left to right) Michael Halassa, Arghya Mukherjee, Norman Lam and Ralf Wimmer.

Working with postdoctoral researcher Norman Lam and research scientist Ralf Wimmer, Halassa and Mukherjee designed a set of animal experiments to examine the mediodorsal thalamus’s role in handling uncertainty. Mice were trained to respond to sensory signals according to audio cues that told them whether to focus on light or sound. When the animals were given conflicting cues, it was up to the animal to figure out which cue was represented most prominently and act accordingly. The experimenters varied the uncertainty of this task by manipulating the number and ratio of the cues.

Division of labor

By manipulating and recording activity in the animals’ brains, the researchers found that the prefrontal cortex got involved every time mice completed this task, but the mediodorsal thalamus was only needed when the animals were given signals that left them uncertain how to behave. There was a simple division of labor within the brain, Halassa says. “One area cares about the content of the message—that’s the prefrontal cortex—and the thalamus seems to care about how certain the input is.”

Within the mediodorsal thalamus, Halassa and Mukherjee found a subset of cells that were especially active when the animals were presented with conflicting sound cues. These neurons, which connect directly to the prefrontal cortex, are inhibitory neurons, capable of dampening downstream signaling. So when they fire, Halassa says, they effectively stop the brain from acting on unreliable information. Cells of a different type were focused on the uncertainty that arises when signaling is sparse. “There’s a dedicated circuitry to integrate evidence across time to extract meaning out of this kind of assessment,” Mukherjee explains.

As Halassa and Mukherjee investigate these circuits more deeply, a priority will be determining whether they are disrupted in people with schizophrenia. To that end, they are now exploring the circuitry in animal models of the disorder. The hope, Mukherjee says, is to eventually target dysfunctional circuits in patients, using noninvasive, focused drug delivery methods currently under development. “We have the genetic identity of these circuits. We know they express specific types of receptors, so we can find drugs that target these receptors,” he says. “Then you can specifically release these drugs in the mediodorsal thalamus to modulate the circuits as a potential therapeutic strategy.”

This work was funded by grants from the National Institute of Mental Health (R01MH107680-05 and R01MH120118-02).

Single gene linked to repetitive behaviors, drug addiction

Making and breaking habits is a prime function of the striatum, a large forebrain region that underlies the cerebral cortex. McGovern researchers have identified a particular gene that controls striatal function as well as repetitive behaviors that are linked to drug addiction vulnerability.

To identify genes involved specifically in striatal functions, MIT Institute Professor Ann Graybiel previously identified genes that are preferentially expressed in striatal neurons. One identified gene encodes CalDAG-GEFI (CDGI), a signaling molecule that effects changes inside cells in response to extracellular signals that are received by receptors on the cell surface. In a paper to be published in the October issue of Neurobiology of Disease and now available online, Graybiel, along with former Research Scientist Jill Crittenden and collaborators James Surmeier and Shenyu Zhai at the Feinberg School of Medicine at Northwestern University, show that CDGI is key for controlling behavioral responses to drugs of abuse and underlying neuronal plasticity (cellular changes induced by experience) in the striatum.

“This paper represents years of intensive research, which paid off in the end by identifying a specific cellular signaling cascade for controlling repetitive behaviors and neuronal plasticity,” says Graybiel, who is also an investigator at the McGovern Institute and a professor of brain and cognitive sciences at MIT.

McGovern Investigator Ann Graybiel (right) with former Research Scientist Jill Crittenden. Photo: Justin Knight

Surprise discovery

To understand the essential roles of CDGI, Crittenden first engineered “knockout” mice that lack the gene encoding CDGI. Then the Graybiel team began looking for abnormalities in the CDGI knockout mice that could be tied to the loss of CDGI’s function.

Initially, they noticed that the rodent ear-tag IDs often fell off in the knockout mice, an observation that ultimately led to the surprise discovery by the Graybiel team and others that CDGI is expressed in blood platelets and is responsible for a bleeding disorder in humans, dogs, and other animals. The CDGI knockout mice were otherwise healthy and seemed just like their “wildtype” brothers and sisters, which did not carry the gene mutation. To figure out the role of CDGI in the brain, the Graybiel team would have to scrutinize the mice more closely.

Challenging the striatum

Both the CDGI knockout and wildtype mice were given an extensive set of behavioral and neurological tests, and the knockout mice showed deficits in two tests designed to challenge the striatum.

In one test, mice must find their way through a maze by relying on egocentric (i.e. self-referential) cues, such as turning right or turning left, and not on competing allocentric (i.e. external) cues, such as going toward a bright poster on the wall. Egocentric cues are thought to be processed by the striatum, whereas allocentric cues are thought to rely on the hippocampus.

In a second test of striatal function, mice learned various gait patterns to match different patterns of rungs on their running wheel, a task designed to test the mouse’s ability to learn and remember a motor sequence.

The CDGI mice learned both of these striatal tasks more slowly than their wildtype siblings, suggesting that the CDGI mice might perform normally in general tests of behavior because they are able to compensate for striatal deficits by using other brain regions such as the hippocampus to solve standard tasks.

The team then decided to give the mice a completely different type of test that relies on the striatum. Because the striatum is strongly activated by drugs of abuse, which elevate dopamine and drive motor habits, Crittenden and collaborator Morgane Thomsen (now at the University of Copenhagen) looked to see whether the CDGI knockout mice respond normally to amphetamine and cocaine.

Psychomotor stimulants like cocaine and amphetamine normally induce a mixture of hyperactive behaviors such as pacing and focused repetitive behaviors like skin-picking (also called stereotypy or punding in humans). The researchers found, however, that the drug-induced behaviors in the CDGI knockout mice were less varied than those of normal mice and consisted of abnormally prolonged stereotypy, as though the mice were unable to switch between behaviors. The researchers were able to map the abnormal behavior to CDGI function in the striatum by showing that the same vulnerability to drug-induced stereotypy was observed in mice that were engineered to delete CDGI in the striatum after birth (“conditional knockouts”), but to otherwise have normal CDGI throughout the body.

Controlling cravings

In addition to exhibiting prolonged, repetitive behaviors, the CDGI knockout mice had a vulnerability to self-administer drugs. Although previous research had shown that treatments that activate the M1 acetylcholine receptor can block cocaine self-administration, the team found that this therapy was ineffective in CDGI knockout mice. Knockouts continued to self-administer cocaine (suggesting increased craving for the drug) at the same rate before and after M1 receptor activation treatment, even though the treatment succeeded with their sibling control mice. The researchers concluded that CDGI is critically important for controlling repetitive behaviors and the ability to stop self-administration of addictive stimulants.

mouse brain images
Brain sections from control mice (left) and mice engineered for deletion of the CDGI gene after birth. The expression of CDGI in the striatum (arrows) grows stronger as mice grow from pups to adulthood in control mice, but is gradually lost in the CDGI engineered mice (“conditional knockouts”). Image courtesy of the researchers

To better understand how CDGI is linked to the M1 receptor at the cellular level, the team turned to slice physiologists, scientists who record the electrical activity of neurons in brain slices. Their recordings showed that striatal neurons from CDGI knockouts fail to undergo the normal, expected electrophysiological changes after receiving treatments that target the M1 receptor. In particular, the neurons of the striatum that function broadly to stop ongoing behaviors did not integrate cellular signals properly and failed to undergo “long-term potentiation,” a type of neuronal plasticity thought to underlie learning.

The new findings suggest that excessive repetitive movements are controlled by M1 receptor signaling through CDGI in indirect pathway neurons of the striatum, a neuronal subtype that degenerates in Huntington’s disease and is affected by dopamine loss and l-DOPA replacement therapy in Parkinson’s disease.

“The M1 acetylcholine receptor is a target for therapeutic drug development in treating cognitive and behavioral problems in multiple disorders, but progress has been severely hampered by off-target side-effects related to the wide-spread expression of the M1 receptor,” Graybiel explains. “Our findings suggest that CDGI offers the possibility for forebrain-specific targeting of M1 receptor signaling cascades that are of interest for blocking pathologically repetitive and unwanted behaviors that are common to numerous brain disorders including Huntington’s disease, drug addiction, autism, and schizophrenia as well as drug-induced dyskinesias. We hope that this work can help therapeutic development for these major health problems.”

This work was funded by the James W. (1963) and Patricia T. Poitras Fund, the William N. & Bernice E. Bumpus Foundation, the Saks Kavanaugh Foundation, the Simons Foundation, and the National Institutes of Health.

New programmable gene editing proteins found outside of CRISPR systems

Within the last decade, scientists have adapted CRISPR systems from microbes into gene editing technology, a precise and programmable system for modifying DNA. Now, scientists at MIT’s McGovern Institute and the Broad Institute of MIT and Harvard have discovered a new class of programmable DNA modifying systems called OMEGAs (Obligate Mobile Element Guided Activity), which may naturally be involved in shuffling small bits of DNA throughout bacterial genomes.

These ancient DNA-cutting enzymes are guided to their targets by small pieces of RNA. While they originated in bacteria, they have now been engineered to work in human cells, suggesting they could be useful in the development of gene editing therapies, particularly as they are small (~30% the size of Cas9), making them easier to deliver to cells than bulkier enzymes. The discovery, reported September 9, 2021, in the journal Science, provides evidence that natural RNA-guided enzymes are among the most abundant proteins on earth, pointing toward a vast new area of biology that is poised to drive the next revolution in genome editing technology.

The research was led by McGovern Investigator Feng Zhang, who is the James and Patricia Poitras Professor of Neuroscience at MIT, a Howard Hughes Medical Institute investigator, and a Core Institute Member of the Broad Institute. Zhang’s team has been exploring natural diversity in search of new molecular systems that can be rationally programmed.

“We are super excited about the discovery of these widespread programmable enzymes, which have been hiding under our noses all along,” says Zhang. “These results suggest the tantalizing possibility that there are many more programmable systems that await discovery and development as useful technologies.”

Natural adaptation

Programmable enzymes, particularly those that use an RNA guide, can be rapidly adapted for different uses. For example, CRISPR enzymes naturally use an RNA guide to target viral invaders, but biologists can direct Cas9 to any target by generating their own RNA guide. “It’s so easy to just change a guide sequence and set a new target,” says graduate student and co-first author of the paper, Soumya Kannan. “So one of the broad questions that we’re interested in is trying to see if other natural systems use that same kind of mechanism.”
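
In software terms, the programmability Kannan describes boils down to swapping in a new guide sequence and searching for where it matches. The short sketch below is a conceptual illustration only: the sequences are made up, and real RNA-guided enzymes also require features such as a PAM/TAM motif and tolerate some mismatches.

```python
# Conceptual sketch of RNA-guided targeting as simple string matching.
# Sequences are invented; real guide recognition is far more complex.

def find_target_sites(genome: str, guide: str) -> list:
    """Return every position where the guide sequence matches the genome exactly."""
    return [i for i in range(len(genome) - len(guide) + 1)
            if genome[i:i + len(guide)] == guide]

genome = "ATGCGTACGTTAGCGTACGTAATCGTACGT"
print(find_target_sites(genome, "CGTACGT"))  # [3, 13, 23]
print(find_target_sites(genome, "TTAGCGT"))  # [9] -- retargeting = swapping the guide
```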

Zhang lab graduate student Han Altae-Tran, co-author of the Science paper with Soumya Kannan. Photo: Zhang lab

The first hints that OMEGA proteins might be directed by RNA came from the genes for proteins called IscBs. The IscBs are not involved in CRISPR immunity and were not known to associate with RNA, but they looked like small DNA-cutting enzymes. The team discovered that each IscB had a small RNA encoded nearby that directed the IscB enzyme to cut specific DNA sequences. They named these RNAs “ωRNAs.”

The team’s experiments showed that two other classes of small proteins, known as IsrBs and TnpBs (the latter one of the most abundant genes in bacteria), also use ωRNAs that act as guides to direct the cleavage of DNA.

IscB, IsrB, and TnpB are found in mobile genetic elements called transposons. Graduate student Han Altae-Tran, co-first author on the paper, explains that each time these transposons move, they create a new guide RNA, allowing the enzyme they encode to cut somewhere else.

It’s not clear how bacteria benefit from this genomic shuffling—or whether they do at all. Transposons are often thought of as selfish bits of DNA, concerned only with their own mobility and preservation, Kannan says. But if hosts can “co-opt” these systems and repurpose them, hosts may gain new abilities, as with CRISPR systems, which confer adaptive immunity.

IscBs and TnpBs appear to be predecessors of Cas9 and Cas12 CRISPR systems. The team suspects they, along with IsrB, likely gave rise to other RNA-guided enzymes, too—and they are eager to find them. They are curious about the range of functions that might be carried out in nature by RNA-guided enzymes, Kannan says, and suspect evolution likely already took advantage of OMEGA enzymes like IscBs and TnpBs to solve problems that biologists are keen to tackle.

Comparison of Ω (OMEGA) systems with other known RNA-guided systems. In contrast to CRISPR systems, which capture spacer sequences and store them in the locus within the CRISPR array, Ω systems may transpose their loci (or trans-acting loci) into target sequences, converting targets into ωRNA guides. Image courtesy of the researchers.

“A lot of the things that we have been thinking about may already exist naturally in some capacity,” says Altae-Tran. “Natural versions of these types of systems might be a good starting point to adapt for that particular task.”

The team is also interested in tracing the evolution of RNA-guided systems further into the past. “Finding all these new systems sheds light on how RNA-guided systems have evolved, but we don’t know where RNA-guided activity itself comes from,” Altae-Tran says. Understanding those origins, he says, could pave the way to developing even more classes of programmable tools.

This work was made possible with support from the Simons Center for the Social Brain at MIT; National Institutes of Health Intramural Research Program; National Institutes of Health grants 1R01-HG009761 and 1DP1-HL141201; Howard Hughes Medical Institute; Open Philanthropy; G. Harold and Leila Y. Mathers Charitable Foundation; Edward Mallinckrodt, Jr. Foundation; Poitras Center for Psychiatric Disorders Research at MIT; Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT; Yang-Tan Center for Molecular Therapeutics at MIT; Lisa Yang; Phillips family; R. Metcalfe; and J. and P. Poitras.

Mapping the cellular circuits behind spitting

For over a decade, researchers have known that the roundworm Caenorhabditis elegans can detect and avoid short-wavelength light, despite lacking eyes and the light-absorbing molecules required for sight. As a graduate student in the Horvitz lab, Nikhil Bhatla proposed an explanation for this ability. He observed that light exposure not only made the worms wriggle away, but it also prompted them to stop eating. This clue led him to a series of studies that suggested that his squirming subjects weren’t seeing the light at all — they were detecting the noxious chemicals it produced, such as hydrogen peroxide. Soon after, the Horvitz lab realized that worms not only taste the nasty chemicals light generates, they also spit them out.

Now, in a study recently published in eLife, a team led by former graduate student Steve Sando reports the mechanism that underlies spitting in C. elegans. Individual muscle cells are generally regarded as the smallest units that neurons can independently control, but the researchers’ findings question this assumption. In the case of spitting, they determined that neurons can direct specialized subregions of a single muscle cell to generate multiple motions — expanding our understanding of how neurons control muscle cells to shape behavior.

“Steve made the remarkable discovery that the contraction of a small region of a particular muscle cell can be uncoupled from the contraction of the rest of the same cell,” says H. Robert Horvitz, the David H. Koch Professor of Biology at MIT, a member of the McGovern Institute for Brain Research and the Koch Institute for Integrative Cancer Research, Howard Hughes Medical Institute Investigator, and senior author of the study. “Furthermore, Steve found that such subcellular muscle compartments can be controlled by neurons to dramatically alter behavior.”

Roundworms are like vacuum cleaners that wiggle around hoovering up bacteria. The worm’s mouth, also known as the pharynx, is a muscular tube that traps the food, chews it, and then transfers it to the intestines through a series of “pumping” contractions.

Researchers have known for over a decade that worms flee from UV, violet, or blue light. But Bhatla discovered that this light also interrupts the constant pumping of the pharynx, because the taste produced by the light is so nasty that the worms pause feeding. As he looked closer, Bhatla noticed the worms’ response was actually quite nuanced. After an initial pause, the pharynx briefly starts pumping again in short bursts before fully stopping — almost like the worm was chewing for a bit even after tasting the unsavory light. Sometimes, a bubble would escape from the mouth, like a burp.

After he joined the project, Sando discovered that the worms were neither burping nor continuing to munch. Instead, the “burst pumps” were driving material in the opposite direction, out of the mouth into the local environment, rather than further back into the pharynx and intestine. In other words, the bad-tasting light caused worms to spit. Sando then spent years chasing his subjects around the microscope with a bright light and recording their actions in slow motion, in order to pinpoint the neural circuitry and muscle motions required for this behavior.

“The discovery that the worms were spitting was quite surprising to us, because the mouth seemed to be moving just like it does when it’s chewing,” Sando says. “It turns out that you really needed to zoom in and slow things down to see what’s going on, because the animals are so small and the behavior is happening so quickly.”

To analyze what’s happening in the pharynx to produce this spitting motion, the researchers used a tiny laser beam to surgically remove individual nerve and muscle cells from the mouth and discern how that affected the worm’s behavior. They also monitored the activity of the cells in the mouth by tagging them with specially engineered fluorescent “reporter” proteins.

They saw that while the worm is eating, three muscle cells towards the front of the pharynx called pm3s contract and relax together in synchronous pulses. But as soon as the worm tastes light, the subregions of these individual cells closest to the front of the mouth become locked in a state of contraction, opening the front of the mouth and allowing material to be propelled out. This reverses the direction of the flow of the ingested material and converts feeding into spitting.

The team determined that this “uncoupling” phenomenon is controlled by a single neuron at the back of the worm’s mouth. Called M1, this nerve cell spurs a localized influx of calcium at the front end of the pm3 muscle, which is likely responsible for triggering the sub-cellular contractions.

M1 relays important information like a switchboard. It receives incoming signals from many different neurons, and transmits that information to the muscles involved in spitting. Sando and his team suspect that the strength of the incoming signal can tune the worm’s behavior in response to tasting light. For instance, their findings suggest that a revolting taste elicits a vigorous rinsing of the mouth, while a mildly unpleasant sensation causes the worm to spit more gently, just enough to eject the contents.

In the future, Sando thinks the worm could be used as a model to study how neurons trigger subregions of muscle cells to constrict and shape behavior — a phenomenon they suspect occurs in other animals, possibly including humans.

“We’ve essentially found a new way for a neuron to move a muscle,” Sando says. “Neurons orchestrate the motions of muscles, and this could be a new tool that allows them to exert a sophisticated kind of control. That’s pretty exciting.”

Some brain disorders exhibit similar circuit malfunctions

Many neurodevelopmental disorders share similar symptoms, such as learning disabilities or attention deficits. A new study from MIT has uncovered a common neural mechanism for a type of cognitive impairment seen in some people with autism and schizophrenia, even though the genetic variations that produce the impairments are different for each condition.

In a study of mice, the researchers found that certain genes that are mutated or missing in some people with those disorders cause similar dysfunctions in a neural circuit in the thalamus. If scientists could develop drugs that target this circuit, they could be used to treat people who have different disorders with common behavioral symptoms, the researchers say.

“This study reveals a new circuit mechanism for cognitive impairment and points to a future direction for developing new therapeutics, by dividing patients into specific groups not by their behavioral profile, but by the underlying neurobiological mechanisms,” says Guoping Feng, the James W. and Patricia T. Poitras Professor in Brain and Cognitive Sciences at MIT, a member of the Broad Institute of MIT and Harvard, the associate director of the McGovern Institute for Brain Research at MIT, and the senior author of the new study.

Dheeraj Roy, a Warren Alpert Distinguished Scholar and a McGovern Fellow at the Broad Institute, and Ying Zhang, a postdoc at the McGovern Institute, are the lead authors of the paper, which appears today in Neuron.

Thalamic connections

The thalamus plays a key role in cognitive tasks such as memory formation and learning. Previous studies have shown that many of the gene variants linked to brain disorders such as autism and schizophrenia are highly expressed in the thalamus, suggesting that it may play a role in those disorders.

One such gene is called Ptchd1, which Feng has studied extensively. In boys, loss of this gene, which is carried on the X chromosome, can lead to attention deficits, hyperactivity, aggression, intellectual disability, and autism spectrum disorders.

In a study published in 2016, Feng and his colleagues showed that Ptchd1 exerts many of its effects in a part of the thalamus called the thalamic reticular nucleus (TRN). When the gene is knocked out in the TRN of mice, the mice show attention deficits and hyperactivity. However, that study did not find any role for the TRN in the learning disabilities also seen in people with mutations in Ptchd1.

In the new study, the researchers decided to look elsewhere in the thalamus to try to figure out how Ptchd1 loss might affect learning and memory. Another area they identified that highly expresses Ptchd1 is called the anterodorsal (AD) thalamus, a tiny region that is involved in spatial learning and communicates closely with the hippocampus.

Using novel techniques that allowed them to trace the connections between the AD thalamus and another brain region called the retrosplenial cortex (RSC), the researchers determined a key function of this circuit. They found that in mice, the AD-to-RSC circuit is essential for encoding fearful memories of a chamber in which they received a mild foot shock. It is also necessary for working memory, such as creating mental maps of physical spaces to help in decision-making.

The researchers found that a nearby part of the thalamus called the anteroventral (AV) thalamus also plays a role in this memory formation process: AV-to-RSC communication regulates the specificity of the encoded memory, which helps us distinguish this memory from others of similar nature.

“These experiments showed that two neighboring subdivisions in the thalamus contribute differentially to memory formation, which is not what we expected,” Roy says.

Circuit malfunction

Once the researchers discovered the roles of the AV and AD thalamic regions in memory formation, they began to investigate how this circuit is affected by loss of Ptchd1. When they knocked down expression of Ptchd1 in neurons of the AD thalamus, they found a striking deficit in memory encoding, for both fearful memories and working memory.

The researchers then did the same experiments with a series of four other genes — one that is linked with autism and three linked with schizophrenia. In all of these mice, they found that knocking down gene expression produced the same memory impairments. They also found that each of these knockdowns produced hyperexcitability in neurons of the AD thalamus.

These results are consistent with existing theories that learning occurs through the strengthening of synapses as a memory is formed, the researchers say.

“The dominant theory in the field is that when an animal is learning, these neurons have to fire more, and that increase correlates with how well you learn,” Zhang says. “Our simple idea was if a neuron fires too high at baseline, you may lack a learning-induced increase.”

The researchers demonstrated that each of the genes they studied affects different ion channels that influence neurons’ firing rates. The overall effect of each mutation is an increase in neuron excitability, which leads to the same circuit-level dysfunction and behavioral symptoms.
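
Zhang’s “ceiling” intuition can be captured in a toy calculation (the numbers below are invented, not from the study): if baseline firing is already near the maximum rate a neuron can sustain, the learning-related increase it can express is compressed.

```python
# Toy illustration of a firing-rate ceiling; all values are hypothetical.

MAX_RATE = 50.0        # assumed maximum sustainable firing rate, spikes/s
LEARNING_DRIVE = 20.0  # assumed extra drive arriving during memory encoding

def encoding_increase(baseline_rate):
    """Learning-related increase actually expressed, given a hard ceiling."""
    return min(baseline_rate + LEARNING_DRIVE, MAX_RATE) - baseline_rate

print(encoding_increase(10.0))  # normal baseline -> full 20 spikes/s increase
print(encoding_increase(40.0))  # hyperexcitable baseline -> only 10 spikes/s
```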

The researchers also showed that they could restore normal cognitive function in mice with these genetic mutations by artificially turning down hyperactivity in neurons of the AD thalamus. The approach they used, chemogenetics, is not yet approved for use in humans. However, it may be possible to target this circuit in other ways, the researchers say.

The findings lend support to the idea that grouping diseases by the circuit malfunctions that underlie them may help to identify potential drug targets that could help many patients, Feng says.

“There are so many genetic factors and environmental factors that can contribute to a particular disease, but in the end, it has to cause some type of neuronal change that affects a circuit or a few circuits involved in this behavior,” he says. “From a therapeutic point of view, in such cases you may not want to go after individual molecules because they may be unique to a very small percentage of patients, but at a higher level, at the cellular or circuit level, patients may have more commonalities.”

The research was funded by the Stanley Center at the Broad Institute, the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, the James and Patricia Poitras Center for Psychiatric Disorders Research at MIT, and the National Institutes of Health BRAIN Initiative.