Some brain disorders exhibit similar circuit malfunctions

Many neurodevelopmental disorders share similar symptoms, such as learning disabilities or attention deficits. A new study from MIT has uncovered a common neural mechanism for a type of cognitive impairment seen in some people with autism and schizophrenia, even though the genetic variations that produce the impairments are different for each condition.

In a study of mice, the researchers found that certain genes that are mutated or missing in some people with those disorders cause similar dysfunctions in a neural circuit in the thalamus. If scientists could develop drugs that target this circuit, they could be used to treat people who have different disorders with common behavioral symptoms, the researchers say.

“This study reveals a new circuit mechanism for cognitive impairment and points to a future direction for developing new therapeutics, by dividing patients into specific groups not by their behavioral profile, but by the underlying neurobiological mechanisms,” says Guoping Feng, the James W. and Patricia T. Poitras Professor in Brain and Cognitive Sciences at MIT, a member of the Broad Institute of Harvard and MIT, the associate director of the McGovern Institute for Brain Research at MIT, and the senior author of the new study.

Dheeraj Roy, a Warren Alpert Distinguished Scholar and a McGovern Fellow at the Broad Institute, and Ying Zhang, a postdoc at the McGovern Institute, are the lead authors of the paper, which appears today in Neuron.

Thalamic connections

The thalamus plays a key role in cognitive tasks such as memory formation and learning. Previous studies have shown that many of the gene variants linked to brain disorders such as autism and schizophrenia are highly expressed in the thalamus, suggesting that it may play a role in those disorders.

One such gene is called Ptchd1, which Feng has studied extensively. In boys, loss of this gene, which is carried on the X chromosome, can lead to attention deficits, hyperactivity, aggression, intellectual disability, and autism spectrum disorders.

In a study published in 2016, Feng and his colleagues showed that Ptchd1 exerts many of its effects in a part of the thalamus called the thalamic reticular nucleus (TRN). When the gene is knocked out in the TRN of mice, the mice show attention deficits and hyperactivity. However, that study did not find any role for the TRN in the learning disabilities also seen in people with mutations in Ptchd1.

In the new study, the researchers decided to look elsewhere in the thalamus to try to figure out how Ptchd1 loss might affect learning and memory. Another area they identified that highly expresses Ptchd1 is called the anterodorsal (AD) thalamus, a tiny region that is involved in spatial learning and communicates closely with the hippocampus.

Using novel techniques that allowed them to trace the connections between the AD thalamus and another brain region called the retrosplenial cortex (RSC), the researchers determined a key function of this circuit. They found that in mice, the AD-to-RSC circuit is essential for encoding fearful memories of a chamber in which they received a mild foot shock. It is also necessary for working memory, such as creating mental maps of physical spaces to help in decision-making.

The researchers found that a nearby part of the thalamus called the anteroventral (AV) thalamus also plays a role in this memory formation process: AV-to-RSC communication regulates the specificity of the encoded memory, which helps us distinguish this memory from others of similar nature.

“These experiments showed that two neighboring subdivisions in the thalamus contribute differentially to memory formation, which is not what we expected,” Roy says.

Circuit malfunction

Once the researchers discovered the roles of the AV and AD thalamic regions in memory formation, they began to investigate how this circuit is affected by loss of Ptchd1. When they knocked down expression of Ptchd1 in neurons of the AD thalamus, they found a striking deficit in memory encoding, for both fearful memories and working memory.

The researchers then did the same experiments with a series of four other genes — one that is linked with autism and three linked with schizophrenia. In all of these mice, they found that knocking down gene expression produced the same memory impairments. They also found that each of these knockdowns produced hyperexcitability in neurons of the AD thalamus.

These results are consistent with existing theories that learning occurs through the strengthening of synapses as a memory is formed, the researchers say.

“The dominant theory in the field is that when an animal is learning, these neurons have to fire more, and that increase correlates with how well you learn,” Zhang says. “Our simple idea was if a neuron fires too high at baseline, you may lack a learning-induced increase.”
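The ceiling-effect idea Zhang describes can be illustrated with a toy calculation (this is purely an illustration, not a model from the study; all firing rates and the saturation cap are arbitrary made-up numbers): if a neuron's firing rate saturates at some maximum, a hyperexcitable baseline leaves less headroom for a learning-induced increase.

```python
# Toy illustration (not from the study) of the ceiling-effect idea:
# if firing saturates at a maximum rate, a hyperexcitable baseline
# leaves less room for a learning-induced increase. All numbers are
# arbitrary and illustrative.

def learning_induced_increase(baseline_hz, drive_hz, max_rate_hz=100.0):
    """Firing-rate gain from a fixed learning-related drive, capped by saturation."""
    return min(baseline_hz + drive_hz, max_rate_hz) - baseline_hz

print(learning_induced_increase(baseline_hz=10, drive_hz=40))  # healthy: 40 Hz gain
print(learning_induced_increase(baseline_hz=80, drive_hz=40))  # hyperexcitable: only 20 Hz gain
```

On this picture, turning baseline excitability back down (as the researchers did chemogenetically) restores the headroom needed for a learning-related increase.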

The researchers demonstrated that each of the genes they studied affects different ion channels that influence neurons’ firing rates. The overall effect of each mutation is an increase in neuron excitability, which leads to the same circuit-level dysfunction and behavioral symptoms.

The researchers also showed that they could restore normal cognitive function in mice with these genetic mutations by artificially turning down hyperactivity in neurons of the AD thalamus. The approach they used, chemogenetics, is not yet approved for use in humans. However, it may be possible to target this circuit in other ways, the researchers say.

The findings lend support to the idea that grouping diseases by the circuit malfunctions that underlie them may help to identify potential drug targets that could help many patients, Feng says.

“There are so many genetic factors and environmental factors that can contribute to a particular disease, but in the end, it has to cause some type of neuronal change that affects a circuit or a few circuits involved in this behavior,” he says. “From a therapeutic point of view, in such cases you may not want to go after individual molecules because they may be unique to a very small percentage of patients, but at a higher level, at the cellular or circuit level, patients may have more commonalities.”

The research was funded by the Stanley Center at the Broad Institute, the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, the James and Patricia Poitras Center for Psychiatric Disorders Research at MIT, and the National Institutes of Health BRAIN Initiative.

Michale Fee appointed head of MIT’s Brain and Cognitive Sciences Department

McGovern Investigator Michale Fee at work in the lab with postdoc Galen Lynch. Photo: Justin Knight

Michale Fee, the Glen V. and Phyllis F. Dorflinger Professor of Brain and Cognitive Sciences, has been named the new head of the Department of Brain and Cognitive Sciences (BCS), effective May 1, 2021.

Fee, who is an investigator in the McGovern Institute for Brain Research, succeeds James DiCarlo, the Peter de Florez Professor of Neuroscience, who announced in December that he was stepping down to become director of the MIT Quest for Intelligence.

“I want to thank Jim for his impressive work over the last nine years as head,” says Fee. “I know firsthand from my time as associate department head that BCS is in good shape and on a steady course. Jim has set a standard of transparent and collaborative leadership, which is a solid foundation for making our community stronger on all fronts.” Fee notes that his first mission is to continue the initiatives begun under DiCarlo’s leadership—in academics (especially Course 6-9), mentoring, and diversity, equity, inclusion, and justice—while maintaining the highest standards of excellence in research and education.

“Jim has overseen significant growth in the faculty and its impact, as well as important academic initiatives to strengthen the department’s graduate and undergraduate programs,” says Nergis Mavalvala, dean of the School of Science. “His emphasis on building ties among BCS, the McGovern Institute for Brain Research, and the Picower Institute for Learning and Memory has brought innumerable new collaborations among researchers and helped solidify Building 46 and MIT as world leaders in brain science.”

Fee earned his BE in engineering physics in 1985 at the University of Michigan, and his PhD in applied physics at Stanford University in 1992, under the mentorship of Nobel laureate Steven Chu. His doctoral work was followed by research in the Biological Computation Department at Bell Laboratories. He joined MIT and BCS as an associate professor in 2003 and was promoted to full professor in 2008.

He has served since 2012 as associate department head for education in BCS, overseeing significant evolution in the department’s academic programs, including a complete reworking of the Course 9 curriculum and the establishment in 2019 of Course 6-9, Computation and Cognition, in partnership with EECS.

In his research, Fee explores the neural mechanisms by which the brain learns complex sequential behaviors, using the learning of song by juvenile zebra finches as a model. He has brought new experimental and computational methods to bear on these questions, identifying a number of circuits used to learn, modify, time, and coordinate the development and utterance of song syllables.

“His work is emblematic of the department in that it crosses technical and disciplinary boundaries in search of the most significant discoveries,” says DiCarlo. “His research background gives Michale a deep appreciation of the importance of every sub-discipline in our community and a broad understanding of the importance of their connections with each other.”

Fee has received numerous honors and awards for his research and teaching, including the MIT Fundamental Science Investigator Award in 2017, the MIT School of Science Teaching Prize for Undergraduate Education in 2016, the BCS Award for Excellence in Undergraduate Teaching in 2015, and the Lawrence Katz Prize for Innovative Research in Neuroscience from Duke University in 2012.

Fee will be the sixth head of the department, after founding chair Hans-Lukas Teuber (1964–77), Richard Held (1977–86), Emilio Bizzi (1986–97), Mriganka Sur (1997–2012), and James DiCarlo (2012–21).

Gene changes linked to severe repetitive behaviors

Extreme repetitive behaviors such as hand-flapping, body-rocking, skin-picking and sniffing are common to a number of brain disorders including autism, schizophrenia, Huntington’s disease, and drug addiction. These behaviors, termed stereotypies, are also apparent in animal models of drug addiction and autism.

In a new study published in the European Journal of Neuroscience, researchers at the McGovern Institute have identified genes that are activated in the brain prior to the initiation of these severe repetitive behaviors.

“Our lab has found a small set of genes that are regulated in relation to the development of stereotypic behaviors in an animal model of drug addiction,” says MIT Institute Professor Ann Graybiel, who is the senior author of the paper. “We were surprised and interested to see that one of these genes is a susceptibility gene for schizophrenia. This finding might help to understand the biological basis of repetitive, stereotypic behaviors as seen in a range of neurologic and neuropsychiatric disorders, and in otherwise ‘typical’ people under stress.”

A shared molecular pathway

In work led by research scientist Jill Crittenden, researchers in the Graybiel lab exposed mice to amphetamine, a psychomotor stimulant that drives hyperactivity and confined stereotypies in humans and in laboratory animals and that is used to model symptoms of schizophrenia.

They found that the stimulant exposure that drives the most prolonged repetitive behaviors leads to activation of genes regulated by Neuregulin 1, a signaling molecule that is important for a variety of cellular functions including neuronal development and plasticity. Neuregulin 1 gene mutations are risk factors for schizophrenia.

The new findings highlight a molecular and circuit pathway shared by stereotypies caused by drugs of abuse and those seen in brain disorders, and they have implications for why stimulant intoxication is a risk factor for the onset of schizophrenia.

“Experimental treatment with amphetamine has long been used in studies on rodents and other animals in tests to find better treatments for schizophrenia in humans, because there are some behavioral similarities across the two otherwise very different contexts,” explains Graybiel, who is also an investigator at the McGovern Institute and a professor of brain and cognitive sciences at MIT. “It was striking to find Neuregulin 1 — potentially one hint to shared mechanisms underlying some of these similarities.”

Drug exposure linked to repetitive behaviors

Although many studies have measured gene expression changes in animal models of drug addiction, this study is the first to evaluate genome-wide changes specifically associated with restricted repetitive behaviors.

Stereotypies are difficult to measure without labor-intensive direct observation, because they consist of fine movements and idiosyncratic behaviors. In this study, the authors administered amphetamine (or a saline control) to mice and used photobeam breaks to measure how much the animals ran around. The researchers identified prolonged periods when the mice were not running around, and so were potentially engaged in confined stereotypies, and then videotaped the mice during these periods to score the severity of restricted repetitive behaviors (e.g., sniffing or licking stereotypies).

They gave amphetamine to each mouse once a day for 21 days and found that, on average, mice showed very little stereotypy on the first day of drug exposure but that, by the seventh day of exposure, all of the mice showed a prolonged period of stereotypy that gradually became shorter and shorter over the subsequent two weeks.
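The screening step described above can be sketched in code. This is a hypothetical illustration, not the study's actual analysis pipeline: the function, thresholds, and example trace are all invented for clarity. The idea is simply to flag prolonged runs of near-zero locomotion (few photobeam breaks) as candidate stereotypy windows to be scored on video.

```python
# Hypothetical sketch (not the study's code) of flagging candidate
# confined-stereotypy windows: prolonged stretches in which a mouse
# breaks few photobeams per minute. Thresholds are illustrative.

def candidate_stereotypy_windows(beam_breaks_per_min, max_breaks=2, min_minutes=10):
    """Return (start, end) minute indices of runs where locomotion stays
    at or below `max_breaks` beam breaks/min for >= `min_minutes` minutes."""
    windows = []
    run_start = None
    for minute, breaks in enumerate(beam_breaks_per_min):
        if breaks <= max_breaks:
            if run_start is None:
                run_start = minute  # low-locomotion run begins
        else:
            if run_start is not None and minute - run_start >= min_minutes:
                windows.append((run_start, minute))
            run_start = None
    # close a run that extends to the end of the session
    if run_start is not None and len(beam_breaks_per_min) - run_start >= min_minutes:
        windows.append((run_start, len(beam_breaks_per_min)))
    return windows

# Example: 15 active minutes, 12 nearly stationary minutes, 5 active minutes.
trace = [30] * 15 + [1] * 12 + [25] * 5
print(candidate_stereotypy_windows(trace))  # [(15, 27)]
```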


“We were surprised to see the stereotypy diminishing after one week of treatment. We had actually planned a study based on our expectation that the repetitive behaviors would become more intense, but then we realized that this was an opportunity to look at what gene changes were unique to that day of high stereotypy,” says first author Jill Crittenden.

The authors compared gene expression changes in the brains of mice treated with amphetamine for one day, seven days, or 21 days. They hypothesized that the gene changes specific to the seventh day of treatment, when stereotypy peaked, were the most likely to underlie extreme repetitive behaviors and could identify risk-factor genes for such symptoms in disease.

A shared anatomical pathway

Previous work from the Graybiel lab has shown that stereotypy is directly correlated to circumscribed gene activation in the striatum, a forebrain region that is key for habit formation. In animals with the most intense stereotypy, most of the striatum does not show gene activation, but immediate early gene induction remains high in clusters of cells called striosomes. Striosomes have recently been shown to have powerful control over cells that release dopamine, a neuromodulator that is severely disrupted in drug addiction and in schizophrenia. Strikingly, striosomes contain high levels of Neuregulin 1.

“Our new data suggest that the upregulation of Neuregulin-responsive genes in animals with severely repetitive behaviors reflects gene changes in the striosomal neurons that control the release of dopamine,” Crittenden explains. “Dopamine can directly impact whether an animal repeats an action or explores new actions, so our study highlights a potential role for a striosomal circuit in controlling action-selection in health and in neuropsychiatric disease.”

Patterns of behavior and gene expression

Striatal gene expression levels were measured by sequencing messenger RNAs (mRNAs) in dissected brain tissue. mRNAs are read out from “active” genes and instruct the protein-synthesis machinery in how to make the protein that corresponds to each gene’s sequence. Proteins are the main constituents of a cell and control each cell’s function. The number of times a particular mRNA sequence is found therefore reflects how frequently the gene was being read out at the time the cellular material was collected.

To identify genes that were read out into mRNA before the period of prolonged stereotypy, the researchers collected brain tissue 20 minutes after amphetamine injection, which is about 30 minutes before peak stereotypy. They then identified which genes had significantly different levels of corresponding mRNAs in drug-treated mice than in mice treated with saline.
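The drug-versus-saline comparison can be caricatured in a few lines of code. This is a deliberately simplified sketch, not the study's statistical analysis (real RNA-seq work uses normalization and count-based statistical models); the gene names, counts, and the 2-fold cutoff are all invented for illustration.

```python
# Simplified sketch (not the study's analysis) of comparing per-gene
# mRNA counts between drug- and saline-treated mice and flagging genes
# with a large fold change. All data and thresholds are illustrative.
from math import log2
from statistics import mean

def flag_regulated_genes(drug_counts, saline_counts, min_fold=2.0):
    """Return {gene: log2 fold change} for genes whose mean mRNA count
    changes at least `min_fold`-fold between conditions."""
    flagged = {}
    for gene in drug_counts:
        d = mean(drug_counts[gene])
        s = mean(saline_counts[gene])
        fc = log2((d + 1) / (s + 1))  # +1 pseudocount avoids log(0)
        if abs(fc) >= log2(min_fold):
            flagged[gene] = round(fc, 2)
    return flagged

drug = {"Nrg1_target": [120, 150, 135], "Housekeeping": [200, 190, 210]}
saline = {"Nrg1_target": [30, 25, 35], "Housekeeping": [205, 195, 200]}
print(flag_regulated_genes(drug, saline))  # {'Nrg1_target': 2.13}
```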

A wide variety of genes showed modest mRNA increases after the first amphetamine exposure, which induced mild hyperactivity and a range of behaviors such as walking, sniffing and rearing in the mice.

By the seventh day of treatment, all of the mice were engaged for prolonged periods in one specific repetitive behavior, such as sniffing the wall. Likewise, there were fewer genes that were activated by the seventh day relative to the first treatment day, but they were strongly activated in all mice that received the stereotypy-inducing amphetamine treatment.

By the twenty-first day of treatment, the stereotypy behaviors were less intense as was the gene upregulation – fewer genes were strongly activated, and more were repressed, relative to the other treatments. “It seemed that the mice had developed tolerance to the drug, both in terms of their behavioral response and in terms of their gene activation response,” says Crittenden.

“Trying to seek patterns of gene regulation starting with behavior is correlative work, and we did not prove ‘causality’ in this first small study,” explains Graybiel. “But we hope that the striking parallels between the scope and selectivity of the mRNA and behavioral changes that we detected will help in further work on the tremendously challenging goal of treating addiction.”

This work was funded by the National Institute of Child Health and Human Development, the Saks-Kavanaugh Foundation, the Broderick Fund for Phytocannabinoid Research at MIT, the James and Pat Poitras Research Fund, The Simons Foundation and The Stanley Center for Psychiatric Research at the Broad Institute.

The pursuit of reward

View the interactive version of this story in our Spring 2021 issue of BrainScan.

The brain circuits that influence our decisions, cognitive functions, and ultimately, our actions are intimately connected with the circuits that give rise to our motivations. By exploring these relationships, scientists at McGovern are seeking knowledge that might suggest new strategies for changing our habits or treating motivation-disrupting conditions such as depression and addiction.

Risky decisions

MIT Institute Professor Ann Graybiel. Photo: Justin Knight

In Ann Graybiel’s lab, researchers have been examining how the brain makes choices that carry both positive and negative consequences — deciding to take on a higher-paying but more demanding job, for example. Psychologists call these dilemmas approach-avoidance conflicts, and resolving them not only requires weighing the good versus the bad, but also motivation to engage with the decision.

Emily Hueske, a research scientist in the Graybiel lab, explains that everyone has their own risk tolerance when it comes to such decisions, and certain psychiatric conditions, including depression and anxiety disorders, can shift the tipping point at which a person chooses to “approach” or “avoid.”

Studies have shown that neurons in the striatum (see image below), a region deep in the brain involved in both motivation and movement, activate as we grapple with these decisions. Graybiel traced this activity even further, to tiny compartments within the striatum called striosomes.

(She discovered striosomes many years ago and has been studying their function for decades.)

A motivational switch

In 2015, Graybiel’s team manipulated striosome signaling within genetically engineered mice and changed the way animals behave in approach-avoidance conflict situations. Taking cues from an assessment used to evaluate approach-avoidance behavior in patients, they presented mice with opportunities to obtain chocolate while experiencing unwelcome exposure in a brightly lit area.

Experimentally activating neurons in striosomes had a dramatic effect, causing mice to venture into brightly lit areas that they would normally avoid. With striosomal circuits switched on, “this animal all of a sudden is like a different creature,” Graybiel says.

Two years later, they found that chronic stress and other factors can also disrupt this signaling and change the choices animals make.

An image of the mouse striatum showing clusters of striosomes (red and yellow). Image: Graybiel lab

Age of ennui

This November, Alexander Friedman, who worked as a research scientist in the Graybiel lab, and Hueske reported in Cell that they found an age-related decline in motivation-modulated learning in mice and rats. Neurons within striosomes became more active than the cells that surround them as animals learned to assign positive and negative values to potential choices. And older mice were less engaged than their younger counterparts in the type of learning required to make these cost-benefit analyses. A similar lack of motivation was observed in a mouse model of Huntington’s disease, a neurodegenerative disorder that is often associated with mood disturbances in patients.

“This coincides with our previous findings that striosomes are critically important for decisions that involve a conflict,” says Friedman, who is now an assistant professor at the University of Texas at El Paso.

Graybiel’s team is continuing to investigate these uniquely positioned compartments in the brain, expecting to shed light on the mechanisms that underlie both learning and motivation.

“There’s no learning without motivation, and in fact, motivation can be influenced by learning,” Hueske says. “The more you learn, the more excited you might be to engage in the task. So the two are intertwined.”

The aging brain

Researchers in John Gabrieli’s lab are also seeking to understand the circuits that link motivation to learning, and recently, his team reported that they, too, had found an age-related decline in motivation-modulated learning.

Studies in young adults have shown that memory improves when the brain circuits that process motivation and memory interact. Gabrieli and neurologist Maiya Geddes, who worked in Gabrieli’s lab as a postdoctoral fellow, wondered whether this holds true in older adults, particularly as memory declines.

To find out, the team recruited 40 people to participate in a brain imaging study. About half of the participants were between the ages of 18 and 30, while the others were between the ages of 49 and 84. While inside an fMRI scanner, each participant was asked to commit certain words to memory and told their success would determine how much money they received for participating in the experiment.

Diminished drive

Younger adults show greater activation in the reward-related regions of the brain during incentivized memory tasks compared to older adults. Image: Maiya Geddes

Not surprisingly, when participants were asked 24 hours later to recall the words, the younger group performed better overall than the older group. In young people, incentivized memory tasks triggered activity in parts of the brain involved in both memory and motivation. But in older adults, while these two parts of the brain could be activated independently, they did not seem to be communicating with one another.

“It seemed that the older adults, at least in terms of their brain response, did care about the kind of incentives that we were offering,” says Geddes, who is now an assistant professor at McGill University. “But for whatever reason, that wasn’t allowing them to benefit in terms of improved memory performance.”

Since the study indicates the brain still can anticipate potential rewards, Geddes is now exploring whether other sources of motivation, such as social rewards, might more effectively increase healthful decisions and behaviors in older adults.

Circuit control

Understanding how the brain generates and responds to motivation is not only important for improving learning strategies. Lifestyle choices such as exercise and social engagement can help people preserve cognitive function and improve their quality of life as they age, and Gabrieli says activating the right motivational circuits could help encourage people to implement healthy changes.

By pinpointing these motivational circuits in mice, Graybiel hopes that her research will lead to better treatment strategies for people struggling with motivational challenges, including Parkinson’s disease. Her team is now exploring whether striosomes serve as part of a value-sensitive switch, linking our intentions to dopamine-containing neurons in the midbrain that can modulate our actions.

“Perhaps this motivation is critical for the conflict resolution, and striosomes combine two worlds, dopaminergic motivation and cortical knowledge, resulting in motivation to learn,” Friedman says.

“Now we know that these challenges have a biological basis, and that there are neural circuits that can promote or reduce our feeling of motivational energy,” explains Graybiel. “This realization in itself is a major step toward learning how we can control these circuits both behaviorally and by highly selective therapeutic targeting.”

Powered by viruses

View the interactive version of this story in our Winter 2021 issue of Brain Scan.

Viruses are notoriously adept invaders. The efficiency with which these unseen threats infiltrate tissues, evade immune systems, and occupy the cells of their hosts can be alarming — but it’s exactly why most McGovern neuroscientists keep a stash of viruses in the freezer.

In the hands of neuroscientists, viruses become vital tools for delivering cargo to cells.

With a bit of genetic manipulation, they can instruct neurons to produce proteins that illuminate complex circuitry, report on activity, or place certain cells under scientists’ control. They can even deliver therapies designed to correct genetic defects in patients.

“We rely on the virus to deliver whatever we want,” says McGovern Investigator Guoping Feng. “This is one of the most important technologies in neuroscience.”

Tracing connections

In Ian Wickersham’s lab, researchers are adapting a virus that, in its natural form, is devastating to the mammalian nervous system. Once it gains access to a neuron, the rabies virus spreads to connected cells, killing them within weeks. “That makes it a very dangerous pathogen, but also a very powerful tool for neuroscience,” says Wickersham, a Principal Research Scientist at the Institute.

Taking advantage of its pernicious spread, neuroscientists use a modified version of the rabies virus to introduce a fluorescent protein to infected cells and visualize their connections (above). As a graduate student in Edward Callaway’s lab at the Salk Institute for Biological Studies, Wickersham figured out how to limit the virus’s passage through the nervous system, allowing it to access cells that are directly connected to the neuron it initially infects, but go no further. Rabies virus travels across synapses in the opposite direction of neuronal signals, so researchers can deliver it to a single cell or set of cells, then see exactly where those cells’ inputs are coming from.

Labs around the world use Wickersham’s modified rabies virus to trace neuronal anatomy in the brains of mice. While his team tinkers to make the virus even more powerful, his collaborators have deployed it to map a variety of essential connections, offering clues into how the brain controls movement, detects odors, and retrieves memories.

With the newest tracing tool from the Wickersham lab, moving from anatomical studies to experiments that reveal circuit function is seamless, because the lab has further modified their virus so that it cannot kill cells. Researchers can label connected cells, then proceed to monitor their signals or manipulate their activity in the same animals.

Researchers usually conduct these experiments in genetically modified mice to control the subset of cells that activate the tracing system. It’s the same approach used to restrict most virally-delivered tools to specific neurons, which is crucial, Feng says. When introducing a fluorescent protein for imaging, for example, “we don’t want the gene we deliver to be activated everywhere, otherwise the whole brain will be lighting up,” he says.

Selective targets

In Feng’s lab, research scientist Martin Wienisch is working to make it easier to control this aspect of delivery. Rather than relying on the genetic makeup of an entire animal to determine where a virally-transported gene is switched on, instructions can be programmed directly into the virus, borrowing regulatory sequences that cells already know how to interpret.

Wienisch is scouring the genomes of individual neurons to identify short segments of regulatory DNA called enhancers. He’s focused on those that selectively activate gene expression in just one of hundreds of different neuron types, particularly in animal models that are not very amenable to genetic engineering. “In the real brain, many elements interact to drive cell specific expression. But amazingly sometimes a single enhancer is all we need to get the same effect,” he says.

Researchers are already using enhancers to confine viral tools to select groups of cells, but Wienisch, who is collaborating with Fenna Krienen in Steve McCarroll’s lab at Harvard University, aims to create a comprehensive library. The enhancers they identify will be paired with a variety of genetically-encoded tools and packaged into adeno-associated viruses (AAV), the most widely used vectors in neuroscience. The Feng lab plans to use these tools to better understand the striatum, a part of the primate brain involved in motivation and behavioral choices. “Ideally, we would have a set of AAVs with enhancers that would give us selective access to all the different cell types in the striatum,” Wienisch says.

Enhancers will also be useful for delivering potential gene therapies to patients, Wienisch says. For many years, the Feng lab has been studying how a missing copy of a gene called Shank3 impairs neurons’ ability to communicate, leading to autism and intellectual disability. Now, they are investigating whether they can overcome these deficits by delivering a functional copy of Shank3 to the brain cells that need it. Widespread activation of the therapeutic gene might do more harm than good, but incorporating the right enhancer could ensure it is delivered to the appropriate cells at the right dose, Wienisch says.

Like most gene therapies in development, the therapeutic Shank3, which is currently being tested in animal models, is packaged into an AAV. AAVs safely and efficiently infect human cells, and by selecting the right type, therapies can be directed to specific cells. But AAVs are small viruses, capable of carrying only small genes. Xian Gao, a postdoctoral researcher in the Feng lab, has pared Shank3 down to its most essential components, creating a “minigene” that fits inside the virus. Some cargo, however, cannot be shrunk to size: therapies that aim to correct mutations using the CRISPR gene editing system, for example, often exceed the carrying capacity of an AAV.

Expanding options

“There’s been a lot of really phenomenal advances in our gene editing toolkit,” says Victoria Madigan, a postdoctoral researcher in McGovern Investigator Feng Zhang’s lab, where researchers are developing enzymes to more precisely modify DNA. “One of the main limitations of employing these enzymes clinically has been their delivery.”

To open up new options for gene therapy, Zhang and Madigan are working with a group of viruses called densoviruses. Densoviruses and AAVs belong to the same family, but about 50 percent more DNA can be packed inside the outer shell of some densoviruses.
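To make the size advantage concrete, here is a minimal sketch of the packaging arithmetic. The roughly 4.7-kilobase AAV limit is a commonly cited figure; the densovirus capacity below simply applies the “about 50 percent more” estimate above, and the construct sizes are rough illustrative numbers, not measurements from the Zhang lab.

```python
# Illustrative packaging arithmetic for viral vectors. The ~4.7 kb AAV
# limit is a commonly cited figure; the densovirus capacity simply
# applies the "about 50 percent more" estimate rather than a measured
# value for any particular densovirus.
AAV_CAPACITY_KB = 4.7
DENSOVIRUS_CAPACITY_KB = AAV_CAPACITY_KB * 1.5  # roughly 7 kb

def fits(payload_kb: float, capacity_kb: float) -> bool:
    """Return True if a construct of the given size fits in the vector."""
    return payload_kb <= capacity_kb

# A hypothetical gene editing construct: coding sequence plus promoter,
# guide cassette, and polyA signal (rough sizes, purely for illustration).
construct_kb = 4.2 + 0.6 + 0.4 + 0.2  # 5.4 kb total

print(fits(construct_kb, AAV_CAPACITY_KB))         # too big for AAV
print(fits(construct_kb, DENSOVIRUS_CAPACITY_KB))  # fits the larger shell
```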

A molecular model of Galleria mellonella densovirus. Image: Victoria Madigan / Zhang Lab

They are an esoteric group of viruses, Madigan says, infecting only insects and crustaceans and perhaps best known for certain members’ ability to devastate shrimp farms. While densoviruses haven’t received a lot of attention from scientists, their similarities to AAV have given the team clues about how to alter their outer capsids to enable them to enter human cells, and even direct them to particular cell types. The fact that they don’t naturally infect people also makes densoviruses promising candidates for clinical use, Madigan says, because patients’ immune systems are unlikely to be primed to reject them. AAV infections, in contrast, are so common that patients are often excluded from clinical trials for AAV-based therapies due to the presence of neutralizing antibodies against the vector.

Ultimately, densoviruses could enable major advances in gene therapy, making it possible to safely deliver sophisticated gene editing systems to patients’ cells, Madigan says — and that’s good reason for scientists to continue exploring the vast diversity in the viral world. “There’s something to be said for looking into viruses that are understudied as new tools,” she says. “There’s a lot of interesting stuff out there — a lot of diversity and thousands of years of evolution.”

Identifying the structure and function of a brain hub

Our ability to pay attention, plan, and troubleshoot involves cognitive processing by the brain’s prefrontal cortex. The balance of activity between excitatory and inhibitory neurons in the cortex, based on local neural circuits and distant inputs, is key to these cognitive functions.

A recent study from the McGovern Institute shows that excitatory inputs from the thalamus activate a local inhibitory circuit in the prefrontal cortex, revealing new insights into how these cognitive circuits may be controlled.

“For the field, systematic identification of these circuits is crucial in understanding behavioral flexibility and interpreting psychiatric disorders in terms of dysfunction of specific microcircuits,” says postdoctoral associate Arghya Mukherjee, lead author on the report.

Hub of activity

The thalamus is located in the center of the brain and is considered a cerebral hub based on its inputs from a diverse array of brain regions and outputs to the striatum, hippocampus, and cerebral cortex. More than 60 thalamic nuclei (cellular regions) have been defined and are broadly divided into “sensory” or “higher-order” thalamic regions based on whether they relay primary sensory inputs or instead have inputs exclusively from the cerebrum.

Considering the fundamental distinction between the input connections of the sensory and higher-order thalamus, Mukherjee, a researcher in the lab of Michael Halassa, the Class of 1958 Career Development Professor in MIT’s Department of Brain and Cognitive Sciences, decided to explore whether there are similarly profound distinctions in their outputs to the cerebral cortex.

He addressed this question in mice by directly comparing the outputs of the medial geniculate body (MGB), a sensory thalamic region, and the mediodorsal thalamus (MD), a higher-order thalamic region. The researchers selected these two regions because the relatively accessible MGB nucleus relays auditory signals to cerebral cortical regions that process sound, and the MD interconnects regions of the prefrontal cortex.

Their study, now available as a preprint in eLife, describes key functional and anatomical differences between these two thalamic circuits. These findings build on Halassa’s previous work showing that outputs from higher-order thalamic nuclei play a central role in cognitive processing.

A side-by-side comparison of the two microcircuits: (Left) MD receives its primary inputs (black) from the frontal cortex and sends back inhibition-dominant outputs to multiple layers of the prefrontal cortex. (Right) MGB receives its primary input (black) from the auditory midbrain and acts as a ‘relay’ by sending excitation-dominant outputs specifically to layer 4 of the auditory cortex. Image: Arghya Mukherjee

Circuit analysis

Using cutting-edge stimulation and recording methods, the researchers found that neurons in the prefrontal and auditory cortices have dramatically different responses to activation of their respective MD and MGB inputs.

The researchers stimulated the MD-prefrontal and MGB-auditory cortex circuits using optogenetic technology and recorded the response to this stimulation with custom multi-electrode scaffolds that hold independently movable micro-drives for recording hundreds of neurons in the cortex. When MGB neurons were stimulated with light, there was strong activation of neurons in the auditory cortex. By contrast, MD stimulation caused a suppression of neuron firing in the prefrontal cortex and concurrent activation of local inhibitory interneurons. The separate activation of the two thalamocortical circuits had dramatically different impacts on cortical output, with the sensory thalamus seeming to promote feed-forward activity and the higher-order thalamus stimulating inhibitory microcircuits within the cortical target region.
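One simple way to quantify this kind of net excitation versus net suppression (a generic sketch, not the analysis pipeline used in the study) is a per-neuron modulation index comparing firing rates before and during stimulation. The firing rates below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def modulation_index(baseline_hz, stim_hz):
    """(stim - baseline) / (stim + baseline): positive = activated,
    negative = suppressed by the thalamic input."""
    return (stim_hz - baseline_hz) / (stim_hz + baseline_hz)

# Synthetic firing rates for two hypothetical cortical populations
# (invented numbers, not data from the study).
baseline = rng.uniform(2, 10, size=100)                   # pre-stimulation (Hz)
auditory_stim = baseline * rng.uniform(1.5, 3.0, 100)     # MGB drive: excitation
prefrontal_stim = baseline * rng.uniform(0.2, 0.8, 100)   # MD drive: suppression

print(np.mean(modulation_index(baseline, auditory_stim)) > 0)    # net activation
print(np.mean(modulation_index(baseline, prefrontal_stim)) < 0)  # net suppression
```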

“The textbook view of the thalamus is an excitatory cortical input, and the fact that turning on a thalamic circuit leads to a net cortical inhibition was quite striking and not something you would have expected based on reading the literature,” says Halassa, who is also an associate investigator at the McGovern Institute. “Arghya and his colleagues did an amazing job following that up with detailed anatomy to explain why this effect might be so.”

Anatomical differences

Using a system called GFP (green fluorescent protein) reconstitution across synaptic partners (mGRASP), the researchers demonstrated that MD and MGB projections target different types of cortical neurons, offering a possible explanation for their differing effects on cortical activity.

With mGRASP, the presynaptic terminal (in this case, from MD or MGB) expresses one part of the fluorescent protein and the postsynaptic neuron (in this case, in the prefrontal or auditory cortex) expresses the other part; neither part fluoresces on its own. Only when there is a close synaptic connection do the two parts of GFP come together and fluoresce. These experiments showed that MD neurons synapse more frequently onto inhibitory interneurons in the prefrontal cortex, whereas MGB neurons form larger synapses onto excitatory neurons, consistent with only MGB being a strong driver of cortical activity.

Using fluorescent viral vectors that can cross synapses of interconnected neurons, a technology developed by McGovern principal research scientist Ian Wickersham, the researchers were also able to map the inputs to the MD and MGB thalamic regions. Viruses, like rabies, are well-suited for tracing neural connections because they have evolved to spread from neuron to neuron through synaptic junctions.

The inputs to the targeted higher-order and sensory thalamocortical neurons identified across the brain appeared to arise respectively from forebrain and midbrain sensory regions, as expected. The MGB inputs were consistent with a sensory relay function, arising primarily from the auditory input pathway. By contrast, MD inputs arose from a wide array of cerebral cortical regions and basal ganglia circuits, consistent with MD receiving contextual and motor command information.

Direct comparisons

By directly comparing these microcircuits, the Halassa lab has revealed important clues about the function and anatomy of these sensory and higher-order brain connections. It is only through a systematic understanding of these circuits that we can begin to interpret how their dysfunction may contribute to psychiatric disorders like schizophrenia.

It is this basic scientific inquiry that often fuels their research, says Halassa. “Excitement about science is part of the glue that holds us all together.”

Study helps explain why motivation to learn declines with age

As people age, they often lose their motivation to learn new things or engage in everyday activities. In a study of mice, MIT neuroscientists have now identified a brain circuit that is critical for maintaining this kind of motivation.

This circuit is particularly important for learning to make decisions that require evaluating the cost and reward that come with a particular action. The researchers showed that they could boost older mice’s motivation to engage in this type of learning by reactivating this circuit, and they could also decrease motivation by suppressing the circuit.

“As we age, it’s harder to have a get-up-and-go attitude toward things,” says Ann Graybiel, an Institute Professor at MIT and member of the McGovern Institute for Brain Research. “This get-up-and-go, or engagement, is important for our social well-being and for learning — it’s tough to learn if you aren’t attending and engaged.”

Graybiel is the senior author of the study, which appears today in Cell. The paper’s lead authors are Alexander Friedman, a former MIT research scientist who is now an assistant professor at the University of Texas at El Paso, and Emily Hueske, an MIT research scientist.

Evaluating cost and benefit

The striatum is part of the basal ganglia — a collection of brain centers linked to habit formation, control of voluntary movement, emotion, and addiction. For several decades, Graybiel’s lab has been studying clusters of cells called striosomes, which are distributed throughout the striatum. Graybiel discovered striosomes many years ago, but their function had remained mysterious, in part because they are so small and deep within the brain that it is difficult to image them with functional magnetic resonance imaging (fMRI).

In recent years, Friedman, Graybiel, and colleagues including MIT research fellow Ken-ichi Amemori have discovered that striosomes play an important role in a type of decision-making known as approach-avoidance conflict. These decisions involve choosing whether to take the good with the bad — or to avoid both — when given options that have both positive and negative elements. An example of this kind of decision is having to choose whether to take a job that pays more but forces a move away from family and friends. Such decisions often provoke great anxiety.

In a related study, Graybiel’s lab found that striosomes connect to cells of the substantia nigra, one of the brain’s major dopamine-producing centers. These studies led the researchers to hypothesize that striosomes may be acting as a gatekeeper that absorbs sensory and emotional information coming from the cortex and integrates it to produce a decision on how to act. These actions can then be invigorated by the dopamine-producing cells.

The researchers later discovered that chronic stress has a major impact on this circuit and on this kind of emotional decision-making. In a 2017 study performed in rats and mice, they showed that stressed animals were far more likely to choose high-risk, high-payoff options, but that they could block this effect by manipulating the circuit.

In the new Cell study, the researchers set out to investigate what happens in striosomes as mice learn how to make these kinds of decisions. To do that, they measured and analyzed the activity of striosomes as mice learned to choose between positive and negative outcomes.

During the experiments, the mice heard two different tones, one of which was accompanied by a reward (sugar water), and another that was paired with a mildly aversive stimulus (bright light). The mice gradually learned that if they licked a spout more when they heard the first tone, they would get more of the sugar water, and if they licked less during the second, the light would not be as bright.

Learning to perform this kind of task requires assigning value to each cost and each reward. The researchers found that as the mice learned the task, striosomes showed higher activity than other parts of the striatum, and that this activity correlated with the mice’s behavioral responses to both of the tones. This suggests that striosomes could be critical for assigning subjective value to a particular outcome.
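A standard way to model this kind of value assignment, offered here only as an illustration and not as the paper’s analysis, is a Rescorla-Wagner update that nudges each cue’s learned value toward the outcome it predicts:

```python
def update_value(v, outcome, alpha=0.1):
    """One Rescorla-Wagner step: move the cue's value toward the outcome."""
    return v + alpha * (outcome - v)

# Two cues: one predicts a reward (+1, sugar water), the other a cost
# (-1, bright light). Values start neutral and are learned over trials.
v_reward_tone, v_cost_tone = 0.0, 0.0
for _ in range(100):
    v_reward_tone = update_value(v_reward_tone, outcome=+1.0)
    v_cost_tone = update_value(v_cost_tone, outcome=-1.0)

print(round(v_reward_tone, 2))  # approaches +1
print(round(v_cost_tone, 2))    # approaches -1
```

In this toy model, the learning rate `alpha` plays the role that striosomal signaling is hypothesized to support: scaling how strongly each outcome updates the animal’s subjective valuation.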

“In order to survive, in order to do whatever you are doing, you constantly need to be able to learn. You need to learn what is good for you, and what is bad for you,” Friedman says.

“A person, or in this case a mouse, may value a reward so highly that the risk of experiencing a possible cost is overwhelmed, while another may wish to avoid the cost to the exclusion of all rewards. And these may result in reward-driven learning in some and cost-driven learning in others,” Hueske says.

The researchers found that inhibitory neurons that relay signals from the prefrontal cortex help striosomes to enhance their signal-to-noise ratio, which helps to generate the strong signals that are seen when the mice evaluate a high-cost or high-reward option.

Loss of motivation

Next, the researchers found that older mice (between 13 and 21 months, roughly equivalent to people in their 60s and older) became less engaged in learning this type of cost-benefit analysis. At the same time, their striosomal activity declined compared to that of younger mice. The researchers found a similar loss of motivation in a mouse model of Huntington’s disease, a neurodegenerative disorder that affects the striatum and its striosomes.

When the researchers used genetically targeted drugs to boost activity in the striosomes, they found that the mice became more engaged in performance of the task. Conversely, suppressing striosomal activity led to disengagement.

In addition to normal age-related decline, many mental health disorders can skew the ability to evaluate the costs and rewards of an action, from anxiety and depression to conditions such as PTSD. For example, a depressed person may undervalue potentially rewarding experiences, while someone suffering from addiction may overvalue drugs but undervalue things like their job or their family.

The researchers are now working on possible drug treatments that could stimulate this circuit, and they suggest that training patients to enhance activity in this circuit through biofeedback could offer another potential way to improve their cost-benefit evaluations.

“If you could pinpoint a mechanism which is underlying the subjective evaluation of reward and cost, and use a modern technique that could manipulate it, either psychiatrically or with biofeedback, patients may be able to activate their circuits correctly,” Friedman says.

The research was funded by the CHDI Foundation, the Saks Kavanaugh Foundation, the National Institutes of Health, the Nancy Lurie Marks Family Foundation, the Bachmann-Strauss Dystonia and Parkinson’s Foundation, the William N. and Bernice E. Bumpus Foundation, the Simons Center for the Social Brain, the Kristin R. Pressman and Jessica J. Pourian ’13 Fund, Michael Stiefel, and Robert Buxton.

Researchers ID crucial brain pathway involved in object recognition

MIT researchers have identified a brain pathway critical in enabling primates to effortlessly identify objects in their field of vision. The findings enrich existing models of the neural circuitry involved in visual perception and help to further unravel the computational code for solving object recognition in the primate brain.

Led by Kohitij Kar, a postdoctoral associate at the McGovern Institute for Brain Research and Department of Brain and Cognitive Sciences, the study looked at an area called the ventrolateral prefrontal cortex (vlPFC), which sends feedback signals to the inferior temporal (IT) cortex via a network of neurons. The main goal of the study was to test whether the back-and-forth information processing of this circuitry (that is, of this recurrent neural network) is essential to rapid object identification in primates.

The current study, published in Neuron and available today via open access, is a follow-up to prior work published by Kar and James DiCarlo, Peter de Florez Professor of Neuroscience, the head of MIT’s Department of Brain and Cognitive Sciences, and an investigator in the McGovern Institute for Brain Research and the Center for Brains, Minds, and Machines.

Monkey versus machine

In 2019, Kar, DiCarlo, and colleagues reported that primates must use some recurrent circuits during rapid object recognition. Monkey subjects in that study were able to identify objects more accurately than engineered “feedforward” computational models, called deep convolutional neural networks, that lacked recurrent circuitry.

Interestingly, the specific images for which the models performed poorly compared to monkeys also took longer to be solved in the monkeys’ brains, suggesting that the additional time might be due to recurrent processing in the brain. Based on the 2019 study alone, however, it remained unclear exactly which recurrent circuits were responsible for the delayed information boost in the IT cortex. That’s where the current study picks up.
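The image-by-image comparison behind this reasoning can be sketched as follows, using synthetic accuracies and latencies invented for illustration (not the study’s data): images where the feedforward model lags the monkeys are flagged as “challenge images,” and their IT solution times are compared with the rest.

```python
import numpy as np

rng = np.random.default_rng(1)
n_images = 200

# Synthetic per-image accuracies and IT "solution times" (ms), invented
# to illustrate the comparison; not the study's actual measurements.
monkey_acc = rng.uniform(0.85, 1.0, n_images)
model_acc = monkey_acc - rng.uniform(0.0, 0.5, n_images)  # model sometimes lags
it_latency = 100 + 200 * (monkey_acc - model_acc) + rng.normal(0, 10, n_images)

# "Challenge images": the feedforward model trails the monkeys by a wide margin.
challenge = (monkey_acc - model_acc) > 0.3

# The 2019 observation: challenge images take longer to emerge in IT.
print(it_latency[challenge].mean() > it_latency[~challenge].mean())
```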

“In this new study, we wanted to find out: Where are these recurrent signals in IT coming from?” Kar said. “Which areas reciprocally connected to IT are functionally the most critical part of this recurrent circuit?”

To determine this, researchers used a pharmacological agent to temporarily block the activity in parts of the vlPFC in macaques while they engaged in an object discrimination task. During these tasks, monkeys viewed images that contained an object, such as an apple, a car, or a dog; then, researchers used eye tracking to determine if the monkeys could correctly indicate what object they had previously viewed when given two object choices.

“We observed that if you use pharmacological agents to partially inactivate the vlPFC, then both the monkeys’ behavior and IT cortex activity deteriorates but more so for certain specific images. These images were the same ones we identified in the previous study — ones that were poorly solved by ‘feedforward’ models and took longer to be solved in the monkey’s IT cortex,” said Kar.

MIT researchers used an object recognition task (e.g., recognizing that there is a “bird” and not an “elephant” in the shown image) to study the role of feedback from the primate ventrolateral prefrontal cortex (vlPFC) to the inferior temporal (IT) cortex via a network of neurons. In primate brains, temporarily blocking the vlPFC (green shaded area) disrupts the recurrent neural network comprising vlPFC and IT, inducing specific deficits and implicating its role in rapid object identification. Image: Kohitij Kar, brain image adapted from SciDraw

“These results provide evidence that this recurrently connected network is critical for rapid object recognition, the behavior we’re studying. Now, we have a better understanding of how the full circuit is laid out, and what are the key underlying neural components of this behavior.”

The full study, entitled “Fast recurrent processing via ventrolateral prefrontal cortex is needed by the primate ventral stream for robust core visual object recognition,” will run in print January 6, 2021.

“This study demonstrates the importance of prefrontal cortical circuits in automatically boosting object recognition performance in a very particular way,” DiCarlo said. “These results were obtained in nonhuman primates and thus are highly likely to also be relevant to human vision.”

The present study makes clear the integral role of the recurrent connections between the vlPFC and the primate ventral visual cortex during rapid object recognition. The results will be helpful to researchers designing future studies that aim to develop accurate models of the brain, and to researchers who seek to develop more human-like artificial intelligence.

Tool developed in Graybiel lab reveals new clues about Parkinson’s disease

As the brain processes information, electrical charges zip through its circuits and neurotransmitters pass molecular messages from cell to cell. Both forms of communication are vital, but because they are usually studied separately, little is known about how they work together to control our actions, regulate mood, and perform the other functions of a healthy brain.

Neuroscientists in Ann Graybiel’s laboratory at MIT’s McGovern Institute are taking a closer look at the relationship between these electrical and chemical signals. “Considering electrical signals side by side with chemical signals is really important to understand how the brain works,” says Helen Schwerdt, a postdoctoral researcher in Graybiel’s lab. Understanding that relationship is also crucial for developing better ways to diagnose and treat nervous system disorders and mental illness, she says, noting that the drugs used to treat these conditions typically aim to modulate the brain’s chemical signaling, yet studies of brain activity are more likely to focus on electrical signals, which are easier to measure.

Schwerdt and colleagues in Graybiel’s lab have developed new tools so that chemical and electrical signals can, for the first time, be measured simultaneously in the brains of primates. In a study published September 25, 2020, in Science Advances, they used those tools to reveal an unexpectedly complex relationship between two types of signals that are disrupted in patients with Parkinson’s disease—dopamine signaling and coordinated waves of electrical activity known as beta-band oscillations.

Complicated relationship

Graybiel’s team focused its attention on beta-band activity and dopamine signaling because studies of patients with Parkinson’s disease had suggested a straightforward inverse relationship between the two. The tremors, slowness of movement, and other symptoms associated with the disease develop and progress as the brain’s production of the neurotransmitter dopamine declines; at the same time, beta-band oscillations surge to abnormal levels. Beta-band oscillations are normally observed in parts of the brain that control movement when a person is paying attention or planning to move. It is not clear what they do or why they are disrupted in patients with Parkinson’s disease. But because patients’ symptoms tend to be worst when beta activity is high, and because beta activity can be measured in real time with sensors placed on the scalp or with an implanted deep-brain stimulation device, researchers have been hopeful that it might be useful for monitoring the disease’s progression and patients’ response to treatment. In fact, clinical trials are already underway to explore the effectiveness of modulating deep-brain stimulation treatment based on beta activity.

When Schwerdt and colleagues examined these two types of signals in the brains of rhesus macaques, they discovered that the relationship between beta activity and dopamine is more complicated than previously thought.

Their new tools allowed them to simultaneously monitor both signals with extraordinary precision, targeting specific parts of the striatum—a region deep within the brain involved in controlling movement, where dopamine is particularly abundant—and taking measurements on the millisecond time scale to capture neurons’ rapid-fire communications.

They took these measurements as the monkeys performed a simple task, directing their gaze in a particular direction in anticipation of a reward. This allowed the researchers to track chemical and electrical signaling during the active, motivated movement of the animals’ eyes. They found that beta activity did increase as dopamine signaling declined—but only in certain parts of the striatum and during certain tasks. The reward value of a task, an animal’s past experiences, and the particular movement the animal performed all impacted the relationship between the two types of signals.
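As a rough illustration of how such a relationship can be quantified, the sketch below band-pass filters a synthetic local field potential to the beta band (13-30 Hz), extracts its amplitude envelope, and correlates it with a simulated dopamine trace. The signals are invented, and the analysis is generic rather than the study’s own pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000  # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)

# Synthetic stand-ins for the two signals: a 20 Hz (beta-band) LFP whose
# amplitude waxes as a simulated dopamine trace wanes. Invented data,
# only to show how an inverse relationship could be measured.
dopamine = 1 + 0.5 * np.sin(2 * np.pi * 0.2 * t)
lfp = (2 - dopamine) * np.sin(2 * np.pi * 20 * t) + 0.1 * rng.normal(size=t.size)

# Band-pass 13-30 Hz, then take the analytic amplitude as beta power.
b, a = butter(4, [13, 30], btype="band", fs=fs)
beta_power = np.abs(hilbert(filtfilt(b, a, lfp)))

r = np.corrcoef(dopamine, beta_power)[0, 1]
print(r < 0)  # inverse relationship in this synthetic example
```

The study’s point is that such a correlation, computed separately per striatal subregion and per task condition, does not always come out negative.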

Multi-modal systems allow subsecond recording of chemical and electrical neural signals in the form of dopamine molecular concentrations and beta-band local field potentials (beta LFPs), respectively. Online measurements of dopamine and beta LFP (time-dependent traces displayed in box on right) were made in the primate striatum (caudate nucleus and putamen colored in green and purple, respectively, in the left brain image) as the animal was performing a task in which eye movements were made to cues displayed on the left (purple event marker line) and right (green event) of a screen in order to receive large or small amounts of food reward (red and blue events). Dopamine and beta LFP neural signals are centrally implicated in Parkinson’s disease and other brain disorders. Image: Helen Schwerdt

“What we expected is there in the overall view, but if we just look at a different level of resolution, all of a sudden the rules don’t hold,” says Graybiel, who is also an MIT Institute Professor. “It doesn’t destroy the likelihood that one would want to have a treatment related to this presumed opposite relationship, but it does say there’s something more here that we haven’t known about.”

The researchers say it’s important to investigate this more nuanced relationship between dopamine signaling and beta activity, and that understanding it more deeply might lead to better treatments for patients with Parkinson’s disease and related disorders. While they plan to continue to examine how the two types of signals relate to one another across different parts of the brain and under different behavioral conditions, they hope that other teams will also take advantage of the tools they have developed. “As these methods in neuroscience become more and more precise and dazzling in their power, we’re bound to discover new things,” says Graybiel.

This study was supported by the National Institute of Biomedical Imaging and Bioengineering, the National Institute of Neurological Disorders and Stroke, the Army Research Office, the Saks Kavanaugh Foundation, the National Science Foundation, Kristin R. Pressman and Jessica J. Pourian ’13 Fund, and Robert Buxton.

How general anesthesia reduces pain

General anesthetics suppress pain and render patients unconscious during surgery, but whether pain suppression is simply a side effect of the loss of consciousness has been unclear. Fan Wang and colleagues have now identified the circuits linked to pain suppression under anesthesia in mouse models, showing that this effect is separable from the unconscious state itself.

“Existing literature suggests that the brain may contain a switch that can turn off pain perception,” explains Fan Wang, a professor at Duke University and lead author of the study. “I had always wanted to find this switch, and it occurred to me that general anesthetics may activate this switch to produce analgesia.”

Wang, who will join the McGovern Institute in January 2021, set out to test this idea with her student, Thuy Hua, and postdoc, Bin Chen.

Pain suppressor

Loss of pain, or analgesia, is an important property of anesthetics that helps to make surgical and invasive medical procedures humane and bearable. In spite of their long use in the medical world, there is still very little understanding of how anesthetics work. It has generally been assumed that a side effect of loss of consciousness is analgesia, but several recent observations have brought this idea into question, and suggest that changes in consciousness might be separable from pain suppression.

A key clue that analgesia is separable from general anesthesia comes from the accounts of patients who regain consciousness during surgery. After surgery, these patients can recount conversations between staff or events that occurred in the operating room, despite not having felt any pain. In addition, some general anesthetics, such as ketamine, can be deployed at low concentrations for pain suppression without loss of consciousness.

Following up on these leads, Wang and colleagues set out to uncover which neural circuits might be involved in suppressing pain during exposure to general anesthetics. Using CANE, a procedure developed by Wang that can detect which neurons activate in response to an event, Wang discovered a new population of GABAergic neurons in the mouse central amygdala that are activated by general anesthetics.

These neurons become activated in response to different anesthetics, including ketamine, dexmedetomidine, and isoflurane. Using optogenetics to manipulate the activity state of these neurons, Wang and her lab found that changing their activity led to marked changes in behavioral responses to painful stimuli.

“The first time we used optogenetics to turn on these cells, a mouse that was in the middle of taking care of an injury simply stopped and started walking around with no sign of pain,” Wang explains.

Specifically, activating these cells blocked pain in multiple models and tests, whereas inhibiting them caused mice to respond aversively to even gentle touch, suggesting that these neurons are part of a newly uncovered central pain circuit.

The study has implications for both anesthesia and pain. It shows that general anesthetics have complex, multi-faceted effects and that the brain may contain a central pain suppression system.

“We want to figure out how diverse general anesthetics activate these neurons,” explains Wang. “That way we can find compounds that can specifically activate these pain-suppressing neurons without sedation. We’re now also testing whether placebo analgesia works by activating these same central neurons.”

The study also has implications for addiction as it may point to an alternative system for central pain suppression that could be a target of drugs that do not have the devastating side effects of opioids.