Explaining repetitive behavior linked to amphetamine use

Repetitive movements such as nail-biting and pacing are very often seen in humans and animals under the influence of habit-forming drugs. Studies at the McGovern Institute have found that these repetitive behaviors may be due to a breakdown in communication between neurons in the striatum – a deep brain region linked to habit and movement, among other functions.

The Graybiel lab has a long-standing interest in habit formation and the effects of addiction on brain circuits related to the striatum, a key part of the basal ganglia. The lab previously found remarkably strong correlations between gene expression levels in specific parts of the striatum and exposure to psychomotor stimulants such as amphetamine and cocaine. The longer the exposure to the stimulant, the more repetitive the behavior and the greater the change in brain circuits, and these findings held across animal models.

The lab has found that if they train animals to develop habits, they can completely block these repetitive behaviors using targeted inhibition or excitation of the relevant circuits. They could even block repetitive movement patterns in a mouse model of obsessive-compulsive disorder (OCD). These experiments mimicked situations in humans in which drugs or anxiety-inducing experiences can lead to habits and repetitive movement patterns, from nail-biting to much more dangerous habitual actions.

Ann Graybiel (right) at work in the lab with research scientist Jill Crittenden. Photo: Justin Knight

Why would these circuits exist in the brain if they so often produce “bad” habits and destructive behaviors, as seen in compulsive use of drugs such as opioids or even marijuana? One answer is that we have to be flexible and ready to switch our behavior if something dangerous occurs in the environment. Habits and addictions are, in a way, the extreme pushing of this flexible system in the other direction, toward the rigid and repetitive.

“One important clue is that for many of these habits and repetitive and addictive behaviors, the person isn’t even aware that they are doing the same thing again and again. And if they are not aware, they can’t control themselves and stop,” explains Ann Graybiel, an Institute Professor at MIT. “It is as though the ‘rational brain’ has great difficulty in controlling the ‘habit circuits’ of the brain.” Understanding loss of communication is a central theme in much of the Graybiel lab’s work.

Graybiel, who is also a founding member of the McGovern Institute, is now trying to understand the underlying circuits at the cellular level. The lab is examining the individual components of the striatal circuits linked to selecting actions and motivating movement, circuits that seem to be directly controlled by drugs of abuse.

In groundbreaking early work, Graybiel discovered that the striatum has distinct compartments, striosomes and matrix. These regions are spatially and functionally distinct and separately connect, through striatal projection neurons (SPNs), to motor-control centers or to neurons that release dopamine, a neurotransmitter linked to all drugs of abuse. It is in these components that Graybiel and colleagues have more recently found strong effects of drugs. Indeed, opposite changes in gene expression in striosome SPNs versus matrix SPNs raise the possibility that an imbalance in gene regulation leads to the abnormally inflexible behaviors caused by drug use.

“It was known that cholinergic interneurons tend to reside along the borders of the two striatal compartments, but whether this cell type mediates communication between the compartments was unknown,” explains first author Jill Crittenden, a research scientist in the Graybiel lab. “We wanted to know whether cholinergic signaling to the two compartments is disrupted by drugs that induce abnormally repetitive behaviors.”

Amphetamine drives gene transcription in striosomes. The top panel shows that striosomes (red) are distinct from matrix (green). Amphetamine treatment leads to markers of activation (the immediate early gene c-Fos, red in the two lower panels) in drug-treated animals (bottom panel), but not in controls (middle panel). Image: Jill Crittenden

It was known that cholinergic interneurons are activated by important environmental cues and promote flexible rather than repetitive behavior, but how this relates to their interactions with SPNs in the striatum was unclear. “Using high-resolution microscopy,” explains Crittenden, “we could see for the first time that cholinergic interneurons send many connections to both striosome and matrix SPNs, well-placed to coordinate signaling directly across the two striatal compartments that appear otherwise isolated.”

Using a technique known as optogenetics, the Graybiel group stimulated mouse cholinergic interneurons and monitored the effects on striatal SPNs in brain tissue. They found that stimulating the interneurons inhibited the ongoing signaling activity that was induced by current injection in both striosomal and matrix SPNs. However, in the brains of animals that had received high doses of amphetamine and were displaying repetitive behavior, stimulating the relevant interneurons failed to interrupt evoked activity in SPNs.

Using an inhibitor, the authors showed that these neural pathways depend on the nicotinic acetylcholine receptor: inhibiting this cell-surface receptor had an effect on intercommunication among striatal neurons similar to that of drug intoxication. Since breakdown of cholinergic interneuron signaling across the striosome and matrix compartments under drug intoxication may reduce behavioral flexibility and cue responsiveness, the work suggests one mechanism by which drugs of abuse hijack the brain's action-selection systems and drive pathological habit formation.

The Graybiel lab is excited that they can now alter these behaviors by manipulating very particular components of the habit circuits. Most recently, they discovered that they can even fully block the effects of stress by manipulating cellular components of these circuits. They now hope to dive deep into these circuits to work out how to control them.

“We hope that by pinpointing these circuit elements, which seem to have overlapping effects on habit formation, addiction, and stress, we can help to guide the development of better therapies for addiction,” explains Graybiel. “We hope to learn what drug use does to brain circuits in both the short term and the long term. This is an urgent need.”

CRISPR makes several Discovery of the Decade lists

As we reach milestones in time, it’s common to look back and review what we learned. A number of media outlets, including National Geographic, NPR, The Hill, Popular Mechanics, Smithsonian Magazine, Nature, Mental Floss, CNBC, and others, recognized the profound impact of genome editing, adding CRISPR to their discovery of the decade lists.

“In 2013, [CRISPR] was used for genome editing in a eukaryotic cell, forever altering the course of biotechnology and, ultimately our relationship with our DNA.”
— Popular Mechanics

It’s rare for a molecular system to become a household name, but in less than a decade, CRISPR has done just that. McGovern Investigator Feng Zhang played a key role in transforming CRISPR, an immune system found originally in prokaryotic – bacterial and archaeal – cells, into a broadly customizable toolbox for genomic manipulation in eukaryotic (animal and plant) cells. CRISPR allows scientists to make changes to genomes quickly and easily; it has revolutionized the biomedical sciences and has major implications for the control of infectious disease, agriculture, and the treatment of genetic disorders.

Nancy Kanwisher to receive George A. Miller Prize in Cognitive Neuroscience

Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT, has been named this year’s winner of the George A. Miller Prize in Cognitive Neuroscience. The award, given annually by the Cognitive Neuroscience Society (CNS), recognizes individuals “whose distinguished research is at the cutting-edge of their discipline with realized or future potential, to revolutionize cognitive neuroscience.”

Kanwisher studies the functional organization of the human mind and, over the last 20 years, her lab has played a central role in the identification of several dozen regions of the cortex in humans that are engaged in particular components of perception and cognition. She is perhaps best known for identifying brain regions specialized for recognizing faces.

Kanwisher will deliver her prize lecture, “Functional imaging of the human brain: A window into the architecture of the mind” at the 2020 CNS annual meeting in Boston this March.

Brain biomarkers predict mood and attention symptoms

Mood and attentional disorders among teens are an increasing concern for parents, peers, and society. A recent Pew Research Center survey found that conditions such as depression and anxiety are the number one concern young students have about their friends, ranking above drugs or bullying.

“We’re seeing an epidemic in teen anxiety and depression,” explains McGovern Research Affiliate Susan Whitfield-Gabrieli.

“Scientists are finding a huge increase in suicide ideation and attempts, something that hit home for me as a mother of teens. Emergency rooms in hospitals now have guards posted outside doors of these teenagers that attempted suicide—this is a pressing issue,” explains Whitfield-Gabrieli who is also director of the Northeastern University Biomedical Imaging Center and a member of the Poitras Center for Psychiatric Disorders Research.

Finding new methods for discovering early biomarkers for risk of psychiatric disorders would allow early interventions and avoid reaching points of crisis such as suicide ideation or attempts. In research published recently in JAMA Psychiatry, Whitfield-Gabrieli and colleagues found that signatures predicting future development of depression and attentional symptoms can be detected in children as young as seven years old.

Long-term view

While previous work had suggested that there may be biomarkers that predict development of mood and attentional disorders, identifying early biomarkers prior to an onset of illness requires following a cohort of pre-teens from a young age, and monitoring them across years. This effort to have a proactive, rather than reactive, approach to the development of symptoms associated with mental disorders is exactly the route Whitfield-Gabrieli and colleagues took.

“One of the exciting aspects of this study is that the cohort is not pre-selected for already having symptoms of psychiatric disorders themselves or even in their family,” explained Whitfield-Gabrieli. “It’s an unbiased cohort that we followed over time.”

McGovern research affiliate Susan Whitfield-Gabrieli has discovered early brain biomarkers linked to psychiatric disorders.

In some past studies, children were pre-selected based on, for example, a major depressive disorder diagnosis in their parents. But Whitfield-Gabrieli and her colleagues, Silvia Bunge from Berkeley and Laurie Cutting from Vanderbilt, recruited a range of children without such preconditions and examined them at age 7, then again 4 years later. The researchers examined resting state functional connectivity and compared it to scores on the Child Behavior Checklist (CBCL), allowing them to relate differences in the brain to a standardized assessment of behavior that can be linked to psychiatric disorders. The CBCL is used both in research and in the clinic and is highly predictive of disorders including ADHD, so changes in the brain could be related to changes in a widely used clinical scoring system.

“Over the four years, some people got worse, some got better, and some stayed the same according to the CBCL. We could relate this directly to differences in brain networks, and could identify at age 7 who would get worse,” explained Whitfield-Gabrieli.

Brain network changes

The authors analyzed differences in resting state network connectivity, the correlated activity of regions across the brain that rise and fall in activity level together, as visualized using fMRI. Reduced connectivity between these regions may reflect reduced “top-down” control of neural circuits. The dorsolateral prefrontal region is linked to executive function, external attention, and emotional control. Increased connectivity between this region and the medial prefrontal cortex is known to be present in attention deficit hyperactivity disorder (ADHD), while reduced connectivity with a different brain region, the subgenual anterior cingulate cortex (sgACC), is seen in major depressive disorder. The question remained whether these changes could be seen prior to the onset of diagnosable attentional or mood disorders.
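Conceptually, resting state functional connectivity of the kind described above is often quantified as the pairwise correlation between fMRI time series from different brain regions. The following sketch uses synthetic data, not anything from the study, to show the basic computation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ROI time series: 4 regions x 200 time points.
# Regions 0 and 1 share a common driving signal, so they "rise and fall
# in activity level together"; regions 2 and 3 are independent noise.
n_timepoints = 200
common = rng.standard_normal(n_timepoints)
ts = np.stack([
    common + 0.3 * rng.standard_normal(n_timepoints),
    common + 0.3 * rng.standard_normal(n_timepoints),
    rng.standard_normal(n_timepoints),
    rng.standard_normal(n_timepoints),
])

# Functional connectivity: pairwise Pearson correlation between regions.
fc = np.corrcoef(ts)

assert fc[0, 1] > 0.8        # coupled regions show strong connectivity
assert abs(fc[2, 3]) < 0.3   # independent regions show weak connectivity
```

In practice such correlation matrices are computed over many regions and subjects; this toy version only illustrates why coupled regions yield high connectivity values.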

Whitfield-Gabrieli and colleagues found that these resting state networks differed in the brains of children who would later develop anxiety/depression or ADHD symptoms. Weaker connectivity between the dorsolateral and medial prefrontal cortical regions tended to be seen in children whose attention scores went on to improve. Analysis of these resting state networks could differentiate children who would have typical attentional behavior by age 11 from those who went on to develop ADHD.

Whitfield-Gabrieli has replicated this finding in an independent sample of children and she is continuing to expand the analysis and check the results, as well as follow this cohort into the future. Should changes in resting state networks be a consistent biomarker, the next step is to initiate interventions prior to the point of crisis.

“We’ve recently been able to use mindfulness interventions, and show these reduce self-perceived stress and amygdala activation in response to fear, and we are also testing the effect of exercise interventions,” explained Whitfield-Gabrieli. “The hope is that by using predictive biomarkers we can augment children’s lifestyles with healthy interventions that can prevent risk converting to a psychiatric disorder.”

Can fMRI reveal insights into addiction and treatments?

Many debilitating conditions like depression and addiction have biological signatures hidden in the brain well before symptoms appear. What if brain scans could be used to detect these hidden signatures and determine the optimal treatment for each individual? McGovern Investigator John Gabrieli is interested in this question and wrote about the use of imaging technologies as a predictive tool for brain disorders in a recent issue of Scientific American.

McGovern Investigator John Gabrieli pens a story for Scientific American about the potential for brain imaging to predict the onset of mental illness.

“Brain scans show promise in predicting who will benefit from a given therapy,” says Gabrieli, who is also the Grover Hermann Professor in Brain and Cognitive Sciences at MIT. “Differences in neural activity may one day tell clinicians which depression treatment will be most effective for an individual or which abstinent alcoholics will relapse.”

Gabrieli cites research showing that half of patients treated for alcohol abuse go back to drinking within a year of treatment, and that similar relapse rates occur for stimulants such as cocaine. Failed treatments may be a source of further anxiety and stress, Gabrieli notes, so any information we can glean from the brain to pinpoint treatments or doses that would help would be highly valuable.

Current treatments rely on little scientific evidence to support the length of time needed in a rehabilitation facility, he says, but “a number suggest that brain measures might foresee who will succeed in abstaining after treatment has ended.”

Further data is needed to support this idea, but Gabrieli’s Scientific American piece makes the case that the use of such a technology may be promising for a range of addiction treatments including abuse of alcohol, nicotine, and illicit drugs.

Gabrieli also believes brain imaging has the potential to reshape education. For example, educational interventions targeting dyslexia might be more effective if personalized to specific differences in the brain that point to the source of the learning gap.

But for the prediction sciences to move forward in mental health and education, he concludes, the research community must design further rigorous studies to examine these important questions.

Differences between deep neural networks and human perception

When your mother calls your name, you know it’s her voice — no matter the volume, even over a poor cell phone connection. And when you see her face, you know it’s hers — if she is far away, if the lighting is poor, or if you are on a bad FaceTime call. This robustness to variation is a hallmark of human perception. On the other hand, we are susceptible to illusions: We might fail to distinguish between sounds or images that are, in fact, different. Scientists have explained many of these illusions, but we lack a full understanding of the invariances in our auditory and visual systems.

Deep neural networks also have performed speech recognition and image classification tasks with impressive robustness to variations in the auditory or visual stimuli. But are the invariances learned by these models similar to the invariances learned by human perceptual systems? A group of MIT researchers has discovered that they are different. They presented their findings yesterday at the 2019 Conference on Neural Information Processing Systems.

The researchers made a novel generalization of a classical concept: “metamers” — physically distinct stimuli that generate the same perceptual effect. The most famous examples of metamer stimuli arise because most people have three different types of cones in their retinae, which are responsible for color vision. The perceived color of any single wavelength of light can be matched exactly by a particular combination of three lights of different colors — for example, red, green, and blue lights. Nineteenth-century scientists inferred from this observation that humans have three different types of bright-light detectors in our eyes. This is the basis for electronic color displays on all of the screens we stare at every day. Another example in the visual system is that when we fix our gaze on an object, we may perceive surrounding visual scenes that differ at the periphery as identical. In the auditory domain, something analogous can be observed. For example, the “textural” sound of two swarms of insects might be indistinguishable, despite differing in the acoustic details that compose them, because they have similar aggregate statistical properties. In each case, the metamers provide insight into the mechanisms of perception, and constrain models of the human visual or auditory systems.
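The trichromatic matching described above is, at bottom, linear algebra: cone responses are linear in the light spectrum, so matching any light requires satisfying only three equations. A minimal sketch with made-up sensitivity curves (not real cone data) illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up cone sensitivities: 3 cone types x 31 wavelength bins.
# (Real sensitivity curves are smooth; random ones suffice for the algebra.)
S = rng.random((3, 31))

spectrum = rng.random(31)        # an arbitrary test light
primaries = rng.random((31, 3))  # spectra of three display primaries

# Cone responses are linear in the light, so matching any light means
# satisfying just three equations: (S @ primaries) w = S @ spectrum.
w = np.linalg.solve(S @ primaries, S @ spectrum)
metamer = primaries @ w

assert not np.allclose(metamer, spectrum)      # physically different lights...
assert np.allclose(S @ metamer, S @ spectrum)  # ...identical cone responses
```

Note that the solved weights can come out negative, which a physical display cannot produce; classic color-matching experiments handle this case by adding a primary to the test light instead.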

In the current work, the researchers randomly chose natural images and sound clips of spoken words from standard databases, and then synthesized sounds and images so that deep neural networks would sort them into the same classes as their natural counterparts. That is, they generated physically distinct stimuli that the model classifies identically, whether or not humans do. This is a new way to think about metamers, generalizing the concept by swapping computer models in for human perceivers. The researchers therefore called these synthesized stimuli “model metamers” of the paired natural stimuli. They then tested whether humans could identify the words and images.
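The synthesis procedure can be sketched as gradient descent on the input: start from noise and adjust the stimulus until a chosen model stage produces the same activations as it does for the natural stimulus. The toy example below uses a single random tanh layer as a stand-in for a deep network; the actual study optimized inputs through trained speech and vision models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one model stage: fixed random weights + tanh nonlinearity.
W = 0.1 * rng.standard_normal((32, 64))
def layer(x):
    return np.tanh(W @ x)

natural = rng.standard_normal(64)  # stands in for a natural stimulus
target = layer(natural)            # activations the synthetic stimulus must match

# Start from noise and minimize 0.5 * ||layer(x) - target||^2 by gradient descent.
x = rng.standard_normal(64)
for _ in range(5000):
    a = layer(x)
    x -= 0.5 * (W.T @ ((a - target) * (1.0 - a**2)))  # chain rule through tanh

# "Model metamer": same model activations, physically different stimulus.
assert np.allclose(layer(x), target, atol=1e-3)
assert np.linalg.norm(x - natural) > 1.0
```

Because the layer discards information (64 inputs map to 32 activations), many physically different inputs share the same response, which is exactly what makes model metamers possible.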

“Participants heard a short segment of speech and had to identify from a list of words which word was in the middle of the clip. For the natural audio this task is easy, but for many of the model metamers humans had a hard time recognizing the sound,” explains first author Jenelle Feather, a graduate student in the MIT Department of Brain and Cognitive Sciences (BCS) and a member of the Center for Brains, Minds, and Machines (CBMM). That is, humans would not put the synthetic stimuli in the same class as the spoken word “bird” or the image of a bird. In fact, model metamers generated to match the responses of the deepest layers of the model were generally unrecognizable as words or images by human subjects.

Josh McDermott, associate professor in BCS and investigator in CBMM, makes the following case: “The basic logic is that if we have a good model of human perception, say of speech recognition, then if we pick two sounds that the model says are the same and present these two sounds to a human listener, that human should also say that the two sounds are the same. If the human listener instead perceives the stimuli to be different, this is a clear indication that the representations in our model do not match those of human perception.”

Joining Feather and McDermott on the paper are Alex Durango, a post-baccalaureate student, and Ray Gonzalez, a research assistant, both in BCS.

There is another type of failure of deep networks that has received a lot of attention in the media: adversarial examples (see, for example, “Why did my classifier just mistake a turtle for a rifle?”). These are stimuli that appear similar to humans but are misclassified by a model network (by design — they are constructed to be misclassified). They are complementary to the stimuli generated by Feather’s group, which sound or appear different to humans but are designed to be co-classified by the model network. The vulnerabilities of model networks exposed to adversarial attacks are well-known — face-recognition software might mistake identities; automated vehicles might not recognize pedestrians.
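For contrast, a minimal version of the adversarial-example construction, again on a toy model rather than any real classifier, looks like this: perturb each input dimension slightly in the direction that favors a wrong class until the decision flips.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "classifier" over 16-dimensional inputs, two classes.
W = rng.standard_normal((2, 16))
def predict(x):
    return int(np.argmax(W @ x))

x = rng.standard_normal(16)
label = predict(x)
other = 1 - label

# Push every input dimension in the sign of the direction that raises the
# wrong class's score relative to the true one, growing the step until the
# decision flips (a fast-gradient-sign-style attack).
direction = np.sign(W[other] - W[label])
eps = 0.0
while predict(x + eps * direction) == label:
    eps += 0.05
x_adv = x + eps * direction

assert predict(x_adv) != label                    # the decision has flipped
assert np.max(np.abs(x_adv - x)) <= eps + 1e-9    # perturbation of at most ~eps
```

Note the symmetry with model metamers: here a small physical change flips the model's label while a human would see no difference, whereas model metamers keep the model's label fixed while humans perceive a large difference.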

The importance of this work lies in improving models of perception beyond deep networks. Although the standard adversarial examples indicate differences between deep networks and human perceptual systems, the new stimuli generated by the McDermott group arguably represent a more fundamental model failure — they show that generic examples of stimuli classified as the same by a deep network produce wildly different percepts for humans.

The team also figured out ways to modify the model networks to yield metamers that were more plausible sounds and images to humans. As McDermott says, “This gives us hope that we may be able to eventually develop models that pass the metamer test and better capture human invariances.”

“Model metamers demonstrate a significant failure of present-day neural networks to match the invariances in the human visual and auditory systems,” says Feather. “We hope that this work will provide a useful behavioral measuring stick to improve model representations and create better models of human sensory systems.”

Brain science in the Bolivian rainforest

Malinda McPherson headshot
Graduate student Malinda McPherson. Photo: Caitlin Cunningham

Malinda McPherson is a graduate student in Josh McDermott‘s lab, studying how people hear pitch (how high or low a sound is) in both speech and music.

To test the extent to which human audition varies across cultures, McPherson travels with the McDermott lab to Bolivia to study the Tsimane’ — a native Amazonian society with minimal exposure to Western culture.

Their most recent study, published in the journal Current Biology, found a striking variation in perception of musical pitch across cultures.

In this Q&A, we ask McPherson what motivates her research and to describe some of the challenges she has experienced working in the Bolivian rainforest. 

What are you working on now?

Right now, I’m particularly excited about a project that involves working with children; we are trying to better understand how the ability to hear pitch develops with age and experience. Difficulty hearing pitch is one of the first issues that most people with poor or corrected hearing find discouraging, so in addition to simply being an interesting basic component of audition, understanding how pitch perception develops may be useful in engineering assistive hearing devices.

How has your personal background inspired your research?

I’ve been an avid violist for over twenty years and still perform with the Chamber Music Society at MIT. When I was an undergraduate and deciding between a career as a professional musician and a career in science, I found a way to merge the two by working as a research assistant in a lab studying musical creativity. I worked in that lab for three years and was completely hooked. My musical training has definitely helped me design a few experiments!

What was your most challenging experience in Bolivia?  Most rewarding?

The most challenging aspect of our fieldwork in Bolivia is sustaining our intensity over a period of 4-5 weeks.  Every moment is precious, and the pace of work is both exhilarating and exhausting. Despite the long hours of work and travel (by canoe or by truck over very bumpy roads), it is an incredible privilege to meet with and to learn from the Tsimane’. I’ve been picking up some Tsimane’ phrases from the translators with whom we work, and can now have basic conversations with participants and make kids laugh, so that’s a lot of fun. A few children I met my first year greeted me by name when we went back this past year. That was a very special moment!

Translator Manuel Roca Moye (left) with Malinda McPherson and Josh McDermott in a fully loaded canoe. Photo: McDermott lab

What single scientific question do you hope to answer?

I’d be curious to figure out the overlaps and distinctions between how we perceive music versus speech, but I think one of the best aspects of science is that many of the important future questions haven’t been thought of yet!

Single neurons can encode distinct landmarks

The organization of many neurons wired together in a complex circuit gives the brain its ability to perform powerful calculations. Work from the Harnett lab recently showed that even single neurons can process more information than previously thought, representing distinct variables at the subcellular level during behavior.

McGovern Investigator Mark Harnett and postdoc Jakob Voigts conducted an extremely delicate and intricate imaging experiment on different parts of the same neuron in the mouse retrosplenial cortex during 2-D navigation. Their setup allowed two-photon imaging of neuronal sub-compartments during free 2-D navigation with head rotation, the latter being important for following neural activity during naturalistic, complex behavior.

Recording computation by subcompartments in neurons.

 

In the work, published recently in Neuron, the authors used Ca2+ imaging to show that the soma of a single neuron was consistently active when mice were at particular landmarks as they navigated an arena. The dendrites (tree-like antennas that receive input from other neurons) of the very same neuron were robustly active, independent of the soma, at distinct positions and orientations in the arena. This strongly suggests that dendrites encode information distinct from that of their parent soma, in this case spatial variables during navigation, laying the foundation for studying subcellular processes during complex behaviors.

 

Shrinking CRISPR tools

Before CRISPR gene-editing tools can be used to treat brain disorders, scientists must find safe ways to deliver the tools to the brain. One promising method involves harnessing viruses that are benign, and replacing non-essential genetic cargo with therapeutic CRISPR tools. But there is limited room for additional tools in a vector already stuffed with essential gear.

Squeezing all the tools that are needed to edit the genome into a single delivery vector is a challenge. Soumya Kannan is addressing this capacity problem in Feng Zhang’s lab with fellow graduate student Han Altae-Tran, by developing smaller CRISPR tools that can be more easily packaged into viral vectors for delivery. She is focused on RNA editors, members of the Cas13 family that can fix small mutations in RNA without making changes to the genome itself.

“The limitation is that RNA editors are large. At this point, though, we know that editing works, we understand the mechanism by which it works, and there’s feasible packaging in AAV. We’re now trying to shrink systems such as RESCUE and REPAIR so that they fit into the packaging for delivery.”

One of the many avenues the Zhang lab has taken to find new tools is to explore biodiversity for new versions of existing ones, and this is an approach that intrigues Soumya.

“Metagenomics projects are literally sequencing life from Antarctic ice cores to hot sea vents. It fascinates me to think about the CRISPR tools of ancient organisms and of those that live in extreme conditions.”

Researchers continue to search these troves of sequencing data for new tools.

 

Two CRISPR scientists on the future of gene editing

As part of our Ask the Brain series, Martin Wienisch and Jonathan Wilde of the Feng lab look into the crystal ball to predict the future of CRISPR tech.

_____

Where will CRISPR be in five years?

Jonathan: We’ll definitely have more efficient, more precise, and safer editing tools. An immediate impact on human health may be closer than we think through more nutritious and resilient crops. Also, I think we will have more viable tools available for repairing disease-causing mutations in the brain, which is something that the field is really lacking right now.

Martin: And we can use these technologies with new disease models to help us understand brain disorders such as Huntington’s disease.

Jonathan: There are also incredible tools being discovered in nature: exotic CRISPR systems from newly discovered bacteria and viruses. We could use these to attack disease-causing bacteria.

Martin: We would then be using CRISPR systems for the reason they evolved. Also, improved gene drives, CRISPR systems that can wipe out disease-carrying organisms such as mosquitoes, could impact human health in that time frame.

What will move gene therapy forward?

Martin: A breakthrough on delivery. That’s when therapy will exponentially move forward. Therapy will be tailored to different diseases and disorders, depending on relevant cell types or the location of mutations for example.

Jonathan: Also panning biodiversity even faster: we’ve only looked at one small part of the tree of life for tools. Sequencing and computational advances can help: a future where we collect and analyze genomes in the wild using portable sequencers and laptops can only quicken the pace of new discoveries.

_____

Do you have a question for The Brain? Ask it here.