RNA “ticker tape” records gene activity over time

As cells grow, divide, and respond to their environment, their gene expression changes; one gene may be transcribed into more RNA at one time point and less at another, when it’s no longer needed. Now, researchers at the McGovern Institute, Harvard, and the Broad Institute of MIT and Harvard have developed a way to determine when specific RNA molecules are produced in cells. The method, described today in Nature Biotechnology, allows scientists to more easily study how a cell’s gene expression fluctuates over time.

“Biology is very dynamic but most of the tools we use in biology are static; you get a fixed snapshot of what’s happening in a cell at a given moment,” said Fei Chen, a core institute member at the Broad, an assistant professor at Harvard University, and a co-senior author of the new work. “This will now allow us to record what’s happening over hours or days.”

To measure how much RNA a cell is transcribing, researchers typically extract genetic material from the cell—destroying the cell in the process—and use RNA sequencing technology to determine which genes are being transcribed into RNA, and how much. Although researchers can sample cells from a population at various times, they can’t easily follow gene expression in the same cell across multiple time points.

To create a more precise timestamp, the team added strings of repetitive DNA bases to genes of interest in cultured human cells. These strings caused the cell to add repetitive stretches of adenosine molecules—one of the four building blocks of RNA—to the ends of RNA transcribed from these genes. The researchers also introduced an engineered version of an enzyme called adenosine deaminase acting on RNA (ADAR2cd), which slowly converted the adenosines to a related molecule, inosine, at a predictable rate. By measuring the ratio of inosines to adenosines in the timestamped section of any given RNA molecule, the researchers could determine when it was first produced, while keeping cells intact.
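The underlying arithmetic can be sketched in a few lines. Assuming each adenosine in the timestamp tail is edited independently at a constant rate (a simple first-order model; the rate constant below is illustrative, not a measured value from the paper), a molecule’s age follows directly from its inosine fraction:

```python
import math

# Hypothetical editing rate constant (per hour). In practice the true
# ADAR2cd rate would be calibrated against cells of known age.
K_EDIT = 0.05

def estimate_age_hours(n_inosine: int, n_adenosine: int, k: float = K_EDIT) -> float:
    """Estimate an RNA molecule's age from its timestamp tail.

    Assumes each adenosine in the tail converts to inosine independently
    at a constant rate k, so the edited fraction f(t) = 1 - exp(-k * t).
    Inverting gives t = -ln(1 - f) / k.
    """
    f = n_inosine / (n_inosine + n_adenosine)  # observed inosine fraction
    return -math.log(1.0 - f) / k              # invert f(t) for age t

# A tail read out as 12 inosines and 88 adenosines:
age = estimate_age_hours(12, 88)
```

Under these illustrative numbers, a tail that is 12 percent inosine corresponds to roughly two and a half hours since transcription.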

“It was pretty surprising to see how well this worked as a timestamp,” said Sam Rodriques, a co-first author of the new paper and former MIT graduate student who is now founding the Applied Biotechnology Laboratory at the Crick Institute in London. “And the more molecules you look at, the better your temporal resolution.”

Using their method, the researchers could estimate the age of a single timestamped RNA molecule to within 2.7 hours. But when they looked simultaneously at four RNA molecules, they could estimate the age of the molecules to within 1.5 hours. Looking at 200 molecules at once allowed the scientists to correctly sort RNA molecules into groups based on their age, or order them along a timeline with 86 percent accuracy.
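Why pooling molecules sharpens the estimate can be seen in a toy simulation. The sketch below reuses the same assumed first-order editing model (the rate constant, tail length, and true age are all illustrative, not values from the paper): averaging the inosine fraction over many molecules shrinks the statistical noise, tightening the age estimate roughly with the square root of the number pooled.

```python
import math
import random

random.seed(0)

K_EDIT = 0.05    # assumed editing rate (per hour)
TRUE_AGE = 6.0   # hours since transcription
TAIL_LEN = 100   # adenosines in each timestamp tail

def simulate_tail_fraction(age_h: float) -> float:
    """Simulate one molecule's inosine fraction: each adenosine is
    edited independently with probability 1 - exp(-k * t)."""
    p = 1.0 - math.exp(-K_EDIT * age_h)
    edited = sum(random.random() < p for _ in range(TAIL_LEN))
    return edited / TAIL_LEN

def estimate_age(n_molecules: int) -> float:
    """Average the inosine fraction over n molecules of the same age,
    then invert f(t) = 1 - exp(-k * t) to recover the age."""
    f = sum(simulate_tail_fraction(TRUE_AGE) for _ in range(n_molecules)) / n_molecules
    return -math.log(1.0 - f) / K_EDIT

single = estimate_age(1)    # noisy estimate from one molecule
pooled = estimate_age(200)  # much tighter estimate from 200 molecules
```

Running this repeatedly shows the pooled estimate clustering tightly around the true age while the single-molecule estimate scatters over hours, mirroring the paper’s reported gain in temporal resolution.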

“Extremely interesting biology, such as immune responses and development, occurs over a timescale of hours,” said co-first author of the paper Linlin Chen of the Broad. “Now we have the opportunity to better probe what’s happening on this timescale.”

The researchers found that the approach, with some small tweaks, worked well on various cell types — neurons, fibroblasts and embryonic kidney cells. They’re planning to now use the method to study how levels of gene activity related to learning and memory change in the hours after a neuron fires.

The current system allows researchers to record changes in gene expression over half a day. The team is now expanding the time range over which they can record gene activity, making the method more precise, and adding the ability to track several different genes at a time.

“Gene expression is constantly changing in response to the environment,” said co-senior author Edward Boyden of MIT, the McGovern Institute for Brain Research, and the Howard Hughes Medical Institute. “Tools like this will help us eavesdrop on how cells evolve over time, and help us pinpoint new targets for treating diseases.”

Support for the research was provided by the National Institutes of Health, the Schmidt Fellows Program at Broad Institute, the Burroughs Wellcome Fund, John Doerr, the Open Philanthropy Project, the HHMI-Simons Faculty Scholars Program, the U. S. Army Research Laboratory and the U. S. Army Research Office, the MIT Media Lab, Lisa Yang, the Hertz Graduate Fellowship and the National Science Foundation Graduate Research Fellowship Program.

Researchers ID crucial brain pathway involved in object recognition

MIT researchers have identified a brain pathway critical in enabling primates to effortlessly identify objects in their field of vision. The findings enrich existing models of the neural circuitry involved in visual perception and help to further unravel the computational code for solving object recognition in the primate brain.

Led by Kohitij Kar, a postdoctoral associate at the McGovern Institute for Brain Research and the Department of Brain and Cognitive Sciences, the study looked at an area called the ventrolateral prefrontal cortex (vlPFC), which sends feedback signals to the inferior temporal (IT) cortex via a network of neurons. The main goal of the study was to test whether the back-and-forth information processing of this circuitry, that is, of this recurrent neural network, is essential to rapid object identification in primates.

The current study, published in Neuron and available today via open access, is a follow-up to prior work published by Kar and James DiCarlo, Peter de Florez Professor of Neuroscience, the head of MIT’s Department of Brain and Cognitive Sciences, and an investigator in the McGovern Institute for Brain Research and the Center for Brains, Minds, and Machines.

Monkey versus machine

In 2019, Kar, DiCarlo, and colleagues found that primates must use some recurrent circuits during rapid object recognition. Monkey subjects in that study were able to identify objects more accurately than engineered “feedforward” computational models called deep convolutional neural networks, which lacked recurrent circuitry.

Interestingly, the specific images on which the models performed poorly compared to monkeys also took longer to be solved in the monkeys’ brains — suggesting that the additional time might be due to recurrent processing in the brain. It remained unclear from the 2019 study, though, exactly which recurrent circuits were responsible for the delayed information boost in the IT cortex. That’s where the current study picks up.

“In this new study, we wanted to find out: Where are these recurrent signals in IT coming from?” Kar said. “Which areas, reciprocally connected to IT, are functionally the most critical part of this recurrent circuit?”

To determine this, researchers used a pharmacological agent to temporarily block the activity in parts of the vlPFC in macaques while they engaged in an object discrimination task. During these tasks, monkeys viewed images that contained an object, such as an apple, a car, or a dog; then, researchers used eye tracking to determine if the monkeys could correctly indicate what object they had previously viewed when given two object choices.

“We observed that if you use pharmacological agents to partially inactivate the vlPFC, then both the monkeys’ behavior and IT cortex activity deteriorate, but more so for certain specific images. These images were the same ones we identified in the previous study — ones that were poorly solved by ‘feedforward’ models and took longer to be solved in the monkey’s IT cortex,” said Kar.

MIT researchers used an object recognition task (e.g., recognizing that there is a “bird” and not an “elephant” in the shown image) to study the role of feedback from the primate ventrolateral prefrontal cortex (vlPFC) to the inferior temporal (IT) cortex via a network of neurons. In primate brains, temporarily blocking the vlPFC (green shaded area) disrupts the recurrent neural network comprising vlPFC and IT, inducing specific deficits that implicate its role in rapid object identification. Image: Kohitij Kar, brain image adapted from SciDraw

“These results provide evidence that this recurrently connected network is critical for rapid object recognition, the behavior we’re studying. Now, we have a better understanding of how the full circuit is laid out, and what are the key underlying neural components of this behavior.”

The full study, entitled “Fast recurrent processing via ventrolateral prefrontal cortex is needed by the primate ventral stream for robust core visual object recognition,” will run in print January 6, 2021.

“This study demonstrates the importance of prefrontal cortical circuits in automatically boosting object recognition performance in a very particular way,” DiCarlo said. “These results were obtained in nonhuman primates and thus are highly likely to also be relevant to human vision.”

The present study makes clear the integral role of the recurrent connections between the vlPFC and the primate ventral visual cortex during rapid object recognition. The results will be helpful to researchers designing future studies that aim to develop accurate models of the brain, and to researchers who seek to develop more human-like artificial intelligence.

New neuron type discovered only in primate brains

Neuropsychiatric illnesses like schizophrenia and autism arise from a complex interplay of brain chemistry, environment, and genetics that requires careful study to understand the root causes. Scientists have traditionally relied on samples taken from mice and non-human primates to study how these diseases develop. But the question has lingered: are the brains of these subjects similar enough to humans to yield useful insights?

Now work from the Broad Institute of MIT and Harvard and the McGovern Institute for Brain Research is pointing towards an answer. In a study published in Nature, researchers from the Broad’s Stanley Center for Psychiatric Research report several key differences in the brains of ferrets, mice, nonhuman primates, and humans, all focused on a type of neuron called interneurons. Most surprisingly, the team found a new type of interneuron only in primates, located in a part of the brain called the striatum, which is associated with Huntington’s disease and potentially schizophrenia.

The findings could help accelerate research into causes of and treatments for neuropsychiatric illnesses, by helping scientists choose the lab model that best mimics features of the human brain that may be involved in these diseases.

“The data from this work will inform the study of human brain disorders because it helps us think about which features of the human brain can be studied in mice, which features require higher organisms such as marmosets, and why mouse models often don’t reflect the effects of the corresponding mutations in humans,” said Steven McCarroll, senior author of the study, director of genetics at the Stanley Center, and a professor of genetics at Harvard Medical School.

“Dysfunctions of interneurons have been strongly linked to several brain disorders including autism spectrum disorder and schizophrenia,” said Guoping Feng, co-author of the study, director of model systems and neurobiology at the Stanley Center, and professor of neuroscience at MIT’s McGovern Institute for Brain Research. “These data further demonstrate the unique importance of non-human primate models in understanding neurobiological mechanisms of brain disorders and in developing and testing therapeutic approaches.”

Enter the interneuron

Interneurons form key nodes within neural circuitry in the brain, and help regulate neuronal activity by releasing the neurotransmitter GABA, which inhibits the firing of other neurons.

Fenna Krienen, a postdoctoral fellow in the McCarroll Lab and first author on the Nature paper, and her colleagues wanted to track the natural history of interneurons.

“We wanted to gain an understanding of the evolutionary trajectory of the cell types that make up the brain,” said Krienen. “And then we went about acquiring samples from species that could inform this understanding of evolutionary divergence between humans and the models that so often stand in for humans in neuroscience studies.”

The researchers used Drop-seq, a high-throughput single-nucleus RNA sequencing technique developed by McCarroll’s lab, to classify the roles and locations of more than 184,000 telencephalic interneurons in the brains of ferrets, humans, macaques, marmosets, and mice. Using tissue from frozen samples, the team isolated the nuclei of interneurons from the cortex, the hippocampus, and the striatum, and profiled the RNA from the cells.

The researchers thought that because interneurons are found in all vertebrates, the cells would be relatively static from species to species.

“But with these sensitive measurements and a lot of data from the various species, we got a different picture about how lively interneurons are, in terms of the ways that evolution has tweaked their programs or their populations from one species to the next,” said Krienen.

She and her collaborators identified several main differences in interneurons between the species they studied: the cells change their proportions across brain regions, alter the programs they use to link up with other neurons, and can migrate to different regions of the brain.

But most strikingly, the scientists discovered that primates have a novel interneuron not found in other species. The interneuron is located in the striatum—the brain structure responsible for cognition, reward, and coordinated movements that has existed as far back on the evolutionary tree as ancient primitive fish. The researchers were amazed to find the new neuron type made up a third of all interneurons in the striatum.

“Although we expected the big innovations in human and primate brains to be in the cerebral cortex, which we tend to associate with human intelligence, it was in fact in the venerable striatum that Fenna uncovered the most dramatic cellular innovation in the primate brain,” said McCarroll. “This cell type had never been discovered before, because mice have nothing like it.”

“The question of what provides the ‘human advantage’ in cognitive abilities is one of the fundamental issues neurobiologists have endeavored to answer,” said Gordon Fishell, group leader at the Stanley Center, a professor of neurobiology at Harvard Medical School, and a collaborator on the study. “These findings turn on end the question of ‘how do we build better brains?’ It seems at least part of the answer stems from creating a new list of parts.”

A better understanding of how these inhibitory neurons vary between humans and lab models will provide researchers with new tools for investigating various brain disorders. Next, the researchers will build on this work to determine the specific functions of each type of interneuron.

“In studying neurodevelopmental disorders, you would like to be convinced that your model is an appropriate one for really complex social behaviors,” Krienen said. “And the major overarching theme of the study was that primates in general seem to be very similar to one another in all of those interneuron innovations.”

Support for this work was provided in part by the Broad Institute’s Stanley Center for Psychiatric Research and the NIH Brain Initiative, the Dean’s Innovation Award (Harvard Medical School), the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, the Poitras Center for Psychiatric Disorders Research at MIT, the McGovern Institute for Brain Research at MIT, and the National Institute of Neurological Disorders and Stroke.

20 Years of Discovery


McGovern Institute Director Robert Desimone.

Pat and Lore McGovern founded the McGovern Institute 20 years ago with a dual mission – to understand the brain, and to apply that knowledge to help the many people affected by brain disorders. Some of the amazing developments of the past 20 years, such as CRISPR, may seem entirely unexpected and “out of the blue.” But they were all built on a foundation of basic research spanning many years. With the incredible foundation we are building right now, I feel we are poised for many more “unexpected” discoveries in the years ahead.

I predict that in 20 years, we will have quantitative models of brain function that will not only explain how the brain gives rise to at least some aspects of our mind, but will also give us a new mechanistic understanding of brain disorders. This, in turn, will lead to new types of therapies, in what I imagine to be a post-pharmaceutical era of the future. I have no doubt that these same brain models will inspire new educational approaches for our children, and will be incorporated into whatever replaces my automobile, and iPhone, in 2040. I encourage you to read some other predictions from our faculty.

Our cutting-edge work depends not only on our stellar lineup of faculty, but also on the more than 400 postdocs, graduate students, undergraduates, summer students, and staff who make up our community.

For this reason, I am particularly delighted to share with you McGovern’s rising stars — 20 young scientists from each of our labs — who represent the next generation of neuroscience.

And finally, we remain deeply indebted to our supporters for funding our research, including ongoing support from the Patrick J. McGovern Foundation. In recent years, more than 40% of our annual research funding has come from private individuals and foundations. This support enables critical seed funding for new research projects, the development of new technologies, our new research into autism and psychiatric disorders, and fellowships for young scientists just starting their careers. Our annual fund supporters have made possible more than 42 graduate fellowships, and you can read about some of these fellows on our website.

I hope that as you visit our website and read the pages of our special anniversary issue of Brain Scan, you will feel as optimistic as I do about our future.

Robert Desimone
Director, McGovern Institute
Doris and Don Berkey Professor of Neuroscience

Tool developed in Graybiel lab reveals new clues about Parkinson’s disease

As the brain processes information, electrical charges zip through its circuits and neurotransmitters pass molecular messages from cell to cell. Both forms of communication are vital, but because they are usually studied separately, little is known about how they work together to control our actions, regulate mood, and perform the other functions of a healthy brain.

Neuroscientists in Ann Graybiel’s laboratory at MIT’s McGovern Institute are taking a closer look at the relationship between these electrical and chemical signals. “Considering electrical signals side by side with chemical signals is really important to understand how the brain works,” says Helen Schwerdt, a postdoctoral researcher in Graybiel’s lab. Understanding that relationship is also crucial for developing better ways to diagnose and treat nervous system disorders and mental illness, she says, noting that the drugs used to treat these conditions typically aim to modulate the brain’s chemical signaling, yet studies of brain activity are more likely to focus on electrical signals, which are easier to measure.

Schwerdt and colleagues in Graybiel’s lab have developed new tools so that chemical and electrical signals can, for the first time, be measured simultaneously in the brains of primates. In a study published September 25, 2020, in Science Advances, they used those tools to reveal an unexpectedly complex relationship between two types of signals that are disrupted in patients with Parkinson’s disease—dopamine signaling and coordinated waves of electrical activity known as beta-band oscillations.

Complicated relationship

Graybiel’s team focused its attention on beta-band activity and dopamine signaling because studies of patients with Parkinson’s disease had suggested a straightforward inverse relationship between the two. The tremors, slowness of movement, and other symptoms associated with the disease develop and progress as the brain’s production of the neurotransmitter dopamine declines, and at the same time, beta-band oscillations surge to abnormal levels.

Beta-band oscillations are normally observed in parts of the brain that control movement when a person is paying attention or planning to move. It’s not clear what they do or why they are disrupted in patients with Parkinson’s disease. But because patients’ symptoms tend to be worst when beta activity is high — and because beta activity can be measured in real time with sensors placed on the scalp or with a deep-brain stimulation device that has been implanted for treatment — researchers have been hopeful that it might be useful for monitoring the disease’s progression and patients’ response to treatment. In fact, clinical trials are already underway to explore the effectiveness of modulating deep-brain stimulation treatment based on beta activity.

When Schwerdt and colleagues examined these two types of signals in the brains of rhesus macaques, they discovered that the relationship between beta activity and dopamine is more complicated than previously thought.

Their new tools allowed them to simultaneously monitor both signals with extraordinary precision, targeting specific parts of the striatum—a region deep within the brain involved in controlling movement, where dopamine is particularly abundant—and taking measurements on the millisecond time scale to capture neurons’ rapid-fire communications.

They took these measurements as the monkeys performed a simple task, directing their gaze in a particular direction in anticipation of a reward. This allowed the researchers to track chemical and electrical signaling during the active, motivated movement of the animals’ eyes. They found that beta activity did increase as dopamine signaling declined—but only in certain parts of the striatum and during certain tasks. The reward value of a task, an animal’s past experiences, and the particular movement the animal performed all impacted the relationship between the two types of signals.
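For readers unfamiliar with the terminology, beta-band activity is usually quantified as the power of the local field potential in roughly the 13–30 Hz range. A minimal sketch using a synthetic signal (the sampling rate, band edges, and signal shape here are assumptions for illustration, not details from the study):

```python
import numpy as np

FS = 1000  # sampling rate in Hz (an assumed value)
T = 2.0    # seconds of simulated local field potential (LFP)
t = np.arange(0, T, 1 / FS)

rng = np.random.default_rng(0)
# Synthetic LFP: a 20 Hz beta-band oscillation buried in noise.
lfp = 0.5 * np.sin(2 * np.pi * 20 * t) + rng.normal(0, 1, t.size)

def band_power(signal, fs, lo, hi):
    """Average spectral power between lo and hi Hz, computed via the FFT."""
    freqs = np.fft.rfftfreq(signal.size, 1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / signal.size
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

beta = band_power(lfp, FS, 13, 30)   # beta band: contains the oscillation
gamma = band_power(lfp, FS, 60, 90)  # a control band with only noise
```

A band carrying a genuine oscillation shows more average power than a quiet control band; in a study like this one, such band-power estimates would be tracked moment to moment alongside dopamine concentration measurements.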

Multi-modal systems allow subsecond recording of chemical and electrical neural signals in the form of dopamine molecular concentrations and beta-band local field potentials (beta LFPs), respectively. Online measurements of dopamine and beta LFP (time-dependent traces displayed in box on right) were made in the primate striatum (caudate nucleus and putamen colored in green and purple, respectively, in the left brain image) as the animal was performing a task in which eye movements were made to cues displayed on the left (purple event marker line) and right (green event) of a screen in order to receive large or small amounts of food reward (red and blue events). Dopamine and beta LFP neural signals are centrally implicated in Parkinson’s disease and other brain disorders. Image: Helen Schwerdt

“What we expected is there in the overall view, but if we just look at a different level of resolution, all of a sudden the rules don’t hold,” says Graybiel, who is also an MIT Institute Professor. “It doesn’t destroy the likelihood that one would want to have a treatment related to this presumed opposite relationship, but it does say there’s something more here that we haven’t known about.”

The researchers say it’s important to investigate this more nuanced relationship between dopamine signaling and beta activity, and that understanding it more deeply might lead to better treatments for patients with Parkinson’s disease and related disorders. While they plan to continue to examine how the two types of signals relate to one another across different parts of the brain and under different behavioral conditions, they hope that other teams will also take advantage of the tools they have developed. “As these methods in neuroscience become more and more precise and dazzling in their power, we’re bound to discover new things,” says Graybiel.

This study was supported by the National Institute of Biomedical Imaging and Bioengineering, the National Institute of Neurological Disorders and Stroke, the Army Research Office, the Saks Kavanaugh Foundation, the National Science Foundation, Kristin R. Pressman and Jessica J. Pourian ’13 Fund, and Robert Buxton.

Robert Desimone to receive Goldman-Rakic Prize

Robert Desimone, the Doris and Don Berkey Professor in Brain and Cognitive Sciences at MIT, has been named a winner of this year’s Goldman-Rakic Prize for Outstanding Achievement in Cognitive Neuroscience Research. The award, given annually by the Brain and Behavior Research Foundation, is named in recognition of former Yale University neuroscientist Patricia Goldman-Rakic.

Desimone, who is also the director of the McGovern Institute for Brain Research, studies the brain mechanisms underlying attention, and most recently he has been studying animal models for brain disorders.

Desimone will deliver his prize lecture at the 2020 Annual International Mental Health Research Virtual Symposium on October 30, 2020.

New molecular therapeutics center established at MIT’s McGovern Institute

More than one million Americans are diagnosed with a chronic brain disorder each year, yet treatments for most complex brain disorders remain inadequate or even nonexistent.

A major new research effort at MIT’s McGovern Institute aims to change how we treat brain disorders by developing innovative molecular tools that precisely target dysfunctional genetic, molecular, and circuit pathways.

The K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience was established at MIT through a $28 million gift from philanthropist Lisa Yang and MIT alumnus Hock Tan ’75. Yang is a former investment banker who has devoted much of her time to advocacy for individuals with disabilities and autism spectrum disorders. Tan is President and CEO of Broadcom, a global technology infrastructure company. This latest gift brings Yang and Tan’s total philanthropy to MIT to more than $72 million.

Lisa Yang (center) and MIT alumnus Hock Tan ’75 with their daughter Eva (far left) pictured at the opening of the Hock E. Tan and K. Lisa Yang Center for Autism Research in 2017. Photo: Justin Knight

“In the best MIT spirit, Lisa and Hock have always focused their generosity on insights that lead to real impact,” says MIT President L. Rafael Reif. “Scientifically, we stand at a moment when the tools and insights to make progress against major brain disorders are finally within reach. By accelerating the development of promising treatments, the new center opens the door to a hopeful new future for all those who suffer from these disorders and those who love them. I am deeply grateful to Lisa and Hock for making MIT the home of this pivotal research.”

Engineering with precision

Research at the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience will initially focus on three major lines of investigation: genetic engineering using CRISPR tools, delivery of genetic and molecular cargo across the blood-brain barrier, and the translation of basic research into the clinical setting. The center will serve as a hub for researchers with backgrounds ranging from biological engineering and genetics to computer science and medicine.

“Developing the next generation of molecular therapeutics demands collaboration among researchers with diverse backgrounds,” says Robert Desimone, McGovern Institute Director and Doris and Don Berkey Professor of Neuroscience at MIT. “I am confident that the multidisciplinary expertise convened by this center will revolutionize how we improve our health and fight disease in the coming decade. Although our initial focus will be on the brain and its relationship to the body, many of the new therapies could have other health applications.”

There are an estimated 19,000 to 22,000 genes in the human genome, and a third of those genes are active in the brain—the highest proportion of genes expressed in any part of the body.

Variations in genetic code have been linked to many complex brain disorders, including depression and Parkinson’s. Emerging genetic technologies, such as the CRISPR gene editing platform pioneered by McGovern Investigator Feng Zhang, hold great potential in both targeting and fixing these errant genes. But the safe and effective delivery of this genetic cargo to the brain remains a challenge.

Researchers within the new Yang-Tan Center will improve and fine-tune CRISPR gene therapies and develop innovative ways of delivering gene therapy cargo into the brain and other organs. In addition, the center will leverage newly developed single cell analysis technologies that are revealing cellular targets for modulating brain functions with unprecedented precision, opening the door for noninvasive neuromodulation as well as the development of medicines. The center will also focus on developing novel engineering approaches to delivering small molecules and proteins from the bloodstream into the brain.

Desimone will direct the center, and some of the initial research initiatives will be led by Associate Professor of Materials Science and Engineering Polina Anikeeva; Ed Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT; Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences at MIT; and Feng Zhang, James and Patricia Poitras Professor of Neuroscience at MIT.

Building a research hub

“My goal in creating this center is to cement the Cambridge and Boston region as the global epicenter of next-generation therapeutics research. The novel ideas I have seen undertaken at MIT’s McGovern Institute and Broad Institute of MIT and Harvard leave no doubt in my mind that major therapeutic breakthroughs for mental illness, neurodegenerative disease, autism and epilepsy are just around the corner,” says Yang.

Center funding will also be earmarked to create the Y. Eva Tan Fellows program, named for Tan and Yang’s daughter Eva, which will support fellowships for young neuroscientists and engineers eager to design revolutionary treatments for human diseases.

“We want to build a strong pipeline for tomorrow’s scientists and neuroengineers,” explains Hock Tan. “We depend on the next generation of bright young minds to help improve the lives of people suffering from chronic illnesses, and I can think of no better place to provide the very best education and training than MIT.”

The molecular therapeutics center is the second research center established by Yang and Tan at MIT. In 2017, they launched the Hock E. Tan and K. Lisa Yang Center for Autism Research, and, two years later, they created a sister center at Harvard Medical School, with the unique strengths of each institution converging toward a shared goal: understanding the basic biology of autism and how genetic and environmental influences converge to give rise to the condition, then translating those insights into novel treatment approaches.

All tools developed at the molecular therapeutics center will be shared globally with academic and clinical researchers with the goal of bringing one or more novel molecular tools to human clinical trials by 2025.

“We are hopeful that our centers, located in the heart of the Cambridge-Boston biotech ecosystem, will spur further innovation and fuel critical new insights to our understanding of health and disease,” says Yang.


How general anesthesia reduces pain

General anesthetics are medications that suppress pain and render patients unconscious during surgery, but whether pain suppression is simply a side effect of the loss of consciousness has been unclear. Fan Wang and colleagues have now identified the circuits linked to pain suppression under anesthesia in mouse models, showing that this effect is separable from the unconscious state itself.

“Existing literature suggests that the brain may contain a switch that can turn off pain perception,” explains Fan Wang, a professor at Duke University and lead author of the study. “I had always wanted to find this switch, and it occurred to me that general anesthetics may activate this switch to produce analgesia.”

Wang, who will join the McGovern Institute in January 2021, set out to test this idea with her student, Thuy Hua, and postdoc, Bin Chen.

Pain suppressor

Loss of pain, or analgesia, is an important property of anesthetics that helps make surgical and invasive medical procedures humane and bearable. Despite their long history of medical use, little is understood about how anesthetics actually work. It has generally been assumed that analgesia is a side effect of the loss of consciousness, but several recent observations have called this idea into question and suggest that changes in consciousness might be separable from pain suppression.

A key clue that analgesia is separable from general anesthesia comes from the accounts of patients who regain consciousness during surgery. After surgery, these patients can recount conversations among staff or events that occurred in the operating room, despite not having felt any pain. In addition, some general anesthetics, such as ketamine, can be deployed at low concentrations for pain suppression without loss of consciousness.

Following up on these leads, Wang and colleagues set out to uncover which neural circuits might be involved in suppressing pain during exposure to general anesthetics. Using CANE, a procedure developed by Wang that can detect which neurons are activated in response to an event, the team discovered a new population of GABAergic neurons activated by general anesthetics in the mouse central amygdala.

These neurons become activated in response to different anesthetics, including ketamine, dexmedetomidine, and isoflurane. Using optogenetics to manipulate the activity state of these neurons, Wang and her lab found that doing so led to marked changes in behavioral responses to painful stimuli.

“The first time we used optogenetics to turn on these cells, a mouse that was in the middle of taking care of an injury simply stopped and started walking around with no sign of pain,” Wang explains.

Specifically, activating these cells blocked pain in multiple models and tests, whereas inhibiting these neurons made mice averse even to gentle touch, suggesting that they form part of a newly uncovered central pain circuit.

The study has implications for both anesthesia and pain. It shows that general anesthetics have complex, multi-faceted effects and that the brain may contain a central pain suppression system.

“We want to figure out how diverse general anesthetics activate these neurons,” explains Wang. “That way we can find compounds that can specifically activate these pain-suppressing neurons without sedation. We’re now also testing whether placebo analgesia works by activating these same central neurons.”

The study also has implications for addiction as it may point to an alternative system for central pain suppression that could be a target of drugs that do not have the devastating side effects of opioids.

Fan Wang joins the McGovern Institute

The McGovern Institute is pleased to announce that Fan Wang, currently a professor at Duke University, will be joining its team of investigators in 2021. Wang is well known for her work on sensory perception, pain, and behavior. She takes a broad and very practical approach to these questions, knowing that sensory perception has far-reaching implications for biomedicine when it comes to pain management, addiction, anesthesia, and hypersensitivity.

“McGovern is a dream place for doing innovative and transformative neuroscience.” – Fan Wang

“I am so thrilled that Fan is coming to the McGovern Institute,” says Robert Desimone, director of the institute and the Doris and Don Berkey Professor of Neuroscience at MIT. “I’ve followed her work for a number of years, and she is making inroads into questions that are relevant to a number of societal problems, such as how we can turn off the perception of chronic pain.”

Wang brings with her a range of techniques developed in her lab, including CANE, which precisely highlights neurons that become activated in response to a stimulus. CANE is revealing new neuronal subtypes in long-studied brain regions such as the amygdala, and recently identified previously uncharacterized neurons in the lateral parabrachial nucleus involved in pain processing.

“I am so excited to join the McGovern Institute,” says Wang. “It is a dream place for doing innovative and transformative neuroscience. McGovern researchers are known for using the most cutting-edge, multi-disciplinary technologies to understand how the brain works. I can’t wait to join the team.”

Wang earned her PhD in 1998 with Richard Axel at Columbia University, subsequently conducting postdoctoral research at Stanford University with Mark Tessier-Lavigne. Wang joined Duke University as a Professor in the Department of Neurobiology in 2003, and was later appointed the Morris N. Broad Distinguished Professor of Neurobiology at Duke University School of Medicine. Wang will join the McGovern Institute as an investigator in January 2021.

COMMANDing drug delivery

While we are starting to get a handle on drugs and therapeutics that might help alleviate brain disorders, efficient delivery remains a roadblock to tackling these devastating diseases. Research from the Graybiel, Cima, and Langer labs now uses a computational approach, one that accounts for the irregular shape of the target brain region, to deliver drugs effectively and specifically.

“Identifying therapeutic molecules that can treat neural disorders is just the first step,” says McGovern Investigator Ann Graybiel.

“There is still a formidable challenge when it comes to precisely delivering the therapeutic to the cells most affected in the disorder,” explains Graybiel, an MIT Institute Professor and a senior author on the paper. “Because the brain is so structurally complex, and subregions are irregular in shape, new delivery approaches are urgently needed.”

Fine targeting

Brain disorders often arise from dysfunction in specific regions. Parkinson’s disease, for example, arises from the loss of neurons in a specific forebrain region, the striatum. Targeting such structures is a major therapeutic goal, and demands both crossing the blood-brain barrier and delivering the drug specifically to the structures affected by the disorder.

Such targeted therapy can potentially be achieved using intracerebral catheters. While this is a more specific form of delivery than systemic administration of a drug through the bloodstream, many brain regions are irregular in shape. This makes it difficult to deliver a drug throughout a specific brain region using a single catheter while also limiting its spread beyond the targeted area. Indeed, intracerebral delivery of promising therapeutics has not yet led to the desired long-term alleviation of disorders.

“Accurate delivery of drugs to reach these targets is really important to ensure optimal efficacy and avoid off-target adverse effects. Our new system, called COMMAND, determines how best to dose targets,” says Michael Cima, senior author on the study and the David H. Koch Professor of Engineering in the Department of Materials Science and Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research.

3D renderings of simulated multi-bolus delivery to various brain structures (striatum, amygdala, substantia nigra, and hippocampus) with one to four boluses.

COMMAND response

In the case of Parkinson’s disease, implants are available that limit symptoms, but these are only effective in a subset of patients. There are, however, a number of promising potential therapeutic treatments, such as GDNF administration, where long-term, precise delivery is needed to move the therapy forward.

The Graybiel, Cima, and Langer labs developed COMMAND (computational mapping algorithms for neural drug delivery) that helps to target a drug to a specific brain region at multiple sites (multi-bolus delivery).

“Many clinical trials are believed to have failed due to poor drug distribution following intracerebral injection,” explains Khalil Ramadi, PhD ’19, one of the lead researchers on the paper and a postdoctoral fellow at the Koch and McGovern Institutes. “We rationalized that both research experiments and clinical therapies would benefit from computationally optimized infusion, to enable greater consistency across groups and studies, as well as more efficacious therapeutic delivery.”

The COMMAND system balances the twin challenges of drug delivery: maximizing on-target delivery while minimizing off-target spread. At its core, COMMAND is an algorithm that minimizes two errors: one reflecting leakage of drug beyond the bounds of the target area, in this case the striatum, and a second reflecting incomplete coverage of this irregularly shaped brain region. The strategy is to deliver multiple “boluses” to different areas of the striatum so that the region is targeted precisely, yet completely.

“COMMAND applies a simple principle when determining where to place the drug: Maximize the amount of drug falling within the target brain structure and minimize tissues exposed beyond the target region,” explains Ashvin Bashyam, PhD ’19, co-lead author and a former graduate student with Michael Cima at MIT. “This balance is specified based on drug properties such as minimum effective therapeutic concentration, toxicity, and diffusivity within brain tissue.”
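To make this kind of objective concrete, here is a minimal illustrative sketch, not the authors’ published implementation: a greedy placement of boluses on a voxel grid, where each bolus is modeled as an isotropic Gaussian spread and each candidate site is scored by drug landing inside the target mask minus a penalty for drug leaking outside it, echoing the twin error terms described above. All parameter names and the greedy strategy are assumptions for illustration only.

```python
import numpy as np

# Hypothetical sketch of a COMMAND-style objective (NOT the published
# algorithm): greedily place boluses to maximize on-target drug while
# penalizing off-target leakage.

def bolus_profile(grid, center, sigma):
    """Relative concentration from one bolus, modeled as an isotropic Gaussian."""
    d2 = ((grid - center) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def place_boluses(target_mask, n_boluses=3, sigma=2.0, leak_weight=1.0):
    """Greedily pick bolus centers that maximize coverage minus leakage."""
    axes = [np.arange(s) for s in target_mask.shape]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)
    total = np.zeros(target_mask.shape)
    centers = []
    for _ in range(n_boluses):
        best = (-np.inf, None, None)
        for center in np.argwhere(target_mask):  # candidate sites inside target
            # Saturate concentration at 1.0 so overlapping boluses don't double-count
            field = np.minimum(total + bolus_profile(grid, center, sigma), 1.0)
            on_target = field[target_mask].sum()
            leakage = field[~target_mask].sum()
            score = on_target - leak_weight * leakage
            if score > best[0]:
                best = (score, tuple(center), field)
        centers.append(best[1])
        total = best[2]
    return centers, total

# Toy irregular target: an L-shaped 2-D stand-in for a structure like the striatum
mask = np.zeros((20, 20), dtype=bool)
mask[4:16, 4:8] = True
mask[12:16, 4:16] = True
centers, field = place_boluses(mask, n_boluses=2)
```

Real tissue adds anisotropic diffusion, clearance, and dose constraints, but the same trade-off, coverage of an irregular region versus leakage beyond it, drives where each bolus lands.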

The number of infusion sites is kept as low as possible, keeping surgery simple while still providing enough flexibility to cover the target region. In computational simulations, the researchers were able to deliver drugs not only to compact brain structures, such as the striatum and amygdala, but also to broader and more irregular regions, such as the hippocampus.

To examine the spatiotemporal dynamics of actual delivery, the researchers used positron emission tomography (PET) to image a solution labeled with copper-64 (Cu-64), allowing them to follow an infused bolus after delivery through a microprobe. Using this system, the researchers validated the accuracy and coverage of COMMAND-guided multi-bolus delivery to the rat striatum.

“We anticipate that COMMAND can improve researchers’ ability to precisely target brain structures to better understand their function, and become a platform to standardize methods across neuroscience experiments,” explains Graybiel. “Beyond the lab, we hope COMMAND will lay the foundation to help bring multifocal, chronic drug delivery to patients.”