Self-assembling proteins can store cellular “memories”

As cells perform their everyday functions, they turn on a variety of genes and cellular pathways. MIT engineers have now coaxed cells to inscribe the history of these events in a long protein chain that can be imaged using a light microscope.

Cells programmed to produce these chains continuously add building blocks that encode particular cellular events. Later, the ordered protein chains can be labeled with fluorescent molecules and read under a microscope, allowing researchers to reconstruct the timing of the events.

This technique could help shed light on the steps that underlie processes such as memory formation, response to drug treatment, and gene expression.

“There are a lot of changes that happen at organ or body scale, over hours to weeks, which cannot be tracked over time,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology, a professor of biological engineering and brain and cognitive sciences at MIT, a Howard Hughes Medical Institute investigator, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.

If the technique could be extended to work over longer time periods, it could also be used to study processes such as aging and disease progression, the researchers say.

Boyden is the senior author of the study, which appears today in Nature Biotechnology. Changyang Linghu, a former J. Douglas Tan Postdoctoral Fellow at the McGovern Institute, who is now an assistant professor at the University of Michigan, is the lead author of the paper.

Cellular history

Biological systems such as organs contain many different kinds of cells, all of which have distinctive functions. One way to study these functions is to image proteins, RNA, or other molecules inside the cells, which provide hints to what the cells are doing. However, most methods for doing this offer only a glimpse of a single moment in time, or don’t work well with very large populations of cells.

“Biological systems are often composed of a large number of different types of cells. For example, the human brain has 86 billion neurons,” Linghu says. “To understand those kinds of biological systems, we need to observe physiological events over time in these large cell populations.”

To achieve that, the research team came up with the idea of recording cellular events as a series of protein subunits that are continuously added to a chain. To create their chains, the researchers used engineered protein subunits, not normally found in living cells, that can self-assemble into long filaments.

The researchers designed a genetically encoded system in which one of these subunits is continuously produced inside cells, while the other is generated only when a specific event occurs. Each subunit also contains a very short peptide called an epitope tag — in this case, the researchers chose tags called HA and V5. Each of these tags can bind to a different fluorescent antibody, making it easy to visualize the tags later on and determine the sequence of the protein subunits.

For this study, the researchers made production of the V5-containing subunit contingent on the activation of a gene called c-fos, which is involved in encoding new memories. HA-tagged subunits make up most of the chain, but wherever a V5 tag appears, it marks a time when c-fos was activated.
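
To make the encoding concrete, here is a minimal Python sketch of the idea, not the authors’ actual system or analysis code: a chain grows by one subunit per time step, an HA-tagged subunit is added by default, a V5-tagged subunit is added whenever the monitored event (here, c-fos activation) is on, and reading the ordered tags back recovers when the event occurred. The function names and time steps are hypothetical.

```python
def grow_chain(event_active, n_steps):
    """Simulate growth of a protein 'ticker tape' chain.

    event_active: function returning True when the monitored event
    (e.g., c-fos activation) is on at a given time step.
    Returns the ordered list of epitope tags in the chain.
    """
    chain = []
    for t in range(n_steps):
        # The constitutively produced subunit carries an HA tag; the
        # event-driven subunit carries a V5 tag instead.
        chain.append("V5" if event_active(t) else "HA")
    return chain


def read_chain(chain):
    """'Image' the chain: return the time steps at which a V5 tag
    appears, i.e., when the event was active."""
    return [t for t, tag in enumerate(chain) if tag == "V5"]


# Hypothetical example: the event is active during steps 40-59.
chain = grow_chain(lambda t: 40 <= t < 60, n_steps=100)
print(read_chain(chain))  # -> [40, 41, ..., 59]
```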

“We’re hoping to use this kind of protein self-assembly to record activity in every single cell,” Linghu says. “It’s not only a snapshot in time, but also records past history, just like how tree rings can permanently store information over time as the wood grows.”

Recording events

In this study, the researchers first used their system to record activation of c-fos in neurons growing in a lab dish. The c-fos gene was turned on by chemically stimulating the neurons, which caused the V5 subunit to be added to the protein chain.

To explore whether this approach could work in the brains of animals, the researchers programmed brain cells of mice to generate protein chains that would reveal when the animals were exposed to a particular drug. Later, the researchers were able to detect that exposure by preserving the tissue and analyzing it with a light microscope.

The researchers designed their system to be modular, so that different epitope tags can be swapped in, or different types of cellular events can be detected, including, in principle, cell division or activation of enzymes called protein kinases, which help control many cellular pathways.

The researchers also hope to extend the recording period that they can achieve. In this study, they recorded events for several days before imaging the tissue. There is a tradeoff between the amount of time that can be recorded and the time resolution, or frequency of event recording, because the length of the protein chain is limited by the size of the cell.

“The total amount of information it could store is fixed, but we could in principle slow down or increase the speed of the growth of the chain,” Linghu says. “If we want to record for a longer time, we could slow down the synthesis so that it will reach the size of the cell within, let’s say two weeks. In that way we could record longer, but with less time resolution.”
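
The tradeoff Linghu describes amounts to simple arithmetic: for a chain with a fixed maximum number of subunits, halving the growth rate doubles the recording window but also doubles the time each subunit represents. The numbers in the sketch below are purely illustrative, not measurements from the study.

```python
def recording_tradeoff(max_subunits, subunits_per_hour):
    """Toy duration/resolution calculation for a fixed-capacity chain."""
    duration_hours = max_subunits / subunits_per_hour
    hours_per_subunit = 1.0 / subunits_per_hour  # time resolution
    return duration_hours, hours_per_subunit


# Assume, hypothetically, that ~1,000 subunits fit before the chain fills the cell.
for rate in (10.0, 5.0, 2.5):  # subunits added per hour
    duration, resolution = recording_tradeoff(1000, rate)
    print(f"{rate:>4} subunits/h -> records {duration:6.1f} h "
          f"at {resolution:.2f} h per subunit")
```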

The researchers are also working on engineering the system so that it can record multiple types of events in the same chain, by increasing the number of different subunits that can be incorporated.

The research was funded by the Hock E. Tan and K. Lisa Yang Center for Autism Research, John Doerr, the National Institutes of Health, the National Science Foundation, the U.S. Army Research Office, and the Howard Hughes Medical Institute.

Season’s Greetings from the McGovern Institute

This year’s holiday video was inspired by Ev Fedorenko’s July 2022 Nature Neuroscience paper, which found similar patterns of brain activation and language selectivity across speakers of 45 different languages.

Universal language network

Ev Fedorenko uses the widely translated book “Alice in Wonderland” to test brain responses to different languages. Photo: Caitlin Cunningham

Over several decades, neuroscientists have created a well-defined map of the brain’s “language network,” or the regions of the brain that are specialized for processing language. Found primarily in the left hemisphere, this network includes regions within Broca’s area, as well as in other parts of the frontal and temporal lobes. Although roughly 7,000 languages are currently spoken and signed across the globe, the vast majority of those mapping studies have been done in English speakers as they listened to or read English texts.

To truly understand the cognitive and neural mechanisms that allow us to learn and process such diverse languages, Fedorenko and her team scanned the brains of speakers of 45 different languages while they listened to Alice in Wonderland in their native language. The results show that the speakers’ language networks appear to be essentially the same as those of native English speakers, suggesting that the location and key properties of the language network are universal.

The many languages of McGovern

English may be the primary language used by McGovern researchers, but more than 35 other languages are spoken by scientists and engineers at the McGovern Institute. Our holiday video features 30 of these researchers saying Happy New Year in their native (or learned) language.

Silent synapses are abundant in the adult brain

MIT neuroscientists have discovered that the adult brain contains millions of “silent synapses” — immature connections between neurons that remain inactive until they’re recruited to help form new memories.

Until now, it was believed that silent synapses were present only during early development, when they help the brain learn the new information that it’s exposed to early in life. However, the new MIT study revealed that in adult mice, about 30 percent of all synapses in the brain’s cortex are silent.

The existence of these silent synapses may help to explain how the adult brain is able to continually form new memories and learn new things without having to modify existing conventional synapses, the researchers say.

“These silent synapses are looking for new connections, and when important new information is presented, connections between the relevant neurons are strengthened. This lets the brain create new memories without overwriting the important memories stored in mature synapses, which are harder to change,” says Dimitra Vardalaki, an MIT graduate student and the lead author of the new study.

Mark Harnett, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research, is the senior author of the paper, which appears today in Nature. Kwanghun Chung, an associate professor of chemical engineering at MIT, is also an author.

A surprising discovery

When scientists first discovered silent synapses decades ago, they were seen primarily in the brains of young mice and other animals. During early development, these synapses are believed to help the brain acquire the massive amounts of information that babies need to learn about their environment and how to interact with it. In mice, these synapses were believed to disappear by about 12 days of age (equivalent to the first months of human life).

However, some neuroscientists have proposed that silent synapses may persist into adulthood and help with the formation of new memories. Evidence for this has been seen in animal models of addiction, which is thought to be largely a disorder of aberrant learning.

Theoretical work in the field from Stefano Fusi and Larry Abbott of Columbia University has also proposed that neurons must display a wide range of different plasticity mechanisms to explain how brains can both efficiently learn new things and retain them in long-term memory. In this scenario, some synapses must be established or modified easily, to form the new memories, while others must remain much more stable, to preserve long-term memories.

In the new study, the MIT team did not set out specifically to look for silent synapses. Instead, they were following up on an intriguing finding from a previous study in Harnett’s lab. In that paper, the researchers showed that within a single neuron, dendrites — antenna-like extensions that protrude from neurons — can process synaptic input in different ways, depending on their location.

As part of that study, the researchers tried to measure neurotransmitter receptors in different dendritic branches, to see if that would help to account for the differences in their behavior. To do that, they used a technique called eMAP (epitope-preserving Magnified Analysis of the Proteome), developed by Chung. Using this technique, researchers can physically expand a tissue sample and then label specific proteins in the sample, making it possible to obtain super-high-resolution images.

While they were doing that imaging, they made a surprising discovery. “The first thing we saw, which was super bizarre and we didn’t expect, was that there were filopodia everywhere,” Harnett says.

Filopodia, thin membrane protrusions that extend from dendrites, have been seen before, but neuroscientists didn’t know exactly what they do. That’s partly because filopodia are so tiny that they are difficult to see using traditional imaging techniques.

After making this observation, the MIT team set out to try to find filopodia in other parts of the adult brain, using the eMAP technique. To their surprise, they found filopodia in the mouse visual cortex and other parts of the brain, at a level 10 times higher than previously seen. They also found that filopodia had neurotransmitter receptors called NMDA receptors, but no AMPA receptors.

A typical active synapse has both of these types of receptors, which bind the neurotransmitter glutamate. NMDA receptors normally require cooperation with AMPA receptors to pass signals because NMDA receptors are blocked by magnesium ions at the normal resting potential of neurons. Thus, when AMPA receptors are not present, synapses that have only NMDA receptors cannot pass along an electric current and are referred to as “silent.”
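
That gating logic can be captured in a few lines. The sketch below is a simplified, generic biophysical model, not code from the paper: it uses a commonly cited Jahr-Stevens-style expression for the magnesium block, with illustrative conductance values, to show that an NMDA-only synapse passes almost no current at resting potential and conducts once the membrane is depolarized.

```python
import math


def nmda_unblocked_fraction(v_mv, mg_mm=1.0):
    """Fraction of NMDA conductance not blocked by Mg2+ at membrane
    potential v_mv (Jahr & Stevens-style expression; illustrative)."""
    return 1.0 / (1.0 + (mg_mm / 3.57) * math.exp(-0.062 * v_mv))


def glutamatergic_current(v_mv, g_ampa_ns, g_nmda_ns, e_rev_mv=0.0):
    """Rough total synaptic current (in nA) at potential v_mv."""
    i_ampa = g_ampa_ns * (v_mv - e_rev_mv) * 1e-3
    i_nmda = g_nmda_ns * nmda_unblocked_fraction(v_mv) * (v_mv - e_rev_mv) * 1e-3
    return i_ampa + i_nmda


# A "silent" synapse: NMDA receptors only, no AMPA, at rest (-70 mV).
print(glutamatergic_current(-70.0, g_ampa_ns=0.0, g_nmda_ns=1.0))  # tiny current
# Pair glutamate with depolarization (e.g., toward -30 mV): the block lifts.
print(glutamatergic_current(-30.0, g_ampa_ns=0.0, g_nmda_ns=1.0))
```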

Unsilencing synapses

To investigate whether these filopodia might be silent synapses, the researchers used a modified version of an experimental technique known as patch clamping. This allowed them to monitor the electrical activity generated at individual filopodia as they tried to stimulate them by mimicking the release of the neurotransmitter glutamate from a neighboring neuron.

Using this technique, the researchers found that glutamate would not generate any electrical signal in the filopodium receiving the input, unless the NMDA receptors were experimentally unblocked. This offers strong support for the theory that filopodia represent silent synapses within the brain, the researchers say.

The researchers also showed that they could “unsilence” these synapses by combining glutamate release with an electrical current coming from the body of the neuron. This combined stimulation leads to accumulation of AMPA receptors in the silent synapse, allowing it to form a strong connection with the nearby axon that is releasing glutamate.

The researchers found that converting silent synapses into active synapses was much easier than altering mature synapses.

“If you start with an already functional synapse, that plasticity protocol doesn’t work,” Harnett says. “The synapses in the adult brain have a much higher threshold, presumably because you want those memories to be pretty resilient. You don’t want them constantly being overwritten. Filopodia, on the other hand, can be captured to form new memories.”

“Flexible and robust”

The findings offer support for the theory proposed by Abbott and Fusi that the adult brain includes highly plastic synapses that can be recruited to form new memories, the researchers say.

“This paper is, as far as I know, the first real evidence that this is how it actually works in a mammalian brain,” Harnett says. “Filopodia allow a memory system to be both flexible and robust. You need flexibility to acquire new information, but you also need stability to retain the important information.”

The researchers are now looking for evidence of these silent synapses in human brain tissue. They also hope to study whether the number or function of these synapses is affected by factors such as aging or neurodegenerative disease.

“It’s entirely possible that by changing the amount of flexibility you’ve got in a memory system, it could become much harder to change your behaviors and habits or incorporate new information,” Harnett says. “You could also imagine finding some of the molecular players that are involved in filopodia and trying to manipulate some of those things to try to restore flexible memory as we age.”

The research was funded by the Boehringer Ingelheim Fonds, the National Institutes of Health, the James W. and Patricia T. Poitras Fund at MIT, a Klingenstein-Simons Fellowship, a Vallee Foundation Scholarship, and a McKnight Scholarship.

How touch dampens the brain’s response to painful stimuli

McGovern Investigator Fan Wang. Photo: Caitlin Cunningham

When we press our temples to soothe an aching head or rub an elbow after an unexpected blow, it often brings some relief. It is believed that pain-responsive cells in the brain quiet down when these neurons also receive touch inputs, say scientists at MIT’s McGovern Institute, who for the first time have watched this phenomenon play out in the brains of mice.

The team’s discovery, reported November 16, 2022, in the journal Science Advances, offers researchers a deeper understanding of the complicated relationship between pain and touch and could offer some insights into chronic pain in humans. “We’re interested in this because it’s a common human experience,” says McGovern Investigator Fan Wang. “When some part of your body hurts, you rub it, right? We know touch can alleviate pain in this way.” But, she says, the phenomenon has been very difficult for neuroscientists to study.

Modeling pain relief

Touch-mediated pain relief may begin in the spinal cord, where prior studies have found pain-responsive neurons whose signals are dampened in response to touch. But there have been hints that the brain was involved too. Wang says this aspect of the response has been largely unexplored, because it can be hard to monitor the brain’s response to painful stimuli amidst all the other neural activity happening there—particularly when an animal moves.

So while her team knew that mice respond to a potentially painful stimulus on the cheek by wiping their faces with their paws, they couldn’t follow the specific pain response in the animals’ brains to see if that rubbing helped settle it down. “If you look at the brain when an animal is rubbing the face, movement and touch signals completely overwhelm any possible pain signal,” Wang explains.

She and her colleagues have found a way around this obstacle. Instead of studying the effects of face-rubbing, they have focused their attention on a subtler form of touch: the gentle vibrations produced by the movement of the animals’ whiskers. Mice use their whiskers to explore, moving them back and forth in a rhythmic motion known as whisking to feel out their environment. This motion activates touch receptors in the face and sends information to the brain in the form of vibrotactile signals. The human brain receives the same kind of touch signals when a person shakes their hand as they pull it back from a painfully hot pan—another way we seek touch-mediated pain relief.

Wang and her colleagues found that this whisker movement alters the way mice respond to bothersome heat or a poke on the face—both of which usually lead to face rubbing. “When the unpleasant stimuli were applied in the presence of their self-generated vibrotactile whisking…they respond much less,” she says. Sometimes, she says, whisking animals entirely ignore these painful stimuli.

In the brain’s somatosensory cortex, where touch and pain signals are processed, the team found signaling changes that seem to underlie this effect. “The cells that preferentially respond to heat and poking are less frequently activated when the mice are whisking,” Wang says. “They’re less likely to show responses to painful stimuli.” Even when whisking animals did rub their faces in response to painful stimuli, the team found that neurons in the brain took more time to adopt the firing patterns associated with that rubbing movement. “When there is a pain stimulation, usually the trajectory of the population dynamics quickly moved to wiping. But if you already have whisking, that takes much longer,” Wang says.

Wang notes that even in the fraction of a second before provoked mice begin rubbing their faces, when the animals are relatively still, it can be difficult to sort out which brain signals are related to perceiving heat and poking and which are involved in whisker movement. Her team developed computational tools to disentangle these, and are hoping other neuroscientists will use the new algorithms to make sense of their own data.

Whisking’s effects on pain signaling seem to depend on dedicated touch-processing circuitry that sends tactile information to the somatosensory cortex from a brain region called the ventral posterior thalamus. When the researchers blocked that pathway, whisking no longer dampened the animals’ response to painful stimuli. Now, Wang says, she and her team are eager to learn how this circuitry works with other parts of the brain to modulate the perception and response to painful stimuli.

Wang says the new findings might shed light on a condition called thalamic pain syndrome, a chronic pain disorder that can develop in patients after a stroke that affects the brain’s thalamus. “Such strokes may impair the functions of thalamic circuits that normally relay pure touch signals and dampen painful signals to the cortex,” she says.

RNA-sensing system controls protein expression in cells based on specific cell states

Researchers at the Broad Institute of MIT and Harvard and the McGovern Institute for Brain Research at MIT have developed a system that can detect a particular RNA sequence in live cells and produce a protein of interest in response. Using the technology, the team showed how they could identify specific cell types, detect and measure changes in the expression of individual genes, track transcriptional states, and control the production of proteins encoded by synthetic mRNA.

The platform, called Reprogrammable ADAR Sensors, or RADARS, even allowed the team to target and kill a specific cell type. The team said RADARS could one day help researchers detect and selectively kill tumor cells, or edit the genome in specific cells. The study appears today in Nature Biotechnology and was led by co-first authors Kaiyi Jiang (MIT), Jeremy Koob (Broad), Xi Chen (Broad), Rohan Krajeski (MIT), and Yifan Zhang (Broad).

“One of the revolutions in genomics has been the ability to sequence the transcriptomes of cells,” said Fei Chen, a core institute member at the Broad, Merkin Fellow, assistant professor at Harvard University, and co-corresponding author on the study. “That has really allowed us to learn about cell types and states. But, often, we haven’t been able to manipulate those cells specifically. RADARS is a big step in that direction.”

“Right now, the tools that we have to leverage cell markers are hard to develop and engineer,” added Omar Abudayyeh, a McGovern Institute Fellow and co-corresponding author on the study. “We really wanted to make a programmable way of sensing and responding to a cell state.”

Jonathan Gootenberg, who is also a McGovern Institute Fellow and co-corresponding author, says that their team was eager to build a tool to take advantage of all the data provided by single-cell RNA sequencing, which has revealed a vast array of cell types and cell states in the body.

“We wanted to ask how we could manipulate cellular identities in a way that was as easy as editing the genome with CRISPR,” he said. “And we’re excited to see what the field does with it.” 

Study authors (from left to right) Omar Abudayyeh, Jonathan Gootenberg, and Fei Chen. Photo: Namrita Sengupta

Repurposing RNA editing

The RADARS platform generates a desired protein when it detects a specific RNA by taking advantage of RNA editing that occurs naturally in cells.

The system consists of an RNA containing two components: a guide region, which binds to the target RNA sequence that scientists want to sense in cells, and a payload region, which encodes the protein of interest, such as a fluorescent signal or a cell-killing enzyme. When the guide RNA binds to the target RNA, this generates a short double-stranded RNA sequence containing a mismatch between two bases in the sequence — adenosine (A) and cytosine (C). This mismatch attracts a naturally occurring family of RNA-editing proteins called adenosine deaminases acting on RNA (ADARs).

In RADARS, the A-C mismatch appears within a “stop signal” in the guide RNA, which prevents the production of the desired payload protein. The ADARs edit and inactivate the stop signal, allowing for the translation of that protein. The order of these molecular events is key to RADARS’s function as a sensor; the protein of interest is produced only after the guide RNA binds to the target RNA and the ADARs disable the stop signal.
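
Conceptually, the sensor behaves like a conditional gate: the payload is translated only if the guide has hybridized to its target RNA and ADAR has edited away the stop signal. The toy logic model below is only an illustration of that ordering, with hypothetical function names; it does not model the underlying RNA biochemistry.

```python
def radars_payload_produced(target_rna_present, adar_available=True):
    """Toy logic model of a single RADARS sensor."""
    # The guide:target duplex, with its A-C mismatch, only forms when the
    # target RNA is present.
    duplex_with_mismatch = target_rna_present
    # ADAR editing of the stop signal requires that duplex.
    stop_signal_edited = duplex_with_mismatch and adar_available
    # The payload is translated only after the stop signal is inactivated.
    return stop_signal_edited


assert radars_payload_produced(target_rna_present=True) is True
assert radars_payload_produced(target_rna_present=False) is False
assert radars_payload_produced(True, adar_available=False) is False
```

The multi-input sensors described later in the article behave like AND or OR combinations of such gates.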

The team tested RADARS in different cell types and with different target sequences and protein products. They found that RADARS distinguished between kidney, uterine, and liver cells, and could produce different fluorescent signals as well as a caspase, an enzyme that kills cells. RADARS also measured gene expression over a large dynamic range, demonstrating their utility as sensors.

Most systems successfully detected target sequences using the cell’s native ADAR proteins, but the team found that supplementing the cells with additional ADAR proteins increased the strength of the signal. Abudayyeh says both of these cases are potentially useful; taking advantage of the cell’s native editing proteins would minimize the chance of off-target editing in therapeutic applications, but supplementing them could help produce stronger effects when RADARS are used as a research tool in the lab.

On the radar

Abudayyeh, Chen, and Gootenberg say that because both the guide RNA and payload RNA are modifiable, others can easily redesign RADARS to target different cell types and produce different signals or payloads. They also engineered more complex RADARS, in which cells produced a protein if they sensed two RNA sequences and another if they sensed either one RNA or another. The team adds that similar RADARS could help scientists detect more than one cell type at the same time, as well as complex cell states that can’t be defined by a single RNA transcript.

Ultimately, the researchers hope to develop a set of design rules so that others can more easily develop RADARS for their own experiments. They suggest other scientists could use RADARS to manipulate immune cell states, track neuronal activity in response to stimuli, or deliver therapeutic mRNA to specific tissues.

“We think this is a really interesting paradigm for controlling gene expression,” said Chen. “We can’t even anticipate what the best applications will be. That really comes from the combination of people with interesting biology and the tools you develop.”

This work was supported by the McGovern Institute Neurotechnology (MINT) program, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, the G. Harold & Leila Y. Mathers Charitable Foundation, the Massachusetts Institute of Technology, Impetus Grants, the Cystic Fibrosis Foundation, Google Ventures, FastGrants, the McGovern Institute, the National Institutes of Health, the Burroughs Wellcome Fund, the Searle Scholars Foundation, the Harvard Stem Cell Institute, and the Merkin Institute.

A “golden era” to study the brain

As an undergraduate, Mitch Murdock was a rare science-humanities double major, specializing in both English and molecular, cellular, and developmental biology at Yale University. Today, as a doctoral student in the MIT Department of Brain and Cognitive Sciences, he sees obvious ways that his English education expanded his horizons as a neuroscientist.

“One of my favorite parts of English was trying to explore interiority, and how people have really complicated experiences inside their heads,” Murdock explains. “I was excited about trying to bridge that gap between internal experiences of the world and that actual biological substrate of the brain.”

Though he can see those connections now, it wasn’t until after Yale that Murdock became interested in brain sciences. As an undergraduate, he was in a traditional molecular biology lab. He even planned to stay there after graduation as a research technician; fortunately, though, he says his advisor Ron Breaker encouraged him to explore the field. That’s how Murdock ended up in a new lab run by Conor Liston, an associate professor at Weill Cornell Medicine, who studies how factors such as stress and sleep regulate the modeling of brain circuits.

It was in Liston’s lab that Murdock was first exposed to neuroscience and began to see the brain as the biological basis of the philosophical questions about experience and emotion that interested him. “It was really in his lab where I thought, ‘Wow, this is so cool. I have to do a PhD studying neuroscience,’” Murdock laughs.

During his time as a research technician, Murdock examined the impact of chronic stress on brain activity in mice. Specifically, he was interested in ketamine, a fast-acting antidepressant prone to being abused, with the hope that better understanding how ketamine works will help scientists find safer alternatives. He focused on dendritic spines, small protrusions on neurons’ dendrites that help transmit electrical signals between neurons and provide the physical substrate for memory storage. His findings, Murdock explains, suggested that ketamine works by recovering dendritic spines that can be lost after periods of chronic stress.

After three years at Weill Cornell, Murdock decided to pursue doctoral studies in neuroscience, hoping to continue some of the work he started with Liston. He chose MIT because of the research being done on dendritic spines in the lab of Elly Nedivi, the William R. (1964) and Linda R. Young Professor of Neuroscience in The Picower Institute for Learning and Memory.

Once again, though, the opportunity to explore a wider set of interests fortuitously led Murdock to a new passion. During lab rotations at the beginning of his PhD program, Murdock spent time shadowing a physician at Massachusetts General Hospital who was working with Alzheimer’s disease patients.

“Everyone knows that Alzheimer’s doesn’t have a cure. But I realized that, really, if you have Alzheimer’s disease, there’s very little that can be done,” he says. “That was a big wake-up call for me.”

After that experience, Murdock strategically planned his remaining lab rotations, eventually settling into the lab of Li-Huei Tsai, the Picower Professor of Neuroscience and the director of the Picower Institute. For the past five years, Murdock has worked with Tsai on various strands of Alzheimer’s research.

In one project, for example, members of the Tsai lab have shown how certain kinds of non-invasive light and sound stimulation induce brain activity that can improve memory loss in mouse models of Alzheimer’s. Scientists think that, during sleep, small movements in blood vessels drive cerebrospinal fluid into the brain, which, in turn, flushes out toxic metabolic waste. Murdock’s research suggests that certain kinds of stimulation might drive a similar process, flushing out waste that can exacerbate memory loss.

Much of his work is focused on the activity of single cells in the brain. Are certain neurons or types of neurons genetically predisposed to degenerate, or do they break down randomly? Why do certain subtypes of cells appear to be dysfunctional earlier on in the course of Alzheimer’s disease? How do changes in blood flow in vascular cells affect degeneration? All of these questions, Murdock believes, will help scientists better understand the causes of Alzheimer’s, which will translate eventually into developing cures and therapies.

To answer these questions, Murdock relies on new single-cell sequencing techniques that he says have changed the way we think about the brain. “This has been a big advance for the field, because we know there are a lot of different cell types in the brain, and we think that they might contribute differentially to Alzheimer’s disease risk,” says Murdock. “We can’t think of the brain as only about neurons.”

Murdock says that that kind of “big-picture” approach — thinking about the brain as a compilation of many different cell types that are all interacting — is the central tenet of his research. To look at the brain in the kind of detail that approach requires, Murdock works with Ed Boyden, the Y. Eva Tan Professor in Neurotechnology, a professor of biological engineering and brain and cognitive sciences at MIT, a Howard Hughes Medical Institute investigator, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research. Working with Boyden has allowed Murdock to use new technologies such as expansion microscopy and genetically encoded sensors to aid his research.

That kind of new technology, he adds, has helped blow the field wide open. “This is such a cool time to be a neuroscientist because the tools available now make this a golden era to study the brain.” That rapid intellectual expansion applies to the study of Alzheimer’s as well, including newly understood connections between the immune system and Alzheimer’s — an area in which Murdock says he hopes to continue after graduation.

Right now, though, Murdock is focused on a review paper synthesizing some of the latest research. Given the mountains of new Alzheimer’s work coming out each year, he admits that synthesizing all the data is a bit “crazy,” but he couldn’t be happier to be in the middle of it. “There’s just so much that we are learning about the brain from these new techniques, and it’s just so exciting.”

Personal pursuits

This story originally appeared in the Fall 2022 issue of BrainScan.

***

Many neuroscientists were drawn to their careers out of curiosity and wonder. Their deep desire to understand how the brain works drew them into the lab and keeps them coming back, digging deeper and exploring more each day. But for some, the work is more personal.

Several McGovern faculty say they entered their field because someone in their lives was dealing with a brain disorder that they wanted to better understand. They are committed to unraveling the basic biology of those conditions, knowing that knowledge is essential to guide the development of better treatments.

The distance from basic research to clinical progress is shortening, and many young neuroscientists hope not just to deepen scientific understanding of the brain, but to have direct impact on the lives of patients. Some want to know why people they love are suffering from neurological disorders or mental illness; others seek to understand the ways in which their own brains work differently than others. But above all, they want better treatments for people affected by such disorders.

Seeking answers

That’s true for Kian Caplan, a graduate student in MIT’s Department of Brain and Cognitive Sciences who was diagnosed with Tourette syndrome around age 13. At the time, learning that the repetitive, uncontrollable movements and vocal tics he had been making for most of his life were caused by a neurological disorder was something of a relief. But it didn’t take long for Caplan to realize his diagnosis came with few answers.

Graduate student Kian Caplan studies the brain circuits associated with Tourette syndrome and obsessive-compulsive disorder in Guoping Feng and Fan Wang’s labs at the McGovern Institute. Photo: Steph Stevens

Tourette syndrome has been estimated to occur in about six of every 1,000 children, but its neurobiology remains poorly understood.

“The doctors couldn’t really explain why I can’t control the movements and sounds I make,” he says. “They couldn’t really explain why my symptoms wax and wane, or why the tics I have aren’t always the same.”

That lack of understanding is not just frustrating for curious kids like Caplan. It means that researchers have been unable to develop treatments that target the root cause of Tourette syndrome. Drugs that dampen signaling in parts of the brain that control movement can help suppress tics, but not without significant side effects. Caplan has tried those drugs. For him, he says, “they’re not worth the suppression.”

Advised by Fan Wang and McGovern Associate Director Guoping Feng, Caplan is looking for answers. A mouse model of obsessive-compulsive disorder developed in Feng’s lab was recently found to exhibit repetitive movements similar to those of people with Tourette syndrome, and Caplan is working to characterize those tic-like movements. He will use the mouse model to examine the brain circuits underlying the two conditions, which often co-occur in people. Broadly, researchers think Tourette syndrome arises due to dysregulation of cortico-striatal-thalamo-cortical circuits, which connect distant parts of the brain to control movement. Caplan and Wang suspect that the brainstem — a structure found where the brain connects to the spinal cord, known for organizing motor movement into different modules — is probably involved, too.

Wang’s research group studies the brainstem’s role in movement, but she says that like most researchers, she hadn’t considered its role in Tourette syndrome until Caplan joined her lab. That’s one reason Caplan, who has long been a mentor and advocate for students with neurodevelopmental disorders, thinks neuroscience needs more neurodiversity.

“I think we need more representation in basic science research by the people who actually live with those conditions,” he says. Their experiences can lead to insights that may be inaccessible to others, he says, but significant barriers in academia often prevent this kind of representation. Caplan wants to see institutions make systemic changes to ensure that neurodiverse and otherwise minority individuals are able to thrive in academia. “I’m not an exception,” he says. “There should be more people like me here, but the present system makes that incredibly difficult.”

Overcoming adversity

Like Caplan, Lace Riggs faced significant challenges on her path to studying the brain. She grew up in Southern California’s Inland Empire, where issues of social disparity, chronic stress, drug addiction, and mental illness were a part of everyday life.

Postdoctoral fellow Lace Riggs studies the origins of neurodevelopmental conditions in Guoping Feng’s lab at the McGovern Institute. Photo: Lace Riggs

“Living in severe poverty and relying on government assistance without access to adequate education and resources led everyone I know and love to suffer tremendously, myself included,” says Riggs, a postdoctoral fellow in the Feng lab.

“There are not a lot of people like me who make it to this stage,” says Riggs, who has lost friends and family members to addiction, mental illness, and suicide. “There’s a reason for that,” she adds. “It’s really, really difficult to get through the educational system and to overcome socioeconomic barriers.”

Today, Riggs is investigating the origins of neurodevelopmental conditions, hoping to pave the way to better treatments for brain disorders by uncovering the molecular changes that alter the structure and function of neural circuits.

Riggs says that the adversities she faced early in life offered valuable insights in the pursuit of these goals. She first became interested in the brain because she wanted to understand how our experiences have a lasting impact on who we are — including in ways that leave people vulnerable to psychiatric problems.

“While the need for more effective treatments led me to become interested in psychiatry, my fascination with the brain’s unique ability to adapt is what led me to neuroscience,” says Riggs.

After finishing high school, Riggs attended California State University in San Bernardino and became the only member of her family to attend university or attempt a four-year degree. Today, she spends her days working with mice that carry mutations linked to autism or ADHD in humans, studying the animals’ behavior and monitoring their neural activity. She expects that aberrant neural circuit activity in these conditions may also contribute to mood disorders, whose origins are harder to tease apart because they often arise when genetic and environmental factors intersect. Ultimately, Riggs says, she wants to understand how our genes dictate whether an experience will alter neural signaling and impact mental health in a long-lasting way.

Riggs uses patch clamp electrophysiology to record the strength of inhibitory and excitatory synaptic input onto individual neurons (white arrow) in an animal model of autism. Image: Lace Riggs

“If we understand how these long-lasting synaptic changes come about, then we might be able to leverage these mechanisms to develop new and more effective treatments.”

While the turmoil of her childhood is in the past, Riggs says it is not forgotten — in part, because of its lasting effects on her own mental health.  She talks openly about her ongoing struggle with social anxiety and complex post-traumatic stress disorder because she is passionate about dismantling the stigma surrounding these conditions. “It’s something I have to deal with every day,” Riggs says. That means coping with symptoms like difficulty concentrating, hypervigilance, and heightened sensitivity to stress. “It’s like a constant hum in the background of my life, it never stops,” she says.

“I urge all of us to strive, not only to make scientific discoveries to move the field forward,” says Riggs, “but to improve the accessibility of this career to those whose lived experiences are required to truly accomplish that goal.”

Making and breaking habits

As part of our Ask the Brain series, science writer Shafaq Zia explores the question, “How are habits formed in the brain?”

____

Have you ever wondered why it is so hard to break free of bad habits like nail biting or obsessive social networking?

When we repeat an action over and over again, the behavioral pattern becomes automated in our brain, according to Jill R. Crittenden, molecular biologist and scientific advisor at the McGovern Institute for Brain Research at MIT. For over a decade, Crittenden worked as a research scientist in the lab of Ann Graybiel, where one of the key questions scientists are working to answer is: how are habits formed?

Making habits

To understand how certain actions get wired into our neural pathways, this team of McGovern researchers trained rats to run down a maze to receive a reward: if the animals turned left, they got rich chocolate milk; if they turned right, only sugar water. With this setup, the scientists wanted to see whether the rats could “learn to associate a cue with which direction they should turn in the maze in order to get the chocolate milk reward.”

Over time, the rats grew extremely habitual in their behavior; “they always turned the correct direction and the places where their paws touched, in a fairly long maze, were exactly the same every time,” said Crittenden.

This isn’t a coincidence. When we’re first learning to do something, the frontal lobe and basal ganglia of the brain are highly active and doing a lot of calculations. These brain regions work together to associate behaviors with thoughts, emotions, and, most importantly, motor movements. But when we repeat an action over and over again, like the rats running down the maze, our brains become more efficient and fewer neurons are required to achieve the goal. This means that the more you do something, the easier it gets to carry out, because the behavior becomes etched into our brains as a motor routine.

But habits are complicated and they come in many different flavors, according to Crittenden. “I think we don’t have a great handle on how the differences [in our many habits] are separable neurobiologically, and so people argue a lot about how do you know that something’s a habit.”

The easiest way for scientists to test this in rodents is to see if the animal engages in the behavior even in the absence of reward. In this particular experiment, the researchers take away the reward, chocolate milk, to see whether the rats continue to run down the maze correctly. To take it a step further, they mix the chocolate milk with lithium chloride, which upsets the rats’ stomachs. Despite all this, the rats continue to run down the maze and turn left toward the chocolate milk, as they had learned to do over and over again.

Breaking habits

So does that mean once a habit is formed, it is impossible to shake it? Not quite. But it is tough. Rewards are a key building block to forming habits because our dopamine levels surge when we learn that an action is unexpectedly rewarded. For example, when the rats first learn to run down the maze, they’re motivated to receive the chocolate milk.

But things get complicated once the habit is formed. Researchers have found that this dopamine surge in response to reward ceases after a behavior becomes a habit. Instead, the brain begins to release dopamine at the first cue or action that it previously learned leads to the reward, so we are motivated to engage in the full behavioral sequence anyway, even if the reward isn’t there anymore.
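
One simple way to picture this shift, not a model used in the article, is a toy reward-prediction simulation in which a “dopamine-like” surprise signal equals the difference between expected and received reward: early in training the signal appears at the reward itself, and with repetition it migrates to the cue that predicts the reward. All values below are illustrative.

```python
# Toy reward-prediction sketch (illustrative only, not from the article).
alpha = 0.2       # learning rate
cue_value = 0.0   # how strongly the cue currently predicts the reward


def run_trial(reward=1.0):
    """One cue -> reward trial; returns the surprise ('dopamine-like')
    signal at the cue and at the reward."""
    global cue_value
    signal_at_cue = cue_value              # an unpredicted cue signals its learned value
    signal_at_reward = reward - cue_value  # the reward surprises until fully predicted
    cue_value += alpha * signal_at_reward  # strengthen the cue -> reward association
    return signal_at_cue, signal_at_reward


for trial in range(1, 31):
    at_cue, at_reward = run_trial()
    if trial in (1, 5, 15, 30):
        print(f"trial {trial:2d}: signal at cue = {at_cue:.2f}, "
              f"at reward = {at_reward:.2f}")
```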

This means we don’t have as much self-control as we think we do, which may also be the reason why it’s so hard to break the cycle of addiction. “People will report that they know this is bad for them. They don’t want it. And nevertheless, they select that action,” said Crittenden.

One common method to break the behavior, in this case, is called extinction. This is where psychologists try to weaken the association between the cue and the reward that led to habit formation in the first place. For example, if the rat no longer associates the cue to run down the maze with a reward, it will stop engaging in that behavior.

So the next time you beat yourself up over being unable to stick to a diet or sleep at a certain time, give yourself some grace and know that with consistency, a new, healthier habit can be born.

How the brain generates rhythmic behavior

Many of our bodily functions, such as walking, breathing, and chewing, are controlled by brain circuits called central oscillators, which generate rhythmic firing patterns that regulate these behaviors.

MIT neuroscientists have now discovered the neuronal identity and mechanism underlying one of these circuits: an oscillator that controls the rhythmic back-and-forth sweeping of tactile whiskers, or whisking, in mice. This is the first time that any such oscillator has been fully characterized in mammals.

The MIT team found that the whisking oscillator consists of a population of inhibitory neurons in the brainstem that fires rhythmic bursts during whisking. As each neuron fires, it also inhibits some of the other neurons in the network, allowing the overall population to generate a synchronous rhythm that retracts the whiskers from their protracted positions.

“We have defined a mammalian oscillator molecularly, electrophysiologically, functionally, and mechanistically,” says Fan Wang, an MIT professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research. “It’s very exciting to see a clearly defined circuit and mechanism of how rhythm is generated in a mammal.”

Wang is the senior author of the study, which appears today in Nature. The lead authors of the paper are MIT research scientists Jun Takatoh and Vincent Prevosto.

Rhythmic behavior

Most of the research that clearly identified central oscillator circuits has been done in invertebrates. For example, Eve Marder’s lab at Brandeis University found cells in the stomatogastric ganglion in lobsters and crabs that generate oscillatory activity to control rhythmic motion of the digestive tract.

Characterizing oscillators in mammals, especially in awake behaving animals, has proven to be highly challenging. The oscillator that controls walking is believed to be distributed throughout the spinal cord, making it difficult to precisely identify the neurons and circuits involved. The oscillator that generates rhythmic breathing is located in a part of the brain stem called the pre-Bötzinger complex, but the exact identity of the oscillator neurons is not fully understood.

“There haven’t been detailed studies in awake behaving animals, where one can record from molecularly identified oscillator cells and manipulate them in a precise way,” Wang says.

Whisking is a prominent rhythmic exploratory behavior in many mammals, which use their tactile whiskers to detect objects and sense textures. In mice, whiskers extend and retract at a frequency of about 12 cycles per second. Several years ago, Wang’s lab set out to identify the cells and the mechanism that control this oscillation.

To find the location of the whisking oscillator, the researchers traced back from the motor neurons that innervate whisker muscles. Using a modified rabies virus that infects axons, the researchers were able to label a group of cells presynaptic to these motor neurons in a part of the brainstem called the vibrissa intermediate reticular nucleus (vIRt). This finding was consistent with previous studies showing that damage to this part of the brain eliminates whisking.

The researchers then found that about half of these vIRt neurons express a protein called parvalbumin, and that this subpopulation of cells drives the rhythmic motion of the whiskers. When these neurons are silenced, whisking activity is abolished.

Next, the researchers recorded electrical activity from these parvalbumin-expressing vIRt neurons in the brainstem of awake mice, a technically challenging task, and found that these neurons indeed have bursts of activity only during the whisker retraction period. Because these neurons provide inhibitory synaptic inputs to whisker motor neurons, it follows that rhythmic whisking is generated by a constant motor neuron protraction signal interrupted by the rhythmic retraction signal from these oscillator cells.

“That was a super satisfying and rewarding moment, to see that these cells are indeed the oscillator cells, because they fire rhythmically, they fire in the retraction phase, and they’re inhibitory neurons,” Wang says.

A maximum projection image showing tracked whiskers on the mouse muzzle. The right (control) side shows the back-and-forth rhythmic sweeping of the whiskers, while on the experimental side, where the whisking oscillator neurons are silenced, the whiskers move very little. Image: Wang Lab

“New principles”

The oscillatory bursting pattern of vIRt cells is initiated at the start of whisking. When the whiskers are not moving, these neurons fire continuously. When the researchers blocked vIRt neurons from inhibiting each other, the rhythm disappeared, and instead the oscillator neurons simply increased their rate of continuous firing.

This type of network, known as a recurrent inhibitory network, differs from the types of oscillators that have been seen in the stomatogastric neurons in lobsters, in which neurons intrinsically generate their own rhythm.
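
To illustrate the principle in the simplest possible terms, here is a generic rate-model sketch, not the detailed computational model from the theory collaboration described below: a single population that inhibits itself through a synaptic delay oscillates under constant drive, and setting the inhibition to zero (mimicking the blocking experiment) leaves it firing continuously at a steady rate. All parameters are illustrative.

```python
import math


def simulate_population(w_inh, t_max_ms=300.0, dt=0.1, tau_ms=5.0,
                        delay_ms=10.0, drive=2.0):
    """Rate-model sketch of a recurrently inhibitory population.

    w_inh: strength of the delayed inhibition the population exerts on
    itself. Returns the population firing-rate trace (arbitrary units).
    """
    def f(x):  # saturating firing-rate nonlinearity
        return 1.0 / (1.0 + math.exp(-4.0 * (x - 1.0)))

    n_delay = int(delay_ms / dt)
    rates = [0.1] * n_delay  # history buffer covering the synaptic delay
    for _ in range(int(t_max_ms / dt)):
        r_now = rates[-1]
        r_delayed = rates[-n_delay]
        dr = (-r_now + f(drive - w_inh * r_delayed)) / tau_ms
        rates.append(r_now + dt * dr)
    return rates[n_delay:]


with_inhibition = simulate_population(w_inh=4.0)     # rhythmic bursting
without_inhibition = simulate_population(w_inh=0.0)  # steady, continuous firing
steady = with_inhibition[1000:]  # discard the first 100 ms of transient
print(max(steady) - min(steady))                                        # large swings
print(max(without_inhibition[1000:]) - min(without_inhibition[1000:]))  # near zero
```

In this sketch the rhythm arises from the network’s delayed negative feedback rather than from pacemaker properties of the individual units, which mirrors the distinction Wang draws from the stomatogastric system.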

“Now we have found a mammalian network oscillator that is formed by all inhibitory neurons,” Wang says.

The MIT scientists also collaborated with a team of theorists led by David Golomb at Ben-Gurion University, Israel, and David Kleinfeld at the University of California at San Diego. The theorists created a detailed computational model outlining how whisking is controlled, which fits well with all experimental data. A paper describing that model will appear in an upcoming issue of Neuron.

Wang’s lab now plans to investigate other types of oscillatory circuits in mice, including those that control chewing and licking.

“We are very excited to find oscillators of these feeding behaviors and compare and contrast to the whisking oscillator, because they are all in the brain stem, and we want to know whether there’s some common theme or if there are many different ways to generate oscillators,” she says.

The research was funded by the National Institutes of Health.

Microscopy technique reveals hidden nanostructures in cells and tissues

Inside a living cell, proteins and other molecules are often tightly packed together. These dense clusters can be difficult to image because the fluorescent labels used to make them visible can’t wedge themselves in between the molecules.

MIT researchers have now developed a novel way to overcome this limitation and make those “invisible” molecules visible. Their technique allows them to “de-crowd” the molecules by expanding a cell or tissue sample before labeling the molecules, which makes the molecules more accessible to fluorescent tags.

This method, which builds on a widely used technique known as expansion microscopy previously developed at MIT, should allow scientists to visualize molecules and cellular structures that have never been seen before.

“It’s becoming clear that the expansion process will reveal many new biological discoveries. If biologists and clinicians have been studying a protein in the brain or another biological specimen, and they’re labeling it the regular way, they might be missing entire categories of phenomena,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology, a professor of biological engineering and brain and cognitive sciences at MIT, a Howard Hughes Medical Institute investigator, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.

Using this technique, Boyden and his colleagues showed that they could image a nanostructure found in the synapses of neurons. They also imaged the structure of Alzheimer’s-linked amyloid beta plaques in greater detail than has been possible before.

“Our technology, which we named expansion revealing, enables visualization of these nanostructures, which previously remained hidden, using hardware easily available in academic labs,” says Deblina Sarkar, an assistant professor in the Media Lab and one of the lead authors of the study.

The senior authors of the study are Boyden; Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory; and Thomas Blanpied, a professor of physiology at the University of Maryland. Other lead authors include Jinyoung Kang, an MIT postdoc, and Asmamaw Wassie, a recent MIT PhD recipient. The study appears today in Nature Biomedical Engineering.

De-crowding

Imaging a specific protein or other molecule inside a cell requires labeling it with a fluorescent tag carried by an antibody that binds to the target. Antibodies are about 10 nanometers long, while typical cellular proteins are usually about 2 to 5 nanometers in diameter, so if the target proteins are too densely packed, the antibodies can’t get to them.

This has been an obstacle to traditional imaging and also to the original version of expansion microscopy, which Boyden first developed in 2015. In the original version of expansion microscopy, researchers attached fluorescent labels to molecules of interest before they expanded the tissue. The labeling was done first, in part because the researchers had to use an enzyme to chop up proteins in the sample so the tissue could be expanded. This meant that the proteins couldn’t be labeled after the tissue was expanded.

To overcome that obstacle, the researchers had to find a way to expand the tissue while leaving the proteins intact. They used heat instead of enzymes to soften the tissue, allowing the tissue to expand 20-fold without being destroyed. Then, the separated proteins could be labeled with fluorescent tags after expansion.
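
A back-of-the-envelope check, using only the sizes quoted above, shows why this ordering matters: after a 20-fold expansion, proteins that sat 2 to 5 nanometers apart are effectively 40 to 100 nanometers apart, which leaves plenty of room for a roughly 10-nanometer antibody to reach each one. The short sketch below just restates that arithmetic.

```python
# Illustrative de-crowding arithmetic using the sizes quoted in the text.
expansion_factor = 20
antibody_length_nm = 10

for spacing_nm in (2, 5):  # typical protein spacing before expansion
    expanded_spacing_nm = spacing_nm * expansion_factor
    print(f"{spacing_nm} nm spacing -> {expanded_spacing_nm} nm after expansion; "
          f"room for a {antibody_length_nm} nm antibody: "
          f"{expanded_spacing_nm > antibody_length_nm}")
```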

With so many more proteins accessible for labeling, the researchers were able to identify tiny cellular structures within synapses, the connections between neurons that are densely packed with proteins. They labeled and imaged seven different synaptic proteins, which allowed them to visualize, in detail, “nanocolumns” consisting of calcium channels aligned with other synaptic proteins. These nanocolumns, which are believed to help make synaptic communication more efficient, were first discovered by Blanpied’s lab in 2016.

“This technology can be used to answer a lot of biological questions about dysfunction in synaptic proteins, which are involved in neurodegenerative diseases,” Kang says. “Until now there has been no tool to visualize synapses very well.”

New patterns

The researchers also used their new technique to image beta amyloid, a peptide that forms plaques in the brains of Alzheimer’s patients. Using brain tissue from mice, the researchers found that amyloid beta forms periodic nanoclusters, which had not been seen before. These clusters of amyloid beta also include potassium channels. The researchers also found amyloid beta molecules that formed helical structures along axons.

“In this paper, we don’t speculate as to what that biology might mean, but we show that it exists. That is just one example of the new patterns that we can see,” says Margaret Schroeder, an MIT graduate student who is also an author of the paper.

Sarkar says that she is fascinated by the nanoscale biomolecular patterns that this technology unveils. “With a background in nanoelectronics, I have developed electronic chips that require extremely precise alignment, in the nanofab. But when I see that in our brain Mother Nature has arranged biomolecules with such nanoscale precision, that really blows my mind,” she says.

Boyden and his group members are now working with other labs to study cellular structures such as protein aggregates linked to Parkinson’s and other diseases. In other projects, they are studying pathogens that infect cells and molecules that are involved in aging in the brain. Preliminary results from these studies have also revealed novel structures, Boyden says.

“Time and time again, you see things that are truly shocking,” he says. “It shows us how much we are missing with classical unexpanded staining.”

The researchers are also working on modifying the technique so they can image up to 20 proteins at a time. They are also working on adapting their process so that it can be used on human tissue samples.

Sarkar and her team, on the other hand, are developing tiny wirelessly powered nanoelectronic devices which could be distributed in the brain. They plan to integrate these devices with expansion revealing. “This can combine the intelligence of nanoelectronics with the nanoscopy prowess of expansion technology, for an integrated functional and structural understanding of the brain,” Sarkar says.

The research was funded by the National Institutes of Health, the National Science Foundation, the Ludwig Family Foundation, the JPB Foundation, the Open Philanthropy Project, John Doerr, Lisa Yang and the Tan-Yang Center for Autism Research at MIT, the U.S. Army Research Office, Charles Hieken, Tom Stocky, Kathleen Octavio, Lore McGovern, Good Ventures, and HHMI.