Season’s Greetings from the McGovern Institute

This year’s holiday video (shown above) was inspired by Ev Fedorenko’s July 2022 Nature Neuroscience paper, which found similar patterns of brain activation and language selectivity across speakers of 45 different languages.

Universal language network

Ev Fedorenko uses the widely translated book “Alice in Wonderland” to test brain responses to different languages. Photo: Caitlin Cunningham

Over several decades, neuroscientists have created a well-defined map of the brain’s “language network,” or the regions of the brain that are specialized for processing language. Found primarily in the left hemisphere, this network includes regions within Broca’s area, as well as in other parts of the frontal and temporal lobes. Although roughly 7,000 languages are currently spoken and signed across the globe, the vast majority of those mapping studies have been done in English speakers as they listened to or read English texts.

To truly understand the cognitive and neural mechanisms that allow us to learn and process such diverse languages, Fedorenko and her team scanned the brains of speakers of 45 different languages while they listened to Alice in Wonderland in their native language. The results show that the speakers’ language networks are essentially the same as those of native English speakers, suggesting that the location and key properties of the language network are universal.

The many languages of McGovern

English may be the primary language used by McGovern researchers, but more than 35 other languages are spoken by scientists and engineers at the McGovern Institute. Our holiday video features 30 of these researchers saying Happy New Year in their native (or learned) language.

Silent synapses are abundant in the adult brain

MIT neuroscientists have discovered that the adult brain contains millions of “silent synapses” — immature connections between neurons that remain inactive until they’re recruited to help form new memories.

Until now, it was believed that silent synapses were present only during early development, when they help the brain learn the new information that it’s exposed to early in life. However, the new MIT study revealed that in adult mice, about 30 percent of all synapses in the brain’s cortex are silent.

The existence of these silent synapses may help to explain how the adult brain is able to continually form new memories and learn new things without having to modify existing conventional synapses, the researchers say.

“These silent synapses are looking for new connections, and when important new information is presented, connections between the relevant neurons are strengthened. This lets the brain create new memories without overwriting the important memories stored in mature synapses, which are harder to change,” says Dimitra Vardalaki, an MIT graduate student and the lead author of the new study.

Mark Harnett, an associate professor of brain and cognitive sciences and an investigator at the McGovern Institute for Brain Research, is the senior author of the paper, which appears today in Nature. Kwanghun Chung, an associate professor of chemical engineering at MIT, is also an author.

A surprising discovery

When scientists first discovered silent synapses decades ago, they were seen primarily in the brains of young mice and other animals. During early development, these synapses are believed to help the brain acquire the massive amounts of information that babies need to learn about their environment and how to interact with it. In mice, these synapses were believed to disappear by about 12 days of age (equivalent to the first months of human life).

However, some neuroscientists have proposed that silent synapses may persist into adulthood and help with the formation of new memories. Evidence for this has been seen in animal models of addiction, which is thought to be largely a disorder of aberrant learning.

Theoretical work in the field from Stefano Fusi and Larry Abbott of Columbia University has also proposed that neurons must display a wide range of different plasticity mechanisms to explain how brains can both efficiently learn new things and retain them in long-term memory. In this scenario, some synapses must be established or modified easily, to form the new memories, while others must remain much more stable, to preserve long-term memories.

In the new study, the MIT team did not set out specifically to look for silent synapses. Instead, they were following up on an intriguing finding from a previous study in Harnett’s lab. In that paper, the researchers showed that within a single neuron, dendrites — antenna-like extensions that protrude from neurons — can process synaptic input in different ways, depending on their location.

As part of that study, the researchers tried to measure neurotransmitter receptors in different dendritic branches, to see if that would help to account for the differences in their behavior. To do that, they used a technique called eMAP (epitope-preserving Magnified Analysis of the Proteome), developed by Chung. Using this technique, researchers can physically expand a tissue sample and then label specific proteins in the sample, making it possible to obtain super-high-resolution images.

While they were doing that imaging, they made a surprising discovery. “The first thing we saw, which was super bizarre and we didn’t expect, was that there were filopodia everywhere,” Harnett says.

Filopodia, thin membrane protrusions that extend from dendrites, have been seen before, but neuroscientists didn’t know exactly what they do. That’s partly because filopodia are so tiny that they are difficult to see using traditional imaging techniques.

After making this observation, the MIT team set out to try to find filopodia in other parts of the adult brain, using the eMAP technique. To their surprise, they found filopodia in the mouse visual cortex and other parts of the brain, at a level 10 times higher than previously seen. They also found that filopodia had neurotransmitter receptors called NMDA receptors, but no AMPA receptors.

A typical active synapse has both of these types of receptors, which bind the neurotransmitter glutamate. NMDA receptors normally require cooperation with AMPA receptors to pass signals because NMDA receptors are blocked by magnesium ions at the normal resting potential of neurons. Thus, when AMPA receptors are not present, synapses that have only NMDA receptors cannot pass along an electric current and are referred to as “silent.”
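
The receptor logic described above can be made concrete with a small, hypothetical sketch (in Python, not drawn from the study itself): the voltage-dependent magnesium block of NMDA receptors is approximated with a commonly used sigmoidal term, and all conductance values are invented for illustration. At the resting potential, a synapse with only NMDA receptors passes almost no current, while adding AMPA receptors, depolarizing the cell, or removing magnesium lets current flow.

```python
import numpy as np

def nmda_unblocked_fraction(v_mv, mg_mm=1.0):
    """Fraction of NMDA-receptor conductance not blocked by Mg2+ at voltage v_mv.
    Uses a commonly cited sigmoidal approximation; constants are illustrative."""
    return 1.0 / (1.0 + (mg_mm / 3.57) * np.exp(-0.062 * v_mv))

def glutamatergic_current(v_mv, g_ampa_ns, g_nmda_ns, mg_mm=1.0, e_rev_mv=0.0):
    """Total synaptic current in pA (nS * mV) at membrane potential v_mv."""
    i_ampa = g_ampa_ns * (v_mv - e_rev_mv)                        # AMPA: no Mg2+ block
    i_nmda = g_nmda_ns * nmda_unblocked_fraction(v_mv, mg_mm) * (v_mv - e_rev_mv)
    return i_ampa + i_nmda

rest, depolarized = -70.0, -20.0   # membrane potentials in mV

# A "silent" synapse carries NMDA receptors but no AMPA receptors:
# at rest, the Mg2+ block leaves only a tiny residual current.
print("silent synapse at rest:        %6.1f pA" % glutamatergic_current(rest, 0.0, 1.0))
print("silent synapse, depolarized:   %6.1f pA" % glutamatergic_current(depolarized, 0.0, 1.0))

# A mature synapse carries both receptor types and responds even at rest.
print("mature synapse at rest:        %6.1f pA" % glutamatergic_current(rest, 1.0, 1.0))

# Removing Mg2+ (as in an unblocking experiment) lets a silent synapse respond at rest.
print("silent synapse, 0 Mg, at rest: %6.1f pA" % glutamatergic_current(rest, 0.0, 1.0, mg_mm=0.0))
```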

Unsilencing synapses

To investigate whether these filopodia might be silent synapses, the researchers used a modified version of an experimental technique known as patch clamping. This allowed them to monitor the electrical activity generated at individual filopodia as they tried to stimulate them by mimicking the release of the neurotransmitter glutamate from a neighboring neuron.

Using this technique, the researchers found that glutamate would not generate any electrical signal in the filopodium receiving the input, unless the NMDA receptors were experimentally unblocked. This offers strong support for the theory that the filopodia represent silent synapses within the brain, the researchers say.

The researchers also showed that they could “unsilence” these synapses by combining glutamate release with an electrical current coming from the body of the neuron. This combined stimulation leads to accumulation of AMPA receptors in the silent synapse, allowing it to form a strong connection with the nearby axon that is releasing glutamate.

The researchers found that converting silent synapses into active synapses was much easier than altering mature synapses.

“If you start with an already functional synapse, that plasticity protocol doesn’t work,” Harnett says. “The synapses in the adult brain have a much higher threshold, presumably because you want those memories to be pretty resilient. You don’t want them constantly being overwritten. Filopodia, on the other hand, can be captured to form new memories.”

“Flexible and robust”

The findings offer support for the theory proposed by Abbott and Fusi that the adult brain includes highly plastic synapses that can be recruited to form new memories, the researchers say.

“This paper is, as far as I know, the first real evidence that this is how it actually works in a mammalian brain,” Harnett says. “Filopodia allow a memory system to be both flexible and robust. You need flexibility to acquire new information, but you also need stability to retain the important information.”

The researchers are now looking for evidence of these silent synapses in human brain tissue. They also hope to study whether the number or function of these synapses is affected by factors such as aging or neurodegenerative disease.

“It’s entirely possible that by changing the amount of flexibility you’ve got in a memory system, it could become much harder to change your behaviors and habits or incorporate new information,” Harnett says. “You could also imagine finding some of the molecular players that are involved in filopodia and trying to manipulate some of those things to try to restore flexible memory as we age.”

The research was funded by the Boehringer Ingelheim Fonds, the National Institutes of Health, the James W. and Patricia T. Poitras Fund at MIT, a Klingenstein-Simons Fellowship, a Vallee Foundation Scholarship, and a McKnight Scholarship.

How touch dampens the brain’s response to painful stimuli

McGovern Investigator Fan Wang. Photo: Caitlin Cunningham

When we press our temples to soothe an aching head or rub an elbow after an unexpected blow, it often brings some relief. Pain-responsive cells in the brain quiet down when these neurons also receive touch inputs, say scientists at MIT’s McGovern Institute, who have watched this phenomenon play out in the brains of mice for the first time.

The team’s discovery, reported November 16, 2022, in the journal Science Advances, offers researchers a deeper understanding of the complicated relationship between pain and touch and could offer some insights into chronic pain in humans. “We’re interested in this because it’s a common human experience,” says McGovern Investigator Fan Wang. “When some part of your body hurts, you rub it, right? We know touch can alleviate pain in this way.” But, she says, the phenomenon has been very difficult for neuroscientists to study.

Modeling pain relief

Touch-mediated pain relief may begin in the spinal cord, where prior studies have found pain-responsive neurons whose signals are dampened in response to touch. But there have been hints that the brain was involved too. Wang says this aspect of the response has been largely unexplored, because it can be hard to monitor the brain’s response to painful stimuli amidst all the other neural activity happening there—particularly when an animal moves.

So while her team knew that mice respond to a potentially painful stimulus on the cheek by wiping their faces with their paws, they couldn’t follow the specific pain response in the animals’ brains to see if that rubbing helped settle it down. “If you look at the brain when an animal is rubbing the face, movement and touch signals completely overwhelm any possible pain signal,” Wang explains.

She and her colleagues have found a way around this obstacle. Instead of studying the effects of face-rubbing, they have focused their attention on a subtler form of touch: the gentle vibrations produced by the movement of the animals’ whiskers. Mice use their whiskers to explore, moving them back and forth in a rhythmic motion known as whisking to feel out their environment. This motion activates touch receptors in the face and sends information to the brain in the form of vibrotactile signals. The human brain receives the same kind of touch signals when a person shakes their hand as they pull it back from a painfully hot pan—another way we seek touch-mediated pain relief.

Wang and her colleagues found that this whisker movement alters the way mice respond to bothersome heat or a poke on the face—both of which usually lead to face rubbing. “When the unpleasant stimuli were applied in the presence of their self-generated vibrotactile whisking…they respond much less,” she says. Sometimes, she says, whisking animals entirely ignore these painful stimuli.

In the brain’s somatosensory cortex, where touch and pain signals are processed, the team found signaling changes that seem to underlie this effect. “The cells that preferentially respond to heat and poking are less frequently activated when the mice are whisking,” Wang says. “They’re less likely to show responses to painful stimuli.” Even when whisking animals did rub their faces in response to painful stimuli, the team found that neurons in the brain took more time to adopt the firing patterns associated with that rubbing movement. “When there is a pain stimulation, usually the trajectory of the population dynamics quickly moved to wiping. But if you already have whisking, that takes much longer,” Wang says.

Wang notes that even in the fraction of a second before provoked mice begin rubbing their faces, when the animals are relatively still, it can be difficult to sort out which brain signals are related to perceiving heat and poking and which are involved in whisker movement. Her team developed computational tools to disentangle these, and are hoping other neuroscientists will use the new algorithms to make sense of their own data.

Whisking’s effects on pain signaling seem to depend on dedicated touch-processing circuitry that sends tactile information to the somatosensory cortex from a brain region called the ventral posterior thalamus. When the researchers blocked that pathway, whisking no longer dampened the animals’ response to painful stimuli. Now, Wang says, she and her team are eager to learn how this circuitry works with other parts of the brain to modulate the perception and response to painful stimuli.

Wang says the new findings might shed light on a condition called thalamic pain syndrome, a chronic pain disorder that can develop in patients after a stroke that affects the brain’s thalamus. “Such strokes may impair the functions of thalamic circuits that normally relay pure touch signals and dampen painful signals to the cortex,” she says.

Personal pursuits

This story originally appeared in the Fall 2022 issue of BrainScan.

***

Many neuroscientists were drawn to their careers out of curiosity and wonder. Their deep desire to understand how the brain works drew them into the lab and keeps them coming back, digging deeper and exploring more each day. But for some, the work is more personal.

Several McGovern faculty say they entered their field because someone in their lives was dealing with a brain disorder that they wanted to better understand. They are committed to unraveling the basic biology of those conditions, knowing that knowledge is essential to guide the development of better treatments.

The distance from basic research to clinical progress is shortening, and many young neuroscientists hope not just to deepen scientific understanding of the brain, but to have a direct impact on the lives of patients. Some want to know why people they love are suffering from neurological disorders or mental illness; others seek to understand the ways in which their own brains work differently from other people’s. But above all, they want better treatments for people affected by such disorders.

Seeking answers

That’s true for Kian Caplan, a graduate student in MIT’s Department of Brain and Cognitive Sciences who was diagnosed with Tourette syndrome around age 13. At the time, learning that the repetitive, uncontrollable movements and vocal tics he had been making for most of his life were caused by a neurological disorder was something of a relief. But it didn’t take long for Caplan to realize his diagnosis came with few answers.

Graduate student Kian Caplan studies the brain circuits associated with Tourette syndrome and obsessive-compulsive disorder in Guoping Feng and Fan Wang’s labs at the McGovern Institute. Photo: Steph Stevens

Tourette syndrome has been estimated to occur in about six of every 1,000 children, but its neurobiology remains poorly understood.

“The doctors couldn’t really explain why I can’t control the movements and sounds I make,” he says. “They couldn’t really explain why my symptoms wax and wane, or why the tics I have aren’t always the same.”

That lack of understanding is not just frustrating for curious kids like Caplan. It means that researchers have been unable to develop treatments that target the root cause of Tourette syndrome. Drugs that dampen signaling in parts of the brain that control movement can help suppress tics, but not without significant side effects. Caplan has tried those drugs. For him, he says, “they’re not worth the suppression.”

Advised by Fan Wang and McGovern Associate Director Guoping Feng, Caplan is looking for answers. A mouse model of obsessive-compulsive disorder developed in Feng’s lab was recently found to exhibit repetitive movements similar to those of people with Tourette syndrome, and Caplan is working to characterize those tic-like movements. He will use the mouse model to examine the brain circuits underlying the two conditions, which often co-occur in people. Broadly, researchers think Tourette syndrome arises due to dysregulation of cortico-striatal-thalamo-cortical circuits, which connect distant parts of the brain to control movement. Caplan and Wang suspect that the brainstem — a structure found where the brain connects to the spinal cord, known for organizing motor movement into different modules — is probably involved, too.

Wang’s research group studies the brainstem’s role in movement, but she says that like most researchers, she hadn’t considered its role in Tourette syndrome until Caplan joined her lab. That’s one reason Caplan, who has long been a mentor and advocate for students with neurodevelopmental disorders, thinks neuroscience needs more neurodiversity.

“I think we need more representation in basic science research by the people who actually live with those conditions,” he says. Their experiences can lead to insights that may be inaccessible to others, he says, but significant barriers in academia often prevent this kind of representation. Caplan wants to see institutions make systemic changes to ensure that neurodiverse individuals and members of other minority groups are able to thrive in academia. “I’m not an exception,” he says. “There should be more people like me here, but the present system makes that incredibly difficult.”

Overcoming adversity

Like Caplan, Lace Riggs faced significant challenges on her path to studying the brain. She grew up in Southern California’s Inland Empire, where issues of social disparity, chronic stress, drug addiction, and mental illness were a part of everyday life.

Postdoctoral fellow Lace Riggs studies the origins of neurodevelopmental conditions in Guoping Feng’s lab at the McGovern Institute. Photo: Lace Riggs

“Living in severe poverty and relying on government assistance without access to adequate education and resources led everyone I know and love to suffer tremendously, myself included,” says Riggs, a postdoctoral fellow in the Feng lab.

“There are not a lot of people like me who make it to this stage,” says Riggs, who has lost friends and family members to addiction, mental illness, and suicide. “There’s a reason for that,” she adds. “It’s really, really difficult to get through the educational system and to overcome socioeconomic barriers.”

Today, Riggs is investigating the origins of neurodevelopmental conditions, hoping to pave the way to better treatments for brain disorders by uncovering the molecular changes that alter the structure and function of neural circuits.

Riggs says that the adversities she faced early in life offered valuable insights in the pursuit of these goals. She first became interested in the brain because she wanted to understand how our experiences have a lasting impact on who we are — including in ways that leave people vulnerable to psychiatric problems.

“While the need for more effective treatments led me to become interested in psychiatry, my fascination with the brain’s unique ability to adapt is what led me to neuroscience,” says Riggs.

After finishing high school, Riggs attended California State University in San Bernardino and became the only member of her family to attend university or attempt a four-year degree. Today, she spends her days working with mice that carry mutations linked to autism or ADHD in humans, studying the animals’ behavior and monitoring their neural activity. She expects that aberrant neural circuit activity in these conditions may also contribute to mood disorders, whose origins are harder to tease apart because they often arise when genetic and environmental factors intersect. Ultimately, Riggs says, she wants to understand how our genes dictate whether an experience will alter neural signaling and impact mental health in a long-lasting way.

Riggs uses patch clamp electrophysiology to record the strength of inhibitory and excitatory synaptic input onto individual neurons (white arrow) in an animal model of autism. Image: Lace Riggs

“If we understand how these long-lasting synaptic changes come about, then we might be able to leverage these mechanisms to develop new and more effective treatments.”

While the turmoil of her childhood is in the past, Riggs says it is not forgotten — in part, because of its lasting effects on her own mental health.  She talks openly about her ongoing struggle with social anxiety and complex post-traumatic stress disorder because she is passionate about dismantling the stigma surrounding these conditions. “It’s something I have to deal with every day,” Riggs says. That means coping with symptoms like difficulty concentrating, hypervigilance, and heightened sensitivity to stress. “It’s like a constant hum in the background of my life, it never stops,” she says.

“I urge all of us to strive, not only to make scientific discoveries to move the field forward,” says Riggs, “but to improve the accessibility of this career to those whose lived experiences are required to truly accomplish that goal.”

Making and breaking habits

As part of our Ask the Brain series, science writer Shafaq Zia explores the question, “How are habits formed in the brain?”

____

Have you ever wondered why it is so hard to break free of bad habits like nail biting or obsessive social networking?

When we repeat an action over and over again, the behavioral pattern becomes automated in our brain, according to Jill R. Crittenden, molecular biologist and scientific advisor at the McGovern Institute for Brain Research at MIT. For over a decade, Crittenden worked as a research scientist in the lab of Ann Graybiel, where one of the key questions scientists are working to answer is: how are habits formed?

Making habits

To understand how certain actions get wired into our neural pathways, this team of McGovern researchers experimented with rats that were trained to run down a maze to receive a reward. If the rats turned left, they received rich chocolate milk; if they turned right, only sugar water. With this setup, the scientists wanted to see whether the animals could “learn to associate a cue with which direction they should turn in the maze in order to get the chocolate milk reward.”

Over time, the rats grew extremely habitual in their behavior; “they always turned the correct direction and the places where their paws touched, in a fairly long maze, were exactly the same every time,” said Crittenden.

This isn’t a coincidence. When we’re first learning to do something, the frontal lobe and basal ganglia of the brain are highly active and doing a lot of calculations. These brain regions work together to associate behaviors with thoughts, emotions, and, most importantly, motor movements. But when we repeat an action over and over again, like the rats running down the maze, our brains become more efficient and fewer neurons are required to achieve the goal. This means the more we do something, the easier it gets to carry it out, because the behavior becomes etched in our brains along with the motor movements that produce it.

But habits are complicated and they come in many different flavors, according to Crittenden. “I think we don’t have a great handle on how the differences [in our many habits] are separable neurobiologically, and so people argue a lot about how do you know that something’s a habit.”

The easiest way for scientists to test this in rodents is to see if the animal engages in the behavior even in the absence of reward. In this particular experiment, the researchers take away the reward, chocolate milk, to see whether the rats continue to run down the maze correctly. And to take it a step further, they mix the chocolate milk with lithium chloride, which upsets the rats’ stomachs. Despite all this, the rats continue to run down the maze and turn left toward the chocolate milk, as they had learned to do over and over again.

Breaking habits

So does that mean once a habit is formed, it is impossible to shake it? Not quite. But it is tough. Rewards are a key building block to forming habits because our dopamine levels surge when we learn that an action is unexpectedly rewarded. For example, when the rats first learn to run down the maze, they’re motivated to receive the chocolate milk.

But things get complicated once the habit is formed. Researchers have found that this dopamine surge in response to reward ceases after a behavior becomes a habit. Instead, the brain begins to release dopamine at the first cue or action that it previously learned leads to the reward, so we are motivated to engage in the full behavioral sequence even if the reward is no longer there.
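
One widely used computational account of this shift is temporal-difference learning, in which a dopamine-like signal reports the error between predicted and received reward. The sketch below is a toy illustration of that idea, not an analysis from the Graybiel experiments: the trial structure, learning rate, and the simplification that pre-cue time steps carry no predictive value are all assumptions made for clarity. After training, the prediction error vanishes at the reward and appears at the cue.

```python
import numpy as np

# Minimal temporal-difference (TD) learning sketch of how a dopamine-like
# prediction error migrates from the reward to the cue that predicts it.
# A trial has 12 time steps; a cue appears at step 3 and a reward at step 9.
# Steps before the cue are given no predictive value (the animal has nothing
# to time the reward by until the cue appears). All numbers are illustrative.
N_STEPS, CUE, REWARD = 12, 3, 9
ALPHA, GAMMA = 0.2, 1.0                  # learning rate, discount factor
V = np.zeros(N_STEPS + 1)                # learned value of each time step

def run_trial():
    """One pass through the trial; returns the prediction error at each step."""
    delta = np.zeros(N_STEPS + 1)
    for t in range(N_STEPS):             # transition from step t to step t + 1
        r = 1.0 if t + 1 == REWARD else 0.0
        delta[t + 1] = r + GAMMA * V[t + 1] - V[t]   # error signaled at step t + 1
        if t >= CUE:                     # only post-cue steps learn predictions
            V[t] += ALPHA * delta[t + 1]
    return delta

first = run_trial()
for _ in range(1000):                    # over-training, as in a well-formed habit
    trained = run_trial()

print("dopamine-like error at reward: before training %.2f, after %.2f"
      % (first[REWARD], trained[REWARD]))
print("dopamine-like error at cue:    before training %.2f, after %.2f"
      % (first[CUE], trained[CUE]))
```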

This means we don’t have as much self-control as we think we do, which may also be the reason why it’s so hard to break the cycle of addiction. “People will report that they know this is bad for them. They don’t want it. And nevertheless, they select that action,” said Crittenden.

One common method to break the behavior, in this case, is called extinction. This is where psychologists try to weaken the association between the cue and the reward that led to habit formation in the first place. For example, if the rat no longer associates the cue to run down the maze with a reward, it will stop engaging in that behavior.

So the next time you beat yourself up over being unable to stick to a diet or sleep at a certain time, give yourself some grace and know that with consistency, a new, healthier habit can be born.

How the brain generates rhythmic behavior

Many of our bodily functions, such as walking, breathing, and chewing, are controlled by brain circuits called central oscillators, which generate rhythmic firing patterns that regulate these behaviors.

MIT neuroscientists have now discovered the neuronal identity and mechanism underlying one of these circuits: an oscillator that controls the rhythmic back-and-forth sweeping of tactile whiskers, or whisking, in mice. This is the first time that any such oscillator has been fully characterized in mammals.

The MIT team found that the whisking oscillator consists of a population of inhibitory neurons in the brainstem that fires rhythmic bursts during whisking. As each neuron fires, it also inhibits some of the other neurons in the network, allowing the overall population to generate a synchronous rhythm that retracts the whiskers from their protracted positions.

“We have defined a mammalian oscillator molecularly, electrophysiologically, functionally, and mechanistically,” says Fan Wang, an MIT professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research. “It’s very exciting to see a clearly defined circuit and mechanism of how rhythm is generated in a mammal.”

Wang is the senior author of the study, which appears today in Nature. The lead authors of the paper are MIT research scientists Jun Takatoh and Vincent Prevosto.

Rhythmic behavior

Most of the research that clearly identified central oscillator circuits has been done in invertebrates. For example, Eve Marder’s lab at Brandeis University found cells in the stomatogastric ganglion in lobsters and crabs that generate oscillatory activity to control rhythmic motion of the digestive tract.

Characterizing oscillators in mammals, especially in awake behaving animals, has proven to be highly challenging. The oscillator that controls walking is believed to be distributed throughout the spinal cord, making it difficult to precisely identify the neurons and circuits involved. The oscillator that generates rhythmic breathing is located in a part of the brain stem called the pre-Bötzinger complex, but the exact identity of the oscillator neurons is not fully understood.

“There haven’t been detailed studies in awake behaving animals, where one can record from molecularly identified oscillator cells and manipulate them in a precise way,” Wang says.

Whisking is a prominent rhythmic exploratory behavior in many mammals, which use their tactile whiskers to detect objects and sense textures. In mice, whiskers extend and retract at a frequency of about 12 cycles per second. Several years ago, Wang’s lab set out to identify the cells and the mechanism that control this oscillation.

To find the location of the whisking oscillator, the researchers traced back from the motor neurons that innervate whisker muscles. Using a modified rabies virus that infects axons, the researchers were able to label a group of cells presynaptic to these motor neurons in a part of the brainstem called the vibrissa intermediate reticular nucleus (vIRt). This finding was consistent with previous studies showing that damage to this part of the brain eliminates whisking.

The researchers then found that about half of these vIRt neurons express a protein called parvalbumin, and that this subpopulation of cells drives the rhythmic motion of the whiskers. When these neurons are silenced, whisking activity is abolished.

Next, the researchers recorded electrical activity from these parvalbumin-expressing vIRt neurons in the brainstem of awake mice, a technically challenging task, and found that these neurons indeed have bursts of activity only during the whisker retraction period. Because these neurons provide inhibitory synaptic inputs to whisker motor neurons, it follows that rhythmic whisking is generated by a constant motor neuron protraction signal interrupted by the rhythmic retraction signal from these oscillator cells.

“That was a super satisfying and rewarding moment, to see that these cells are indeed the oscillator cells, because they fire rhythmically, they fire in the retraction phase, and they’re inhibitory neurons,” Wang says.

A maximum projection image showing tracked whiskers on the mouse muzzle. The right (control) side shows the back-and-forth rhythmic sweeping of the whiskers, while on the experimental side, where the whisking oscillator neurons are silenced, the whiskers move very little. Image: Wang Lab

“New principles”

The oscillatory bursting pattern of vIRt cells is initiated at the start of whisking. When the whiskers are not moving, these neurons fire continuously. When the researchers blocked vIRt neurons from inhibiting each other, the rhythm disappeared, and instead the oscillator neurons simply increased their rate of continuous firing.

This type of network, known as a recurrent inhibitory network, differs from the types of oscillators that have been seen in stomatogastric neurons in lobsters, in which neurons intrinsically generate their own rhythm.
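
To illustrate how recurrent inhibition plus adaptation can turn steady drive into a rhythm, the toy rate model below couples two adapting units with mutual inhibition. It is a minimal sketch, not the published model by Golomb, Kleinfeld, and colleagues, and every parameter is invented for illustration. With the inhibitory coupling in place, the units take turns bursting; with the coupling removed, each settles into continuous firing, qualitatively echoing the experimental observation.

```python
import numpy as np

def simulate(w_inh, t_max_ms=1000.0, dt=0.1):
    """Two adapting rate units with mutual inhibition of strength w_inh.
    Returns the firing-rate traces of both units (arbitrary units)."""
    tau_r, tau_a = 5.0, 50.0      # rate and adaptation time constants (ms)
    drive, g_adapt = 1.0, 2.0     # constant excitatory drive, adaptation strength
    n = int(t_max_ms / dt)
    r = np.zeros((n, 2))
    a = np.zeros(2)
    r[0] = [0.5, 0.0]             # small asymmetry to break the tie
    for i in range(1, n):
        inhib = w_inh * r[i - 1, ::-1]             # each unit inhibits the other
        target = np.maximum(drive - inhib - g_adapt * a, 0.0)
        r[i] = r[i - 1] + dt * (target - r[i - 1]) / tau_r
        a += dt * (r[i - 1] - a) / tau_a           # slow adaptation variable
    return r

for w, label in [(2.0, "with mutual inhibition"), (0.0, "inhibition removed")]:
    r = simulate(w)
    late = r[len(r) // 2:, 0]     # second half of the simulation, unit 1
    print("%-25s rate swings between %.2f and %.2f" % (label, late.min(), late.max()))
```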

“Now we have found a mammalian network oscillator that is formed by all inhibitory neurons,” Wang says.

The MIT scientists also collaborated with a team of theorists led by David Golomb at Ben-Gurion University, Israel, and David Kleinfeld at the University of California at San Diego. The theorists created a detailed computational model outlining how whisking is controlled, which fits the experimental data well. A paper describing that model will appear in an upcoming issue of Neuron.

Wang’s lab now plans to investigate other types of oscillatory circuits in mice, including those that control chewing and licking.

“We are very excited to find oscillators of these feeding behaviors and compare and contrast to the whisking oscillator, because they are all in the brain stem, and we want to know whether there’s some common theme or if there are many different ways to generate oscillators,” she says.

The research was funded by the National Institutes of Health.

The craving state

This story originally appeared in the Winter 2022 issue of BrainScan.

***

For people struggling with substance use disorders — and there are about 35 million of them worldwide — treatment options are limited. Even among those who seek help, relapse is common. In the United States, an epidemic of opioid addiction has been declared a public health emergency.

A 2019 survey found that 1.6 million people nationwide had an opioid use disorder, and the crisis has surged since the start of the COVID-19 pandemic. The Centers for Disease Control and Prevention estimates that more than 100,000 people died of drug overdose between April 2020 and April 2021 — nearly 30 percent more overdose deaths than occurred during the same period the previous year.

A deeper understanding of what addiction does to the brain and body is urgently needed to pave the way to interventions that reliably release affected individuals from its grip. At the McGovern Institute, researchers are turning their attention to addiction’s driving force: the deep, recurring craving that makes people prioritize drug use over all other wants and needs.

McGovern Institute co-founder Lore Harp McGovern.

“When you are in that state, then it seems nothing else matters,” says McGovern Investigator Fan Wang. “At that moment, you can discard everything: your relationship, your house, your job, everything. You only want the drug.”

With a new addiction initiative catalyzed by generous gifts from Institute co-founder Lore Harp McGovern and others, McGovern scientists with diverse expertise have come together to begin clarifying the neurobiology that underlies the craving state. They plan to dissect the neural transformations associated with craving at every level — from the drug-induced chemical changes that alter neuronal connections and activity to how these modifications impact signaling brain-wide. Ultimately, the McGovern team hopes not just to understand the craving state, but to find a way to relieve it — for good.

“If we can understand the craving state and correct it, or at least relieve a little bit of the pressure,” explains Wang, who will help lead the addiction initiative, “then maybe we can at least give people a chance to use their top-down control to not take the drug.”

The craving cycle

For individuals suffering from substance use disorders, craving fuels a cyclical pattern of escalating drug use. Following the euphoria induced by a drug like heroin or cocaine, depression sets in, accompanied by a drug craving motivated by the desire to relieve that suffering. And as addiction progresses, the peaks and valleys of this cycle dip lower: the pleasant feelings evoked by the drug become weaker, while the negative effects a person experiences in its absence worsen. The craving remains, and increasing amounts of the drug are required to relieve it.

By the time addiction sets in, the brain has been altered in ways that go beyond a drug’s immediate effects on neural signaling.

These insidious changes leave individuals susceptible to craving — and the vulnerable state endures. Long after the physical effects of withdrawal have subsided, people with substance use disorders can find their craving returns, triggered by exposure to a small amount of the drug, physical or social cues associated with previous drug use, or stress. So researchers will need to determine not only how different parts of the brain interact with one another during craving and how individual cells and the molecules within them are affected by the craving state — but also how things change as addiction develops and progresses.

Circuits, chemistry and connectivity

One clear starting point is the circuitry the brain uses to control motivation. Thanks in part to decades of research in the lab of McGovern Investigator Ann Graybiel, neuroscientists know a great deal about how these circuits learn which actions lead to pleasure and which lead to pain, and how they use that information to establish habits and evaluate the costs and benefits of complex decisions.

Graybiel’s work has shown that drugs of abuse strongly activate dopamine-responsive neurons in a part of the brain called the striatum, whose signals promote habit formation. By increasing the amount of dopamine that neurons release, these drugs motivate users to prioritize repeated drug use over other kinds of rewards, and to choose the drug in spite of pain or other negative effects. Her group continues to investigate the naturally occurring molecules that control these circuits, as well as how they are hijacked by drugs of abuse.

Distribution of opioid receptors targeted by morphine (shown in blue) in two regions in the dorsal striatum and nucleus accumbens of the mouse brain. Image: Ann Graybiel

In Fan Wang’s lab, work investigating the neural circuits that mediate the perception of physical pain has led her team to question the role of emotional pain in craving. As they investigated the source of pain sensations in the brain, they identified neurons in an emotion-regulating center called the central amygdala that appear to suppress physical pain in animals. Now, Wang wants to know whether it might be possible to modulate neurons involved in emotional pain to ameliorate the negative state that provokes drug craving.

These animal studies will be key to identifying the cellular and molecular changes that set the brain up for recurring cravings. And as McGovern scientists begin to investigate what happens in the brains of rodents that have been trained to self-administer addictive drugs like fentanyl or cocaine, they expect to encounter tremendous complexity.

McGovern Associate Investigator Polina Anikeeva, whose lab has pioneered new technologies that will help the team investigate the full spectrum of changes that underlie craving, says it will be important to consider impacts on the brain’s chemistry, firing patterns, and connectivity. To that end, multifunctional research probes developed in her lab will be critical to monitoring and manipulating neural circuits in animal models.

Imaging technology developed by investigator Ed Boyden will also enable nanoscale protein visualization brain-wide. An important goal will be to identify a neural signature of the craving state. With such a signal, researchers can begin to explore how to shut off that craving — possibly by directly modulating neural signaling.

Targeted treatments

“One of the reasons to study craving is because it’s a natural treatment point,” says McGovern Associate Investigator Alan Jasanoff. “And the dominant kind of approaches that people in our team think about are approaches that relate to neural circuits — to the specific connections between brain regions and how those could be changed.” The hope, he explains, is that it might be possible to identify a brain region whose activity is disrupted during the craving state, then use clinical brain stimulation methods to restore normal signaling — within that region, as well as in other connected parts of the brain.

To identify the right targets for such a treatment, it will be crucial to understand how the biology uncovered in laboratory animals reflects what happens in people with substance use disorders. Functional imaging in John Gabrieli’s lab can help bridge the gap between clinical and animal research by revealing patterns of brain activity associated with the craving state in both humans and rodents. A new technique developed in Jasanoff’s lab makes it possible to focus on the activity between specific regions of an animal’s brain. “By doing that, we hope to build up integrated models of how information passes around the brain in craving states, and of course also in control states where we’re not experiencing craving,” he explains.

In delving into the biology of the craving state, McGovern scientists are embarking on largely unexplored territory — and they do so with both optimism and urgency. “It’s hard to not appreciate just the size of the problem, and just how devastating addiction is,” says Anikeeva. “At this point, it just seems almost irresponsible to not work on it, especially when we do have the tools and we are interested in the general brain regions that are important for that problem. I would say that there’s almost a civic duty.”

Study finds a striking difference between neurons of humans and other mammals

McGovern Institute Investigator Mark Harnett. Photo: Justin Knight

Neurons communicate with each other via electrical impulses, which are produced by ion channels that control the flow of ions such as potassium and sodium. In a surprising new finding, MIT neuroscientists have shown that human neurons have a much smaller number of these channels than expected, compared to the neurons of other mammals.

The researchers hypothesize that this reduction in channel density may have helped the human brain evolve to operate more efficiently, allowing it to divert resources to other energy-intensive processes that are required to perform complex cognitive tasks.

“If the brain can save energy by reducing the density of ion channels, it can spend that energy on other neuronal or circuit processes,” says Mark Harnett, an associate professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Harnett and his colleagues analyzed neurons from 10 different mammals, the most extensive electrophysiological study of its kind, and identified a “building plan” that holds true for every species they looked at — except for humans. They found that as the size of neurons increases, the density of channels found in the neurons also increases.

However, human neurons proved to be a striking exception to this rule.

“Previous comparative studies established that the human brain is built like other mammalian brains, so we were surprised to find strong evidence that human neurons are special,” says former MIT graduate student Lou Beaulieu-Laroche.

Beaulieu-Laroche is the lead author of the study, which appears today in Nature.

A building plan

Neurons in the mammalian brain can receive electrical signals from thousands of other cells, and that input determines whether or not they will fire an electrical impulse called an action potential. In 2018, Harnett and Beaulieu-Laroche discovered that human and rat neurons differ in some of their electrical properties, primarily in parts of the neuron called dendrites — tree-like antennas that receive and process input from other cells.

One of the findings from that study was that human neurons had a lower density of ion channels than neurons in the rat brain. The researchers were surprised by this observation, as ion channel density was generally assumed to be constant across species. In their new study, Harnett and Beaulieu-Laroche decided to compare neurons from several different mammalian species to see if they could find any patterns that governed the expression of ion channels. They studied two types of voltage-gated potassium channels and the HCN channel, which conducts both potassium and sodium, in layer 5 pyramidal neurons, a type of excitatory neuron found in the brain’s cortex.

 

Former McGovern Institute graduate student Lou Beaulieu-Laroche is the lead author of the 2021 Nature paper.

They were able to obtain brain tissue from 10 mammalian species: Etruscan shrews (one of the smallest known mammals), gerbils, mice, rats, guinea pigs, ferrets, rabbits, marmosets, and macaques, as well as human tissue removed from patients with epilepsy during brain surgery. This variety allowed the researchers to cover a range of cortical thicknesses and neuron sizes across the mammalian kingdom.

The researchers found that in nearly every mammalian species they looked at, the density of ion channels increased as the size of the neurons went up. The one exception to this pattern was in human neurons, which had a much lower density of ion channels than expected.

The increase in channel density across species was surprising, Harnett says, because the more channels there are, the more energy is required to pump ions in and out of the cell. However, it started to make sense once the researchers began thinking about the number of channels in the overall volume of the cortex, he says.

In the tiny brain of the Etruscan shrew, which is packed with very small neurons, there are more neurons in a given volume of tissue than in the same volume of tissue from the rabbit brain, which has much larger neurons. But because the rabbit neurons have a higher density of ion channels, the density of channels in a given volume of tissue is the same in both species, or any of the nonhuman species the researchers analyzed.

“This building plan is consistent across nine different mammalian species,” Harnett says. “What it looks like the cortex is trying to do is keep the numbers of ion channels per unit volume the same across all the species. This means that for a given volume of cortex, the energetic cost is the same, at least for ion channels.”
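
Some hypothetical numbers make the bookkeeping behind that statement explicit (the figures below are invented for illustration, not measured values). If one species packs ten times as many neurons into a cubic millimeter while each neuron carries a tenth as many channels, the channel count per unit volume, and hence the ionic pumping cost per unit volume, comes out the same.

```python
# Hypothetical illustration of the "constant channels per unit volume" plan.
# The numbers are invented; only the proportionality matters.
species = {
    # name: (neurons per mm^3 of cortex, ion channels per neuron)
    "small-brained species (tiny neurons)": (500_000, 2_000),
    "large-brained species (big neurons)":  (50_000, 20_000),
}

for name, (neurons_per_mm3, channels_per_neuron) in species.items():
    channels_per_mm3 = neurons_per_mm3 * channels_per_neuron
    print("%-40s %.1e channels per mm^3" % (name, channels_per_mm3))

# Both come out to 1.0e+09 channels per mm^3: larger neurons carry more channels
# each, but there are fewer of them per unit volume, so the energetic cost per
# volume stays roughly constant. Human neurons break this pattern by carrying
# far fewer channels than their size would predict.
```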

Energy efficiency

The human brain represents a striking deviation from this building plan, however. Instead of an increased density of ion channels, the researchers found a dramatically lower density of ion channels than expected for a given volume of brain tissue.

The researchers believe this lower density may have evolved as a way to expend less energy on pumping ions, which allows the brain to use that energy for something else, like creating more complicated synaptic connections between neurons or firing action potentials at a higher rate.

“We think that humans have evolved out of this building plan that was previously restricting the size of cortex, and they figured out a way to become more energetically efficient, so you spend less ATP per volume compared to other species,” Harnett says.

He now hopes to study where that extra energy might be going, and whether there are specific gene mutations that help neurons of the human cortex achieve this high efficiency. The researchers are also interested in exploring whether primate species that are more closely related to humans show similar decreases in ion channel density.

The research was funded by the Natural Sciences and Engineering Research Council of Canada, a Friends of the McGovern Institute Fellowship, the National Institute of General Medical Sciences, the Paul and Daisy Soros Fellows Program, the Dana Foundation David Mahoney Neuroimaging Grant Program, the National Institutes of Health, the Harvard-MIT Joint Research Grants Program in Basic Neuroscience, and Susan Haar.

Other authors of the paper include Norma Brown, an MIT technical associate; Marissa Hansen, a former post-baccalaureate scholar; Enrique Toloza, a graduate student at MIT and Harvard Medical School; Jitendra Sharma, an MIT research scientist; Ziv Williams, an associate professor of neurosurgery at Harvard Medical School; Matthew Frosch, an associate professor of pathology and health sciences and technology at Harvard Medical School; Garth Rees Cosgrove, director of epilepsy and functional neurosurgery at Brigham and Women’s Hospital; and Sydney Cash, an assistant professor of neurology at Harvard Medical School and Massachusetts General Hospital.

McGovern Institute Director receives highest honor from the Society for Neuroscience

The Society for Neuroscience will present its highest honor, the Ralph W. Gerard Prize in Neuroscience, to McGovern Institute Director Robert Desimone at its annual meeting today.

The Gerard Prize is named for neuroscientist Ralph W. Gerard who helped establish the Society for Neuroscience, and honors “outstanding scientists who have made significant contributions to neuroscience throughout their careers.” Desimone will share the $30,000 prize with Vanderbilt University neuroscientist Jon Kaas.

Desimone is being recognized for his career contributions to understanding cortical function in the visual system. His seminal work on attention spans decades, including the discovery of a neural basis for covert attention in the temporal cortex and the creation of the biased competition model, suggesting that attention is biased towards material relevant to the task. More recent work revealed how synchronized brain rhythms help enhance visual processing. Desimone also helped discover both face cells and neural populations that identify objects even when the size or location of the object changes. His long list of contributions includes mapping the extrastriate visual cortex, publishing the first report of columns for motion processing outside the primary visual cortex, and discovering how the temporal cortex retains memories. Desimone’s work has moved the field from broad strokes of input and output to a more nuanced understanding of cortical function that allows the brain to make sense of the environment.

At its annual meeting, beginning today, the Society will honor Desimone and other leading researchers who have made significant contributions to neuroscience — including the understanding of cognitive processes, drug addiction, neuropharmacology, and theoretical models — with this year’s Outstanding Achievement Awards.

“The Society is honored to recognize this year’s awardees, whose groundbreaking research has revolutionized our understanding of the brain, from the level of the synapse to the structure and function of the cortex, shedding light on how vision, memory, perception of touch and pain, and drug addiction are organized in the brain,” SfN President Barry Everitt said. “This exceptional group of neuroscientists has made fundamental discoveries, paved the way for new therapeutic approaches, and introduced new tools that will lay the foundation for decades of research to come.”

A connectome for cognition

The lateral prefrontal cortex is a particularly well-connected part of the brain. Neurons there communicate with processing centers throughout the rest of the brain, gathering information and sending commands to implement executive control over behavior. Now, scientists at MIT’s McGovern Institute have mapped these connections and revealed an unexpected order within them: The lateral prefrontal cortex, they’ve found, contains maps of other major parts of the brain’s cortex.

The researchers, led by postdoctoral researcher Rui Xu and McGovern Institute Director Robert Desimone, report that the lateral prefrontal cortex contains a set of maps that represent the major processing centers in the other parts of the cortex, including the temporal and parietal lobes. Their organization likely supports the lateral prefrontal cortex’s role in managing complex functions such as attention and working memory, which require integrating information from multiple sources and coordinating activity elsewhere in the brain. The findings are published November 4, 2021, in the journal Neuron.

Topographic maps

The layout of the maps, which allows certain regions of the lateral prefrontal cortex to directly interact with multiple areas across the brain, indicates that this part of the brain is particularly well positioned for its role. “This function of integrating and then sending back control signals to appropriate levels in the processing hierarchies of the brain is clearly one of the reasons that prefrontal cortex is so important for cognition and executive control,” says Desimone.

In many parts of the brain, neurons’ physical organization has been found to reflect the information represented there. Individual neurons’ positions within the visual cortex, for example, mirror the layout of the cells in the retina from which they receive input, such that the spatial pattern of neuronal activity in this part of the brain provides an approximate view of the image seen by the eyes. If you fixate on the first letter of a word, the next letters in the word map to sequential locations in the visual cortex. Likewise, the arm and hand are mapped to adjacent locations in the somatic cortex, where the brain receives sensory information from the skin.

Topographic maps such as these, which have been found primarily in brain regions involved in sensory and motor processing, offer clues about how information is stored and processed in the brain. Neuroscientists have hoped that topographic maps within the lateral prefrontal cortex will provide insight into the complex cognitive processes that are carried out there—but such maps have been elusive.

Previous anatomical studies had given little indication of how different parts of the brain communicate preferentially with specific locations within the prefrontal cortex to give rise to regional specialization of cognitive functions. Recently, however, the Desimone lab identified two areas within the lateral prefrontal cortex of monkeys with specific roles in focusing an animal’s visual attention. Knowing that some spots within the lateral prefrontal cortex were wired for specific functions, they wondered if others were, too. They decided they needed a detailed map of the connections emanating from this part of the brain, and devised a plan to plot connectivity from hundreds of points within the lateral prefrontal cortex.

Cortical connectome

To generate a wiring diagram, or connectome, Xu used functional MRI to monitor activity throughout a monkey’s brain as he stimulated specific points within its lateral prefrontal cortex. He moved systematically through the brain region, stimulating points spaced as close as one millimeter apart, and noting which parts of the brain lit up in response. Ultimately, the team collected data from about 100 sites for each of two monkeys.

As the data accumulated, clear patterns emerged. Different regions within the lateral prefrontal cortex formed orderly connections with each of five processing centers throughout the brain. Points within each of these maps connected to sites with the same relative positions in the distant processing centers. Because some parts of the lateral prefrontal cortex are wired to interact with more than one processing center, these maps overlap, positioning the prefrontal cortex to integrate information from different sources.
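
A simple way to see what such a map means in practice is to ask whether nearby stimulation sites in the lateral prefrontal cortex evoke responses at nearby locations in a target region. The sketch below runs that check on made-up coordinates with an invented linear mapping; it is a hypothetical illustration, not the authors’ analysis pipeline. When the correspondence is orderly, pairwise distances among stimulation sites correlate strongly with pairwise distances among response peaks; shuffling the correspondence destroys the relationship.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 2-D coordinates (mm) of stimulation sites in lateral
# prefrontal cortex and of the response peaks they evoke in one target region.
# A topographic map means the mapping roughly preserves relative positions.
stim_sites = rng.uniform(0, 5, size=(100, 2))          # invented coordinates
linear_map = np.array([[0.8, 0.1], [-0.1, 0.9]])       # invented orderly mapping
response_peaks = stim_sites @ linear_map + rng.normal(0, 0.3, size=(100, 2))

def pairwise_distances(points):
    """Euclidean distance between every unique pair of points."""
    diff = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    return d[np.triu_indices(len(points), k=1)]

d_stim = pairwise_distances(stim_sites)
d_resp = pairwise_distances(response_peaks)

# High correlation = nearby prefrontal sites project to nearby target sites,
# i.e., the connectivity is topographically organized.
print("distance correlation: %.2f" % np.corrcoef(d_stim, d_resp)[0, 1])

# Shuffling which response goes with which stimulation site breaks the topography.
shuffled = response_peaks[rng.permutation(len(response_peaks))]
print("after shuffling:      %.2f" % np.corrcoef(d_stim, pairwise_distances(shuffled))[0, 1])
```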

The team found significant overlap, for example, between the maps of the temporal cortex, a part of the brain that uses visual information to recognize objects, and the parietal cortex, which computes the spatial relationships between objects. “It is mapping objects and space together in a way that would integrate the two systems,” explains Desimone. “And then on top of that, it has other maps of other brain systems that are partially overlapping with that—so they’re all sort of coming together.”

Desimone and Xu say the new connectome will help guide further investigations of how the prefrontal cortex orchestrates complex cognitive processes. “I think this really gives us a direction for the future, because we now need to understand the cognitive concepts that are mapped there,” Desimone says.

Already, they say, the connectome offers encouragement that a deeper understanding of complex cognition is within reach. “This topographic connectivity gives the lateral prefrontal some specific advantage to serve its function,” says Xu. “This suggests that lateral prefrontal cortex has a fine organization, just like the more studied parts of the brain, so the approaches that have been used to study these other regions may also benefit the studies of high-level cognition.”