Scientists discover how mutations in a language gene produce speech deficits

Mutations of a gene called Foxp2 have been linked to a type of speech disorder called apraxia that makes it difficult to produce sequences of sound. A new study from MIT and National Yang Ming Chiao Tung University sheds light on how this gene controls the ability to produce speech.

In a study of mice, the researchers found that mutations in Foxp2 disrupt the formation of dendrites and neuronal synapses in the brain’s striatum, which plays important roles in the control of movement. Mice with these mutations also showed impairments in their ability to produce the high-frequency sounds that they use to communicate with other mice.

Those malfunctions arise because Foxp2 mutations prevent the proper assembly of motor proteins, which move molecules within cells, the researchers found.

“These mice have abnormal vocalizations, and in the striatum there are many cellular abnormalities,” says Ann Graybiel, an MIT Institute Professor, a member of MIT’s McGovern Institute for Brain Research, and an author of the paper. “This was an exciting finding. Who would have thought that a speech problem might come from little motors inside cells?”

Fu-Chin Liu PhD ’91, a professor at National Yang Ming Chiao Tung University in Taiwan, is the senior author of the study, which appears today in the journal Brain. Liu and Graybiel also worked together on a 2016 study of the potential link between Foxp2 and autism spectrum disorder. The lead authors of the new Brain paper are Hsiao-Ying Kuo and Shih-Yun Chen of National Yang Ming Chiao Tung University.

Speech control

Children with Foxp2-associated apraxia tend to begin speaking later than other children, and their speech is often difficult to understand. The disorder is believed to arise from impairments in brain regions, such as the striatum, that control the movements of the lips, mouth, and tongue. Foxp2 is also expressed in the brains of songbirds such as zebra finches and is critical to those birds’ ability to learn songs.

Foxp2 encodes a transcription factor, meaning that it can control the expression of many other target genes. Foxp2 is expressed in many species, but humans have a unique form of the gene. In a 2014 study, Graybiel and colleagues found evidence that this human form of Foxp2, when expressed in mice, accelerated the animals’ switch from declarative to procedural types of learning.

In that study, the researchers showed that mice engineered to express the human version of Foxp2, which differs from the mouse version by only two DNA base pairs, were much better at learning mazes and performing other tasks that require turning repeated actions into behavioral routines. Mice with human-like Foxp2 also had longer dendrites — the slender extensions that help neurons form synapses — in the striatum, which is involved in habit formation as well as motor control.

In the new study, the researchers wanted to explore how the Foxp2 mutation that has been linked with apraxia affects speech production, using ultrasonic vocalizations in mice as a proxy for speech. Many rodents and other animals such as bats produce these vocalizations to communicate with each other.

While previous studies, including the work by Liu and Graybiel in 2016, had suggested that Foxp2 affects dendrite growth and synapse formation, the mechanism for how that occurs was not known. In the new study, led by Liu, the researchers investigated one proposed mechanism, which is that Foxp2 affects motor proteins.

One of these molecular motors is the dynein protein complex, a large cluster of proteins that is responsible for shuttling molecules along microtubule scaffolds within cells.

“All kinds of molecules get shunted around to different places in our cells, and that’s certainly true of neurons,” Graybiel says. “There’s an army of tiny molecules that move molecules around in the cytoplasm or put them into the membrane. In a neuron, they may send molecules from the cell body all the way down the axons.”

A delicate balance

The dynein complex is made up of several proteins. The most important of these is dynactin1, which interacts with microtubules and thereby enables the dynein motor to move along them. In the new study, the researchers found that dynactin1 is one of the major targets of the Foxp2 transcription factor.

The researchers focused on the striatum, one of the regions where Foxp2 is most often found, and showed that the mutated version of Foxp2 is unable to suppress dynactin1 production. Without that brake in place, cells generate too much dynactin1. This upsets the delicate balance of dynein-dynactin1, which prevents the dynein motor from moving along microtubules.

Those motors are needed to shuttle molecules that are necessary for dendrite growth and synapse formation on dendrites. With those molecules stranded in the cell body, neurons are unable to form the synapses needed to generate the proper electrophysiological signals that make speech production possible.

Mice with the mutated version of Foxp2 had abnormal ultrasonic vocalizations, which typically have a frequency of around 22 to 50 kilohertz. The researchers showed that they could reverse these vocalization impairments and the deficits in the molecular motor activity, dendritic growth, and electrophysiological activity by turning down the gene that encodes dynactin1.

Mutations of Foxp2 can also contribute to autism spectrum disorders and Huntington’s disease, through mechanisms that Liu and Graybiel previously studied in their 2016 paper and that many other research groups are now exploring. Liu’s lab is also investigating the potential role of abnormal Foxp2 expression in the subthalamic nucleus of the brain as a possible factor in Parkinson’s disease.

The research was funded by the Ministry of Science and Technology of Taiwan, the Ministry of Education of Taiwan, the U.S. National Institute of Mental Health, the Saks Kavanaugh Foundation, the Kristin R. Pressman and Jessica J. Pourian ’13 Fund, and Stephen and Anne Kott.

Whether speaking Turkish or Norwegian, the brain’s language network looks the same

Over several decades, neuroscientists have created a well-defined map of the brain’s “language network,” or the regions of the brain that are specialized for processing language. Found primarily in the left hemisphere, this network includes regions within Broca’s area, as well as in other parts of the frontal and temporal lobes.

However, the vast majority of those mapping studies have been done in English speakers as they listened to or read English texts. MIT neuroscientists have now performed brain imaging studies of speakers of 45 different languages. The results show that the speakers’ language networks appear to be essentially the same as those of native English speakers.

The findings, while not surprising, establish that the location and key properties of the language network appear to be universal. The work also lays the groundwork for future studies of linguistic elements that would be difficult or impossible to study in English speakers because English doesn’t have those features.

“This study is very foundational, extending some findings from English to a broad range of languages,” says Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research. “The hope is that now that we see that the basic properties seem to be general across languages, we can ask about potential differences between languages and language families in how they are implemented in the brain, and we can study phenomena that don’t really exist in English.”

Fedorenko is the senior author of the study, which appears today in Nature Neuroscience. Saima Malik-Moraleda, a PhD student in the Speech and Hearing Bioscience and Technology program at Harvard University, and Dima Ayyash, a former research assistant, are the lead authors of the paper.

Mapping language networks

The precise locations and shapes of language areas differ across individuals, so to find the language network, researchers ask each person to perform a language task while scanning their brains with functional magnetic resonance imaging (fMRI). Listening to or reading sentences in one’s native language should activate the language network. To distinguish this network from other brain regions, researchers also ask participants to perform tasks that should not activate it, such as listening to an unfamiliar language or solving math problems.
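
The logic of such a localizer contrast can be sketched in a few lines of Python. The sketch below is illustrative only, not the lab’s actual analysis pipeline, and every number in it is made up: a simulated voxel’s time course is modeled with one regressor per condition, and the voxel counts as language-responsive if its response to the language condition exceeds its response to the control condition.

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 200

# Hypothetical block design: alternating 10-scan language and control blocks.
language = np.tile(np.r_[np.ones(10), np.zeros(10)], n_scans // 20)
control = 1.0 - language

# Simulated voxel time course that responds more strongly during language blocks.
voxel = 1.5 * language + 0.4 * control + rng.normal(0, 0.5, n_scans)

# Simple GLM: one regressor per condition; each beta is that condition's mean response.
design = np.column_stack([language, control])
betas, *_ = np.linalg.lstsq(design, voxel, rcond=None)

contrast = betas[0] - betas[1]  # language minus control
print(f"language beta = {betas[0]:.2f}, control beta = {betas[1]:.2f}, contrast = {contrast:.2f}")
```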

Several years ago, Fedorenko began designing these “localizer” tasks for speakers of languages other than English. While most studies of the language network have used English speakers as subjects, English does not include many features commonly seen in other languages. For example, in English, word order tends to be fixed, while in other languages there is more flexibility in how words are ordered. Many of those languages instead use the addition of morphemes, or segments of words, to convey additional meaning and relationships between words.

“There has been growing awareness for many years of the need to look at more languages, if you want to make claims about how language works, as opposed to how English works,” Fedorenko says. “We thought it would be useful to develop tools to allow people to rigorously study language processing in the brain in other parts of the world. There’s now access to brain imaging technologies in many countries, but the basic paradigms that you would need to find the language-responsive areas in a person are just not there.”

For the new study, the researchers performed brain imaging of two speakers of each of 45 different languages, representing 12 different language families. Their goal was to see if key properties of the language network, such as location, left lateralization, and selectivity, were the same in those participants as in people whose native language is English.

The researchers decided to use “Alice in Wonderland” as the text that everyone would listen to, because it is one of the most widely translated works of fiction in the world. They selected 24 short passages and three long passages, each of which was recorded by a native speaker of the language. Each participant also heard nonsensical passages, which should not activate the language network, and was asked to do a variety of other cognitive tasks that should not activate it.

The team found that the language networks of the participants in this study were located in approximately the same brain regions, and showed the same selectivity, as those of native English speakers.

“Language areas are selective,” Malik-Moraleda says. “They shouldn’t be responding during other tasks such as a spatial working memory task, and that was what we found across the speakers of 45 languages that we tested.”

Additionally, language regions that are typically activated together in English speakers, such as the frontal language areas and temporal language areas, were similarly synchronized in speakers of other languages.

The researchers also showed that, across all of the subjects, the small amount of variation seen between speakers of different languages was about the same as the variation typically seen among native English speakers.

Similarities and differences

While the findings suggest that the overall architecture of the language network is similar across speakers of different languages, that doesn’t mean that there are no differences at all, Fedorenko says. As one example, researchers could now look for differences in speakers of languages that predominantly use morphemes, rather than word order, to help determine the meaning of a sentence.

“There are all sorts of interesting questions you can ask about morphological processing that don’t really make sense to ask in English, because it has much less morphology,” Fedorenko says.

Another possibility is studying whether speakers of languages that use differences in tone to convey different word meanings would have a language network with stronger links to auditory brain regions that encode pitch.

Right now, Fedorenko’s lab is working on a study in which they are comparing the “temporal receptive fields” of speakers of six typologically different languages, including Turkish, Mandarin, and Finnish. The temporal receptive field is a measure of how many words the language processing system can handle at a time, and for English, it has been shown to be six to eight words long.

“The language system seems to be working on chunks of just a few words long, and we’re trying to see if this constraint is universal across these other languages that we’re testing,” Fedorenko says.

The researchers are also working on creating language localizer tasks and finding study participants representing additional languages beyond the 45 from this study.

The research was funded by the National Institutes of Health and research funds from MIT’s Department of Brain and Cognitive Sciences, the McGovern Institute, and the Simons Center for the Social Brain. Malik-Moraleda was funded by a la Caixa Fellowship and a Friends of McGovern fellowship.

The pursuit of reward

The brain circuits that influence our decisions, cognitive functions, and ultimately, our actions are intimately connected with the circuits that give rise to our motivations. By exploring these relationships, scientists at McGovern are seeking knowledge that might suggest new strategies for changing our habits or treating motivation-disrupting conditions such as depression and addiction.

Risky decisions

In Ann Graybiel’s lab, researchers have been examining how the brain makes choices that carry both positive and negative consequences — deciding to take on a higher-paying but more demanding job, for example. Psychologists call these dilemmas approach-avoidance conflicts, and resolving them not only requires weighing the good versus the bad, but also motivation to engage with the decision.

Emily Hueske, a research scientist in the Graybiel lab, explains that everyone has their own risk tolerance when it comes to such decisions, and certain psychiatric conditions, including depression and anxiety disorders, can shift the tipping point at which a person chooses to “approach” or “avoid.”

Studies have shown that neurons in the striatum (see image below), a region deep in the brain involved in both motivation and movement, activate as we grapple with these decisions. Graybiel traced this activity even further, to tiny compartments within the striatum called striosomes.

(She discovered striosomes many years ago and has been studying their function for decades.)

A motivational switch

In 2015, Graybiel’s team manipulated striosome signaling in genetically engineered mice and changed the way the animals behaved in approach-avoidance conflict situations. Taking cues from an assessment used to evaluate approach-avoidance behavior in patients, they presented mice with opportunities to obtain chocolate while experiencing unwelcome exposure in a brightly lit area.

Experimentally activating neurons in striosomes had a dramatic effect, causing mice to venture into brightly lit areas that they would normally avoid. With striosomal circuits switched on, “this animal all of a sudden is like a different creature,” Graybiel says.

Two years later, they found that chronic stress and other factors can also disrupt this signaling and change the choices animals make.

An image of the mouse striatum showing clusters of striosomes (red and yellow). Image: Graybiel lab

Age of ennui

This November, Alexander Friedman, who worked as a research scientist in the Graybiel lab, and Hueske reported in Cell that they found an age-related decline in motivation-modulated learning in mice and rats. Neurons within striosomes became more active than the cells that surround them as animals learned to assign positive and negative values to potential choices. And older mice were less engaged than their younger counterparts in the type of learning required to make these cost-benefit analyses. A similar lack of motivation was observed in a mouse model of Huntington’s disease, a neurodegenerative disorder that is often associated with mood disturbances in patients.

“This coincides with our previous findings that striosomes are critically important for decisions that involve a conflict,” says Friedman, who is now an assistant professor at the University of Texas at El Paso.

Graybiel’s team is continuing to investigate these uniquely positioned compartments in the brain, expecting to shed light on the mechanisms that underlie both learning and motivation.

“There’s no learning without motivation, and in fact, motivation can be influenced by learning,” Hueske says. “The more you learn, the more excited you might be to engage in the task. So the two are intertwined.”

The aging brain

Researchers in John Gabrieli’s lab are also seeking to understand the circuits that link motivation to learning, and recently, his team reported that they, too, had found an age-related decline in motivation-modulated learning.

Studies in young adults have shown that memory improves when the brain circuits that process motivation and memory interact. Gabrieli and neurologist Maiya Geddes, who worked in Gabrieli’s lab as a postdoctoral fellow, wondered whether this holds true in older adults, particularly as memory declines.

To find out, the team recruited 40 people to participate in a brain imaging study. About half of the participants were between the ages of 18 and 30, while the others were between the ages of 49 and 84. While inside an fMRI scanner, each participant was asked to commit certain words to memory and told their success would determine how much money they received for participating in the experiment.

Diminished drive

Younger adults show greater activation in the reward-related regions of the brain during incentivized memory tasks compared to older adults. Image: Maiya Geddes

Not surprisingly, when participants were asked 24 hours later to recall the words, the younger group performed better overall than the older group. In young people, incentivized memory tasks triggered activity in parts of the brain involved in both memory and motivation. But in older adults, while these two parts of the brain could be activated independently, they did not seem to be communicating with one another.

“It seemed that the older adults, at least in terms of their brain response, did care about the kind of incentives that we were offering,” says Geddes, who is now an assistant professor at McGill University. “But for whatever reason, that wasn’t allowing them to benefit in terms of improved memory performance.”

Since the study indicates the brain still can anticipate potential rewards, Geddes is now exploring whether other sources of motivation, such as social rewards, might more effectively increase healthful decisions and behaviors in older adults.

Circuit control

Understanding how the brain generates and responds to motivation is not only important for improving learning strategies. Lifestyle choices such as exercise and social engagement can help people preserve cognitive function and improve their quality of life as they age, and Gabrieli says activating the right motivational circuits could help encourage people to implement healthy changes.

By pinpointing these motivational circuits in mice, Graybiel hopes that her research will lead to better treatment strategies for people struggling with motivational challenges, including those that accompany Parkinson’s disease. Her team is now exploring whether striosomes serve as part of a value-sensitive switch, linking our intentions to dopamine-containing neurons in the midbrain that can modulate our actions.

“Perhaps this motivation is critical for the conflict resolution, and striosomes combine two worlds, dopaminergic motivation and cortical knowledge, resulting in motivation to learn,” Friedman says.

“Now we know that these challenges have a biological basis, and that there are neural circuits that can promote or reduce our feeling of motivational energy,” explains Graybiel. “This realization in itself is a major step toward learning how we can control these circuits both behaviorally and by highly selective therapeutic targeting.”

Tool developed in Graybiel lab reveals new clues about Parkinson’s disease

As the brain processes information, electrical charges zip through its circuits and neurotransmitters pass molecular messages from cell to cell. Both forms of communication are vital, but because they are usually studied separately, little is known about how they work together to control our actions, regulate mood, and perform the other functions of a healthy brain.

Neuroscientists in Ann Graybiel’s laboratory at MIT’s McGovern Institute are taking a closer look at the relationship between these electrical and chemical signals. “Considering electrical signals side by side with chemical signals is really important to understand how the brain works,” says Helen Schwerdt, a postdoctoral researcher in Graybiel’s lab. Understanding that relationship is also crucial for developing better ways to diagnose and treat nervous system disorders and mental illness, she says, noting that the drugs used to treat these conditions typically aim to modulate the brain’s chemical signaling, yet studies of brain activity are more likely to focus on electrical signals, which are easier to measure.

Schwerdt and colleagues in Graybiel’s lab have developed new tools so that chemical and electrical signals can, for the first time, be measured simultaneously in the brains of primates. In a study published September 25, 2020, in Science Advances, they used those tools to reveal an unexpectedly complex relationship between two types of signals that are disrupted in patients with Parkinson’s disease—dopamine signaling and coordinated waves of electrical activity known as beta-band oscillations.

Complicated relationship

Graybiel’s team focused its attention on beta-band activity and dopamine signaling because studies of patients with Parkinson’s disease had suggested a straightforward inverse relationship between the two. The tremors, slowness of movement, and other symptoms associated with the disease develop and progress as the brain’s production of the neurotransmitter dopamine declines, and at the same time, beta-band oscillations surge to abnormal levels. Beta-band oscillations are normally observed in parts of the brain that control movement when a person is paying attention or planning to move. It’s not clear what they do or why they are disrupted in patients with Parkinson’s disease. But because patients’ symptoms tend to be worst when beta activity is high, and because beta activity can be measured in real time with sensors placed on the scalp or with a deep-brain stimulation device that has been implanted for treatment, researchers have been hopeful that it might be useful for monitoring the disease’s progression and patients’ response to treatment. In fact, clinical trials are already underway to explore the effectiveness of modulating deep-brain stimulation treatment based on beta activity.

When Schwerdt and colleagues examined these two types of signals in the brains of rhesus macaques, they discovered that the relationship between beta activity and dopamine is more complicated than previously thought.

Their new tools allowed them to simultaneously monitor both signals with extraordinary precision, targeting specific parts of the striatum—a region deep within the brain involved in controlling movement, where dopamine is particularly abundant—and taking measurements on the millisecond time scale to capture neurons’ rapid-fire communications.

They took these measurements as the monkeys performed a simple task, directing their gaze in a particular direction in anticipation of a reward. This allowed the researchers to track chemical and electrical signaling during the active, motivated movement of the animals’ eyes. They found that beta activity did increase as dopamine signaling declined—but only in certain parts of the striatum and during certain tasks. The reward value of a task, an animal’s past experiences, and the particular movement the animal performed all impacted the relationship between the two types of signals.

Multi-modal systems allow subsecond recording of chemical and electrical neural signals in the form of dopamine molecular concentrations and beta-band local field potentials (beta LFPs), respectively. Online measurements of dopamine and beta LFP (time-dependent traces displayed in box on right) were made in the primate striatum (caudate nucleus and putamen colored in green and purple, respectively, in the left brain image) as the animal was performing a task in which eye movements were made to cues displayed on the left (purple event marker line) and right (green event) of a screen in order to receive large or small amounts of food reward (red and blue events). Dopamine and beta LFP neural signals are centrally implicated in Parkinson’s disease and other brain disorders. Image: Helen Schwerdt

“What we expected is there in the overall view, but if we just look at a different level of resolution, all of a sudden the rules don’t hold,” says Graybiel, who is also an MIT Institute Professor. “It doesn’t destroy the likelihood that one would want to have a treatment related to this presumed opposite relationship, but it does say there’s something more here that we haven’t known about.”

The researchers say it’s important to investigate this more nuanced relationship between dopamine signaling and beta activity, and that understanding it more deeply might lead to better treatments for patients with Parkinson’s disease and related disorders. While they plan to continue to examine how the two types of signals relate to one another across different parts of the brain and under different behavioral conditions, they hope that other teams will also take advantage of the tools they have developed. “As these methods in neuroscience become more and more precise and dazzling in their power, we’re bound to discover new things,” says Graybiel.

This study was supported by the National Institute of Biomedical Imaging and Bioengineering, the National Institute of Neurological Disorders and Stroke, the Army Research Office, the Saks Kavanaugh Foundation, the National Science Foundation, Kristin R. Pressman and Jessica J. Pourian ’13 Fund, and Robert Buxton.

Researchers achieve remote control of hormone release

Abnormal levels of stress hormones such as adrenaline and cortisol are linked to a variety of mental health disorders, including depression and posttraumatic stress disorder (PTSD). MIT researchers have now devised a way to remotely control the release of these hormones from the adrenal gland, using magnetic nanoparticles.

This approach could help scientists to learn more about how hormone release influences mental health, and could eventually offer a new way to treat hormone-linked disorders, the researchers say.

“We’re looking at how we can study and eventually treat stress disorders by modulating peripheral organ function, rather than doing something highly invasive in the central nervous system,” says Polina Anikeeva, an MIT professor of materials science and engineering and of brain and cognitive sciences.

To achieve control over hormone release, Dekel Rosenfeld, an MIT-Technion postdoc in Anikeeva’s group, has developed specialized magnetic nanoparticles that can be injected into the adrenal gland. When exposed to a weak magnetic field, the particles heat up slightly, activating heat-responsive channels that trigger hormone release. This technique can be used to stimulate an organ deep in the body with minimal invasiveness.

Anikeeva and Alik Widge, an assistant professor of psychiatry at the University of Minnesota and a former research fellow at MIT’s Picower Institute for Learning and Memory, are the senior authors of the study. Rosenfeld is the lead author of the paper, which appears today in Science Advances.

Controlling hormones

Anikeeva’s lab has previously devised several novel magnetic nanomaterials, including particles that can release drugs at precise times in specific locations in the body.

In the new study, the research team wanted to explore the idea of treating disorders of the brain by manipulating organs that are outside the central nervous system but influence it through hormone release. One well-known example is the hypothalamic-pituitary-adrenal (HPA) axis, which regulates stress response in mammals. Hormones secreted by the adrenal gland, including cortisol and adrenaline, play important roles in depression, stress, and anxiety.

“Some disorders that we consider neurological may be treatable from the periphery, if we can learn to modulate those local circuits rather than going back to the global circuits in the central nervous system,” says Anikeeva, who is a member of MIT’s Research Laboratory of Electronics and McGovern Institute for Brain Research.

As a target to stimulate hormone release, the researchers decided on ion channels that control the flow of calcium into adrenal cells. Those ion channels can be activated by a variety of stimuli, including heat. When calcium flows through the open channels into adrenal cells, the cells begin pumping out hormones. “If we want to modulate the release of those hormones, we need to be able to essentially modulate the influx of calcium into adrenal cells,” Rosenfeld says.

Unlike previous research in Anikeeva’s group, in this study magnetothermal stimulation was applied to modulate the function of cells without artificially introducing any genes.

To stimulate these heat-sensitive channels, which naturally occur in adrenal cells, the researchers designed nanoparticles made of magnetite, a type of iron oxide that forms tiny magnetic crystals about 1/5000 the thickness of a human hair. In rats, they found these particles could be injected directly into the adrenal glands and remain there for at least six months. When the rats were exposed to a weak magnetic field — about 50 millitesla, 100 times weaker than the fields used for magnetic resonance imaging (MRI) — the particles heated up by about 6 degrees Celsius, enough to trigger the calcium channels to open without damaging any surrounding tissue.

The heat-sensitive channel that they targeted, known as TRPV1, is found in many sensory neurons throughout the body, including pain receptors. TRPV1 channels can be activated by capsaicin, the organic compound that gives chili peppers their heat, as well as by temperature. They are found across mammalian species, and belong to a family of many other channels that are also sensitive to heat.

This stimulation triggered a hormone rush — doubling cortisol production and boosting noradrenaline by about 25 percent. That led to a measurable increase in the animals’ heart rates.

Treating stress and pain

The researchers now plan to use this approach to study how hormone release affects PTSD and other disorders, and they say that eventually it could be adapted for treating such disorders. This method would offer a much less invasive alternative to potential treatments that involve implanting a medical device to electrically stimulate hormone release, which is not feasible in organs such as the adrenal glands that are soft and highly vascularized, the researchers say.

Another area where this strategy could hold promise is in the treatment of pain, because heat-sensitive ion channels are often found in pain receptors.

“Being able to modulate pain receptors with this technique potentially will allow us to study pain, control pain, and have some clinical applications in the future, which hopefully may offer an alternative to medications or implants for chronic pain,” Anikeeva says. With further investigation of TRPV1 expression in other organs, the technique could potentially be extended to other peripheral organs such as the digestive system and the pancreas.

The research was funded by the U.S. Defense Advanced Research Projects Agency ElectRx Program, a Bose Research Grant, the National Institutes of Health BRAIN Initiative, and an MIT-Technion fellowship.

How the brain encodes landmarks that help us navigate

When we move through the streets of our neighborhood, we often use familiar landmarks to help us navigate. And as we think to ourselves, “OK, now make a left at the coffee shop,” a part of the brain called the retrosplenial cortex (RSC) lights up.

While many studies have linked this brain region with landmark-based navigation, exactly how it helps us find our way is not well-understood. A new study from MIT neuroscientists now reveals how neurons in the RSC use both visual and spatial information to encode specific landmarks.

“There’s a synthesis of some of these signals — visual inputs and body motion — to represent concepts like landmarks,” says Mark Harnett, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research. “What we went after in this study is the neuron-level and population-level representation of these different aspects of spatial navigation.”

In a study of mice, the researchers found that this brain region creates a “landmark code” by combining visual information about the surrounding environment with spatial feedback of the mice’s own position along a track. Integrating these two sources of information allowed the mice to learn where to find a reward, based on landmarks that they saw.

“We believe that this code that we found, which is really locked to the landmarks, and also gives the animals a way to discriminate between landmarks, contributes to the animals’ ability to use those landmarks to find rewards,” says Lukas Fischer, an MIT postdoc and the lead author of the study.

Harnett is the senior author of the study, which appears today in the journal eLife. Other authors are graduate student Raul Mojica Soto-Albors and recent MIT graduate Friederike Buck.

Encoding landmarks

Previous studies have found that people with damage to the RSC have trouble finding their way from one place to another, even though they can still recognize their surroundings. The RSC is also one of the first areas affected in Alzheimer’s patients, who often have trouble navigating.

The RSC is wedged between the primary visual cortex and the motor cortex, and it receives input from both of those areas. It also appears to be involved in combining two types of representations of space — allocentric, meaning the relationship of objects to each other, and egocentric, meaning the relationship of objects to the viewer.

“The evidence suggests that RSC is really a place where you have a fusion of these different frames of reference,” Harnett says. “Things look different when I move around in the room, but that’s because my vantage point has changed. They’re not changing with respect to one another.”

In this study, the MIT team set out to analyze the behavior of individual RSC neurons in mice, including how they integrate multiple inputs that help with navigation. To do that, they created a virtual reality environment for the mice by allowing them to run on a treadmill while watching a video screen that made it appear they were running along a track. The speed of the video was determined by how fast the mice ran.

At specific points along the track, landmarks appeared, signaling that a reward was available a certain distance beyond the landmark. The mice had to learn to distinguish between two different landmarks, and to learn how far beyond each one they had to run to get the reward.

Once the mice learned the task, the researchers recorded neural activity in the RSC as the animals ran along the virtual track. They were able to record from a few hundred neurons at a time, and found that most of them anchored their activity to a specific aspect of the task.

There were three primary anchoring points: the beginning of the trial, the landmark, and the reward point. The majority of the neurons were anchored to the landmarks, meaning that their activity would consistently peak at a specific point relative to the landmark, say 50 centimeters before it or 20 centimeters after it.
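
As a rough illustration of what this kind of anchoring analysis involves, the sketch below uses entirely simulated data (not the study’s recordings or code): a neuron’s position-binned activity is averaged across trials, aligned to the landmark, and the offset at which the average activity peaks is read off.

```python
import numpy as np

rng = np.random.default_rng(1)
offsets = np.arange(-100, 101)   # position relative to the landmark, in cm
true_peak = -50                  # e.g., a neuron that fires about 50 cm before the landmark

# Simulate 20 trials of position-binned firing: a Gaussian bump plus noise.
trials = np.array([
    np.exp(-0.5 * ((offsets - true_peak) / 15.0) ** 2) + rng.normal(0, 0.1, offsets.size)
    for _ in range(20)
])

# Average across trials and locate the peak relative to the landmark.
mean_activity = trials.mean(axis=0)
peak_offset = offsets[np.argmax(mean_activity)]
print(f"estimated peak at {peak_offset} cm relative to the landmark")
```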

Most of those neurons responded to both of the landmarks, but a small subset responded to only one or the other. The researchers hypothesize that those strongly selective neurons help the mice to distinguish between the landmarks and run the correct distance to get the reward.

When the researchers used optogenetics (a tool that can turn off neuron activity) to block activity in the RSC, the mice’s performance on the task became much worse.

Combining inputs

The researchers also did an experiment in which the mice could choose to run or not while the video played at a constant speed, unrelated to the mice’s movement. The mice could still see the landmarks, but the location of the landmarks was no longer linked to a reward or to the animals’ own behavior. In that situation, RSC neurons did respond to the landmarks, but not as strongly as they did when the mice were using them for navigation.

Further experiments allowed the researchers to tease out just how much neuron activation is produced by visual input (seeing the landmarks) and by feedback on the mouse’s own movement. However, simply adding those two numbers yielded totals much lower than the neuron activity seen when the mice were actively navigating the track.

“We believe that is evidence for a mechanism of nonlinear integration of these inputs, where they get combined in a way that creates a larger response than what you would get if you just added up those two inputs in a linear fashion,” Fischer says.
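
The comparison behind that conclusion can be expressed as a toy calculation. The numbers below are illustrative only, not measurements from the study; the point is that when the response during active navigation exceeds the sum of the visual-only and movement-only responses, the combination is supralinear.

```python
# Toy numbers only (not measurements from the study).
visual_only = 2.0     # response to seeing landmarks alone, arbitrary units
movement_only = 1.5   # response to movement feedback alone, arbitrary units
combined = 6.0        # response during active, landmark-guided navigation

linear_prediction = visual_only + movement_only
ratio = combined / linear_prediction
print(f"linear prediction: {linear_prediction:.1f}, observed: {combined:.1f}, "
      f"ratio: {ratio:.2f} (values above 1 indicate supralinear integration)")
```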

The researchers now plan to analyze data that they have already collected on how neuron activity evolves over time as the mice learn the task. They also hope to perform further experiments in which they could try to separately measure visual and spatial inputs into different locations within RSC neurons.

The research was funded by the National Institutes of Health, the McGovern Institute, the NEC Corporation Fund for Research in Computers and Communications at MIT, and the Klingenstein-Simons Fellowship in Neuroscience.

Controlling our internal world

Olympic skaters can launch, perform multiple aerial turns, and land gracefully, anticipating imperfections and reacting quickly to correct course. To make such elegant movements, the brain must have an internal model of the body to control, predict, and make almost instantaneous adjustments to motor commands. So-called “internal models” are a fundamental concept in engineering and have long been suggested to underlie control of movement by the brain, but what about processes that occur in the absence of movement, such as contemplation, anticipation, planning?

Using a novel combination of task design, data analysis, and modeling, MIT neuroscientist Mehrdad Jazayeri and colleagues now provide compelling evidence that the core elements of an internal model also control purely mental processes in a study published in Nature Neuroscience.

“During my thesis I realized that I’m interested, not so much in how our senses react to sensory inputs, but instead in how my internal model of the world helps me make sense of those inputs,” says Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Indeed, understanding the building blocks exerting control of such mental processes could help to paint a better picture of disruptions in mental disorders, such as schizophrenia.

Internal models for mental processes

Scientists working on the motor system have long theorized that the brain overcomes noisy and slow signals using an accurate internal model of the body. This internal model serves three critical functions: it provides motor commands to control movement, simulates upcoming movement to overcome delays, and uses feedback to make real-time adjustments.

“The framework that we currently use to think about how the brain controls our actions is one that we have borrowed from robotics: we use controllers, simulators, and sensory measurements to control machines and train operators,” explains Reza Shadmehr, a professor at the Johns Hopkins School of Medicine who was not involved with the study. “That framework has largely influenced how we imagine our brain controlling our movements.”

Jazayeri and colleagues wondered whether the same framework might explain the control principles governing mental states in the absence of any movement.

“When we’re simply sitting, thoughts and images run through our heads and, fundamental to intellect, we can control them,” explains lead author Seth Egger, a former postdoctoral associate in the Jazayeri lab and now at Duke University.

“We wanted to find out what’s happening between our ears when we are engaged in thinking,” says Egger.

Imagine, for example, a sign language interpreter keeping up with a fast speaker. To track speech accurately, the translator continuously anticipates where the speech is going, rapidly adjusting when the actual words deviate from the prediction. The interpreter could be using an internal model to anticipate upcoming words, and using feedback to make adjustments on the fly.

1-2-3…Go

Hypothesizing about how the components of an internal model function in scenarios such as translation is one thing. Cleanly measuring and proving the existence of these elements is much more complicated, because the activities of the controller, simulator, and feedback are intertwined. To tackle this problem, Jazayeri and colleagues devised a clever task with primate models in which the controller, simulator, and feedback act at distinct times.

In this task, called “1-2-3-Go,” the animal sees three consecutive flashes (1, 2, and 3) that form a regular beat, and learns to make an eye movement (Go) when it anticipates that the fourth flash should occur. During the task, the researchers measured neural activity in a region of the frontal cortex they had previously linked to the timing of movement.

Jazayeri and colleagues had clear predictions about when the controller would act (between the third flash and “Go”) and when feedback would be engaged (with each flash of light). The key surprise came when the researchers saw evidence for the simulator anticipating the third flash. This unexpected neural activity had dynamics resembling those of the controller, but was not associated with any movement. In other words, the researchers uncovered a covert plan that functions as the simulator, revealing all three elements of an internal model for a mental process: the planning and anticipation of “Go” in the “1-2-3-Go” sequence.
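
One schematic way to picture the three components is sketched below, using made-up flash times rather than the paper’s data or model: feedback from each flash updates a running estimate of the beat interval, a simulator projects that interval forward to predict the unseen fourth flash, and a controller plans the “Go” movement for the predicted time.

```python
flash_times = [0.0, 0.75, 1.52]   # hypothetical times of flashes 1, 2, 3 (seconds)

# Feedback: each new flash updates the running estimate of the beat interval.
interval_estimate = None
for i in range(1, len(flash_times)):
    observed = flash_times[i] - flash_times[i - 1]
    if interval_estimate is None:
        interval_estimate = observed
    else:
        # Blend the running estimate with the newest observation.
        interval_estimate = 0.5 * interval_estimate + 0.5 * observed

# Simulator: project the interval forward to predict the unseen fourth flash.
predicted_fourth_flash = flash_times[-1] + interval_estimate

# Controller: plan the "Go" eye movement for the predicted time.
print(f"interval estimate: {interval_estimate:.3f} s; plan 'Go' at t = {predicted_fourth_flash:.3f} s")
```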

“Jazayeri’s work is important because it demonstrates how to study mental simulation in animals,” explains Shadmehr, “and where in the brain that simulation is taking place.”

Having found how and where to measure an internal model in action, Jazayeri and colleagues now plan to ask whether these control strategies can explain how primates effortlessly generalize their knowledge from one behavioral context to another. For example, how does an interpreter rapidly adjust when someone with widely different speech habits takes the podium? This line of investigation promises to shed light on high-level mental capacities of the primate brain that simpler animals seem to lack, that go awry in mental disorders, and that designers of artificial intelligence systems so fondly seek.

Benefits of mindfulness for middle schoolers

Two new studies from investigators at the McGovern Institute at MIT suggest that mindfulness — the practice of focusing one’s awareness on the present moment — can enhance academic performance and mental health in middle schoolers. The researchers found that more mindfulness correlates with better academic performance, fewer suspensions from school, and less stress.

“By definition, mindfulness is the ability to focus attention on the present moment, as opposed to being distracted by external things or internal thoughts. If you’re focused on the teacher in front of you, or the homework in front of you, that should be good for learning,” says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

The researchers also showed, for the first time, that mindfulness training can alter brain activity in students. Sixth-graders who received mindfulness training not only reported feeling less stressed, but their brain scans revealed reduced activation of the amygdala, a brain region that processes fear and other emotions, when they viewed images of fearful faces.

Together, the findings suggest that offering mindfulness training in schools could benefit many students, says Gabrieli, who is the senior author of both studies.

“We think there is a reasonable possibility that mindfulness training would be beneficial for children as part of the daily curriculum in their classroom,” he says. “What’s also appealing about mindfulness is that there are pretty well-established ways of teaching it.”

In the moment

Both studies were performed at charter schools in Boston. In one of the papers, which appears today in the journal Behavioral Neuroscience, the MIT team studied about 100 sixth-graders. Half of the students received mindfulness training every day for eight weeks, while the other half took a coding class. The mindfulness exercises were designed to encourage students to pay attention to their breath, and to focus on the present moment rather than thoughts of the past or the future.

Students who received the mindfulness training reported that their stress levels went down after the training, while the students in the control group did not. Students in the mindfulness training group also reported fewer negative feelings, such as sadness or anger, after the training.

About 40 of the students also participated in brain imaging studies before and after the training. The researchers measured activity in the amygdala as the students looked at pictures of faces expressing different emotions.

At the beginning of the study, before any training, students who reported higher stress levels showed more amygdala activity when they saw fearful faces. This is consistent with previous research showing that the amygdala can be overactive in people who experience more stress, leading them to have stronger negative reactions to adverse events.

“There’s a lot of evidence that an overly strong amygdala response to negative things is associated with high stress in early childhood and risk for depression,” Gabrieli says.

After the mindfulness training, students showed a smaller amygdala response when they saw the fearful faces, consistent with their reports that they felt less stressed. This suggests that mindfulness training could potentially help prevent or mitigate mood disorders linked with higher stress levels, the researchers say.

Richard Davidson, a professor of psychology and psychiatry at the University of Wisconsin, says that the findings suggest there could be great benefit to implementing mindfulness training in middle schools.

“This is really one of the very first rigorous studies with children of that age to demonstrate behavioral and neural benefits of a simple mindfulness training,” says Davidson, who was not involved in the study.

Evaluating mindfulness

In the other paper, which appeared in the journal Mind, Brain, and Education in June, the researchers did not perform any mindfulness training but used a questionnaire to evaluate mindfulness in more than 2,000 students in grades 5-8. The questionnaire was based on the Mindfulness Attention Awareness Scale, which is often used in mindfulness studies on adults. Participants are asked to rate how strongly they agree with statements such as “I rush through activities without being really attentive to them.”

The researchers compared the questionnaire results with students’ grades, their scores on statewide standardized tests, their attendance rates, and the number of times they had been suspended from school. Students who showed more mindfulness tended to have better grades and test scores, as well as fewer absences and suspensions.

“People had not asked that question in any quantitative sense at all, as to whether a more mindful child is more likely to fare better in school,” Gabrieli says. “This is the first paper that says there is a relationship between the two.”

The researchers now plan to do a full school-year study, with a larger group of students across many schools, to examine the longer-term effects of mindfulness training. Shorter programs like the two-month training used in the Behavioral Neuroscience study would most likely not have a lasting impact, Gabrieli says.

“Mindfulness is like going to the gym. If you go for a month, that’s good, but if you stop going, the effects won’t last,” he says. “It’s a form of mental exercise that needs to be sustained.”

The research was funded by the Walton Family Foundation, the Poitras Center for Psychiatric Disorders Research at the McGovern Institute for Brain Research, and the National Council of Science and Technology of Mexico. Camila Caballero ’13, now a graduate student at Yale University, is the lead author of the Mind, Brain, and Education study. Caballero and MIT postdoc Clemens Bauer are lead authors of the Behavioral Neuroscience study. Additional collaborators were from the Harvard Graduate School of Education, Transforming Education, Boston Collegiate Charter School, and Calmer Choice.

A chemical approach to imaging cells from the inside

A team of researchers at the McGovern Institute and the Broad Institute of MIT and Harvard has developed a new technique for mapping cells. The approach, called DNA microscopy, shows how biomolecules such as DNA and RNA are organized in cells and tissues, revealing spatial and molecular information that is not easily accessible through other microscopy methods. DNA microscopy also does not require specialized equipment, enabling large numbers of samples to be processed simultaneously.

“DNA microscopy is an entirely new way of visualizing cells that captures both spatial and genetic information simultaneously from a single specimen,” says first author Joshua Weinstein, a postdoctoral associate at the Broad Institute. “It will allow us to see how genetically unique cells — those comprising the immune system, cancer, or the gut, for instance — interact with one another and give rise to complex multicellular life.”

The new technique is described in Cell. Aviv Regev, core institute member and director of the Klarman Cell Observatory at the Broad Institute and professor of biology at MIT, and Feng Zhang, core institute member of the Broad Institute, investigator at the McGovern Institute for Brain Research at MIT, and the James and Patricia Poitras Professor of Neuroscience at MIT, are co-authors. Regev and Zhang are also Howard Hughes Medical Institute Investigators.

The evolution of biological imaging

In recent decades, researchers have developed tools to collect molecular information from tissue samples, data that cannot be captured by either light or electron microscopes. However, attempts to couple this molecular information with spatial data — to see how it is naturally arranged in a sample — are often machinery-intensive, with limited scalability.

DNA microscopy takes a new approach to combining molecular information with spatial data, using DNA itself as a tool.

To visualize a tissue sample, researchers first add small synthetic DNA tags, which latch on to molecules of genetic material inside cells. The tags are then replicated, diffusing in “clouds” across cells and chemically reacting with each other, further combining and creating more unique DNA labels. The labeled biomolecules are collected, sequenced, and computationally decoded to reconstruct their relative positions and a physical image of the sample.

The interactions between these DNA tags enable researchers to calculate the locations of the different molecules — somewhat analogous to cell phone towers triangulating the locations of different cell phones in their vicinity. Because the process only requires standard lab tools, it is efficient and scalable.
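
The triangulation analogy can be made concrete with a small toy reconstruction. The sketch below is purely conceptual and much simpler than the published DNA microscopy algorithm; it assumes, for illustration only, that interaction counts fall off exponentially with distance, converts the counts into estimated distances, and recovers relative 2-D positions with classical multidimensional scaling.

```python
import numpy as np

rng = np.random.default_rng(2)
true_positions = rng.uniform(0, 10, size=(8, 2))   # hypothetical layout of 8 molecules

# Pretend interaction counts fall off exponentially with distance
# (closer molecules' DNA tags co-react more often).
dist = np.linalg.norm(true_positions[:, None] - true_positions[None, :], axis=-1)
counts = 1000 * np.exp(-dist / 3.0)

# Invert the assumed count model to estimate pairwise distances, then run classical MDS.
est_dist = -3.0 * np.log(np.maximum(counts, 1e-9) / 1000)
n = est_dist.shape[0]
J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
B = -0.5 * J @ (est_dist ** 2) @ J           # double-centered squared distances
eigvals, eigvecs = np.linalg.eigh(B)
top = np.argsort(eigvals)[::-1][:2]          # two largest eigenvalues
recovered = eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))

print("recovered relative coordinates (up to rotation, reflection, and shift):")
print(np.round(recovered, 2))
```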

In this study, the authors demonstrate the ability to molecularly map the locations of individual human cancer cells in a sample by tagging RNA molecules. DNA microscopy could be used to map any group of molecules that will interact with the synthetic DNA tags, including cellular genomes, RNA, or proteins with DNA-labeled antibodies, according to the team.

“DNA microscopy gives us microscopic information without a microscope-defined coordinate system,” says Weinstein. “We’ve used DNA in a way that’s mathematically similar to photons in light microscopy. This allows us to visualize biology as cells see it and not as the human eye does. We’re excited to use this tool in expanding our understanding of genetic and molecular complexity.”

Funding for this study was provided by the Simons Foundation, the Klarman Cell Observatory, NIH (R01HG009276, 1R01-HG009761, 1R01-MH110049, and 1DP1-HL141201), the New York Stem Cell Foundation, the Paul G. Allen Family Foundation, the Vallee Foundation, the Poitras Center for Affective Disorders Research at MIT, the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, J. and P. Poitras, and R. Metcalfe.

The authors have applied for a patent on this technology.