McGovern neuroscientists identify key role of language gene

Researchers from MIT and several European universities have shown that the human version of a gene called Foxp2 makes it easier to transform new experiences into routine procedures. When they engineered mice to express humanized Foxp2, the mice learned to run a maze much more quickly than normal mice.

The findings suggest that Foxp2 may help humans with a key component of learning language — transforming experiences, such as hearing the word “glass” when we are shown a glass of water, into a nearly automatic association of that word with objects that look and function like glasses, says Ann Graybiel, an MIT Institute Professor, member of MIT’s McGovern Institute for Brain Research, and a senior author of the study.

“This really is an important brick in the wall saying that the form of the gene that allowed us to speak may have something to do with a special kind of learning, which takes us from having to make conscious associations in order to act to a nearly automatic-pilot way of acting based on the cues around us,” Graybiel says.

Wolfgang Enard, a professor of anthropology and human genetics at Ludwig-Maximilians University in Germany, is also a senior author of the study, which appears in the Proceedings of the National Academy of Sciences this week. The paper’s lead authors are Christiane Schreiweis, a former visiting graduate student at MIT, and Ulrich Bornschein of the Max Planck Institute for Evolutionary Anthropology in Germany.

All animal species communicate with each other, but humans have a unique ability to generate and comprehend language. Foxp2 is one of several genes that scientists believe may have contributed to the development of these linguistic skills. The gene was first identified in a group of family members who had severe difficulties in speaking and understanding speech, and who were found to carry a mutated version of the Foxp2 gene.

In 2009, Svante Pääbo, director of the Max Planck Institute for Evolutionary Anthropology, and his team engineered mice to express the human form of the Foxp2 gene, which encodes a protein that differs from the mouse version by only two amino acids. His team found that these mice had longer dendrites — the slender extensions that neurons use to communicate with each other — in the striatum, a part of the brain implicated in habit formation. They were also better at forming new synapses, or connections between neurons.

Pääbo, who is also an author of the new PNAS paper, and Enard enlisted Graybiel, an expert in the striatum, to help study the behavioral effects of replacing Foxp2. They found that the mice with humanized Foxp2 were better at learning to run a T-shaped maze, in which the mice must decide whether to turn left or right at the junction, based on the texture of the maze floor, to earn a food reward.

The first phase of this type of learning requires using declarative memory, or memory for events and places. Over time, these memory cues become embedded as habits and are encoded through procedural memory — the type of memory necessary for routine tasks, such as driving to work every day or hitting a tennis forehand after thousands of practice strokes.

Using another type of maze called a cross-maze, Schreiweis and her MIT colleagues were able to test the mice’s ability in each type of memory alone, as well as the interaction of the two types. They found that the mice with humanized Foxp2 performed the same as normal mice when just one type of memory was needed, but their performance was superior when the learning task required them to convert declarative memories into habitual routines. The key finding was therefore that the humanized Foxp2 gene makes it easier to turn mindful actions into behavioral routines.

The protein produced by Foxp2 is a transcription factor, meaning that it turns other genes on and off. In this study, the researchers found that Foxp2 appears to turn on genes involved in the regulation of synaptic connections between neurons. They also found enhanced dopamine activity in a part of the striatum that is involved in forming procedures. In addition, the neurons of some striatal regions could be turned off for longer periods in response to prolonged activation — a phenomenon known as long-term depression, which is necessary for learning new tasks and forming memories.

Together, these changes help to “tune” the brain differently to adapt it to speech and language acquisition, the researchers believe. They are now further investigating how Foxp2 may interact with other genes to produce its effects on learning and language.

This study “provides new ways to think about the evolution of Foxp2 function in the brain,” says Genevieve Konopka, an assistant professor of neuroscience at the University of Texas Southwestern Medical Center who was not involved in the research. “It suggests that human Foxp2 facilitates learning that has been conducive for the emergence of speech and language in humans. The observed differences in dopamine levels and long-term depression in a region-specific manner are also striking and begin to provide mechanistic details of how the molecular evolution of one gene might lead to alterations in behavior.”

The research was funded by the Nancy Lurie Marks Family Foundation, the Simons Foundation Autism Research Initiative, the National Institutes of Health, the Wellcome Trust, the Fondation pour la Recherche Médicale and the Max Planck Society.

MEG matters

Somewhere nearby, most likely, sits a coffee mug. Give it a glance. An image of that mug travels from desktop to retina and into the brain, where it is processed, categorized and recognized, within a fraction of a second.

All this feels effortless to us, but programming a computer to do the same reveals just how complex that process is. Computers can handle simple objects in expected positions, such as an upright mug. But tilt that cup on its side? “That messes up a lot of standard computer vision algorithms,” says Leyla Isik, a graduate student in Tomaso Poggio’s lab at the McGovern Institute.

For her thesis research, Isik is working to build better computer vision models, inspired by how human brains recognize objects. But to track this process, she needed an imaging tool that could keep up with the brain’s astonishing speed. In 2011, soon after Isik arrived at MIT, the McGovern Institute opened its magnetoencephalography (MEG) lab, one of only a few dozen in the country. MEG operates on the same timescale as the human brain. Now, with easy access to a MEG facility dedicated to brain research, neuroscientists at McGovern and across MIT—even those, like Isik, who had never scanned human subjects—are delving into human neural processing in ways never possible before.

The making of…

MEG was developed at MIT in the early 1970s by physicist David Cohen. He was searching for the tiny magnetic fields that were predicted to arise within electrically active tissues such as the brain. Magnetic fields can travel unimpeded through the skull, so Cohen hoped it might be possible to detect them noninvasively. Because the signals are so small—a billion times weaker than the magnetic field of the Earth—Cohen experimented with a newly invented device called a SQUID (short for superconducting quantum interference device), a highly sensitive magnetometer. In 1972, he succeeded in recording alpha waves, brain rhythms that occur when the eyes close. The recording, scratched out on yellow graph paper with notes scrawled in the margins, led to a seminal paper that launched a new field. Cohen’s prototype has now evolved into a sophisticated machine with an array of 306 SQUID detectors contained within a helmet that sits over the subject’s head like a giant hairdryer.

As MEG technology advanced, neuroscientists watched with growing interest. Animal studies were revealing the importance of high-frequency electrical oscillations such as gamma waves, which appear to have a key role in the communication between different brain regions. But apart from occasional neurosurgery patients, it was very difficult to study these signals in the human brain or to understand how they might contribute to human cognition. The most widely used imaging method, functional magnetic resonance imaging (fMRI), could provide precise spatial localization, but it could not detect events on the necessary millisecond timescale. “We needed to bridge that gap,” says Robert Desimone, director of the McGovern Institute.

Desimone decided to make MEG a priority, and with support from donors including Thomas F. Peterson, Jr., Edward and Kay Poitras, and the Simons Foundation, the institute was able to purchase a Triux scanner from Elekta, the newest model on the market and the first to be installed in North America.

One challenge was the high level of magnetic background noise from the surrounding environment, so the new scanner was installed in a 13-ton shielded room that deflects interference away from it. “We have a challenging location, but we were able to work with it and to get clear signals,” says Desimone.

“An engineer might have picked a different site, but we cannot overstate the importance of having MEG right here, next to the MRI scanners and easily accessible for our researchers.”

To run the new lab, Desimone recruited Dimitrios Pantazis, an expert in MEG signal processing from the University of Southern California. Pantazis knew a lot about MEG data analysis, but he had never actually scanned human subjects himself. In March 2011, he watched in anticipation as Elekta engineers uncrated the new system. Within a few months, he had the lab up and running.

Computer vision quest

When the MEG lab opened, Isik attended a training session. Like Pantazis, she had no previous experience scanning human subjects, but MEG seemed an ideal tool for teasing out the complexities of human object recognition.

She recorded the brain activity of volunteers as they viewed images of objects in various orientations. She also asked them to track the color of a cross on each image, partly to keep their eyes on the screen and partly to keep them alert. “It’s a dark and quiet room and a comfy chair,” she says. “You have to give them something to do to keep them awake.”

To process the data, Isik used a computational tool called a machine learning classifier, which learns to recognize patterns of brain activity evoked by different stimuli. By comparing responses to different types of objects, or similar objects from different viewpoints (such as a cup lying on its side), she was able to show that the human visual system processes objects in stages, starting with the specific view and then generalizing to features that are independent of the size and position of the object.
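For readers curious how such a classifier is applied to MEG recordings, the sketch below illustrates the general time-resolved decoding idea, assuming the data have already been segmented into trials. The array shapes, labels, and classifier choice are illustrative assumptions, not details taken from Isik’s study.

```python
# A minimal sketch of time-resolved MEG decoding (an assumption about the general
# approach, not Isik's actual pipeline). We pretend the data are already epoched
# into an array of shape (trials, sensors, timepoints), with one category label
# per trial; here the "recordings" are just random placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 306, 60    # 306 sensors, as in the Triux scanner
X = rng.standard_normal((n_trials, n_sensors, n_times))   # placeholder MEG epochs
labels = rng.integers(0, 2, size=n_trials)                 # e.g. two object categories

# Train and cross-validate a separate classifier at every timepoint; the accuracy
# curve shows *when* category information becomes decodable after stimulus onset.
accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    accuracy[t] = cross_val_score(clf, X[:, :, t], labels, cv=5).mean()

print("peak decoding accuracy:", accuracy.max())
```

With real recordings, chance-level accuracy before stimulus onset followed by a rise shortly afterward is the signature that the classifier is picking up genuine visual responses rather than noise.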

Isik is now working to develop a computer model that simulates this step-wise processing. “Having this data to work with helps ground my models,” she says. Meanwhile, Pantazis was impressed by the power of machine learning classifiers to make sense of the huge quantities of data produced by MEG studies. With support from the National Science Foundation, he is working to incorporate them into a software analysis package that is widely used by the MEG community.

Mixology

Because fMRI and MEG provide complementary information, it was natural that researchers would want to combine them. This is a computationally challenging task, but MIT research scientist Aude Oliva and postdoc Radoslaw Cichy, in collaboration with Pantazis, have developed a new way to do so. They presented 92 images to volunteers subjects, once in the MEG scanner, and then again in the MRI scanner across the hall. For each data set, they looked for patterns of similarity between responses to different stimuli. Then, by aligning the two ‘similarity maps,’ they could determine which MEG signals correspond to which fMRI signals, providing information about the location and timing of brain activity that could not be revealed by either method in isolation. “We could see how visual information flows from the rear of the brain to the more anterior regions where objects are recognized and categorized,” says Pantazis. “It all happens within a few hundred milliseconds. You could not see this level of detail without the combination of fMRI and MEG.”
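The fusion step can be pictured as correlating two similarity maps, one per modality. The sketch below is a rough illustration of that idea using placeholder data and simple distance and correlation choices; it is not the authors’ published pipeline.

```python
# A rough sketch of similarity-based MEG-fMRI fusion: build a representational
# dissimilarity matrix (RDM) over the 92 images from each modality, then correlate
# the MEG RDM at every timepoint with the fMRI RDM of a given region.
# All data below are random placeholders standing in for real measurements.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_images, n_sensors, n_times, n_voxels = 92, 306, 100, 500
meg = rng.standard_normal((n_images, n_sensors, n_times))   # per-image MEG patterns
fmri = rng.standard_normal((n_images, n_voxels))             # per-image fMRI patterns

fmri_rdm = pdist(fmri, metric="correlation")    # pairwise dissimilarities between images

# Correlate the two "similarity maps" over time: where the curve peaks tells you
# when the MEG signal carries the same stimulus geometry as that fMRI region.
fusion = np.array([
    spearmanr(pdist(meg[:, :, t], metric="correlation"), fmri_rdm).correlation
    for t in range(n_times)
])
print("peak MEG-fMRI correspondence at timepoint", fusion.argmax())
```

Repeating this against fMRI patterns from different regions is what lets the timing of MEG signals be assigned to specific locations along the visual pathway.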

Another study combining fMRI and MEG data focused on attention, a longstanding research interest for Desimone. Daniel Baldauf, a postdoc in Desimone’s lab, shares that fascination. “Our visual experience is amazingly rich,” says Baldauf. “Most mysteries about how we deal with all this information boil down to attention.”

Baldauf set out to study how the brain switches attention between two well-studied object categories, faces and houses. These stimuli are known to be processed by different brain areas, and Baldauf wanted to understand how signals might be routed to one area or the other during shifts of attention. By scanning subjects with MEG and fMRI, Baldauf identified a brain region, the inferior frontal junction (IFJ), that synchronizes its gamma oscillations with either the face or house areas depending on which stimulus the subject was attending to—akin to tuning a radio to a particular station.

Having found a way to trace attention within the brain, Desimone and his colleagues are now testing whether MEG can be used to improve attention. Together with Baldauf and two visiting students, Yasaman Bagherzadeh and Ben Lu, he has rigged the scanner so that subjects can be given feedback on their own activity on a screen in real time as it is being recorded. “By concentrating on a task, participants can learn to steer their own brain activity,” says Baldauf, who hopes to determine whether these exercises can help people perform better on everyday tasks that require attention.

Comfort zone

In addition to exploring basic questions about brain function, MEG is also a valuable tool for studying brain disorders such as autism. Margaret Kjelgaard, a clinical researcher at Massachusetts General Hospital, is collaborating with MIT faculty member Pawan Sinha to understand why people with autism often have trouble tolerating sounds, smells, and lights. This is difficult to study using fMRI, because subjects are often unable to tolerate the noise of the scanner, whereas they find MEG much more comfortable.

“Big things are probably going to happen here.”
— David Cohen, inventor of MEG technology

In the scanner, subjects listened to brief repetitive sounds as their brain responses were recorded. In healthy controls, the responses became weaker with repetition as the subjects adapted to the sounds. Those with autism, however, did not adapt. The results are still preliminary and as-yet unpublished, but Kjelgaard hopes that the work will lead to a biomarker for autism, and perhaps eventually for other disorders. In 2012, the McGovern Institute organized a symposium to mark the opening of the new lab. Cohen, who had invented MEG forty years earlier, spoke at the event and made a prediction: “Big things are probably going to happen here.” Two years on, researchers have pioneered new MEG data analysis techniques, invented novel ways to combine MEG and fMRI, and begun to explore the neural underpinnings of autism. Odds are, there are more big things to come.

Try, try again? Study says no

When it comes to learning languages, adults and children have different strengths. Adults excel at absorbing the vocabulary needed to navigate a grocery store or order food in a restaurant, but children have an uncanny ability to pick up on subtle nuances of language that often elude adults. Within months of living in a foreign country, a young child may speak a second language like a native speaker.

Brain structure plays an important role in this “sensitive period” for learning language, which is believed to end around adolescence. The young brain is equipped with neural circuits that can analyze sounds and build a coherent set of rules for constructing words and sentences out of those sounds. Once these language structures are established, it’s difficult to build another one for a new language.

In a new study, a team of neuroscientists and psychologists led by Amy Finn, a postdoc at MIT’s McGovern Institute for Brain Research, has found evidence for another factor that contributes to adults’ language difficulties: When learning certain elements of language, adults’ more highly developed cognitive skills actually get in the way. The researchers discovered that the harder adults tried to learn an artificial language, the worse they were at deciphering the language’s morphology — the structure and deployment of linguistic units such as root words, suffixes, and prefixes.

“We found that effort helps you in most situations, for things like figuring out what the units of language that you need to know are, and basic ordering of elements. But when trying to learn morphology, at least in this artificial language we created, it’s actually worse when you try,” Finn says.

Finn and colleagues from the University of California at Santa Barbara, Stanford University, and the University of British Columbia describe their findings in the July 21 issue of PLoS One. Carla Hudson Kam, an associate professor of linguistics at British Columbia, is the paper’s senior author.

Too much brainpower

Linguists have known for decades that children are skilled at absorbing certain tricky elements of language, such as irregular past participles (examples of which, in English, include “gone” and “been”) or complicated verb tenses like the subjunctive.

“Children will ultimately perform better than adults in terms of their command of the grammar and the structural components of language — some of the more idiosyncratic, difficult-to-articulate aspects of language that even most native speakers don’t have conscious awareness of,” Finn says.

In 1990, linguist Elissa Newport hypothesized that adults have trouble learning those nuances because they try to analyze too much information at once. Adults have a much more highly developed prefrontal cortex than children, and they tend to throw all of that brainpower at learning a second language. This high-powered processing may actually interfere with certain elements of learning language.

“It’s an idea that’s been around for a long time, but there hasn’t been any data that experimentally show that it’s true,” Finn says.

Finn and her colleagues designed an experiment to test whether exerting more effort would help or hinder success. First, they created nine nonsense words, each with two syllables. Each word fell into one of three categories (A, B, and C), defined by the order of consonant and vowel sounds.

Study subjects listened to the artificial language for about 10 minutes. One group of subjects was told not to overanalyze what they heard, but not to tune it out either. To help them not overthink the language, they were given the option of completing a puzzle or coloring while they listened. The other group was told to try to identify the words they were hearing.

Each group heard the same recording, which was a series of three-word sequences — first a word from category A, then one from category B, then category C — with no pauses between words. Previous studies have shown that adults, babies, and even monkeys can parse this kind of information into word units, a task known as word segmentation.
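To make the structure of that recording concrete, here is a toy reconstruction of how such a continuous A-B-C stream could be assembled. The nonsense words are invented stand-ins, since the study’s actual nine words are not given here.

```python
# A toy illustration of assembling a continuous artificial-language stream:
# each "sentence" is one word from category A, then B, then C, concatenated
# with no pauses. The words below are hypothetical placeholders.
import random

categories = {                      # three categories of two-syllable nonsense words
    "A": ["bapu", "dilo", "kemi"],
    "B": ["tugo", "rafi", "nopa"],
    "C": ["selu", "gami", "voke"],
}

random.seed(0)
stream = "".join(
    random.choice(categories["A"]) + random.choice(categories["B"]) + random.choice(categories["C"])
    for _ in range(50)              # 50 A-B-C sequences, run together without breaks
)
print(stream[:60], "...")
```

The listener’s job, with no pauses to rely on, is to discover where one word ends and the next begins, and eventually which slot in the sequence each word belongs to.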

Subjects from both groups were successful at word segmentation, although the group that tried harder performed a little better. Both groups also performed well in a task called word ordering, which required subjects to choose between a correct word sequence (ABC) and an incorrect sequence (such as ACB) of words they had previously heard.

The final test measured skill in identifying the language’s morphology. The researchers played a three-word sequence that included a word the subjects had not heard before, but which fit into one of the three categories. When asked to judge whether this new word was in the correct location, the subjects who had been asked to pay closer attention to the original word stream performed much worse than those who had listened more passively.

Turning off effort

The findings support a theory of language acquisition that suggests that some parts of language are learned through procedural memory, while others are learned through declarative memory. Under this theory, declarative memory, which stores knowledge and facts, would be more useful for learning vocabulary and certain rules of grammar. Procedural memory, which guides tasks we perform without conscious awareness of how we learned them, would be more useful for learning subtle rules related to language morphology.

“It’s likely to be the procedural memory system that’s really important for learning these difficult morphological aspects of language. In fact, when you use the declarative memory system, it doesn’t help you, it harms you,” Finn says.

Still unresolved is the question of whether adults can overcome this language-learning obstacle. Finn says she does not have a good answer yet, but she is now testing the effects of “turning off” the adult prefrontal cortex using a technique called transcranial magnetic stimulation. Other interventions she plans to study include distracting the prefrontal cortex by forcing it to perform other tasks while language is heard, and treating subjects with drugs that impair activity in that brain region.

The research was funded by the National Institute of Child Health and Human Development and the National Science Foundation.

Feng Zhang shares Gabbay Award for CRISPR research

Feng Zhang of MIT and the Broad Institute, Jennifer Doudna of the University of California, Berkeley and the Howard Hughes Medical Institute, and Emmanuelle Charpentier of Umeå University have been awarded Brandeis University’s 17th Annual Jacob Heskel Gabbay Award in Biotechnology and Medicine.

The researchers are being honored for their work on the CRISPR/Cas system, a genome-editing technology that allows scientists to make precise changes to a DNA sequence — an advance that is expected to transform many areas of biomedical research and may ultimately form the basis of new treatments for human genetic disease.