How the brain generates rhythmic behavior

Many of our bodily functions, such as walking, breathing, and chewing, are controlled by brain circuits called central oscillators, which generate rhythmic firing patterns that regulate these behaviors.

MIT neuroscientists have now discovered the neuronal identity and mechanism underlying one of these circuits: an oscillator that controls the rhythmic back-and-forth sweeping of tactile whiskers, or whisking, in mice. This is the first time that any such oscillator has been fully characterized in mammals.

The MIT team found that the whisking oscillator consists of a population of inhibitory neurons in the brainstem that fires rhythmic bursts during whisking. As each neuron fires, it also inhibits some of the other neurons in the network, allowing the overall population to generate a synchronous rhythm that retracts the whiskers from their protracted positions.

“We have defined a mammalian oscillator molecularly, electrophysiologically, functionally, and mechanistically,” says Fan Wang, an MIT professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research. “It’s very exciting to see a clearly defined circuit and mechanism of how rhythm is generated in a mammal.”

Wang is the senior author of the study, which appears today in Nature. The lead authors of the paper are MIT research scientists Jun Takatoh and Vincent Prevosto.

Rhythmic behavior

Most of the research that clearly identified central oscillator circuits has been done in invertebrates. For example, Eve Marder’s lab at Brandeis University found cells in the stomatogastric ganglion in lobsters and crabs that generate oscillatory activity to control rhythmic motion of the digestive tract.

Characterizing oscillators in mammals, especially in awake behaving animals, has proven to be highly challenging. The oscillator that controls walking is believed to be distributed throughout the spinal cord, making it difficult to precisely identify the neurons and circuits involved. The oscillator that generates rhythmic breathing is located in a part of the brain stem called the pre-Bötzinger complex, but the exact identity of the oscillator neurons is not fully understood.

“There haven’t been detailed studies in awake behaving animals, where one can record from molecularly identified oscillator cells and manipulate them in a precise way,” Wang says.

Whisking is a prominent rhythmic exploratory behavior in many mammals, which use their tactile whiskers to detect objects and sense textures. In mice, whiskers extend and retract at a frequency of about 12 cycles per second. Several years ago, Wang’s lab set out to try to identify the cells and the mechanism that control this oscillation.

To find the location of the whisking oscillator, the researchers traced back from the motor neurons that innervate whisker muscles. Using a modified rabies virus that infects axons, the researchers were able to label a group of cells presynaptic to these motor neurons in a part of the brainstem called the vibrissa intermediate reticular nucleus (vIRt). This finding was consistent with previous studies showing that damage to this part of the brain eliminates whisking.

The researchers then found that about half of these vIRt neurons express a protein called parvalbumin, and that this subpopulation of cells drives the rhythmic motion of the whiskers. When these neurons are silenced, whisking activity is abolished.

Next, the researchers recorded electrical activity from these parvalbumin-expressing vIRt neurons in the brainstem of awake mice, a technically challenging task, and found that these neurons indeed have bursts of activity only during the whisker retraction period. Because these neurons provide inhibitory synaptic inputs to whisker motor neurons, it follows that rhythmic whisking is generated by a constant motor neuron protraction signal interrupted by the rhythmic retraction signal from these oscillator cells.

“That was a super satisfying and rewarding moment, to see that these cells are indeed the oscillator cells, because they fire rhythmically, they fire in the retraction phase, and they’re inhibitory neurons,” Wang says.

A maximum projection image showing tracked whiskers on the mouse muzzle. The right (control) side shows the back-and-forth rhythmic sweeping of the whiskers, while on the experimental side, where the whisking oscillator neurons are silenced, the whiskers move very little. Image: Wang Lab

“New principles”

The oscillatory bursting pattern of vIRt cells is initiated at the start of whisking. When the whiskers are not moving, these neurons fire continuously. When the researchers blocked vIRt neurons from inhibiting each other, the rhythm disappeared, and instead the oscillator neurons simply increased their rate of continuous firing.

This type of network, known as a recurrent inhibitory network, differs from the types of oscillators that have been seen in the stomatogastric neurons in lobsters, in which neurons intrinsically generate their own rhythm.
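To build intuition for how steady drive plus recurrent inhibition can turn continuous firing into a rhythm, here is a minimal rate-model sketch in Python. It is only an illustration of the general principle, not the circuit model reported by the team or in the related Neuron paper; the delayed self-inhibition and all parameter values are placeholder assumptions.

```python
import numpy as np

# Toy model: one inhibitory population receiving constant excitatory drive
# and inhibiting itself with a short loop delay. All numbers are arbitrary.
dt = 0.5e-3                 # time step (s)
T = 1.0                     # simulated duration (s)
steps = int(T / dt)
tau = 10e-3                 # rate relaxation time constant (s)
delay = 15e-3               # effective delay of the recurrent inhibition (s)
d_steps = int(delay / dt)
drive = 50.0                # constant excitatory drive (arbitrary units)
w_inh = 4.0                 # strength of recurrent inhibition

rate = np.zeros(steps)
for t in range(1, steps):
    r_delayed = rate[t - d_steps] if t >= d_steps else 0.0
    net_input = max(drive - w_inh * r_delayed, 0.0)   # rectified net input
    rate[t] = rate[t - 1] + dt / tau * (-rate[t - 1] + net_input)

# With w_inh > 0 the population rate oscillates: bursts of firing separated
# by quiet periods. Setting w_inh = 0 mimics blocking mutual inhibition, as
# described above: the rhythm vanishes and the rate settles to steady firing.
```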

“Now we have found a mammalian network oscillator that is formed by all inhibitory neurons,” Wang says.

The MIT scientists also collaborated with a team of theorists led by David Golomb at Ben-Gurion University in Israel and David Kleinfeld at the University of California at San Diego. The theorists created a detailed computational model outlining how whisking is controlled, which fits well with all of the experimental data. A paper describing that model will appear in an upcoming issue of Neuron.

Wang’s lab now plans to investigate other types of oscillatory circuits in mice, including those that control chewing and licking.

“We are very excited to find oscillators of these feeding behaviors and compare and contrast to the whisking oscillator, because they are all in the brain stem, and we want to know whether there’s some common theme or if there are many different ways to generate oscillators,” she says.

The research was funded by the National Institutes of Health.

Microscopy technique reveals hidden nanostructures in cells and tissues


Inside a living cell, proteins and other molecules are often tightly packed together. These dense clusters can be difficult to image because the fluorescent labels used to make them visible can’t wedge themselves in between the molecules.

MIT researchers have now developed a novel way to overcome this limitation and make those “invisible” molecules visible. Their technique allows them to “de-crowd” the molecules by expanding a cell or tissue sample before labeling the molecules, which makes the molecules more accessible to fluorescent tags.

This method, which builds on a widely used technique known as expansion microscopy previously developed at MIT, should allow scientists to visualize molecules and cellular structures that have never been seen before.

“It’s becoming clear that the expansion process will reveal many new biological discoveries. If biologists and clinicians have been studying a protein in the brain or another biological specimen, and they’re labeling it the regular way, they might be missing entire categories of phenomena,” says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology, a professor of biological engineering and brain and cognitive sciences at MIT, a Howard Hughes Medical Institute investigator, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.

Using this technique, Boyden and his colleagues showed that they could image a nanostructure found in the synapses of neurons. They also imaged the structure of Alzheimer’s-linked amyloid beta plaques in greater detail than has been possible before.

“Our technology, which we named expansion revealing, enables visualization of these nanostructures, which previously remained hidden, using hardware easily available in academic labs,” says Deblina Sarkar, an assistant professor in the Media Lab and one of the lead authors of the study.

The senior authors of the study are Boyden; Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory; and Thomas Blanpied, a professor of physiology at the University of Maryland. Other lead authors include Jinyoung Kang, an MIT postdoc, and Asmamaw Wassie, a recent MIT PhD recipient. The study appears today in Nature Biomedical Engineering.

De-crowding

Imaging a specific protein or other molecule inside a cell requires labeling it with a fluorescent tag carried by an antibody that binds to the target. Antibodies are about 10 nanometers long, while typical cellular proteins are usually about 2 to 5 nanometers in diameter, so if the target proteins are too densely packed, the antibodies can’t get to them.

This has been an obstacle to traditional imaging and also to the original version of expansion microscopy, which Boyden first developed in 2015. In the original version of expansion microscopy, researchers attached fluorescent labels to molecules of interest before they expanded the tissue. The labeling was done first, in part because the researchers had to use an enzyme to chop up proteins in the sample so the tissue could be expanded. This meant that the proteins couldn’t be labeled after the tissue was expanded.

To overcome that obstacle, the researchers had to find a way to expand the tissue while leaving the proteins intact. They used heat instead of enzymes to soften the tissue, allowing the tissue to expand 20-fold without being destroyed. Then, the separated proteins could be labeled with fluorescent tags after expansion.
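A quick back-of-the-envelope check, using only the numbers quoted above (antibodies roughly 10 nanometers long, proteins a few nanometers apart, 20-fold linear expansion), illustrates why expanding before labeling gives the tags room to reach their targets; the assumed 3-nanometer gap is just a placeholder within the 2-to-5-nanometer range mentioned earlier.

```python
# Rough geometric check using figures from the text above.
antibody_length_nm = 10      # approximate antibody length
protein_gap_nm = 3           # assumed gap between densely packed proteins
expansion_factor = 20        # linear expansion of the heat-softened tissue

gap_after_expansion_nm = protein_gap_nm * expansion_factor
print(f"gap before expansion: ~{protein_gap_nm} nm")
print(f"gap after expansion:  ~{gap_after_expansion_nm} nm")
print("antibody can now fit between targets:",
      gap_after_expansion_nm > antibody_length_nm)
```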

With so many more proteins accessible for labeling, the researchers were able to identify tiny cellular structures within synapses, the connections between neurons that are densely packed with proteins. They labeled and imaged seven different synaptic proteins, which allowed them to visualize, in detail, “nanocolumns” consisting of calcium channels aligned with other synaptic proteins. These nanocolumns, which are believed to help make synaptic communication more efficient, were first discovered by Blanpied’s lab in 2016.

“This technology can be used to answer a lot of biological questions about dysfunction in synaptic proteins, which are involved in neurodegenerative diseases,” Kang says. “Until now there has been no tool to visualize synapses very well.”

New patterns

The researchers also used their new technique to image beta amyloid, a peptide that forms plaques in the brains of Alzheimer’s patients. Using brain tissue from mice, the researchers found that amyloid beta forms periodic nanoclusters, which had not been seen before. These clusters of amyloid beta also include potassium channels. The researchers also found amyloid beta molecules that formed helical structures along axons.

“In this paper, we don’t speculate as to what that biology might mean, but we show that it exists. That is just one example of the new patterns that we can see,” says Margaret Schroeder, an MIT graduate student who is also an author of the paper.

Sarkar says that she is fascinated by the nanoscale biomolecular patterns that this technology unveils. “With a background in nanoelectronics, I have developed electronic chips that require extremely precise alignment, in the nanofab. But when I see that in our brain Mother Nature has arranged biomolecules with such nanoscale precision, that really blows my mind,” she says.

Boyden and his group members are now working with other labs to study cellular structures such as protein aggregates linked to Parkinson’s and other diseases. In other projects, they are studying pathogens that infect cells and molecules that are involved in aging in the brain. Preliminary results from these studies have also revealed novel structures, Boyden says.

“Time and time again, you see things that are truly shocking,” he says. “It shows us how much we are missing with classical unexpanded staining.”

The researchers are also working on modifying the technique so they can image up to 20 proteins at a time, and on adapting the process so that it can be used on human tissue samples.

Sarkar and her team, on the other hand, are developing tiny wirelessly powered nanoelectronic devices which could be distributed in the brain. They plan to integrate these devices with expansion revealing. “This can combine the intelligence of nanoelectronics with the nanoscopy prowess of expansion technology, for an integrated functional and structural understanding of the brain,” Sarkar says.

The research was funded by the National Institutes of Health, the National Science Foundation, the Ludwig Family Foundation, the JPB Foundation, the Open Philanthropy Project, John Doerr, Lisa Yang and the Tan-Yang Center for Autism Research at MIT, the U.S. Army Research Office, Charles Hieken, Tom Stocky, Kathleen Octavio, Lore McGovern, Good Ventures, and HHMI.

These neurons have food on the brain

A gooey slice of pizza. A pile of crispy French fries. Ice cream dripping down a cone on a hot summer day. When you look at any of these foods, a specialized part of your visual cortex lights up, according to a new study from MIT neuroscientists.

This newly discovered population of food-responsive neurons is located in the ventral visual stream, alongside populations that respond specifically to faces, bodies, places, and words. The unexpected finding may reflect the special significance of food in human culture, the researchers say.

“Food is central to human social interactions and cultural practices. It’s not just sustenance,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience and a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines. “Food is core to so many elements of our cultural identity, religious practice, and social interactions, and many other things that humans do.”

The findings, based on an analysis of a large public database of human brain responses to a set of 10,000 images, raise many additional questions about how and why this neural population develops. In future studies, the researchers hope to explore how people’s responses to certain foods might differ depending on their likes and dislikes, or their familiarity with certain types of food.

MIT postdoc Meenakshi Khosla is the lead author of the paper, along with MIT research scientist N. Apurva Ratan Murty. The study appears today in the journal Current Biology.

Visual categories

More than 20 years ago, while studying the ventral visual stream, the part of the brain that recognizes objects, Kanwisher discovered cortical regions that respond selectively to faces. Later, she and other scientists discovered other regions that respond selectively to places, bodies, or words. Most of those areas were discovered when researchers specifically set out to look for them. However, that hypothesis-driven approach can limit what you end up finding, Kanwisher says.

“There could be other things that we might not think to look for,” she says. “And even when we find something, how do we know that that’s actually part of the basic dominant structure of that pathway, and not something we found just because we were looking for it?”

To try to uncover the fundamental structure of the ventral visual stream, Kanwisher and Khosla decided to analyze a large, publicly available dataset of full-brain functional magnetic resonance imaging (fMRI) responses from eight human subjects as they viewed thousands of images.

“We wanted to see when we apply a data-driven, hypothesis-free strategy, what kinds of selectivities pop up, and whether those are consistent with what had been discovered before. A second goal was to see if we could discover novel selectivities that either haven’t been hypothesized before, or that have remained hidden due to the lower spatial resolution of fMRI data,” Khosla says.

To do that, the researchers applied a mathematical method that allows them to discover neural populations that can’t be identified from traditional fMRI data. An fMRI image is made up of many voxels — three-dimensional units that represent a cube of brain tissue. Each voxel contains hundreds of thousands of neurons, and if some of those neurons belong to smaller populations that respond to one type of visual input, their responses may be drowned out by other populations within the same voxel.

The new analytical method, which Kanwisher’s lab has previously used on fMRI data from the auditory cortex, can tease out responses of neural populations within each voxel of fMRI data.
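The decomposition itself is not spelled out here, but the general idea can be sketched with a generic matrix factorization: treat the voxel-by-image response matrix as a non-negative mixture of a few shared response profiles and recover those profiles along with each voxel’s weights. The sketch below uses scikit-learn’s NMF purely as a stand-in for the study’s method, with made-up sizes and synthetic data.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_voxels, n_images, n_components = 2000, 1000, 5

# Synthetic data: each voxel's response to each image is a non-negative
# mixture of a few underlying component profiles (think face-, place-,
# body-, word-, and food-selective populations), plus a little noise.
true_profiles = rng.gamma(2.0, 1.0, size=(n_components, n_images))
true_weights = rng.gamma(1.0, 1.0, size=(n_voxels, n_components))
responses = (true_weights @ true_profiles
             + rng.normal(0, 0.1, size=(n_voxels, n_images))).clip(min=0)

model = NMF(n_components=n_components, init="nndsvda", max_iter=300)
voxel_weights = model.fit_transform(responses)    # (n_voxels, n_components)
component_profiles = model.components_            # (n_components, n_images)

# Each recovered profile can then be compared against image labels
# (faces, places, bodies, words, food, ...) to see what it responds to.
```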

Using this approach, the researchers found four populations that corresponded to previously identified clusters that respond to faces, places, bodies, and words. “That tells us that this method works, and it tells us that the things that we found before are not just obscure properties of that pathway, but major, dominant properties,” Kanwisher says.

Intriguingly, a fifth population also emerged, and this one appeared to be selective for images of food.

“We were first quite puzzled by this because food is not a visually homogenous category,” Khosla says. “Things like apples and corn and pasta all look so unlike each other, yet we found a single population that responds similarly to all these diverse food items.”

The food-specific population, which the researchers call the ventral food component (VFC), appears to be spread across two clusters of neurons, located on either side of the fusiform face area (FFA). The fact that the food-specific populations are spread out between other category-specific populations may help explain why they have not been seen before, the researchers say.

“We think that food selectivity had been harder to characterize before because the populations that are selective for food are intermingled with other nearby populations that have distinct responses to other stimulus attributes. The low spatial resolution of fMRI prevents us from seeing this selectivity because the responses of different neural populations get mixed within a voxel,” Khosla says.

“The technique which the researchers used to identify category-sensitive cells or areas is impressive, and it recovered known category-sensitive systems, making the food category findings most impressive,” says Paul Rozin, a professor of psychology at the University of Pennsylvania, who was not involved in the study. “I can’t imagine a way for the brain to reliably identify the diversity of foods based on sensory features. That makes this all the more fascinating, and likely to clue us in about something really new.”

Food vs non-food

The researchers also used the data to train a computational model of the VFC, based on previous models Murty had developed for the brain’s face and place recognition areas. This allowed the researchers to run additional experiments and predict the responses of the VFC. In one experiment, they fed the model matched images of food and non-food items that looked very similar — for example, a banana and a yellow crescent moon.

“Those matched stimuli have very similar visual properties, but the main attribute in which they differ is edible versus inedible,” Khosla says. “We could feed those arbitrary stimuli through the predictive model and see whether it would still respond more to food than non-food, without having to collect the fMRI data.”

They could also use the computational model to analyze much larger datasets, consisting of millions of images. Those simulations helped to confirm that the VFC is highly selective for images of food.

From their analysis of the human fMRI data, the researchers found that in some subjects, the VFC responded slightly more to processed foods such as pizza than to unprocessed foods like apples. In the future they hope to explore how factors such as familiarity and like or dislike of a particular food might affect individuals’ responses to that food.

They also hope to study when and how this region becomes specialized during early childhood, and what other parts of the brain it communicates with. Another question is whether this food-selective population will be seen in other animals such as monkeys, which do not attach the cultural significance to food that humans do.

The research was funded by the National Institutes of Health, the National Eye Institute, and the National Science Foundation through the MIT Center for Brains, Minds, and Machines.

MIT scientists discover new antiviral defense system in bacteria

Bacteria use a variety of defense strategies to fight off viral infection, and some of these systems have led to groundbreaking technologies, such as CRISPR-based gene-editing. Scientists predict there are many more antiviral weapons yet to be found in the microbial world.

A team led by researchers at the Broad Institute of MIT and Harvard and the McGovern Institute for Brain Research at MIT has discovered and characterized one of these unexplored microbial defense systems. They found that certain proteins in bacteria and archaea (together known as prokaryotes) detect viruses in surprisingly direct ways, recognizing key parts of the viruses and causing the single-celled organisms to commit suicide to quell the infection within a microbial community. This is the first time such a mechanism has been seen in prokaryotes, and the work shows that organisms across all three domains of life — bacteria, archaea, and eukaryotes (which include plants and animals) — use pattern recognition of conserved viral proteins to defend against pathogens.

The study appears in Science.

“This work demonstrates a remarkable unity in how pattern recognition occurs across very different organisms,” said senior author Feng Zhang, who is a core institute member at the Broad, the James and Patricia Poitras Professor of Neuroscience at MIT, a professor of brain and cognitive sciences and biological engineering at MIT, and an investigator at MIT’s McGovern Institute and the Howard Hughes Medical Institute. “It’s been very exciting to integrate genetics, bioinformatics, biochemistry, and structural biology approaches in one study to understand this fascinating molecular system.”

Microbial armory

In an earlier study, the researchers scanned data on the DNA sequences of hundreds of thousands of bacteria and archaea, which revealed several thousand genes harboring signatures of microbial defense. In the new study, they homed in on a handful of these genes encoding enzymes that are members of the STAND ATPase family of proteins, which in eukaryotes are involved in the innate immune response.

In humans and plants, the STAND ATPase proteins fight infection by recognizing patterns in a pathogen itself or in the cell’s response to infection. In the new study, the researchers wanted to know if the proteins work the same way in prokaryotes to defend against infection. The team chose a few STAND ATPase genes from the earlier study, delivered them to bacterial cells, and challenged those cells with bacteriophage viruses. The cells underwent a dramatic defensive response and survived.

The scientists next wondered which part of the bacteriophage triggers that response, so they delivered viral genes to the bacteria one at a time. Two viral proteins elicited an immune response: the portal, a part of the virus’s capsid shell, which contains viral DNA; and the terminase, the molecular motor that helps assemble the virus by pushing the viral DNA into the capsid. Each of these viral proteins activated a different STAND ATPase to protect the cell.

The finding was striking and unprecedented. Most known bacterial defense systems work by sensing viral DNA or RNA, or cellular stress due to the infection. These bacterial proteins were instead directly sensing key parts of the virus.

The team next showed that bacterial STAND ATPase proteins could recognize diverse portal and terminase proteins from different phages. “It’s surprising that bacteria have these highly versatile sensors that can recognize all sorts of different phage threats that they might encounter,” said co-first author Linyi Gao, a junior fellow in the Harvard Society of Fellows and a former graduate student in the Zhang lab.

Structural analysis

For a detailed look at how the microbial STAND ATPases detect the viral proteins, the researchers used cryo-electron microscopy to examine their molecular structure when bound to the viral proteins. “By analyzing the structure, we were able to precisely answer a lot of the questions about how these things actually work,” said co-first author Max Wilkinson, a postdoctoral researcher in the Zhang lab.

The team saw that the portal or terminase protein from the virus fits within a pocket in the STAND ATPase protein, with each STAND ATPase protein grasping one viral protein. The STAND ATPase proteins then group together in sets of four known as tetramers, which brings together key parts of the bacterial proteins called effector domains. This activates the proteins’ endonuclease function, shredding cellular DNA and killing the cell.

The tetramers bound viral proteins from other bacteriophages just as tightly, demonstrating that the STAND ATPases sense the viral proteins’ three-dimensional shape, rather than their sequence. This helps explain how one STAND ATPase can recognize dozens of different viral proteins. “Regardless of sequence, they all fit like a hand in a glove,” said Wilkinson.

STAND ATPases in humans and plants also work by forming multi-unit complexes that activate specific functions in the cell. “That’s the most exciting part of this work,” said Jonathan Strecker, a postdoctoral researcher in the Zhang lab and a co-author of the study. “To see this across the domains of life is unprecedented.”

The research was funded in part by the National Institutes of Health, the Howard Hughes Medical Institute, Open Philanthropy, the Edward Mallinckrodt, Jr. Foundation, the Poitras Center for Psychiatric Disorders Research, the Hock E. Tan and K. Lisa Yang Center for Autism Research, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, the Phillips family, J. and P. Poitras, and the BT Charitable Foundation.

Why do we dream?

As part of our Ask the Brain series, science writer Shafaq Zia answers the question, “Why do we dream?”

_____

One night, Albert Einstein dreamt that he was walking through a farm where he found a herd of cows against an electric fence. When the farmer switched on the fence, the cows suddenly jumped back, all at the same time. But to the farmer, who was standing at the other end of the field, they seemed to have jumped one after another, in a wave formation. Einstein woke up and the Theory of Relativity was born.

Dreaming is one of the oldest biological phenomena; for as long as humans have slept, they’ve dreamt. But through most of our history, dreams have remained a mystery, leaving scientists, philosophers, and artists alike searching for meaning.

In many aboriginal cultures, such as the Esa Eja community in the Peruvian Amazon, dreaming is a sacred practice for gaining knowledge, or solving a problem, through the dream narrative. But in the last century or so, technological advancements have allowed neuroscientists to take up dreams as a matter of scientific inquiry in order to answer a much-pondered question — what is the purpose of dreaming?

Falling asleep

The human brain is a fascinating place. It is composed of approximately 80 billion neurons and it is their combined electrical chatter that generates oscillations known as brain waves. There are five types of brain waves — alpha, beta, theta, delta, and gamma — that each indicate a different state between sleep and wakefulness.

Using EEG, a test that records electrical activity in the brain, scientists have identified that when we’re awake, our brain emits beta and gamma waves. These tend to have a stimulating effect and help us remain actively engaged in mental activities.

The differently named frequency bands of neural oscillations, or brainwaves: delta, theta, alpha, beta, and gamma.

But during the transition to sleep, the number of beta waves lowers significantly and the brain produces high levels of alpha waves. These waves regulate attention and help filter out distractions. A recent study led by McGovern Institute Director Robert Desimone showed that people can actually enhance their attention by controlling their own alpha brain waves using neurofeedback training. It’s unknown how long these effects might last and whether this kind of control could be achieved with other types of brain waves, but the researchers are now planning additional studies to explore these questions.

Alpha waves are also produced when we daydream, meditate, or listen to the sound of rain. As our minds wander, many parts of the brain are engaged, including a specialized system called the “default mode network.” Disturbances in this network, explains Susan Whitfield-Gabrieli, a professor of psychology at Northeastern University and a McGovern Institute research affiliate, have been linked to various brain disorders including schizophrenia, depression and ADHD. By identifying the brain circuits associated with mind wandering, she says, we can begin to develop better treatment options for people suffering from these disorders.

Finally, as we enter a dreamlike state, the prefrontal cortex of the brain, responsible for keeping impulses in check, slowly grows less active. This is when there’s a surge in theta waves that leads to an unconstrained window of consciousness; there is little censorship from the mind, allowing for visceral dreams and creative thoughts.

The dreaming brain

“Every time you learn something, it happens so quickly,” said Dheeraj Roy, a postdoctoral fellow in Guoping Feng’s lab at the McGovern Institute. “The brain is continuously recording information, but how do you take a break and then make sense of it all?”

This is where dreams come in, says Roy. During sleep, newly formed memories are gradually stabilized into a more permanent form of long-term storage in the brain. Dreaming, he says, is influenced by the consolidation of these memories during sleep. Most dreams are made up of experiences, thoughts, emotions, places, and people we have already encountered in our lives. But, during dreaming, bits and pieces of these memories seem to be reorganized to create a particularly bizarre scenario: you’re talking to your sister when it suddenly begins to rain roses and you’re dancing at a New Year’s party.

This re-organization may not be so random; as the brain is processing memories, it pulls together the ones that are seemingly related to each other. Perhaps you dreamt of your sister because you were at a store recently where a candle smelled like her rose-scented perfume, which reminded you of the time you made a New Year’s resolution to spend less money on flowers.

Some brain disorders, like Parkinson’s disease, have been associated with vivid, unpleasant dreams and erratic brain wave patterns. Researchers at the McGovern Institute hope that a better understanding of the mechanics of the brain – including neural circuits and brain waves – will help people with Parkinson’s and other brain disorders.

So perhaps dreams aren’t instilled with meaning, symbolism, and wisdom in the way we’ve always imagined, and they simply reflect important biological processes taking place in our brain. But with all that science has uncovered about dreaming and the ways in which it links to creativity and memory, the magical essence of this universal human experience remains untainted.

_____


Whether speaking Turkish or Norwegian, the brain’s language network looks the same

Over several decades, neuroscientists have created a well-defined map of the brain’s “language network,” or the regions of the brain that are specialized for processing language. Found primarily in the left hemisphere, this network includes regions within Broca’s area, as well as in other parts of the frontal and temporal lobes.

However, the vast majority of those mapping studies have been done in English speakers as they listened to or read English texts. MIT neuroscientists have now performed brain imaging studies of speakers of 45 different languages. The results show that the speakers’ language networks appear to be essentially the same as those of native English speakers.

The findings, while not surprising, establish that the location and key properties of the language network appear to be universal. The work also lays the groundwork for future studies of linguistic elements that would be difficult or impossible to study in English speakers because English doesn’t have those features.

“This study is very foundational, extending some findings from English to a broad range of languages,” says Evelina Fedorenko, the Frederick A. and Carole J. Middleton Career Development Associate Professor of Neuroscience at MIT and a member of MIT’s McGovern Institute for Brain Research. “The hope is that now that we see that the basic properties seem to be general across languages, we can ask about potential differences between languages and language families in how they are implemented in the brain, and we can study phenomena that don’t really exist in English.”

Fedorenko is the senior author of the study, which appears today in Nature Neuroscience. Saima Malik-Moraleda, a PhD student in the Speech and Hearing Bioscience and Technology program at Harvard University, and Dima Ayyash, a former research assistant, are the lead authors of the paper.

Mapping language networks

The precise locations and shapes of language areas differ across individuals, so to find the language network, researchers ask each person to perform a language task while scanning their brains with functional magnetic resonance imaging (fMRI). Listening to or reading sentences in one’s native language should activate the language network. To distinguish this network from other brain regions, researchers also ask participants to perform tasks that should not activate it, such as listening to an unfamiliar language or solving math problems.
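In code, the core of such a localizer boils down to a per-voxel contrast between the language condition and a control condition. The sketch below is a toy version with synthetic numbers, not the pipeline used in the study; the trial counts, thresholds, and response values are all placeholders.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials, n_voxels = 40, 20000

# Synthetic per-trial response estimates for two conditions.
sentences = rng.normal(0.4, 1.0, size=(n_trials, n_voxels))  # native-language passages
control = rng.normal(0.0, 1.0, size=(n_trials, n_voxels))    # unfamiliar language or math task

# Per-voxel contrast: keep voxels that respond more to sentences than to control.
t_vals, p_vals = stats.ttest_ind(sentences, control, axis=0)
language_mask = (t_vals > 0) & (p_vals < 0.001)
print(language_mask.sum(), "voxels pass the sentences > control contrast")
```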

Several years ago, Fedorenko began designing these “localizer” tasks for speakers of languages other than English. While most studies of the language network have used English speakers as subjects, English does not include many features commonly seen in other languages. For example, in English, word order tends to be fixed, while in other languages there is more flexibility in how words are ordered. Many of those languages instead use the addition of morphemes, or segments of words, to convey additional meaning and relationships between words.

“There has been growing awareness for many years of the need to look at more languages, if you want to make claims about how language works, as opposed to how English works,” Fedorenko says. “We thought it would be useful to develop tools to allow people to rigorously study language processing in the brain in other parts of the world. There’s now access to brain imaging technologies in many countries, but the basic paradigms that you would need to find the language-responsive areas in a person are just not there.”

For the new study, the researchers performed brain imaging of two speakers of each of 45 different languages, representing 12 different language families. Their goal was to see if key properties of the language network, such as location, left lateralization, and selectivity, were the same in those participants as in people whose native language is English.

The researchers decided to use “Alice in Wonderland” as the text that everyone would listen to, because it is one of the most widely translated works of fiction in the world. They selected 24 short passages and three long passages, each of which was recorded by a native speaker of the language. Each participant also heard nonsensical passages, which should not activate the language network, and was asked to do a variety of other cognitive tasks that should not activate it.

The team found that the language networks of the participants in this study were located in approximately the same brain regions, and showed the same selectivity, as those of native speakers of English.

“Language areas are selective,” Malik-Moraleda says. “They shouldn’t be responding during other tasks such as a spatial working memory task, and that was what we found across the speakers of 45 languages that we tested.”

Additionally, language regions that are typically activated together in English speakers, such as the frontal language areas and temporal language areas, were similarly synchronized in speakers of other languages.

The researchers also showed that among all of the subjects, the small amount of variation they saw between individuals who speak different languages was the same as the amount of variation that would typically be seen between native English speakers.

Similarities and differences

While the findings suggest that the overall architecture of the language network is similar across speakers of different languages, that doesn’t mean that there are no differences at all, Fedorenko says. As one example, researchers could now look for differences in speakers of languages that predominantly use morphemes, rather than word order, to help determine the meaning of a sentence.

“There are all sorts of interesting questions you can ask about morphological processing that don’t really make sense to ask in English, because it has much less morphology,” Fedorenko says.

Another possibility is studying whether speakers of languages that use differences in tone to convey different word meanings would have a language network with stronger links to auditory brain regions that encode pitch.

Right now, Fedorenko’s lab is working on a study in which they are comparing the ‘temporal receptive fields’ of speakers of six typologically different languages, including Turkish, Mandarin, and Finnish. The temporal receptive field is a measure of how many words the language processing system can handle at a time, and for English, it has been shown to be six to eight words long.

“The language system seems to be working on chunks of just a few words long, and we’re trying to see if this constraint is universal across these other languages that we’re testing,” Fedorenko says.

The researchers are also working on creating language localizer tasks and finding study participants representing additional languages beyond the 45 from this study.

The research was funded by the National Institutes of Health and research funds from MIT’s Department of Brain and Cognitive Sciences, the McGovern Institute, and the Simons Center for the Social Brain. Malik-Moraleda was funded by a la Caixa Fellowship and a Friends of McGovern fellowship.

McGovern Fellows recognized with life sciences innovation award

McGovern Institute Fellows Omar Abudayyeh and Jonathan Gootenberg have been named the inaugural recipients of the Termeer Scholars Awards, which recognize “emerging biomedical researchers that represent the future of the biotechnology industry.” The Termeer Foundation is a nonprofit organization focused on connecting life science innovators and catalyzing the creation of new medicines.

“The Termeer Foundation is committed to championing emerging biotechnology leaders and finding people who want to solve the biggest problems in human health,” said Belinda Termeer, president of the Termeer Foundation. “By supporting researchers like Omar and Jonathan, we plant the seeds for future success in individuals who are preparing to make significant contributions in academia and industry.”

The Abudayyeh-Gootenberg lab is developing a suite of new tools to enable next-generation cellular engineering, with uses in basic research, therapeutics and diagnostics. Building off the revolutionary biology of natural biological systems, including mobile genetic elements and CRISPR systems, the team develops new approaches for understanding and manipulating genomes, transcriptomes and cellular fate. The technologies have broad applications, including in oncology, aging and genetic disease.

These tools have been adopted by researchers around the world and have formed the basis for four companies that Abudayyeh and Gootenberg have co-founded. They will receive a $50,000 grant to support professional development, knowledge advancement and/or stakeholder engagement and will become part of The Termeer Foundation’s signature Network of Termeer Fellows (first-time CEOs and entrepreneurs) and Mentors (experienced industry leaders).

“The Termeer Foundation is working to improve the long odds of biotechnology by identifying and supporting future biotech leaders; if we help them succeed as leaders, we can help their innovations reach patients,” said Alan Walts, co-founder of the Termeer Foundation. “While our Termeer Fellows program has supported first time CEOs and entrepreneurs for the past five years, our new Termeer Scholars program will provide much needed support to the researchers whose innovative ideas represent the future of the biotechnology industry – researchers like Omar and Jonathan.”

Abudayyeh and Gootenberg were honored at the Termeer Foundation’s annual dinner in Boston on June 16, 2022.

Artificial neural networks model face processing in autism

Many of us easily recognize emotions expressed in others’ faces. A smile may mean happiness, while a frown may indicate anger. Autistic people often have a more difficult time with this task. It’s unclear why. But new research, published today in The Journal of Neuroscience, sheds light on the inner workings of the brain to suggest an answer. And it does so using a tool that opens new pathways to modeling the computation in our heads: artificial intelligence.

Researchers have primarily suggested two brain areas where the differences might lie. A region on the side of the primate (including human) brain called the inferior temporal (IT) cortex contributes to facial recognition. Meanwhile, a deeper region called the amygdala receives input from the IT cortex and other sources and helps process emotions.

Kohitij Kar, a research scientist in the lab of MIT Professor James DiCarlo, hoped to zero in on the answer. (DiCarlo, the Peter de Florez Professor in the Department of Brain and Cognitive Sciences, is a member of the McGovern Institute for Brain Research and director of MIT’s Quest for Intelligence.)

Kar began by looking at data provided by two other researchers: Shuo Wang, at Washington University in St. Louis, and Ralph Adolphs, at the California Institute of Technology. In one experiment, they showed images of faces to autistic adults and to neurotypical controls. The images had been generated by software to vary on a spectrum from fearful to happy, and the participants judged, quickly, whether the faces depicted happiness. Compared with controls, autistic adults required higher levels of happiness in the faces to report them as happy.

Modeling the brain

Kar, who is also a member of the Center for Brains, Minds and Machines, trained an artificial neural network, a complex mathematical function inspired by the brain’s architecture, to perform the same task. The network contained layers of units that roughly resemble biological neurons that process visual information. These layers process information as it passes from an input image to a final judgment indicating the probability that the face is happy. Kar found that the network’s behavior more closely matched the neurotypical controls than it did the autistic adults.

The network also served two more interesting functions. First, Kar could dissect it. He stripped off layers and retested its performance, measuring the difference between how well it matched controls and how well it matched autistic adults. This difference was greatest when the output was based on the last network layer. Previous work has shown that this layer in some ways mimics the IT cortex, which sits near the end of the primate brain’s ventral visual processing pipeline. Kar’s results implicate the IT cortex in differentiating neurotypical controls from autistic adults.

The other function is that the network can be used to select images that might be more efficient in autism diagnoses. If the difference between how closely the network matches neurotypical controls versus autistic adults is greater when judging one set of images versus another set of images, the first set could be used in the clinic to detect autistic behavioral traits. “These are promising results,” Kar says. Better models of the brain will come along, “but oftentimes in the clinic, we don’t need to wait for the absolute best product.”

Next, Kar evaluated the role of the amygdala. Again, he used data from Wang and colleagues. They had used electrodes to record the activity of neurons in the amygdala of people undergoing surgery for epilepsy as they performed the face task. The team found that they could predict a person’s judgment based on these neurons’ activity. Kar re-analyzed the data, this time controlling for the ability of the IT-cortex-like network layer to predict whether a face truly was happy. Now, the amygdala provided very little information of its own. Kar concludes that the IT cortex is the driving force behind the amygdala’s role in judging facial emotion.

Noisy networks

Finally, Kar trained separate neural networks to match the judgments of neurotypical controls and autistic adults. He looked at the strengths or “weights” of the connections between the final layers and the decision nodes. The weights in the network matching autistic adults, both the positive or “excitatory” and negative or “inhibitory” weights, were weaker than in the network matching neurotypical controls. This suggests that sensory neural connections in autistic adults might be noisy or inefficient.

To further test the noise hypothesis, which is popular in the field, Kar added various levels of fluctuation to the activity of the final layer in the network modeling autistic adults. Within a certain range, added noise greatly increased the similarity between its performance and that of the autistic adults. Adding noise to the control network did much less to improve its similarity to the control participants. This further suggests that sensory perception in autistic people may be the result of a so-called “noisy” brain.
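As a rough illustration of that manipulation (not the study’s actual code or models), one can read out a happiness judgment from the final feature layer of a pretrained image network and inject Gaussian noise into those features. The backbone, the untrained readout, and the noise level below are all placeholder assumptions; in the study the readouts were fit to the behavioral data.

```python
import torch
import torch.nn as nn
from torchvision import models

# Placeholder backbone standing in for the brain-inspired network described
# above (requires a recent torchvision for the weights API).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()              # expose the final 512-d feature layer
backbone.eval()

# Untrained placeholder readout; a real readout would be fit to judgments.
readout = nn.Sequential(nn.Linear(512, 1), nn.Sigmoid())

def judge_happiness(images: torch.Tensor, noise_std: float = 0.0) -> torch.Tensor:
    """Return a per-image P(happy), optionally perturbing final-layer features."""
    with torch.no_grad():
        feats = backbone(images)                             # (batch, 512)
        feats = feats + noise_std * torch.randn_like(feats)  # the "noisy brain" knob
        return readout(feats).squeeze(-1)

# Sweeping noise_std and comparing the model's judgments with each group's
# behavior is the spirit of the manipulation described above.
fake_faces = torch.rand(8, 3, 224, 224)   # stand-in for morphed face images
print(judge_happiness(fake_faces, noise_std=0.5))
```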

Computational power

Looking forward, Kar sees several uses for computational models of visual processing. They can be further prodded, providing hypotheses that researchers might test in animal models. “I think facial emotion recognition is just the tip of the iceberg,” Kar says. They can also be used to select or even generate diagnostic content. Artificial intelligence could be used to generate content like movies and educational materials that optimally engages autistic children and adults. One might even tweak facial and other relevant pixels in what autistic people see in augmented reality goggles, work that Kar plans to pursue in the future.

Ultimately, Kar says, the work helps to validate the usefulness of computational models, especially image-processing neural networks. They formalize hypotheses and make them testable. Does one model or another better match behavioral data? “Even if these models are very far off from brains, they are falsifiable, rather than people just making up stories,” he says. “To me, that’s a more powerful version of science.”

Three distinct brain circuits in the thalamus contribute to Parkinson’s symptoms

Parkinson’s disease is best-known as a disorder of movement. Patients often experience tremors, loss of balance, and difficulty initiating movement. The disease also has lesser-known symptoms that are nonmotor, including depression.

In a study of a small region of the thalamus, MIT neuroscientists have now identified three distinct circuits that influence the development of both motor and nonmotor symptoms of Parkinson’s. Furthermore, they found that by manipulating these circuits, they could reverse Parkinson’s symptoms in mice.

The findings suggest that those circuits could be good targets for new drugs that could help combat many of the symptoms of Parkinson’s disease, the researchers say.

“We know that the thalamus is important in Parkinson’s disease, but a key question is how you can put together a circuit that can explain many different things happening in Parkinson’s disease. Understanding different symptoms at a circuit level can help guide us in the development of better therapeutics,” says Guoping Feng, the James W. and Patricia T. Poitras Professor in Brain and Cognitive Sciences at MIT, a member of the Broad Institute of MIT and Harvard, and the associate director of the McGovern Institute for Brain Research at MIT.

Feng is the senior author of the study, which appears today in Nature. Ying Zhang, a J. Douglas Tan Postdoctoral Fellow at the McGovern Institute, and Dheeraj Roy, an NIH K99 awardee and a McGovern Fellow at the Broad Institute, are the lead authors of the paper.

Tracing circuits

The thalamus consists of several different regions that perform a variety of functions. Many of these, including the parafascicular (PF) thalamus, help to control movement. Degeneration of these structures is often seen in patients with Parkinson’s disease, which is thought to contribute to their motor symptoms.

In this study, the MIT team set out to try to trace how the PF thalamus is connected to other brain regions, in hopes of learning more about its functions. They found that neurons of the PF thalamus project to three different parts of the basal ganglia, a cluster of structures involved in motor control and other functions: the caudate putamen (CPu), the subthalamic nucleus (STN), and the nucleus accumbens (NAc).

“We started with showing these different circuits, and we demonstrated that they’re mostly nonoverlapping, which strongly suggests that they have distinct functions,” Roy says.

Further studies revealed those functions. The circuit that projects to the CPu appears to be involved in general locomotion, and functions to dampen movement. When the researchers inhibited this circuit, mice spent more time moving around the cage they were in.

The circuit that extends into the STN, on the other hand, is important for motor learning — the ability to learn a new motor skill through practice. The researchers found that this circuit is necessary for a task in which the mice learn to balance on a rod that spins with increasing speed.

Lastly, the researchers found that, unlike the others, the circuit that connects the PF thalamus to the NAc is not involved in motor activity. Instead, it appears to be linked to motivation. Inhibiting this circuit generates depression-like behaviors in healthy mice, and they will no longer seek a reward such as sugar water.

Druggable targets

Once the researchers established the functions of these three circuits, they decided to explore how they might be affected in Parkinson’s disease. To do that, they used a mouse model of Parkinson’s, in which dopamine-producing neurons in the midbrain are lost.

They found that in this Parkinson’s model, the connection between the PF thalamus and the CPu was enhanced, and that this led to a decrease in overall movement. Additionally, the connections from the PF thalamus to the STN were weakened, which made it more difficult for the mice to learn the accelerating rod task.

Lastly, the researchers showed that in the Parkinson’s model, connections from the PF thalamus to the NAc were also interrupted, and that this led to depression-like symptoms in the mice, including loss of motivation.

Using chemogenetics or optogenetics, which allows them to control neuronal activity with a drug or light, the researchers found that they could manipulate each of these three circuits and in doing so, reverse each set of Parkinson’s symptoms. Then, they decided to look for molecular targets that might be “druggable,” and found that each of the three PF thalamus regions have cells that express different types of cholinergic receptors, which are activated by the neurotransmitter acetylcholine. By blocking or activating those receptors, depending on the circuit, they were also able to reverse the Parkinson’s symptoms.

“We found three distinct cholinergic receptors that can be expressed in these three different PF circuits, and if we use antagonists or agonists to modulate these three different PF populations, we can rescue movement, motor learning, and also depression-like behavior in PD mice,” Zhang says.

Parkinson’s patients are usually treated with L-dopa, a precursor of dopamine. While this drug helps patients regain motor control, it doesn’t help with motor learning or any nonmotor symptoms, and over time, patients become resistant to it.

The researchers hope that the circuits they characterized in this study could be targets for new Parkinson’s therapies. The types of neurons that they identified in the circuits of the mouse brain are also found in the nonhuman primate brain, and the researchers are now using RNA sequencing to find genes that are expressed specifically in those cells.

“RNA-sequencing technology will allow us to do a much more detailed molecular analysis in a cell-type specific way,” Feng says. “There may be better druggable targets in these cells, and once you know the specific cell types you want to modulate, you can identify all kinds of potential targets in them.”

The research was funded, in part, by the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience at MIT, the Stanley Center for Psychiatric Research at the Broad Institute, the James and Patricia Poitras Center for Psychiatric Disorders Research at MIT, the National Institutes of Health BRAIN Initiative, and the National Institute of Mental Health.

Convenience-sized RNA editing

Last year, researchers at MIT’s McGovern Institute discovered and characterized Cas7-11, the first CRISPR enzyme capable of making precise, guided cuts to strands of RNA without harming cells in the process. Now, working with collaborators at the University of Tokyo, the same team has revealed that Cas7-11 can be shrunk to a more compact version, making it an even more viable option for editing the RNA inside living cells. The new, compact Cas7-11 was described today in the journal Cell along with a detailed structural analysis of the original enzyme.

“When we looked at the structure, it was clear there were some pieces that weren’t needed which we could actually remove,” says McGovern Fellow Omar Abudayyeh, who led the new work with McGovern Fellow Jonathan Gootenberg and collaborator Hiroshi Nishimasu from the University of Tokyo. “This makes the enzyme small enough that it fits into a single viral vector for therapeutic applications.”

The authors, who also include postdoctoral researcher Nathan Zhou from the McGovern Institute and Kazuki Kato from the University of Tokyo, see the new three-dimensional structure of Cas7-11 as a rich resource to answer questions about the basic biology of the enzyme and to reveal other ways to tweak its function in the future.

Targeting RNA

McGovern Fellows Jonathan Gootenberg and Omar Abudayyeh in their lab. Photo: Caitlin Cunningham

Over the past decade, the CRISPR-Cas9 genome editing technology has given researchers the ability to modify the genes inside human cells—a boon for both basic research and the development of therapeutics to reverse disease-causing genetic mutations. But CRISPR-Cas9 only works to alter DNA, and for some research and clinical purposes, editing RNA is more effective or useful.

A cell retains its DNA for life, and passes an identical copy to daughter cells as it duplicates, so any changes to DNA are relatively permanent. However, RNA is a more transient molecule, transcribed from DNA and degraded not long after.

“There are lots of positives about being able to permanently change DNA, especially when it comes to treating an inherited genetic disease,” Gootenberg says. “But for an infection, an injury or some other temporary disease, being able to temporarily modify a gene through RNA targeting makes more sense.”

Until Abudayyeh, Gootenberg, and their colleagues discovered and characterized Cas7-11, the only enzyme that could target RNA had a messy side effect: when it recognized a particular gene, the enzyme—Cas13—began cutting up all the RNA around it. This property makes Cas13 effective for diagnostic tests, where it is used to detect the presence of a piece of RNA, but not very useful for therapeutics, where targeted cuts are required.

The discovery of Cas7-11 opened the doors to a more precise form of RNA editing, analogous to the Cas9 enzyme for DNA. However, the massive Cas7-11 protein was too big to fit inside a single viral vector—the empty shell of a virus that researchers typically use to deliver gene editing machinery into patients’ cells.

Structural insight

To determine the overall structure of Cas7-11, Abudayyeh, Gootenberg and Nishimasu used cryo-electron microscopy, which shines beams of electrons on frozen protein samples and measures how the beams are transmitted. The researchers knew that Cas7-11 was like an amalgamation of five separate Cas enzymes, fused into one single gene, but were not sure exactly how those parts folded and fit together.

“The really fascinating thing about Cas7-11, from a fundamental biology perspective, is that it should be all these separate pieces that come together, but instead you have a fusion into one gene,” Gootenberg says. “We really didn’t know what that would look like.”

The structure of Cas7-11, caught in the act of binding both its target RNA strand and the guide RNA, which directs that binding, revealed how the pieces assembled and which parts of the protein were critical to recognizing and cutting RNA. This kind of structural insight is critical to figuring out how to make Cas7-11 carry out targeted jobs inside human cells.

The structure also illuminated a section of the protein that wasn’t serving any apparent functional role. This finding suggested the researchers could remove it, re-engineering Cas7-11 to make it smaller without taking away its ability to target RNA. Abudayyeh and Gootenberg tested the impact of removing different bits of this section, resulting in a new compact version of the protein, dubbed Cas7-11S. With Cas7-11S in hand, they packaged the system inside a single viral vector, delivered it into mammalian cells and efficiently targeted RNA.

The team is now planning future studies on other proteins that interact with Cas7-11 in the bacteria that it originates from, and also hopes to continue working towards the use of Cas7-11 for therapeutic applications.

“Imagine you could have an RNA gene therapy, and when you take it, it modifies your RNA, but when you stop taking it, that modification stops,” Abudayyeh says. “This is really just the beginning of enabling that tool set.”

This research was funded, in part, by the McGovern Institute Neurotechnology Program, K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics in Neuroscience, G. Harold & Leila Y. Mathers Charitable Foundation, MIT John W. Jarve (1978) Seed Fund for Science Innovation, FastGrants, Basis for Supporting Innovative Drug Discovery and Life Science Research Program, JSPS KAKENHI, Takeda Medical Research Foundation, and Inamori Research Institute for Science.