Feng Zhang named James and Patricia Poitras Professor in Neuroscience

The McGovern Institute for Brain Research at MIT has announced the appointment of Feng Zhang as the inaugural chairholder of the James and Patricia Poitras (1963) Professorship in Neuroscience. This new endowed professorship was made possible through a generous gift from Patricia and James Poitras ’63. The professorship is the second endowed chair Mr. and Mrs. Poitras have established at MIT, and extends their longtime support for mental health research.

“This newly created chair further enhances all that Jim and Pat have done for mental illness research at MIT,” said Robert Desimone, director of the McGovern Institute. “The Poitras Center for Affective Disorders Research has galvanized psychiatric research in multiple labs at MIT, and this new professorship will grant critical support to Professor Zhang’s genome engineering technologies, which continue to significantly advance mental illness research in labs worldwide.”

James and Patricia Poitras founded the Poitras Center for Affective Disorders Research at MIT in 2007. The center has enabled dozens of advances in mental illness research, including the development of new disease models and novel technologies. Partnerships between the center and McLean Hospital have also produced improved methods for predicting and treating psychiatric disorders. In 2003, the Poitras family established the James W. (1963) and Patricia T. Poitras Professorship in Neuroscience in MIT’s Department of Brain and Cognitive Sciences, a chair currently held by Guoping Feng.

“Providing support for high-risk, high-reward projects that have the potential to significantly impact individuals living with mental illness has been immensely rewarding to us,” Mr. and Mrs. Poitras say. “We are most interested in bringing basic scientific research to bear on new treatment options for psychiatric diseases. The work of Feng Zhang and his team is immeasurably promising to us and to the field of brain disorders research.”

Zhang joined MIT in 2011 as an investigator in the McGovern Institute for Brain Research and an assistant professor in the departments of Brain and Cognitive Sciences and Biological Engineering. In 2013, he was named the W.M. Keck Career Development Professor in Biomedical Engineering, and in 2016 he was awarded tenure. In addition to his roles at MIT, Zhang is a core member of the Broad Institute of MIT and Harvard.

“I am deeply honored to be named the first James and Patricia Poitras Professor in Neuroscience,” says Zhang. “The Poitras Family and I share a passion for researching, treating, and eventually curing major mental illness. This chair is a terrific recognition of my group’s dedication to advancing genomic and molecular tools to research and one day solve psychiatric illness.”

Zhang earned his BA in chemistry and physics from Harvard College and his PhD in chemistry from Stanford University. He has received numerous awards for his work in optogenetics and genome editing, particularly the CRISPR gene-editing system. These include the Perl-UNC Neuroscience Prize, the National Science Foundation’s Alan T. Waterman Award, the Jacob Heskel Gabbay Award in Biotechnology and Medicine, the Society for Neuroscience’s Young Investigator Award, the Okazaki Award, the Canada Gairdner International Award, and the Tang Prize. Zhang is also a founder of Editas Medicine, a company established by world leaders in genome editing, protein engineering, and molecular and structural biology.

Neuroscientists get a glimpse into the workings of the baby brain

In adults, certain regions of the brain’s visual cortex respond preferentially to specific types of input, such as faces or objects — but how and when those preferences arise has long puzzled neuroscientists.

One way to help answer that question is to study the brains of very young infants and compare them to adult brains. However, scanning the brains of awake babies in an MRI machine has proven difficult.

Now, neuroscientists at MIT have overcome that obstacle, adapting their MRI scanner to make it easier to scan infants’ brains as the babies watch movies featuring different types of visual input. Using these data, the team found that in some ways, the organization of infants’ brains is surprisingly similar to that of adults. Specifically, brain regions that respond to faces in adults do the same in babies, as do regions that respond to scenes.

“It suggests that there’s a stronger biological predisposition than I would have guessed for specific cortical regions to end up with specific functions,” says Rebecca Saxe, a professor of brain and cognitive sciences and member of MIT’s McGovern Institute for Brain Research.

Saxe is the senior author of the study, which appears in the Jan. 10 issue of Nature Communications. The paper’s lead author is former MIT graduate student Ben Deen, who is now a postdoc at Rockefeller University.

MRI adaptations

Functional MRI (magnetic resonance imaging) is the go-to technique for studying brain function in adults. However, very few researchers have taken on the challenge of trying to scan babies’ brains, especially while they are awake.

“Babies and MRI machines have very different needs,” Saxe points out. “Babies would like to do activities for two or three minutes and then move on. They would like to be sitting in a comfortable position, and in charge of what they’re looking at.”

On the other hand, “MRI machines would like to be loud and dark and have a person show up on schedule, stay still for the entire time, pay attention to one thing for two hours, and follow instructions closely,” she says.

To make the setup more comfortable for babies, the researchers made several modifications to the MRI machine and to their usual experimental protocols. First, they built a special coil (part of the MRI scanner that acts as a radio antenna) that allows the baby to recline in a seat similar to a car seat. A mirror in front of the baby’s face allows him or her to watch videos, and there is space in the machine for a parent or one of the researchers to sit with the baby.

The researchers also made the scanner much less noisy than a typical MRI machine. “It’s quieter than a loud restaurant,” Saxe says. “The baby can hear their parent talking over the sound of the scanner.”

Once the babies, who were 4 to 6 months old, were in the scanner, the researchers played the movies continuously while scanning the babies’ brains. However, they only used data from the time periods when the babies were actively watching the movies. From 26 hours of scanning 17 babies, the researchers obtained four hours of usable data from nine babies.

“The sheer tenacity of this work is truly amazing,” says Charles Nelson, a professor of pediatrics at Boston Children’s Hospital, who was not involved in the research. “The fact that they pulled this off is incredibly novel.”

Obtaining these data allowed the MIT team to study how infants’ brains respond to specific types of sensory input, and to compare their responses with those of adults.

“The big-picture question is, how does the adult brain come to have the structure and function that you see in adulthood? How does it get like that?” Saxe says. “A lot of the answer to that question will depend on having the tools to be able to see the baby brain in action. The more we can see, the more we can ask that kind of question.”

Distinct preferences

The researchers showed the babies videos of either smiling children or outdoor scenes such as a suburban street seen from a moving car. Distinguishing social scenes from the physical environment is one of the main high-level divisions that our brains make when interpreting the world.

“The questions we’re asking are about how you understand and organize your world, with vision as the main modality for getting you into these very different mindsets,” Saxe says. “In adults, there are brain regions that prefer to look at faces and socially relevant things, and brain regions that prefer to look at environments and objects.”

The scans revealed that many regions of the babies’ visual cortex showed the same preferences for scenes or faces seen in adult brains. This suggests that these preferences form within the first few months of life and challenges the hypothesis that it takes years of experience interpreting the world for the brain to develop the responses it shows in adulthood.

The researchers also found some differences in the way that babies’ brains respond to visual stimuli. One is that they do not seem to have regions found in the adult brain that are “highly selective,” meaning these regions prefer features such as human faces over any other kind of input, including human bodies or the faces of other animals. The babies also showed some differences in their responses when shown examples from four different categories — not just faces and scenes but also bodies and objects.

“We believe that the adult-like organization of infant visual cortex provides a scaffolding that guides the subsequent refinement of responses via experience, ultimately leading to the strongly specialized regions observed in adults,” Deen says.

Saxe and colleagues now hope to scan more babies between the ages of 3 and 8 months so they can get a better idea of how these vision-processing regions change over the first several months of life. They also hope to study even younger babies to help them discover when these distinctive brain responses first appear.

Distinctive brain pattern may underlie dyslexia

A distinctive neural signature found in the brains of people with dyslexia may explain why these individuals have difficulty learning to read, according to a new study from MIT neuroscientists.

The researchers discovered that in people with dyslexia, the brain has a diminished ability to acclimate to a repeated input — a trait known as neural adaptation. For example, when people with dyslexia see the same word repeatedly, brain regions involved in reading do not show the same adaptation seen in typical readers.

This suggests that the brain’s plasticity, which underpins its ability to learn new things, is reduced, says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, and a member of MIT’s McGovern Institute for Brain Research.

“It’s a difference in the brain that’s not about reading per se, but it’s a difference in perceptual learning that’s pretty broad,” says Gabrieli, who is the study’s senior author. “This is a path by which a brain difference could influence learning to read, which involves so many demands on plasticity.”

Former MIT graduate student Tyler Perrachione, who is now an assistant professor at Boston University, is the lead author of the study, which appears in the Dec. 21 issue of Neuron.

Reduced plasticity

The MIT team used magnetic resonance imaging (MRI) to scan the brains of young adults with and without reading difficulties as they performed a variety of tasks. In the first experiment, the subjects listened to a series of words read by either four different speakers or a single speaker.

The MRI scans revealed distinctive patterns of activity in each group of subjects. In nondyslexic people, areas of the brain that are involved in language showed neural adaptation after hearing words said by the same speaker, but not when different speakers said the words. However, the dyslexic subjects showed much less adaptation to hearing words said by a single speaker.

Neurons that respond to a particular sensory input usually react strongly at first, but their response becomes muted as the input continues. This neural adaptation reflects chemical changes in neurons that make it easier for them to respond to a familiar stimulus, Gabrieli says. This phenomenon, known as plasticity, is key to learning new skills.

“You learn something upon the initial presentation that makes you better able to do it the second time, and the ease is marked by reduced neural activity,” Gabrieli says. “Because you’ve done something before, it’s easier to do it again.”

The researchers then ran a series of experiments to test how broad this effect might be. They asked subjects to look at a series of the same word or different words; pictures of the same object or different objects; and pictures of the same face or different faces. In each case, they found that in people with dyslexia, brain regions devoted to interpreting words, objects, and faces, respectively, did not show neural adaptation when the same stimuli were repeated multiple times.

“The brain location changed depending on the nature of the content that was being perceived, but the reduced adaptation was consistent across very different domains,” Gabrieli says.

He was surprised to see that this effect was so widespread, appearing even during tasks that have nothing to do with reading; people with dyslexia have no documented difficulties in recognizing objects or faces.

He hypothesizes that the impairment shows up primarily in reading because deciphering letters and mapping them to sounds is such a demanding cognitive task. “There are probably few tasks people undertake that require as much plasticity as reading,” Gabrieli says.

Early appearance

In their final experiment, the researchers tested first and second graders with and without reading difficulties, and they found the same disparity in neural adaptation.

“We got almost the identical reduction in plasticity, which suggests that this is occurring quite early in learning to read,” Gabrieli says. “It’s not a consequence of a different learning experience over the years in struggling to read.”

Gabrieli’s lab now plans to study younger children to see if these differences might be apparent even before children begin to learn to read. They also hope to use other types of brain measurements such as magnetoencephalography (MEG) to follow the time course of the neural adaptation more closely.

The research was funded by the Ellison Medical Foundation, the National Institutes of Health, and a National Science Foundation Graduate Research Fellowship.

A radiation-free approach to imaging molecules in the brain

Scientists hoping to get a glimpse of molecules that control brain activity have devised a new probe that allows them to image these molecules without using any chemical or radioactive labels.

Currently, the gold-standard approach to imaging molecules in the brain is to tag them with radioactive probes. However, these probes offer low resolution and cannot easily be used to watch dynamic events, says Alan Jasanoff, an MIT professor of biological engineering.

Jasanoff and his colleagues have developed new sensors consisting of proteins designed to detect a particular target; upon detecting that target, the sensors dilate blood vessels in the immediate area. This produces a change in blood flow that can be imaged with magnetic resonance imaging (MRI) or other imaging techniques.

“This is an idea that enables us to detect molecules that are in the brain at biologically low levels, and to do that with these imaging agents or contrast agents that can ultimately be used in humans,” Jasanoff says. “We can also turn them on and off, and that’s really key to trying to detect dynamic processes in the brain.”

In a paper appearing in the Dec. 2 issue of Nature Communications, Jasanoff and his colleagues used these probes to detect enzymes called proteases, but their ultimate goal is to use them to monitor the activity of neurotransmitters, which act as chemical messengers between brain cells.

The paper’s lead authors are postdoc Mitul Desai and former MIT graduate student Adrian Slusarczyk. Recent MIT graduate Ashley Chapin and postdoc Mariya Barch are also authors of the paper.

Indirect imaging

To make their probes, the researchers modified a naturally occurring peptide called calcitonin gene-related peptide (CGRP), which is active primarily during migraines or inflammation. The researchers engineered the peptides so that they are trapped within a protein cage that keeps them from interacting with blood vessels. When the peptides encounter proteases in the brain, the proteases cut the cages open and the CGRP causes nearby blood vessels to dilate. Imaging this dilation with MRI allows the researchers to determine where the proteases were detected.

“These are molecules that aren’t visualized directly, but instead produce changes in the body that can then be visualized very effectively by imaging,” Jasanoff says.

Proteases are sometimes used as biomarkers to diagnose diseases such as cancer and Alzheimer’s disease. However, Jasanoff’s lab used them in this study mainly to demonstrate the validity of their approach. The researchers are now working to adapt these imaging agents to monitor neurotransmitters, such as dopamine and serotonin, that are critical to cognition and processing emotions.

To do that, the researchers plan to modify the cages surrounding the CGRP so that they can be removed by interaction with a particular neurotransmitter.

“What we want to be able to do is detect levels of neurotransmitter that are 100-fold lower than what we’ve seen so far. We also want to be able to use far less of these molecular imaging agents in organisms. That’s one of the key hurdles to trying to bring this approach into people,” Jasanoff says.

Jeff Bulte, a professor of radiology and radiological science at the Johns Hopkins School of Medicine, described the technique as “original and innovative,” while adding that its safety and long-term physiological effects will require more study.

“It’s interesting that they have designed a reporter without using any kind of metal probe or contrast agent,” says Bulte, who was not involved in the research. “An MRI reporter that works really well is the holy grail in the field of molecular and cellular imaging.”

Tracking genes

Another possible application for this type of imaging is to engineer cells so that the gene for CGRP is turned on at the same time that a gene of interest is turned on. That way, scientists could use the CGRP-induced changes in blood flow to track which cells are expressing the target gene, which could help them determine the roles of those cells and genes in different behaviors. Jasanoff’s team demonstrated the feasibility of this approach by showing that implanted cells expressing CGRP could be recognized by imaging.

“Many behaviors involve turning on genes, and you could use this kind of approach to measure where and when the genes are turned on in different parts of the brain,” Jasanoff says.

His lab is also working on ways to deliver the peptides without injecting them, which would require finding a way to get them to pass through the blood-brain barrier. This barrier separates the brain from circulating blood and prevents large molecules from entering the brain.

The research was funded by the National Institutes of Health BRAIN Initiative, the MIT Simons Center for the Social Brain, and fellowships from the Boehringer Ingelheim Fonds and the Friends of the McGovern Institute.