Fifteen MIT scientists receive NIH BRAIN Initiative grants

Today, the National Institutes of Health (NIH) announced its first round of BRAIN Initiative award recipients. Six teams involving 15 researchers from the Massachusetts Institute of Technology were among the recipients.

Mriganka Sur, principal investigator at the Picower Institute for Learning and Memory and the Paul E. Newton Professor of Neuroscience in MIT’s Department of Brain and Cognitive Sciences (BCS), leads a team studying cortical circuits and information flow during memory-guided perceptual decisions. Co-principal investigators include Emery Brown, BCS professor of computational neuroscience and the Edward Hood Taplin Professor of Medical Engineering; Kwanghun Chung, Picower Institute principal investigator and assistant professor in the Department of Chemical Engineering and the Institute for Medical Engineering and Science (IMES); and Ian Wickersham, research scientist at the McGovern Institute for Brain Research and head of MIT’s Genetic Neuroengineering Group.

Elly Nedivi, Picower Institute principal investigator and professor in BCS and the Department of Biology, leads a team studying new methods for high-speed monitoring of sensory-driven synaptic activity across all inputs to single living neurons within the intact cerebral cortex. Her co-principal investigator is Peter So, professor of mechanical and biological engineering and director of the MIT Laser Biomedical Research Center.

Ian Wickersham will lead a team looking at novel technologies for nontoxic transsynaptic tracing. His co-principal investigators include Robert Desimone, director of the McGovern Institute and the Doris and Don Berkey Professor of Neuroscience in BCS; Li-Huei Tsai, director of the Picower Institute and the Picower Professor of Neuroscience in BCS; and Kay Tye, Picower Institute principal investigator and assistant professor of neuroscience in BCS.

Robert Desimone will lead a team studying vascular interfaces for brain imaging and stimulation. Co-principal investigators include Ed Boyden, associate professor at the MIT Media Lab, McGovern Institute, and departments of BCS and Biological Engineering, head of MIT’s Synthetic Neurobiology Group, and co-director of MIT’s Center for Neurobiological Engineering; and Elazer Edelman, the Thomas D. and Virginia W. Cabot Professor of Health Sciences and Technology in IMES and director of the Harvard-MIT Biomedical Engineering Center. Collaborators on this project include Rodolfo Llinas (New York University), George Church (Harvard University), Jan Rabaey (University of California at Berkeley), Pablo Blinder (Tel Aviv University), Eric Leuthardt (Washington University in St. Louis), Michel Maharbiz (Berkeley), Jose Carmena (Berkeley), Elad Alon (Berkeley), Colin Derdeyn (Washington University in St. Louis), Lowell Wood (Bill and Melinda Gates Foundation), Xue Han (Boston University), and Adam Marblestone (MIT).

Ed Boyden will be co-principal investigator with Mark Bathe, associate professor of biological engineering, and Peng Yin of Harvard on a project to study ultra-multiplexed nanoscale in situ proteomics for understanding synapse types.

Alan Jasanoff, associate professor of biological engineering and director of the MIT Center for Neurobiological Engineering, will lead a team looking at calcium sensors for molecular fMRI. Stephen Lippard, the Arthur Amos Noyes Professor of Chemistry, is co-principal investigator.

Sur and Wickersham also received BRAIN Early Concept Grants for Exploratory Research (EAGER) from the National Science Foundation (NSF). Sur will focus on massive-scale, multi-area single-neuron recordings to reveal the circuits underlying short-term memory. Wickersham, in collaboration with Li-Huei Tsai, Kay Tye, and Robert Desimone, will develop cell-type-specific optogenetics in wild-type animals. Additional information about NSF support of the BRAIN Initiative can be found at NSF.gov/brain.

The BRAIN Initiative, spearheaded by President Obama in April 2013, challenges the nation’s leading scientists to develop a more sophisticated understanding of the human mind and to discover new ways to treat, prevent, and cure brain disorders such as Alzheimer’s, schizophrenia, autism, and traumatic brain injury. The scientific community is charged with accelerating the invention of cutting-edge technologies that can produce dynamic images of complex neural circuits and illuminate the rapid interactions among brain cells. These new capabilities are expected to provide greater insight into how brain function is linked to behavior, learning, and memory, and into the mechanisms underlying debilitating disease. BRAIN was launched with approximately $100 million in initial investments from the NIH, the National Science Foundation, and the Defense Advanced Research Projects Agency (DARPA).

BRAIN Initiative scientists are engaged in a challenging and transformative endeavor to explore how our minds instantaneously process, store, and retrieve vast quantities of information. Their discoveries will unlock many of the remaining mysteries of the brain’s billions of neurons and trillions of connections, leading to a deeper understanding of the underlying causes of many neurological and psychiatric conditions. Their findings will enable scientists and doctors to develop the tools and technologies required to treat those suffering from these devastating disorders more effectively.

NIH awards initial $46 million for BRAIN Initiative research

The National Institutes of Health announced today its first wave of investments totaling $46 million in fiscal year 2014 funds to support the goals of the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. More than 100 investigators in 15 states and several countries will work to develop new tools and technologies to understand neural circuit function and capture a dynamic view of the brain in action. These new tools and this deeper understanding will ultimately catalyze new treatments and cures for devastating brain disorders and diseases that are estimated by the World Health Organization to affect more than one billion people worldwide. Six MIT projects were funded, including four led by McGovern Institute researchers.

“The human brain is the most complicated biological structure in the known universe. We’ve only just scratched the surface in understanding how it works — or, unfortunately, doesn’t quite work when disorders and disease occur,” said NIH Director Francis S. Collins, M.D., Ph.D. “There’s a big gap between what we want to do in brain research and the technologies available to make exploration possible. These initial awards are part of a 12-year scientific plan focused on developing the tools and technologies needed to make the next leap in understanding the brain. This is just the beginning of an ambitious journey and we’re excited about the possibilities.”

Creating a wearable scanner to image the human brain in motion, using lasers to guide nerve cell firing, recording the entire nervous system in action, stimulating specific circuits with radio waves, and identifying complex circuits with DNA barcodes are among the 58 projects announced today. The majority of the grants focus on developing transformative technologies that will accelerate fundamental neuroscience research and include:

• classifying the myriad cell types in the brain
• producing tools and techniques for analyzing brain cells and circuits
• creating next-generation human brain imaging technology
• developing methods for large-scale recordings of brain activity
• integrating experiments with theories and models to understand the functions of specific brain circuits

“How do the billions of cells in our brain control our thoughts, feelings, and movements? That’s ultimately what the BRAIN Initiative is about,” said Thomas R. Insel, M.D., director of the NIH’s National Institute of Mental Health. “Understanding this will greatly help us meet the rising challenges that brain disorders pose for the future health of the nation.”

Last year, President Obama launched the BRAIN Initiative as a large-scale effort to equip researchers with the fundamental insights necessary for treating a wide variety of brain disorders like Alzheimer’s, schizophrenia, autism, epilepsy, and traumatic brain injury. Four federal agencies — NIH, the National Science Foundation, the Food and Drug Administration, and the Defense Advanced Research Projects Agency — stepped up to the “grand challenge” and committed more than $110 million to the Initiative for fiscal year 2014. Planning for the NIH component of the BRAIN Initiative is guided by the long-term scientific plan, “BRAIN 2025: A Scientific Vision,” which details seven high-priority research areas.

Later today, the White House is hosting a conference on the BRAIN Initiative, where new federal and private-sector commitments will be unveiled in support of this ambitious and important effort.

“We are at a critical juncture for brain research, and these audacious projects are from some of the brightest researchers in neuroscience collaborating with physicists and engineers,” said Story Landis, Ph.D., director of the NIH’s National Institute of Neurological Disorders and Stroke.

For a list of all the projects, please visit: http://braininitiative.nih.gov/nih-brain-awards.htm

For more information about the BRAIN Initiative, please visit: http://www.nih.gov/science/brain/

###
About the National Institutes of Health (NIH): NIH, the nation’s medical research agency, includes 27 Institutes and Centers and is a component of the U.S. Department of Health and Human Services. NIH is the primary federal agency conducting and supporting basic, clinical, and translational medical research, and is investigating the causes, treatments, and cures for both common and rare diseases. For more information about NIH and its programs, visit http://www.nih.gov.

McGovern neuroscientists identify key role of language gene

Researchers from MIT and several European universities have shown that the human version of a gene called Foxp2 makes it easier to transform new experiences into routine procedures. When they engineered mice to express humanized Foxp2, the mice learned to run a maze much more quickly than normal mice.

The findings suggest that Foxp2 may help humans with a key component of learning language — transforming experiences, such as hearing the word “glass” when we are shown a glass of water, into a nearly automatic association of that word with objects that look and function like glasses, says Ann Graybiel, an MIT Institute Professor, member of MIT’s McGovern Institute for Brain Research, and a senior author of the study.

“This really is an important brick in the wall saying that the form of the gene that allowed us to speak may have something to do with a special kind of learning, which takes us from having to make conscious associations in order to act to a nearly automatic-pilot way of acting based on the cues around us,” Graybiel says.

Wolfgang Enard, a professor of anthropology and human genetics at Ludwig-Maximilians University in Germany, is also a senior author of the study, which appears in the Proceedings of the National Academy of Sciences this week. The paper’s lead authors are Christiane Schreiweis, a former visiting graduate student at MIT, and Ulrich Bornschein of the Max Planck Institute for Evolutionary Anthropology in Germany.

All animal species communicate with each other, but humans have a unique ability to generate and comprehend language. Foxp2 is one of several genes that scientists believe may have contributed to the development of these linguistic skills. The gene was first identified in a group of family members who had severe difficulties in speaking and understanding speech, and who were found to carry a mutated version of the Foxp2 gene.

In 2009, Svante Pääbo, director of the Max Planck Institute for Evolutionary Anthropology, and his team engineered mice to express the human form of the Foxp2 gene, which encodes a protein that differs from the mouse version by only two amino acids. His team found that these mice had longer dendrites — the slender extensions that neurons use to communicate with each other — in the striatum, a part of the brain implicated in habit formation. They were also better at forming new synapses, or connections between neurons.

Pääbo, who is also an author of the new PNAS paper, and Enard enlisted Graybiel, an expert in the striatum, to help study the behavioral effects of replacing Foxp2. They found that the mice with humanized Foxp2 were better at learning to run a T-shaped maze, in which the mice must decide whether to turn left or right at the junction, based on the texture of the maze floor, to earn a food reward.

The first phase of this type of learning requires using declarative memory, or memory for events and places. Over time, these memory cues become embedded as habits and are encoded through procedural memory — the type of memory necessary for routine tasks, such as driving to work every day or hitting a tennis forehand after thousands of practice strokes.

Using another type of maze called a cross-maze, Schreiweis and her MIT colleagues were able to test the mice’s ability in each type of memory alone, as well as the interaction of the two types. They found that the mice with humanized Foxp2 performed the same as normal mice when just one type of memory was needed, but their performance was superior when the learning task required them to convert declarative memories into habitual routines. The key finding was therefore that the humanized Foxp2 gene makes it easier to turn mindful actions into behavioral routines.

The protein produced by Foxp2 is a transcription factor, meaning that it turns other genes on and off. In this study, the researchers found that Foxp2 appears to turn on genes involved in the regulation of synaptic connections between neurons. They also found enhanced dopamine activity in a part of the striatum that is involved in forming procedures. In addition, the neurons of some striatal regions could be turned off for longer periods in response to prolonged activation — a phenomenon known as long-term depression, which is necessary for learning new tasks and forming memories.

Together, these changes help to “tune” the brain differently to adapt it to speech and language acquisition, the researchers believe. They are now further investigating how Foxp2 may interact with other genes to produce its effects on learning and language.

This study “provides new ways to think about the evolution of Foxp2 function in the brain,” says Genevieve Konopka, an assistant professor of neuroscience at the University of Texas Southwestern Medical Center who was not involved in the research. “It suggests that human Foxp2 facilitates learning that has been conducive for the emergence of speech and language in humans. The observed differences in dopamine levels and long-term depression in a region-specific manner are also striking and begin to provide mechanistic details of how the molecular evolution of one gene might lead to alterations in behavior.”

The research was funded by the Nancy Lurie Marks Family Foundation, the Simons Foundation Autism Research Initiative, the National Institutes of Health, the Wellcome Trust, the Fondation pour la Recherche Médicale and the Max Planck Society.

MEG matters

Somewhere nearby, most likely, sits a coffee mug. Give it a glance. An image of that mug travels from desktop to retina and into the brain, where it is processed, categorized and recognized, within a fraction of a second.

All this feels effortless to us, but programming a computer to do the same reveals just how complex that process is. Computers can handle simple objects in expected positions, such as an upright mug. But tilt that cup on its side? “That messes up a lot of standard computer vision algorithms,” says Leyla Isik, a graduate student in Tomaso Poggio’s lab at the McGovern Institute.

For her thesis research, Isik is working to build better computer vision models, inspired by how human brains recognize objects. But to track this process, she needed an imaging tool that could keep up with the brain’s astonishing speed. In 2011, soon after Isik arrived at MIT, the McGovern Institute opened its magnetoencephalography (MEG) lab, one of only a few dozen in the entire country. MEG operates on the same timescale as the human brain. Now, with easy access to a MEG facility dedicated to brain research, neuroscientists at McGovern and across MIT—even those, like Isik, who had never scanned human subjects—are delving into human neural processing in ways never possible before.

The making of…

MEG was developed at MIT in the early 1970s by physicist David Cohen. He was searching for the tiny magnetic fields that were predicted to arise within electrically active tissues such as the brain. Magnetic fields can travel unimpeded through the skull, so Cohen hoped it might be possible to detect them noninvasively. Because the signals are so small—a billion times weaker than the magnetic field of the Earth—Cohen experimented with a newly invented device called a SQUID (short for superconducting quantum interference device), a highly sensitive magnetometer. In 1972, he succeeded in recording alpha waves, brain rhythms that occur when the eyes close. The recording, scratched out on yellow graph paper with notes scrawled in the margins, led to a seminal paper that launched a new field. Cohen’s prototype has now evolved into a sophisticated machine with an array of 306 SQUID detectors contained within a helmet that sits over the subject’s head like a giant hairdryer.

As MEG technology advanced, neuroscientists watched with growing interest. Animal studies were revealing the importance of high-frequency electrical oscillations such as gamma waves, which appear to have a key role in the communication between different brain regions. But apart from occasional neurosurgery patients, it was very difficult to study these signals in the human brain or to understand how they might contribute to human cognition. The most widely used imaging method, functional magnetic resonance imaging (fMRI), could provide precise spatial localization, but it could not detect events on the necessary millisecond timescale. “We needed to bridge that gap,” says Robert Desimone, director of the McGovern Institute.

Desimone decided to make MEG a priority, and with support from donors including Thomas F. Peterson, Jr., Edward and Kay Poitras, and the Simons Foundation, the institute was able to purchase a Triux scanner from Elekta, the newest model on the market and the first to be installed in North America.

One challenge was the high level of magnetic background noise from the surrounding environment, and so the new scanner was installed in a 13-ton shielded room that deflects interference away from the scanner. “We have a challenging location, but we were able to work with it and to get clear signals,” says Desimone. “An engineer might have picked a different site, but we cannot overstate the importance of having MEG right here, next to the MRI scanners and easily accessible for our researchers.”

To run the new lab, Desimone recruited Dimitrios Pantazis, an expert in MEG signal processing from the University of Southern California. Pantazis knew a lot about MEG data analysis, but he had never actually scanned human subjects himself. In March 2011, he watched in anticipation as Elekta engineers uncrated the new system. Within a few months, he had the lab up and running.

Computer vision quest

When the MEG lab opened, Isik attended a training session. Like Pantazis, she had no previous experience scanning human subjects, but MEG seemed an ideal tool for teasing out the complexities of human object recognition.

She recorded the brain activity of volunteers as they viewed images of objects in various orientations. She also asked them to track the color of a cross on each image, partly to keep their eyes on the screen and partly to keep them alert. “It’s a dark and quiet room and a comfy chair,” she says. “You have to give them something to do to keep them awake.”

To process the data, Isik used a computational tool called a machine learning classifier, which learns to recognize patterns of brain activity evoked by different stimuli. By comparing responses to different types of objects, or similar objects from different viewpoints (such as a cup lying on its side), she was able to show that the human visual system processes objects in stages, starting with the specific view and then generalizing to features that are independent of the size and position of the object.
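
As a rough illustration of this kind of time-resolved decoding, the sketch below trains a linear classifier on sensor patterns at each time point and asks when stimulus category can be read out above chance. It is a minimal sketch, not the lab’s actual pipeline: the data are synthetic, and the array shapes and category labels are assumptions standing in for preprocessed MEG epochs (trials x sensors x time points).

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 306, 100   # 306 sensors, as on an Elekta Triux
labels = rng.integers(0, 2, n_trials)          # two hypothetical stimulus categories

# Synthetic "MEG" data: noise plus a small category-dependent pattern after
# index 40, standing in for the evoked response that follows stimulus onset.
data = rng.standard_normal((n_trials, n_sensors, n_times))
pattern = 0.3 * rng.standard_normal(n_sensors)
data[labels == 1, :, 40:] += pattern[:, None]

# Decode the category separately at each time point; cross-validated accuracy
# above chance (0.5) marks when category information appears in the sensors.
clf = LogisticRegression(max_iter=1000)
accuracy = np.array([
    cross_val_score(clf, data[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])
print("peak decoding accuracy %.2f at time index %d" % (accuracy.max(), accuracy.argmax()))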

Isik is now working to develop a computer model that simulates this step-wise processing. “Having this data to work with helps ground my models,” she says. Meanwhile, Pantazis was impressed by the power of machine learning classifiers to make sense of the huge quantities of data produced by MEG studies. With support from the National Science Foundation, he is working to incorporate them into a software analysis package that is widely used by the MEG community.

Mixology

Because fMRI and MEG provide complementary information, it was natural that researchers would want to combine them. This is a computationally challenging task, but MIT research scientist Aude Oliva and postdoc Radoslaw Cichy, in collaboration with Pantazis, have developed a new way to do so. They presented 92 images to volunteer subjects, once in the MEG scanner, and then again in the MRI scanner across the hall. For each data set, they looked for patterns of similarity between responses to different stimuli. Then, by aligning the two ‘similarity maps,’ they could determine which MEG signals correspond to which fMRI signals, providing information about the location and timing of brain activity that could not be revealed by either method in isolation. “We could see how visual information flows from the rear of the brain to the more anterior regions where objects are recognized and categorized,” says Pantazis. “It all happens within a few hundred milliseconds. You could not see this level of detail without the combination of fMRI and MEG.”
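
The alignment of ‘similarity maps’ that Pantazis describes follows the logic of representational similarity analysis. The sketch below is an illustration under assumed array shapes with synthetic data, not the study’s actual code: it builds a dissimilarity matrix over the 92 images from a hypothetical fMRI region and from the MEG sensor patterns at each time point, then correlates the two, so the resulting time course shows when the MEG signal carries the same stimulus distinctions as that region.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_images, n_sensors, n_times, n_voxels = 92, 306, 120, 500

meg = rng.standard_normal((n_images, n_sensors, n_times))   # image x sensor x time (synthetic)
fmri_region = rng.standard_normal((n_images, n_voxels))     # image x voxel, one region (synthetic)

# fMRI "similarity map": correlation distance between the voxel patterns
# evoked by every pair of images (condensed upper triangle).
fmri_rdm = pdist(fmri_region, metric="correlation")

# At each MEG time point, build the analogous sensor-pattern matrix and
# correlate it with the fMRI matrix, linking "when" (MEG) to "where" (fMRI).
fusion = np.array([
    spearmanr(pdist(meg[:, :, t], metric="correlation"), fmri_rdm)[0]
    for t in range(n_times)
])
print("peak MEG-fMRI correspondence %.3f at time index %d" % (fusion.max(), fusion.argmax()))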

Another study combining fMRI and MEG data focused on attention, a longstanding research interest for Desimone. Daniel Baldauf, a postdoc in Desimone’s lab, shares that fascination. “Our visual experience is amazingly rich,” says Baldauf. “Most mysteries about how we deal with all this information boil down to attention.”

Baldauf set out to study how the brain switches attention between two well-studied object categories, faces and houses. These stimuli are known to be processed by different brain areas, and Baldauf wanted to understand how signals might be routed to one area or the other during shifts of attention. By scanning subjects with MEG and fMRI, Baldauf identified a brain region, the inferior frontal junction (IFJ), that synchronizes its gamma oscillations with either the face or house areas depending on which stimulus the subject was attending to—akin to tuning a radio to a particular station.

Having found a way to trace attention within the brain, Desimone and his colleagues are now testing whether MEG can be used to improve attention. Together with Baldauf and two visiting students, Yasaman Bagherzadeh and Ben Lu, he has rigged the scanner so that subjects can be given feedback on their own activity on a screen in real time as it is being recorded. “By concentrating on a task, participants can learn to steer their own brain activity,” says Baldauf, who hopes to determine whether these exercises can help people perform better on everyday tasks that require attention.

Comfort zone

In addition to exploring basic questions about brain function, MEG is also a valuable tool for studying brain disorders such as autism. Margaret Kjelgaard, a clinical researcher at Massachusetts General Hospital, is collaborating with MIT faculty member Pawan Sinha to understand why people with autism often have trouble tolerating sounds, smells, and lights. This is difficult to study using fMRI, because subjects are often unable to tolerate the noise of the scanner, whereas they find MEG much more comfortable.

“Big things are probably going to happen here.”
— David Cohen, inventor of MEG technology

In the scanner, subjects listened to brief repetitive sounds as their brain responses were recorded. In healthy controls, the responses became weaker with repetition as the subjects adapted to the sounds. Those with autism, however, did not adapt. The results are still preliminary and as yet unpublished, but Kjelgaard hopes that the work will lead to a biomarker for autism, and perhaps eventually for other disorders.

In 2012, the McGovern Institute organized a symposium to mark the opening of the new lab. Cohen, who had invented MEG forty years earlier, spoke at the event and made a prediction: “Big things are probably going to happen here.” Two years on, researchers have pioneered new MEG data analysis techniques, invented novel ways to combine MEG and fMRI, and begun to explore the neural underpinnings of autism. Odds are, there are more big things to come.