Biologists discover function of gene linked to familial ALS

MIT biologists have discovered a function of a gene that is believed to account for up to 40 percent of all familial cases of amyotrophic lateral sclerosis (ALS). Studies of ALS patients have shown that an abnormally expanded region of DNA in a specific region of this gene can cause the disease.

In a study of the microscopic worm Caenorhabditis elegans, the researchers found that the gene has a key role in helping cells to remove waste products via structures known as lysosomes. When the gene is mutated, these unwanted substances build up inside cells. The researchers believe that if this also happens in neurons of human ALS patients, it could account for some of those patients’ symptoms.

“Our studies indicate what happens when the activities of such a gene are inhibited — defects in lysosomal function. Certain features of ALS are consistent with their being caused by defects in lysosomal function, such as inflammation,” says H. Robert Horvitz, the David H. Koch Professor of Biology at MIT, a member of the McGovern Institute for Brain Research and the Koch Institute for Integrative Cancer Research, and the senior author of the study.

Mutations in this gene, known as C9orf72, have also been linked to another neurodegenerative brain disorder known as frontotemporal dementia (FTD), which is estimated to affect about 60,000 people in the United States.

“ALS and FTD are now thought to be aspects of the same disease, with different presentations. There are genes that when mutated cause only ALS, and others that cause only FTD, but there are a number of other genes in which mutations can cause either ALS or FTD or a mixture of the two,” says Anna Corrionero, an MIT postdoc and the lead author of the paper, which appears in the May 3 issue of the journal Current Biology.

Genetic link

Scientists have identified dozens of genes linked to familial ALS, which occurs when two or more family members suffer from the disease. Doctors believe that genetics may also be a factor in nonfamilial cases of the disease, which are much more common, accounting for 90 percent of cases.

Of all ALS-linked mutations identified so far, the C9orf72 mutation is the most prevalent, and it is also found in about 25 percent of frontotemporal dementia patients. The MIT team set out to study the gene’s function in C. elegans, which has an equivalent gene known as alfa-1.

In studies of worms that lack alfa-1, the researchers discovered that defects became apparent early in embryonic development. C. elegans embryos have a yolk that helps to sustain them before they hatch, and in embryos missing alfa-1, the researchers found “blobs” of yolk floating in the fluid surrounding the embryos.

This led the researchers to discover that the mutation was impairing the lysosomal degradation of yolk after it is absorbed into cells. Lysosomes are cell structures that carry enzymes capable of breaking down many kinds of molecules; they also remove cellular waste products.

When lysosomes degrade their contents — such as yolk — they are reformed into tubular structures that split, after which they are able to degrade other materials. The MIT team found that in cells with the alfa-1 mutation and impaired lysosomal degradation, lysosomes were unable to reform and could not be used again, disrupting the cell’s waste removal process.

“It seems that lysosomes do not reform as they should, and material accumulates in the cells,” Corrionero says.

For C. elegans embryos, that meant that they could not properly absorb the nutrients found in yolk, which made it harder for them to survive under starvation conditions. The embryos that did survive appeared to be normal, the researchers say.

Robert Brown, chair of the Department of Neurology at the University of Massachusetts Medical School, describes the study as a major contribution to scientists’ understanding of the normal function of the C9orf72 gene.

“They used the power of worm genetics to dissect very fully the stages of vesicle maturation at which this gene seems to play a major role,” says Brown, who was not involved in the study.

Neuronal effects

The researchers were able to partially reverse the effects of alfa-1 loss in the C. elegans embryos by expressing the human protein encoded by the C9orf72 gene. “This suggests that the worm and human proteins are performing the same molecular function,” Corrionero says.

If loss of C9orf72 affects lysosome function in human neurons, it could lead to a slow, gradual buildup of waste products in those cells. ALS usually affects cells of the motor cortex, which controls movement, and motor neurons in the spinal cord, while frontotemporal dementia affects the frontal areas of the brain’s cortex.

“If you cannot degrade things properly in cells that live for very long periods of time, like neurons, that might well affect the survival of the cells and lead to disease,” Corrionero says.

Many pharmaceutical companies are now researching drugs that would block the expression of the mutant C9orf72. The new study suggests certain possible side effects to watch for in studies of such drugs.

“If you generate drugs that decrease C9orf72 expression, you might cause problems in lysosomal homeostasis,” Corrionero says. “In developing any drug, you have to be careful to watch for possible side effects. Our observations suggest some things to look for in studying drugs that inhibit C9orf72 in ALS/FTD patients.”

The research was funded by an EMBO postdoctoral fellowship, an ALS Therapy Alliance grant, a gift from Rose and Douglas Barnard ’79 to the McGovern Institute, and a gift from the Halis Family Foundation to the MIT Aging Brain Initiative.

McGovern Institute awards 2018 Scolnick Prize to David Anderson

The McGovern Institute for Brain Research at MIT announced today that David J. Anderson of Caltech is the winner of the 2018 Edward M. Scolnick Prize in Neuroscience. He was awarded the prize for his contributions to the isolation and characterization of neural stem cells and for his research on neural circuits that control emotional behaviors in animal models. The Scolnick Prize is awarded annually by the McGovern Institute to recognize outstanding advances in any field of neuroscience.

“We congratulate David Anderson on being selected for this award,” says Robert Desimone, director of the McGovern Institute and chair of the selection committee. “His work has provided fundamental insights into neural development and the structure and function of neural circuits.”

Anderson is the Seymour Benzer Professor of Biology at Caltech, where he has been on the faculty since 1986, and is currently the director of the Tianqiao and Chrissy Chen Institute for Neuroscience. He is also an investigator of the Howard Hughes Medical Institute. He received his PhD in cell biology from Rockefeller University, where he trained with the late Günter Blobel, and he received his postdoctoral training in molecular biology with Richard Axel at Columbia University.

For the first 20 years of his career, Anderson focused his research on the biology of neural stem cells and was the first to isolate a multipotent stem cell from the mammalian nervous system. He subsequently identified growth factors and master transcriptional regulators that control their differentiation into neurons and glial cells. Anderson also made the unexpected and fundamental discovery that arteries and veins are genetically distinct even before the heart begins to beat. Combining this discovery with his interest in neural development, Anderson went on to contribute to the expanding field of vessel identity and the study of molecular cross-talk between developing nerves and blood vessels.

More recently, Anderson has shifted his focus from neural development to the study of neural circuits that control emotional behaviors, such as fear, anxiety, and aggression, in animal models. Anderson has employed various technologies for neural circuit manipulation including optogenetics, pharmacogenetics, electrophysiology, in vivo imaging, and quantitative behavior analysis using machine vision-based approaches. He developed and applied powerful genetic methods to identify and manipulate cells and circuits involved in emotional behaviors in mice — including ways to inactivate neurons reversibly and to trace their synaptic targets. In addition to this work on vertebrate neural circuitry, Anderson mounted a parallel inquiry that dissects the genes and circuits underlying aggressive behavior in the fruitfly Drosophila melanogaster, and has become an international leader in this rapidly developing field.

Among his many honors and awards, Anderson is a recipient of the Perl-UNC Neuroscience Prize, a fellow of the American Academy of Arts and Sciences, and a member of the National Academy of Sciences. Anderson also played a key role in the founding of the Allen Institute for Brain Science and the Allen Brain Atlas, a comprehensive open-source atlas of gene expression in the mouse brain.

Anderson will deliver the Scolnick Prize lecture at the McGovern Institute on Friday, Sept. 21 at 4 p.m. in the Singleton Auditorium of MIT’s Brain and Cognitive Sciences Complex (Room 46-3002). The event is free and open to the public.

National Academy of Sciences elects four MIT professors for 2018

Four MIT faculty members have been elected to the National Academy of Sciences (NAS) in recognition of their “distinguished and continuing achievements in original research.”

MIT’s four new NAS members are: Amy Finkelstein, the John and Jennie S. MacDonald Professor of Economics; Mehran Kardar, the Francis Friedman Professor of Physics; Xiao-Gang Wen, the Cecil and Ida Green Professor of Physics; and Feng Zhang, the Patricia and James Poitras ’63 Professor in Neuroscience at MIT, associate professor of brain and cognitive sciences and of biological engineering, and member of the McGovern Institute for Brain Research and the Broad Institute.

The group was among 84 new members and 21 new foreign associates elected to the NAS. Membership in the NAS is one of the most significant honors given to academic researchers.

Amy Finkelstein

Finkelstein is the co-scientific director of J-PAL North America, the co-director of the Public Economics Program at the National Bureau of Economic Research, a member of the Institute of Medicine and the American Academy of Arts and Sciences, and a fellow of the Econometric Society.

She has received numerous awards and fellowships including the John Bates Clark Medal (2012), the American Society of Health Economists’ ASHEcon Medal (2014), a Presidential Early Career Award for Scientists and Engineers (2009), the American Economic Association’s Elaine Bennett Research Prize (2008) and a Sloan Research Fellowship (2007). She has also received awards for graduate student teaching (2012) and graduate student advising (2010) at MIT.

She is one of the two principal investigators for the Oregon Health Insurance Experiment, a randomized evaluation of the impact of extending Medicaid coverage to low-income, uninsured adults.

Mehran Kardar

Kardar obtained a BA from Cambridge University in 1979 and a PhD in physics from MIT in 1983. He was a junior fellow of the Harvard Society of Fellows from 1983 to 1986 before returning to MIT as an assistant professor, and was promoted to full professor in 1996. He has been a visiting professor at a number of institutions including Catholic University in Belgium, Oxford University, the University of California at Santa Barbara, the University of California at Berkeley, and Ecole Normale Superieure in Paris.

His expertise is in statistical physics, and he has lectured extensively on this topic at MIT and in workshops at universities and institutes in France, the U.K., Switzerland, and Finland. He is the author of two books based on these lectures. In 2018 he was recognized by the American Association of Physics Teachers with the John David Jackson Excellence in Graduate Physics Education Award.

Kardar is a member of the founding board of the New England Complex Systems Institute and of the editorial board of the Journal of Statistical Physics, and has helped organize Gordon Research Conferences and KITP workshops. His awards include the Bergmann Memorial Research Award, an Alfred P. Sloan Fellowship, a Presidential Young Investigator Award, MIT’s Edgerton Award for junior faculty achievement, and a Guggenheim Fellowship. He is a fellow of the American Physical Society and the American Academy of Arts and Sciences.

Xiao-Gang Wen

Wen received a BS in physics from the University of Science and Technology of China in 1982 and a PhD in physics from Princeton University in 1987.

He studied superstring theory under theoretical physicist Edward Witten at Princeton University and later switched to condensed matter physics while working with theoretical physicists Robert Schrieffer, Frank Wilczek, and Anthony Zee at the Institute for Theoretical Physics at the University of California at Santa Barbara (1987–1989). He became a five-year member of the Institute for Advanced Study in Princeton, New Jersey, in 1989 and joined MIT in 1991. Wen is the Cecil and Ida Green Professor of Physics at MIT, a Distinguished Moore Scholar at Caltech, and a Distinguished Research Chair at the Perimeter Institute. In 2017 he received the Oliver E. Buckley Condensed Matter Physics Prize of the American Physical Society.

Wen’s main research area is condensed matter theory. His interests include strongly correlated electronic systems, topological order and quantum order, high-temperature superconductors, the origin and unification of elementary particles, and the quantum Hall effect and non-Abelian statistics.

Feng Zhang

Zhang is a bioengineer focused on developing tools to better understand nervous system function and disease. His lab applies these novel tools to interrogate gene function and study neuropsychiatric disorders in animal and stem cell models. Since joining MIT and the Broad Institute in January 2011, Zhang has pioneered the development of genome editing tools for use in eukaryotic cells — including human cells — from natural microbial CRISPR systems. He also developed a breakthrough technology called optogenetics with Karl Deisseroth at Stanford University and Edward Boyden, now of MIT.

Zhang was awarded tenure in 2016. He received his BA in chemistry and physics from Harvard College and his PhD in chemistry from Stanford University. Zhang’s awards include the Perl-UNC Neuroscience Prize (2012, shared with Karl Deisseroth and Ed Boyden), the National Institutes of Health Director’s Pioneer Award (2012), the National Science Foundation’s Alan T. Waterman Award (2014), the Jacob Heskel Gabbay Award in Biotechnology and Medicine (2014, shared with Jennifer Doudna and Emmanuelle Charpentier), the Society for Neuroscience Young Investigator Award (2014), the Okazaki Award, the Canada Gairdner International Award (shared with Doudna and Charpentier, along with Philippe Horvath and Rodolphe Barrangou), and the 2016 Tang Prize (shared with Doudna and Charpentier).

Zhang is a founder of Editas Medicine, a genome editing company founded by world leaders in the fields of genome editing, protein engineering, and molecular and structural biology.

Calcium-based MRI sensor enables more sensitive brain imaging

MIT neuroscientists have developed a new magnetic resonance imaging (MRI) sensor that allows them to monitor neural activity deep within the brain by tracking calcium ions.

Because calcium ions are directly linked to neuronal firing — unlike the changes in blood flow detected by other types of MRI, which provide an indirect signal — this new type of sensing could allow researchers to link specific brain functions to their pattern of neuron activity, and to determine how distant brain regions communicate with each other during particular tasks.

“Concentrations of calcium ions are closely correlated with signaling events in the nervous system,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering, an associate member of MIT’s McGovern Institute for Brain Research, and the senior author of the study. “We designed a probe with a molecular architecture that can sense relatively subtle changes in extracellular calcium that are correlated with neural activity.”

In tests in rats, the researchers showed that their calcium sensor can accurately detect changes in neural activity induced by chemical or electrical stimulation, deep within a part of the brain called the striatum.

MIT research associates Satoshi Okada and Benjamin Bartelle are the lead authors of the study, which appears in the April 30 issue of Nature Nanotechnology. Other authors include Mriganka Sur, professor of brain and cognitive sciences and member of the Picower Institute for Learning and Memory; research associate Nan Li; postdoc Vincent Breton-Provencher; former postdoc Elisenda Rodriguez; Wellesley College undergraduate Jiyoung Lee; and high school student James Melican.

Tracking calcium

A mainstay of neuroscience research, MRI allows scientists to identify parts of the brain that are active during particular tasks. The most commonly used type, known as functional MRI, measures blood flow in the brain as an indirect marker of neural activity. Jasanoff and his colleagues wanted to devise a way to map patterns of neural activity with specificity and resolution that blood-flow-based MRI techniques can’t achieve.

“Methods that are able to map brain activity in deep tissue rely on changes in blood flow, and those are coupled to neural activity through many different physiological pathways,” Jasanoff says. “As a result, the signal you see in the end is often difficult to attribute to any particular underlying cause.”

Calcium ion flow, on the other hand, can be directly linked with neuron activity. When a neuron fires an electrical impulse, calcium ions rush into the cell. For about a decade, neuroscientists have been using fluorescent molecules to label calcium in the brain and image it with traditional microscopy. This technique allows them to precisely track neuron activity, but its use is limited to small areas of the brain.

The MIT team set out to find a way to image calcium using MRI, which enables much larger tissue volumes to be analyzed. To do that, they designed a new sensor that can detect subtle changes in calcium concentrations outside of cells and respond in a way that can be detected with MRI.

The new sensor consists of two types of particles that cluster together in the presence of calcium. One is a naturally occurring calcium-binding protein called synaptotagmin, and the other is a magnetic iron oxide nanoparticle coated in a lipid that can also bind to synaptotagmin, but only when calcium is present.

Calcium binding induces these particles to clump together, making them appear darker in an MRI image. High levels of calcium outside the neurons correlate with low neuron activity; when calcium concentrations drop, it means neurons in that area are firing electrical impulses.

Detecting brain activity

To test the sensors, the researchers injected them into the striatum of rats, a region that is involved in planning movement and learning new behaviors. They then gave the rats a chemical stimulus that induces short bouts of neural activity, and found that the calcium sensor reflected this activity.

They also found that the sensor picked up activity induced by electrical stimulation in a part of the brain involved in reward.

This approach provides a novel way to examine brain function, says Xin Yu, a research group leader at the Max Planck Institute for Biological Cybernetics in Tuebingen, Germany, who was not involved in the research.

“Although we have accumulated sufficient knowledge on intracellular calcium signaling in the past half-century, it has seldom been studied exactly how the dynamic changes in extracellular calcium contribute to brain function, or serve as an indicator of brain function,” Yu says. “When we are deciphering such a complicated and self-adapted system like the brain, every piece of information matters.”

The current version of the sensor responds within a few seconds of the initial brain stimulation, but the researchers are working on speeding that up. They are also trying to modify the sensor so that it can spread throughout a larger region of the brain and pass through the blood-brain barrier, which would make it possible to deliver the particles without injecting them directly into the test site.

With this kind of sensor, Jasanoff hopes to map patterns of neural activity with greater precision than is now possible. “You could imagine measuring calcium activity in different parts of the brain and trying to determine, for instance, how different types of sensory stimuli are encoded in different ways by the spatial pattern of neural activity that they induce,” he says.

The research was funded by the National Institutes of Health and the MIT Simons Center for the Social Brain.

Nancy Kanwisher receives 2018 Heineken Prize

Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT, has been named a recipient of the 2018 Heineken Prize — the Netherlands’ most prestigious scientific prize — for her work on the functional organization of the human brain.

Kanwisher, who is a professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research, uses neuroimaging to study the functional organization of the human brain. Over the last 20 years her lab has played a central role in the identification of regions of the human brain that are engaged in particular components of perception and cognition. Many of these regions are very specifically engaged in a single mental function such as perceiving faces, places, bodies, or words, or understanding the meanings of sentences or the mental states of others. These regions form a “neural portrait of the human mind,” according to Kanwisher, who has assembled dozens of videos for the general public on her website, NancysBrainTalks.

“Nancy Kanwisher is an exceptionally innovative and influential researcher in cognitive neuropsychology and the neurosciences,” according to the Royal Netherlands Academy of Arts and Sciences, the organization that selects the prizewinners. “She is being recognized with the 2018 C.L. de Carvalho-Heineken Prize for Cognitive Science for her highly original, meticulous and cogent research on the functional organization of the human brain.”

Kanwisher is among five international scientists who have been recognized by the academy with the biennial award. The other four winners are biomedical scientist Peter Carmeliet of the University of Leuven, biologist Paul Hebert of the University of Guelph, historian John R. McNeill of Georgetown University, and biophysicist Xiaowei Zhuang of Harvard University.

The Heineken Prizes, each worth $200,000, are named after Henry P. Heineken (1886-1971), Alfred H. Heineken (1923-2002), and Charlene de Carvalho-Heineken (b. 1954), chair of the Dr H.P. Heineken Foundation and the Alfred Heineken Fondsen Foundation, which fund the prizes. The laureates are selected by juries assembled by the academy and made up of leading Dutch and foreign scientists and scholars.

The Heineken Prizes will be presented at an award ceremony on Sept. 27 in Amsterdam.

Eight from MIT elected to American Academy of Arts and Sciences for 2018

Eight MIT faculty members are among 213 leaders from academia, business, public affairs, the humanities, and the arts elected to the American Academy of Arts and Sciences, the academy announced today.

One of the nation’s most prestigious honorary societies, the academy is also a leading center for independent policy research. Members contribute to academy publications, as well as studies of science and technology policy, energy and global security, social policy and American institutions, the humanities and culture, and education.

Those elected from MIT this year are:

  • Alexei Borodin, professor of mathematics;
  • Gang Chen, the Carl Richard Soderberg Professor of Power Engineering and head of the Department of Mechanical Engineering;
  • Larry D. Guth, professor of mathematics;
  • Parag A. Pathak, the Jane Berkowitz Carlton and Dennis William Carlton Professor of Microeconomics;
  • Nancy L. Rose, the Charles P. Kindleberger Professor of Applied Economics and head of the Department of Economics;
  • Leigh H. Royden, professor of earth, atmospheric, and planetary sciences;
  • Sara Seager, the Class of 1941 Professor in the Department of Earth, Atmospheric and Planetary Sciences with a joint appointment in the Department of Physics; and
  • Feng Zhang, the James and Patricia Poitras Professor of Neuroscience within the departments of Brain and Cognitive Sciences and Biological Engineering, and an investigator at the McGovern Institute for Brain Research at MIT.

“This class of 2018 is a testament to the academy’s ability to both uphold our 238-year commitment to honor exceptional individuals and to recognize new expertise,” said Nancy C. Andrews, chair of the board of the American Academy.

“Membership in the academy is not only an honor, but also an opportunity and a responsibility,” added Jonathan Fanton, president of the American Academy. “Members can be inspired and engaged by connecting with one another and through academy projects dedicated to the common good. The intellect, creativity, and commitment of the 2018 class will enrich the work of the academy and the world in which we live.”

The new class will be inducted at a ceremony in October in Cambridge, Massachusetts.

Since its founding in 1780, the academy has elected leading “thinkers and doers” from each generation, including George Washington and Benjamin Franklin in the 18th century, Maria Mitchell and Daniel Webster in the 19th century, and Toni Morrison and Albert Einstein in the 20th century. The current membership includes more than 200 Nobel laureates and 100 Pulitzer Prize winners.

Engineering intelligence

Go is an ancient board game that demands not only strategy and logic, but intuition, creativity, and subtlety—in other words, it’s a game of quintessentially human abilities. Or so it seemed, until Google’s DeepMind AI program, AlphaGo, roundly defeated the world’s top Go champion.

But ask it to read social cues or interpret what another person is thinking and it wouldn’t know where to start. It wouldn’t even understand that it didn’t know where to start. Outside of its game-playing milieu, AlphaGo is as smart as a rock.

“The problem of intelligence is the greatest problem in science,” says Tomaso Poggio, Eugene McDermott Professor of Brain and Cognitive Sciences at the McGovern Institute. One reason why? We still don’t really understand intelligence in ourselves.

Right now, most advanced AI development is led by industry giants like Facebook, Google, Tesla, and Apple, with an emphasis on engineering and computation and comparatively little attention to how intelligence works in humans. That approach has yielded enormous breakthroughs, including Siri and Alexa, ever-better autonomous cars, and AlphaGo.

But as Poggio points out, the algorithms behind most of these incredible technologies come right out of past neuroscience research–deep learning networks and reinforcement learning. “So it’s a good bet,” Poggio says, “that one of the next breakthroughs will also come from neuroscience.”

Five years ago, Poggio and a host of researchers at MIT and beyond took that bet when they applied for and won a $25 million Science and Technology Center award from the National Science Foundation to form the Center for Brains, Minds and Machines (CBMM). The goal of the center was to take those computational approaches and blend them with basic, curiosity-driven research in neuroscience and cognition. They would knock down the divisions that traditionally separated these fields, aiming not only to unlock the secrets of human intelligence and develop smarter AIs, but to found an entirely new field: the science and engineering of intelligence.

A collaborative foundation

CBMM is a sprawling research initiative headquartered at the McGovern Institute, encompassing faculty at Harvard, Johns Hopkins, Rockefeller and Stanford; over a dozen industry collaborators including Siemens, Google, Toyota, Microsoft, Schlumberger and IBM; and partner institutions such as Howard University, Wellesley College and the University of Puerto Rico. The effort has already churned out 397 publications and has just been renewed for five more years and another $25 million.

For the first few years, collaboration in such a complex center posed a challenge. Research efforts were still divided into traditional silos—one research thrust for cognitive science, another for computation, and so on. But as the center grew, colleagues found themselves talking more and a new common language emerged. Immersed in each other’s research, the divisions began to fade.

“It became more than just a center in name,” says Matthew Wilson, associate director of CBMM and the Sherman Fairchild Professor of Neuroscience at MIT’s Department of Brain and Cognitive Sciences (BCS). “It really was trying to drive a new way of thinking about research and motivating intellectual curiosity that was motivated by this shared vision that all the participants had.”

New questioning

Today, the center is structured around four interconnected modules centered on the problem of visual intelligence—vision, because it is the best understood and most easily traced of our senses. The first module, co-directed by Poggio himself, unravels the visual operations that begin within the first few milliseconds of visual recognition, as information travels through the eye to the visual cortex. Gabriel Kreiman, who studies visual comprehension at Harvard Medical School and Boston Children’s Hospital, leads the second module, which takes on the subsequent events: how the brain directs the eye where to go next, determines what it is seeing and what to pay attention to, and then integrates this information into the holistic picture of the world that we experience. His research questions have grown as a result of CBMM’s cross-disciplinary influence.

Leyla Isik, a postdoc in Kreiman’s lab, is now tackling one of his new research initiatives: social intelligence. “So much of what we do and see as humans are social interactions between people. But even the best machines have trouble with it,” she explains.

To reveal the underlying computations of social intelligence, Isik is using data gathered from epilepsy patients as they watch full-length movies. (Certain epileptics spend several weeks before surgery with monitoring electrodes in their brains, providing a rare opportunity for scientists to see inside the brain of a living, thinking human). Isik hopes to be able to pick out reliable patterns in their neural activity that indicate when the patient is processing certain social cues such as faces. “It’s a pretty big challenge, so to start out we’ve tried to simplify the problem a little bit and just look at basic social visual phenomenon,” she explains.

In true CBMM spirit, Isik is co-advised by another McGovern investigator, Nancy Kanwisher, who helps lead CBMM’s third module with Josh Tenenbaum, BCS Professor of Computational Cognitive Science. That module picks up where the second leaves off, asking still deeper questions about how the brain understands complex scenes, and how infants and children develop the ability to piece together the physics and psychology of new events. In Kanwisher’s lab, instead of a stimulus-heavy movie, Isik shows simple stick figures to subjects in an MRI scanner. She’s looking for specific regions of the brain that engage only when the subjects view “social interactions” between the figures. “I like the approach of tackling this problem both from very controlled experiments as well as something that’s much more naturalistic in terms of what people and machines would see,” Isik explains.

Built-in teamwork

Such complementary approaches are the norm at CBMM. Postdocs and graduate students are required to have at least two advisors in two different labs. The NSF money is even assigned directly to postdoc and graduate student projects. This ensures that collaborations are baked into the center, Wilson explains. “If the idea is to create a new field in the science of intelligence, you can’t continue to support work the way it was done in the old fields—you have to create a new model.”

In other labs, students and postdocs blend imaging with cognitive science to understand how the brain represents physics—like the mass of an object it sees. Or they’re combining human, primate, mouse and computational experiments to better understand how the living brain represents new objects it encounters, and then building algorithms to test the resulting theories.

Boris Katz’s lab is in the fourth and final module, which focuses on figuring out how the brain’s visual intelligence ties into higher-level thinking, like goal planning, language, and abstract concepts. One project, led by MIT research scientist Andrei Barbu and Yen-Ling Kuo, in collaboration with Harvard cognitive scientist Liz Spelke, is attempting to uncover how humans and machines devise plans to navigate around complex and dangerous environments.

“CBMM gives us the opportunity to close the loop between machine learning, cognitive science, and neuroscience,” says Barbu. “The cognitive science informs better machine learning, which helps us understand how humans behave and that in turn points the way toward understanding the structure of the brain. All of this feeds back into creating more capable machines.”

A new field

Every summer, CBMM heads down to Woods Hole, Massachusetts, to deliver an intensive crash course on the science of intelligence to graduate students from across the country. It’s one of many education initiatives designed to spread CBMM’s approach and key to the goal of establishing a new field. The students who come to learn from these courses often find it as transformative as the CBMM faculty did when the center began.

Candace Ross was an undergraduate at Howard University when she got her first taste of CBMM at a summer course with Kreiman, trying to model human memory in machine learning algorithms. “It was the best summer of my life,” she says. “There were so many concepts I didn’t know about and didn’t understand. We’d get back to the dorm at night and just sit around talking about science.”

Ross loved it so much that she spent a second summer at CBMM, and is now a third-year graduate student working with Katz and Barbu, teaching computers how to use vision and language to learn more like children. She’s since gone back to the summer programs, now as a teaching assistant. “CBMM is a research center,” says Ellen Hildreth, a computer scientist at Wellesley College who coordinates CBMM’s education programs. “But it also fosters a strong commitment to education, and that effort is helping to create a community of researchers around this new field.”

Quest for intelligence

CBMM has far to go in its mission to understand the mind, but there is good reason to believe that what CBMM started will continue well beyond the NSF-funded ten years.

This February, MIT announced a new institute-wide initiative called the MIT Intelligence Quest, or MIT IQ. It’s a massive interdisciplinary push to study human intelligence and create new tools based on that knowledge. It is also, says McGovern Institute Director Robert Desimone, a sign of the institute’s faith in what CBMM itself has so far accomplished. “The fact that MIT has made this big commitment in this area is an endorsement of the kind of view we’ve been promoting through CBMM,” he says.

MIT IQ consists of two linked entities: “The Core” and “The Bridge.” CBMM is part of the Core, which will advance the science and engineering of both human and machine intelligence. “This combination is unique to MIT,” explains Poggio, “and is designed to win not only Turing but also Nobel prizes.”

And more than that, points out BCS Department Head Jim DiCarlo, it’s also a return to CBMM’s very first mission. Before CBMM began, Poggio and a few other MIT scientists had tested the waters with a small, Institute-funded collaboration called the Intelligence Initiative (I^2), which welcomed all types of intelligence research, even business and organizational intelligence. MIT IQ re-opens that broader door. “In practice, we want to build a bigger tent now around the science of intelligence,” DiCarlo says.

For his part, Poggio finds the name particularly apt. “Because it is going to be a long-term quest,” he says. “Remember, if I’m right, this is the greatest problem in science. Understanding the mind is understanding the very tool we use to try to solve every other problem.”

School of Science announces Infinite Mile Awards for 2018

The MIT School of Science has announced seven winners of the Infinite Mile Award for 2018. The award will be presented at a luncheon this May in recognition of staff members whose accomplishments and contributions to their departments, laboratories, and centers far exceed expectations.

The 2018 Infinite Mile Award winners are:

Hristina Dineva, Department of Biology;

Theresa Cummings, Department of Mathematics;

Mary Gallagher, Department of Biology;

Jack McGlashing, Laboratory for Nuclear Science;

Sydney Miller, Department of Physics;

Miroslava Parsons, Department of Earth, Atmospheric and Planetary Sciences; and

Alexandra Sokhina, Simons Center for the Social Brain.

The awards luncheon will also honor winners of last fall’s Infinite Kilometer Award, which was established to highlight and reward the extraordinary — but often underrecognized — work of the school’s research staff and postdoctoral researchers.

The 2017 Infinite Kilometer winners are:

Rodrigo Garcia, McGovern Institute for Brain Research;

Lydia Herzel, Department of Biology;

Yutaro Iiyama, Laboratory for Nuclear Science;

Kendrick Jones, Picower Institute for Learning and Memory;

Matthew Musgrave, Laboratory for Nuclear Science;

Cody Siciliano, Picower Institute for Learning and Memory;

Peter Sudmant, Department of Biology; and

Ashley Watson, Picower Institute for Learning and Memory.

The quest to understand intelligence

McGovern investigators study intelligence to answer a practical question for both educators and computer scientists: Can intelligence be improved?

A nine-year-old girl, a contestant on a game show, is standing on stage. On a screen in front of her, there appears a twelve-digit number followed by a six-digit number. Her challenge is to divide the two numbers as fast as possible.

The timer begins. She is racing against three other contestants, two from China and one, like her, from Japan. Whoever answers first wins, but only if the answer is correct.

The show, called “The Brain,” is wildly popular in China, and attracts players who display their memory and concentration skills much the way American athletes demonstrate their physical skills in shows like “American Ninja Warrior.” After a few seconds, the girl slams the timer and gives the correct answer, faster than most people could have entered the numbers on a calculator.

The camera pans to a team of expert judges, including McGovern Director Robert Desimone, who had arrived in Nanjing just a few hours earlier. Desimone shakes his head in disbelief. The task appears to make extraordinary demands on working memory and rapid processing, but the girl explains that she solves it by visualizing an abacus in her mind—something she has practiced intensively.

The show raises an age-old question: What is intelligence, exactly?

The study of intelligence has a long and sometimes contentious history, but recently, neuroscientists have begun to dissect intelligence to understand the neural roots of the distinct cognitive skills that contribute to it. One key question is whether these skills can be improved individually with training and, if so, whether those improvements translate into overall intelligence gains. This research has practical implications for multiple domains, from brain science to education to artificial intelligence.

“The problem of intelligence is one of the great problems in science,” says Tomaso Poggio, a McGovern investigator and an expert on machine learning. “If we make progress in understanding intelligence, and if that helps us make progress in making ourselves smarter or in making machines that help us think better, we can solve all other problems more easily.”

Brain training 101

Many studies have reported positive results from brain training, and there is now a thriving industry devoted to selling tools and games such as Lumosity and BrainHQ. Yet the science behind brain training to improve intelligence remains controversial.

A case in point is the “n-back” working memory task, in which subjects are presented with a rapid sequence of letters or visual patterns and must report whether the current item matches the one shown n items earlier (one back, two back, and so on). The field of brain training received a boost in 2008 when a widely discussed study claimed that a few weeks of training on a challenging version of this task could raise fluid intelligence, the ability to solve novel problems. The report generated excitement and optimism when it first appeared, but several subsequent attempts to reproduce the findings have been unsuccessful.
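The matching rule at the heart of the n-back task is simple to state precisely. As a minimal sketch (the function name and the letter stream below are illustrative, not taken from the studies described here), a 2-back "hit" occurs whenever the current item equals the one shown two steps earlier:

```python
def n_back_hits(stream, n):
    """Indices at which the current item matches the item n steps earlier."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

# A short 2-back run over an illustrative letter sequence:
letters = list("ABABCAAC")
print(n_back_hits(letters, 2))  # positions where the letter repeats from two steps back
```

In the actual experiments the challenge is perceptual and mnemonic rather than computational: items appear rapidly and the subject must hold the last n of them in working memory, with difficulty rising steeply as n increases.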

Among those unable to confirm the result was McGovern Investigator John Gabrieli, who recruited 60 young adults and trained them for forty minutes a day for four weeks on an n-back task similar to that of the original study.

Six months later, Gabrieli re-evaluated the participants. “They got amazingly better at the difficult task they practiced. We have great imaging data showing changes in brain activation as they performed the task from before to after,” says Gabrieli. “And yet, that didn’t help them do better on any other cognitive abilities we could measure, and we measured a lot of things.”

The results don’t completely rule out the value of n-back training, says Gabrieli. It may be more effective in children, or in populations with a lower average intelligence than the individuals (mostly college students) who were recruited for Gabrieli’s study. The prospect that training might help disadvantaged individuals holds strong appeal. “If you could raise the cognitive abilities of a child with autism, or a child who is struggling in school, the data tells us that their life would be a step better,” says Gabrieli. “It’s something you would wish for people, especially for those where something is holding them back from the expression of their other abilities.”

Music for the brain

The concept of early intervention is now being tested by Desimone, who has teamed with Chinese colleagues at the recently established IDG/McGovern Institute at Beijing Normal University to explore the effect of music training on the cognitive abilities of young children.

The researchers recruited 100 children at a neighborhood kindergarten in Beijing, and provided them with a semester-long intervention, randomly assigning children either to music training or (as a control) to additional reading instruction. Unlike the so-called “Mozart Effect,” a scientifically unsubstantiated claim that passive listening to music increases intelligence, the new study requires active learning through daily practice. Several smaller studies have reported cognitive benefits from music training, and Desimone finds the idea plausible given that musical cognition involves several mental functions that are also implicated in intelligence. The study is nearly complete, and results are expected to emerge within a few months. “We’re also collecting data on brain activity, so if we see improvements in the kids who had music training, we’ll also be able to ask about its neural basis,” says Desimone. The results may also have immediate practical implications, since the study design reflects decisions that schools must make in determining how children spend their time. “Many schools are deciding to cut their arts and music programs to make room for more instruction in academic core subjects, so our study is relevant to real questions schools are facing.”

Intelligent classrooms

In another school-based study, Gabrieli’s group recently raised questions about the benefits of “teaching to the test.” In this study, postdoc Amy Finn evaluated over 1,300 eighth-graders in the Boston public schools, some enrolled at traditional schools and others at charter schools that emphasize standardized test score improvements. The researchers wanted to find out whether raised test scores were accompanied by improvement of cognitive skills that are linked to intelligence. (Charter school students are selected by lottery, meaning that any results are unlikely to reflect preexisting differences between the two groups of students.) As expected, charter school students showed larger improvements in test scores, relative to their scores from four years earlier. But when Finn and her colleagues measured key aspects of intelligence, such as working memory, processing speed, and reasoning, they found no difference between the students who enrolled in charter schools and those who did not. “You can look at these skills as the building blocks of cognition. They are useful for reasoning in a novel situation, an ability that is really important for learning,” says Finn. “It’s surprising that school practices that increase achievement don’t also increase these building blocks.”

Gabrieli remains optimistic that it will eventually be possible to design scientifically based interventions that can raise children’s abilities. Allyson Mackey, a postdoc in his lab, is studying the use of games to exercise cognitive skills in a classroom setting. As a graduate student at the University of California at Berkeley, Mackey had studied the effects of games such as “Chocolate Fix,” in which players match shapes and flavors, represented by color, to positions in a grid based on hints, such as, “the upper left position is strawberry.”

These games gave children practice at thinking through and solving novel problems, and at the end of Mackey’s study, the students—from second through fourth grades—showed improved measures of skills associated with intelligence. “Our results suggest that these cognitive skills are specifically malleable, although we don’t yet know what the active ingredients were in this program,” says Mackey, who speaks of the interventions as if they were drugs, with dosages, efficacies and potentially synergistic combinations to be explored. Mackey is now working to identify the most promising interventions—those that boost cognitive abilities, work well in the classroom, and are engaging for kids—to try in Boston charter schools. “It’s just the beginning of a three-year process to methodically test interventions to see if they work,” she says.

Brain training…for machines

While Desimone, Gabrieli and their colleagues look for ways to raise human intelligence, Poggio, who directs the MIT-based Center for Brains, Minds and Machines, is trying to endow computers with more human-like intelligence. Computers can already match human performance on some specific tasks such as chess. Programs such as Apple’s “Siri” can mimic human speech interpretation, not perfectly but well enough to be useful. Computer vision programs are approaching human performance at rapid object recognition, and one such system, developed by one of Poggio’s former postdocs, is now being used to assist car drivers. “The last decade has been pretty magical for intelligent computer systems,” says Poggio.

Like children, these intelligent systems learn from past experience. But compared to humans or other animals, machines tend to be very slow learners. For example, the visual system for automobiles was trained by presenting it with millions of images—traffic light, pedestrian, and so on—that had already been labeled by humans. “You would never present so many examples to a child,” says Poggio. “One of our big challenges is to understand how to make algorithms in computers learn with many fewer examples, to make them learn more like children do.”

To accomplish this and other goals of machine intelligence, Poggio suspects that the work being done by Desimone, Gabrieli and others to understand the neural basis of intelligence will be critical. But he is not expecting any single breakthrough that will make everything fall into place. “A century ago,” he says, “scientists pondered the problem of life, as if ‘life’—what we now call biology—were just one problem. The science of intelligence is like biology. It’s a lot of problems, and a lot of breakthroughs will have to come before a machine appears that is as intelligent as we are.”

Ed Boyden receives 2018 Canada Gairdner International Award

Ed Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT, has been named a recipient of the 2018 Canada Gairdner International Award — Canada’s most prestigious scientific prize — for his role in the discovery of light-gated ion channels and optogenetics, a technology to control brain activity with light.

Boyden’s work has given neuroscientists the ability to precisely activate or silence brain cells to see how they contribute to — or possibly alleviate — brain disease. By optogenetically controlling brain cells, it has become possible to understand how specific patterns of brain activity might be used to quiet seizures, cancel out Parkinsonian tremors, and make other improvements to brain health.

Boyden is one of three scientists the Gairdner Foundation is honoring for this work. He shares the prize with Peter Hegemann from Humboldt University of Berlin and Karl Deisseroth from Stanford University.

“I am honored that the Gairdner Foundation has chosen our work in optogenetics for one of the most prestigious biology prizes awarded today,” says Boyden, who is also a member of MIT’s McGovern Institute for Brain Research and an associate professor in the Media Lab, the Department of Brain and Cognitive Sciences, and the Department of Biological Engineering at MIT. “It represents a great collaborative body of work, and I feel excited that my angle of thinking like a physicist was able to contribute to biology.”

Boyden, along with fellow laureate Karl Deisseroth, brainstormed about how microbial opsins could be used to mediate optical control of neural activity while both were students in 2000. Together, they collaborated to demonstrate the first optical control of neural activity using microbial opsins in the summer of 2004, when Boyden was at Stanford. At MIT, Boyden’s team developed the first optogenetic silencing (2007), the first effective optogenetic silencing in live mammals (2010), noninvasive optogenetic silencing (2014), multicolor optogenetic control (2014), and temporally precise single-cell optogenetic control (2017).

In addition to his work with optogenetics, Boyden has pioneered the development of many transformative technologies that image, record, and manipulate complex systems, including expansion microscopy and robotic patch clamping. He has received numerous awards for this work, including the Breakthrough Prize in Life Sciences (2016), the BBVA Foundation Frontiers of Knowledge Award (2015), the Carnegie Prize in Mind and Body Sciences (2015), the Grete Lundbeck European Brain Prize (2013), and the Perl-UNC Neuroscience prize (2011). Boyden is an elected member of the American Academy of Arts and Sciences and the National Academy of Inventors.

“We are thrilled Ed has been recognized with the prestigious Gairdner Award for his work in developing optogenetics,” says Robert Desimone, director of the McGovern Institute. “Ed’s body of work has transformed neuroscience and biomedicine, and I am exceedingly proud of the contributions he has made to MIT and to the greater community of scientists worldwide.”

The Canada Gairdner International Awards, created in 1959, are given annually to recognize and reward the achievements of medical researchers whose work contributes significantly to the understanding of human biology and disease. The awards provide a $100,000 (CDN) prize to each scientist for their work. Each year, the five honorees of the International Awards are selected after a rigorous two-part review, with the winners voted by secret ballot by a medical advisory board composed of 33 eminent scientists from around the world.