National Academy of Sciences elects four MIT professors for 2018

Four MIT faculty members have been elected to the National Academy of Sciences (NAS) in recognition of their “distinguished and continuing achievements in original research.”

MIT’s four new NAS members are: Amy Finkelstein, the John and Jennie S. MacDonald Professor of Economics; Mehran Kardar, the Francis Friedman Professor of Physics; Xiao-Gang Wen, the Cecil and Ida Green Professor of Physics; and Feng Zhang, the Patricia and James Poitras ’63 Professor in Neuroscience at MIT, associate professor of brain and cognitive sciences and of biological engineering, and member of the McGovern Institute for Brain Research and the Broad Institute.

The group was among 84 new members and 21 new foreign associates elected to the NAS. Membership in the NAS is one of the most significant honors given to academic researchers.

Amy Finkelstein

Finkelstein is the co-scientific director of J-PAL North America, the co-director of the Public Economics Program at the National Bureau of Economic Research, a member of the Institute of Medicine and the American Academy of Arts and Sciences, and a fellow of the Econometric Society.

She has received numerous awards and fellowships, including the John Bates Clark Medal (2012), the American Society of Health Economists’ ASHEcon Medal (2014), a Presidential Early Career Award for Scientists and Engineers (2009), the American Economic Association’s Elaine Bennett Research Prize (2008), and a Sloan Research Fellowship (2007). She has also received awards for graduate student teaching (2012) and graduate student advising (2010) at MIT.

She is one of the two principal investigators for the Oregon Health Insurance Experiment, a randomized evaluation of the impact of extending Medicaid coverage to low-income, uninsured adults.

Mehran Kardar

Kardar obtained a BA from Cambridge University in 1979 and a PhD in physics from MIT in 1983. He was a junior fellow of the Harvard Society of Fellows from 1983 to 1986 before returning to MIT as an assistant professor, and was promoted to full professor in 1996. He has been a visiting professor at a number of institutions, including Catholic University in Belgium, Oxford University, the University of California at Santa Barbara, the University of California at Berkeley, and the École Normale Supérieure in Paris.

His expertise is in statistical physics, and he has lectured extensively on this topic at MIT and in workshops at universities and institutes in France, the U.K., Switzerland, and Finland. He is the author of two books based on these lectures. In 2018 he was recognized by the American Association of Physics Teachers with the John David Jackson Excellence in Graduate Physics Education Award.

Kardar is a member of the founding board of the New England Complex Systems Institute and the editorial board of the Journal of Statistical Physics, and has helped organize Gordon Research Conferences and KITP workshops. His awards include the Bergmann Memorial Research Award, the A. P. Sloan Fellowship, the Presidential Young Investigator Award, MIT’s Edgerton Award for junior faculty achievement, and the Guggenheim Fellowship. He is a fellow of the American Physical Society and the American Academy of Arts and Sciences.

Xiao-Gang Wen

Wen received a BS in physics from the University of Science and Technology of China in 1982 and a PhD in physics from Princeton University in 1987.

He studied superstring theory under theoretical physicist Edward Witten at Princeton University and later switched his research field to condensed matter physics while working with theoretical physicists Robert Schrieffer, Frank Wilczek, and Anthony Zee at the Institute for Theoretical Physics at the University of California at Santa Barbara (1987–1989). He became a five-year member of the Institute for Advanced Study in Princeton, New Jersey, in 1989 and joined MIT in 1991. Wen is the Cecil and Ida Green Professor of Physics at MIT, a Distinguished Moore Scholar at Caltech, and a Distinguished Research Chair at the Perimeter Institute. In 2017 he received the Oliver E. Buckley Condensed Matter Physics Prize of the American Physical Society.

Wen’s main research area is condensed matter theory. His interests include strongly correlated electronic systems, topological order and quantum order, high-temperature superconductors, the origin and unification of elementary particles, and the quantum Hall effect and non-Abelian statistics.

Feng Zhang

Zhang is a bioengineer focused on developing tools to better understand nervous system function and disease. His lab applies these novel tools to interrogate gene function and study neuropsychiatric disorders in animal and stem cell models. Since joining MIT and the Broad Institute in January 2011, Zhang has pioneered the development of genome editing tools for use in eukaryotic cells — including human cells — from natural microbial CRISPR systems. He also developed a breakthrough technology called optogenetics with Karl Deisseroth at Stanford University and Edward Boyden, now of MIT.

Zhang joined MIT and the Broad Institute in 2011 and was awarded tenure in 2016. He received his BA in chemistry and physics from Harvard College and his PhD in chemistry from Stanford University. Zhang’s awards include the Perl/UNC Prize in Neuroscience (2012, shared with Karl Deisseroth and Ed Boyden), the National Institutes of Health Director’s Pioneer Award (2012), the National Science Foundation’s Alan T. Waterman Award (2014), the Jacob Heskel Gabbay Award in Biotechnology and Medicine (2014, shared with Jennifer Doudna and Emmanuelle Charpentier), the Society for Neuroscience Young Investigator Award (2014), the Okazaki Award, the Canada Gairdner International Award (shared with Doudna and Charpentier, along with Philippe Horvath and Rodolphe Barrangou), and the 2016 Tang Prize (shared with Doudna and Charpentier).

Zhang is a founder of Editas Medicine, a genome editing company founded by world leaders in the fields of genome editing, protein engineering, and molecular and structural biology.

Calcium-based MRI sensor enables more sensitive brain imaging

MIT neuroscientists have developed a new magnetic resonance imaging (MRI) sensor that allows them to monitor neural activity deep within the brain by tracking calcium ions.

Because calcium ions are directly linked to neuronal firing — unlike the changes in blood flow detected by other types of MRI, which provide an indirect signal — this new type of sensing could allow researchers to link specific brain functions to their pattern of neuron activity, and to determine how distant brain regions communicate with each other during particular tasks.

“Concentrations of calcium ions are closely correlated with signaling events in the nervous system,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering, an associate member of MIT’s McGovern Institute for Brain Research, and the senior author of the study. “We designed a probe with a molecular architecture that can sense relatively subtle changes in extracellular calcium that are correlated with neural activity.”

In tests in rats, the researchers showed that their calcium sensor can accurately detect changes in neural activity induced by chemical or electrical stimulation, deep within a part of the brain called the striatum.

MIT research associates Satoshi Okada and Benjamin Bartelle are the lead authors of the study, which appears in the April 30 issue of Nature Nanotechnology. Other authors include Mriganka Sur, a professor of brain and cognitive sciences and member of the Picower Institute for Learning and Memory; research associate Nan Li; postdoc Vincent Breton-Provencher; former postdoc Elisenda Rodriguez; Wellesley College undergraduate Jiyoung Lee; and high school student James Melican.

Tracking calcium

A mainstay of neuroscience research, MRI allows scientists to identify parts of the brain that are active during particular tasks. The most commonly used type, known as functional MRI, measures blood flow in the brain as an indirect marker of neural activity. Jasanoff and his colleagues wanted to devise a way to map patterns of neural activity with specificity and resolution that blood-flow-based MRI techniques can’t achieve.

“Methods that are able to map brain activity in deep tissue rely on changes in blood flow, and those are coupled to neural activity through many different physiological pathways,” Jasanoff says. “As a result, the signal you see in the end is often difficult to attribute to any particular underlying cause.”

Calcium ion flow, on the other hand, can be directly linked with neuron activity. When a neuron fires an electrical impulse, calcium ions rush into the cell. For about a decade, neuroscientists have been using fluorescent molecules to label calcium in the brain and image it with traditional microscopy. This technique allows them to precisely track neuron activity, but its use is limited to small areas of the brain.

The MIT team set out to find a way to image calcium using MRI, which enables much larger tissue volumes to be analyzed. To do that, they designed a new sensor that can detect subtle changes in calcium concentrations outside of cells and respond in a way that can be detected with MRI.

The new sensor consists of two types of particles that cluster together in the presence of calcium. One is a naturally occurring calcium-binding protein called synaptotagmin, and the other is a magnetic iron oxide nanoparticle coated in a lipid that can also bind to synaptotagmin, but only when calcium is present.

Calcium binding induces these particles to clump together, making them appear darker in an MRI image. High levels of calcium outside the neurons correlate with low neuron activity; when calcium concentrations drop, it means neurons in that area are firing electrical impulses.

Detecting brain activity

To test the sensors, the researchers injected them into the striatum of rats, a region that is involved in planning movement and learning new behaviors. They then gave the rats a chemical stimulus that induces short bouts of neural activity, and found that the calcium sensor reflected this activity.

They also found that the sensor picked up activity induced by electrical stimulation in a part of the brain involved in reward.

This approach provides a novel way to examine brain function, says Xin Yu, a research group leader at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany, who was not involved in the research.

“Although we have accumulated sufficient knowledge on intracellular calcium signaling in the past half-century, it has seldom been studied exactly how the dynamic changes in extracellular calcium contribute to brain function, or serve as an indicator of brain function,” Yu says. “When we are deciphering such a complicated and self-adapted system like the brain, every piece of information matters.”

The current version of the sensor responds within a few seconds of the initial brain stimulation, but the researchers are working on speeding that up. They are also trying to modify the sensor so that it can spread throughout a larger region of the brain and pass through the blood-brain barrier, which would make it possible to deliver the particles without injecting them directly into the test site.

With this kind of sensor, Jasanoff hopes to map patterns of neural activity with greater precision than is now possible. “You could imagine measuring calcium activity in different parts of the brain and trying to determine, for instance, how different types of sensory stimuli are encoded in different ways by the spatial pattern of neural activity that they induce,” he says.

The research was funded by the National Institutes of Health and the MIT Simons Center for the Social Brain.

Nancy Kanwisher receives 2018 Heineken Prize

Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience at MIT, has been named a recipient of the 2018 Heineken Prize — the Netherlands’ most prestigious scientific prize — for her work on the functional organization of the human brain.

Kanwisher, who is a professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research, uses neuroimaging to study the functional organization of the human brain. Over the last 20 years her lab has played a central role in the identification of regions of the human brain that are engaged in particular components of perception and cognition. Many of these regions are very specifically engaged in a single mental function such as perceiving faces, places, bodies, or words, or understanding the meanings of sentences or the mental states of others. These regions form a “neural portrait of the human mind,” according to Kanwisher, who has assembled dozens of videos for the general public on her website, NancysBrainTalks.

“Nancy Kanwisher is an exceptionally innovative and influential researcher in cognitive neuropsychology and the neurosciences,” according to the Royal Netherlands Academy of Arts and Sciences, the organization that selects the prizewinners. “She is being recognized with the 2018 C.L. de Carvalho-Heineken Prize for Cognitive Science for her highly original, meticulous and cogent research on the functional organization of the human brain.”

Kanwisher is among five international scientists who have been recognized by the academy with the biennial award. The other winners include biomedical scientist Peter Carmeliet of the University of Leuven, biologist Paul Hebert of the University of Guelph, historian John R. McNeill of Georgetown University, and biophysicist Xiaowei Zhuang of Harvard University.

The Heineken Prizes, each worth $200,000, are named after Henry P. Heineken (1886-1971), Alfred H. Heineken (1923-2002), and Charlene de Carvalho-Heineken (b. 1954), chair of the Dr H.P. Heineken Foundation and the Alfred Heineken Fondsen Foundation, which fund the prizes. The laureates are selected by juries assembled by the academy and made up of leading Dutch and foreign scientists and scholars.

The Heineken Prizes will be presented at an award ceremony on Sept. 27 in Amsterdam.

Eight from MIT elected to American Academy of Arts and Sciences for 2018

Eight MIT faculty members are among 213 leaders from academia, business, public affairs, the humanities, and the arts elected to the American Academy of Arts and Sciences, the academy announced today.

One of the nation’s most prestigious honorary societies, the academy is also a leading center for independent policy research. Members contribute to academy publications, as well as studies of science and technology policy, energy and global security, social policy and American institutions, the humanities and culture, and education.

Those elected from MIT this year are:

  • Alexei Borodin, professor of mathematics;
  • Gang Chen, the Carl Richard Soderberg Professor of Power Engineering and head of the Department of Mechanical Engineering;
  • Larry D. Guth, professor of mathematics;
  • Parag A. Pathak, the Jane Berkowitz Carlton and Dennis William Carlton Professor of Microeconomics;
  • Nancy L. Rose, the Charles P. Kindleberger Professor of Applied Economics and head of the Department of Economics;
  • Leigh H. Royden, professor of earth, atmospheric, and planetary sciences;
  • Sara Seager, the Class of 1941 Professor in the Department of Earth, Atmospheric and Planetary Sciences with a joint appointment in the Department of Physics; and
  • Feng Zhang, the James and Patricia Poitras Professor of Neuroscience within the departments of Brain and Cognitive Sciences and Biological Engineering, and an investigator at the McGovern Institute for Brain Research at MIT.

“This class of 2018 is a testament to the academy’s ability to both uphold our 238-year commitment to honor exceptional individuals and to recognize new expertise,” said Nancy C. Andrews, chair of the board of the American Academy.

“Membership in the academy is not only an honor, but also an opportunity and a responsibility,” added Jonathan Fanton, president of the American Academy. “Members can be inspired and engaged by connecting with one another and through academy projects dedicated to the common good. The intellect, creativity, and commitment of the 2018 class will enrich the work of the academy and the world in which we live.”

The new class will be inducted at a ceremony in October in Cambridge, Massachusetts.

Since its founding in 1780, the academy has elected leading “thinkers and doers” from each generation, including George Washington and Benjamin Franklin in the 18th century, Maria Mitchell and Daniel Webster in the 19th century, and Toni Morrison and Albert Einstein in the 20th century. The current membership includes more than 200 Nobel laureates and 100 Pulitzer Prize winners.

School of Science announces Infinite Mile Awards for 2018

The MIT School of Science has announced seven winners of the Infinite Mile Award for 2018. The award will be presented at a luncheon this May in recognition of staff members whose accomplishments and contributions to their departments, laboratories, and centers far exceed expectations.

The 2018 Infinite Mile Award winners are:

Hristina Dineva, Department of Biology;

Theresa Cummings, Department of Mathematics;

Mary Gallagher, Department of Biology;

Jack McGlashing, Laboratory for Nuclear Science;

Sydney Miller, Department of Physics;

Miroslava Parsons, Department of Earth, Atmospheric and Planetary Sciences; and

Alexandra Sokhina, Simons Center for the Social Brain.

The awards luncheon will also honor winners of last fall’s Infinite Kilometer Award, which was established to highlight and reward the extraordinary — but often underrecognized — work of the school’s research staff and postdoctoral researchers.

The 2017 Infinite Kilometer winners are:

Rodrigo Garcia, McGovern Institute for Brain Research;

Lydia Herzel, Department of Biology;

Yutaro Iiyama, Laboratory for Nuclear Science;

Kendrick Jones, Picower Institute for Learning and Memory;

Matthew Musgrave, Laboratory for Nuclear Science;

Cody Siciliano, Picower Institute for Learning and Memory;

Peter Sudmant, Department of Biology; and

Ashley Watson, Picower Institute for Learning and Memory.

Ed Boyden receives 2018 Canada Gairdner International Award

Ed Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT, has been named a recipient of the 2018 Canada Gairdner International Award — Canada’s most prestigious scientific prize — for his role in the discovery of light-gated ion channels and optogenetics, a technology to control brain activity with light.

Boyden’s work has given neuroscientists the ability to precisely activate or silence brain cells to see how they contribute to — or possibly alleviate — brain disease. By optogenetically controlling brain cells, it has become possible to understand how specific patterns of brain activity might be used to quiet seizures, cancel out Parkinsonian tremors, and make other improvements to brain health.

Boyden is one of three scientists the Gairdner Foundation is honoring for this work. He shares the prize with Peter Hegemann from Humboldt University of Berlin and Karl Deisseroth from Stanford University.

“I am honored that the Gairdner Foundation has chosen our work in optogenetics for one of the most prestigious biology prizes awarded today,” says Boyden, who is also a member of MIT’s McGovern Institute for Brain Research and an associate professor in the Media Lab, the Department of Brain and Cognitive Sciences, and the Department of Biological Engineering at MIT. “It represents a great collaborative body of work, and I feel excited that my angle of thinking like a physicist was able to contribute to biology.”

In 2000, while both were still students, Boyden and fellow laureate Karl Deisseroth brainstormed about how microbial opsins could be used to mediate optical control of neural activity. They went on to demonstrate the first optical control of neural activity using microbial opsins in the summer of 2004, when Boyden was at Stanford. At MIT, Boyden’s team developed the first optogenetic silencing (2007), the first effective optogenetic silencing in live mammals (2010), noninvasive optogenetic silencing (2014), multicolor optogenetic control (2014), and temporally precise single-cell optogenetic control (2017).

In addition to his work with optogenetics, Boyden has pioneered the development of many transformative technologies that image, record, and manipulate complex systems, including expansion microscopy and robotic patch clamping. He has received numerous awards for this work, including the Breakthrough Prize in Life Sciences (2016), the BBVA Foundation Frontiers of Knowledge Award (2015), the Carnegie Prize in Mind and Body Sciences (2015), the Grete Lundbeck European Brain Prize (2013), and the Perl-UNC Neuroscience prize (2011). Boyden is an elected member of the American Academy of Arts and Sciences and the National Academy of Inventors.

“We are thrilled Ed has been recognized with the prestigious Gairdner Award for his work in developing optogenetics,” says Robert Desimone, director of the McGovern Institute. “Ed’s body of work has transformed neuroscience and biomedicine, and I am exceedingly proud of the contributions he has made to MIT and to the greater community of scientists worldwide.”

The Canada Gairdner International Awards, created in 1959, are given annually to recognize and reward the achievements of medical researchers whose work contributes significantly to the understanding of human biology and disease. The awards provide a $100,000 (CDN) prize to each scientist for their work. Each year, the five honorees of the International Awards are selected after a rigorous two-part review, with the winners voted by secret ballot by a medical advisory board composed of 33 eminent scientists from around the world.

Study finds early signatures of the social brain

Humans use an ability known as theory of mind every time they make inferences about someone else’s mental state — what the other person believes, what they want, or why they are feeling happy, angry, or scared.

Behavioral studies have suggested that children begin succeeding at a key measure of this ability, known as the false-belief task, around age 4. However, a new study from MIT has found that the brain network that controls theory of mind has already formed in children as young as 3.

The MIT study is the first to use functional magnetic resonance imaging (fMRI) to scan the brains of children as young as age 3 as they perform a task requiring theory of mind — in this case, watching a short animated movie involving social interactions between two characters.

“The brain regions involved in theory-of-mind reasoning are behaving like a cohesive network, with similar responses to the movie, by age 3, which is before kids tend to pass explicit false-belief tasks,” says Hilary Richardson, an MIT graduate student and the lead author of the study.

Rebecca Saxe, an MIT professor of brain and cognitive sciences and an associate member of MIT’s McGovern Institute for Brain Research, is the senior author of the paper, which appears in the March 12 issue of Nature Communications. Other authors are Indiana University graduate student Grace Lisandrelli and Wellesley College undergraduate Alexa Riobueno-Naylor.

Thinking about others

In 2003, Saxe first showed that theory of mind is seated in a brain region known as the right temporo-parietal junction (TPJ). The TPJ coordinates with other regions, including several parts of the prefrontal cortex, to form a network that is active when people think about the mental states of others.

The most commonly used test of theory of mind is the false-belief test, which probes whether the subject understands that other people may have beliefs that are not true. A classic example is the Sally-Anne test, in which a child is asked where Sally will look for a marble that she believes is in her own basket, but that Anne has moved to a different spot while Sally wasn’t looking. To pass, the subject must reply that Sally will look where she thinks the marble is (in her basket), not where it actually is.

Until now, neuroscientists had assumed that theory-of-mind studies involving fMRI brain scans could only be done with children at least 5 years of age, because the children need to be able to lie still in a scanner for about 20 minutes, listen to a series of stories, and answer questions about them.

Richardson wanted to study children younger than that, so that she could delve into what happens in the brain’s theory-of-mind network before the age of 5. To do that, she and Saxe came up with a new experimental protocol, which calls for scanning children while they watch a short movie that includes simple social interactions between two characters.

The animated movie they chose, called “Partly Cloudy,” has a plot that lends itself well to the experiment. It features Gus, a cloud who produces baby animals, and Peck, a stork whose job is to deliver the babies. Gus and Peck have some tense moments in their friendship because Gus produces baby alligators and porcupines, which are difficult to deliver, while other clouds create kittens and puppies. Peck is attacked by some of the fierce baby animals, and he isn’t sure if he wants to keep working for Gus.

“It has events that make you think about the characters’ mental states and events that make you think about their bodily states,” Richardson says.

The researchers spent about four years gathering data from 122 children ranging in age from 3 to 12 years. They scanned the entire brain, focusing on two distinct networks that have been well-characterized in adults: the theory-of-mind network and another network known as the pain matrix, which is active when thinking about another person’s physical state.

They also scanned 33 adults as they watched the movie so that they could identify scenes that provoke responses in either of those two networks. These scenes were dubbed theory-of-mind events and pain events. Scans of children revealed that even in 3-year-olds, the theory-of-mind and pain networks responded preferentially to the same events that the adult brains did.

“We see early signatures of this theory-of-mind network being wired up, so the theory-of-mind brain regions which we studied in adults are already really highly correlated with one another in 3-year-olds,” Richardson says.

The researchers also found that the responses in 3-year-olds were not as strong as in adults but gradually became stronger in the older children they scanned.

Patterns of development

The findings offer support for an existing hypothesis that says children develop theory of mind even before they can pass explicit false-belief tests, and that it continues to develop as they get older. Theory of mind encompasses many abilities, including more difficult skills such as understanding irony and assigning blame, which tend to develop later.

Another hypothesis is that children undergo a fairly sudden development of theory of mind around the age of 4 or 5, reflected by their success in the false-belief test. The MIT data, which do not show any dramatic changes in brain activity when children begin to succeed at the false-belief test, do not support that theory.

“Scientists have focused really intensely on the changes in children’s theory of mind that happen around age 4, when children get a better understanding of how people can have wrong or biased or misinformed beliefs,” Saxe says. “But really important changes in how we think about other minds happen long before, and long after, this famous landmark. Even 2-year-olds try to figure out why different people like different things — this might be why they get so interested in talking about everybody’s favorite colors. And even 9-year-olds are still learning about irony and negligence. Theory of mind seems to undergo a very long continuous developmental process, both in kids’ behaviors and in their brains.”

Now that the researchers have data on the typical trajectory of theory of mind development, they hope to scan the brains of autistic children to see whether there are any differences in how their theory-of-mind networks develop. Saxe’s lab is also studying children whose first exposure to language was delayed, to test the effects of early language on the development of theory of mind.

The research was funded by the National Science Foundation, the National Institutes of Health, and the David and Lucile Packard Foundation.

Study reveals how the brain tracks objects in motion

Catching a bouncing ball or hitting a ball with a racket requires estimating when the ball will arrive. Neuroscientists have long thought that the brain does this by calculating the speed of the moving object. However, a new study from MIT shows that the brain’s approach is more complex.

The new findings suggest that in addition to tracking speed, the brain incorporates information about the rhythmic patterns of an object’s movement: for example, how long it takes a ball to complete one bounce. In their new study, the researchers found that people make much more accurate estimates when they have access to information about both the speed of a moving object and the timing of its rhythmic patterns.

“People get really good at this when they have both types of information available,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences and a member of MIT’s McGovern Institute for Brain Research. “It’s like having input from multiple senses. The statistical knowledge that we have about the world we’re interacting with is richer when we use multiple senses.”

Jazayeri is the senior author of the study, which appears in the Proceedings of the National Academy of Sciences the week of March 5. The paper’s lead author is MIT graduate student Chia-Jung Chang.

Objects in motion

Much of the information we process about objects moving around us comes from visual tracking of the objects. Our brains can use information about an object’s speed and the distance it has to cover to calculate when it will reach a certain point. Jazayeri, who studies how the brain keeps time, was intrigued by the fact that much of the movement we see also has a rhythmic element, such as the bouncing of a ball.

“It occurred to us to ask, how can it be that the brain doesn’t use this information? It would seem very strange if all this richness of additional temporal structure is not part of the way we evaluate where things are around us and how things are going to happen,” Jazayeri says.

There are many other sensory processing tasks for which the brain uses multiple sources of input. For example, to interpret language, we use both the sound we hear and the movement of the speaker’s lips, if we can see them. When we touch an object, we estimate its size based on both what we see and what we feel with our fingers.

In the case of perceiving object motion, teasing out the role of rhythmic timing, as opposed to speed, can be difficult. “I can ask someone to do a task, but then how do I know if they’re using speed or they’re using time, if both of them are always available?” Jazayeri says.

To overcome that, the researchers devised a task in which they could control how much timing information was available. They measured performance in human volunteers as they performed the task.

During the task, the study participants watched a ball as it moved in a straight line. After traveling some distance, the ball went behind an obstacle, so the participants could no longer see it. They were asked to press a button at the time when they expected the ball to reappear.

Performance varied greatly depending on how much of the ball’s path was visible before it went behind the obstacle. If the participants saw the ball travel a very short distance before disappearing, they did not do well. As the distance before disappearance became longer, they were better able to calculate the ball’s speed, so their performance improved but eventually plateaued.

After that plateau, there was a significant jump in performance when the distance before disappearance grew until it was exactly the same as the width of the obstacle. In that case, when the path seen before disappearance was equal to the path the ball traveled behind the obstacle, the participants improved dramatically, because they knew that the time spent behind the obstacle would be the same as the time it took to reach the obstacle.

When the distance traveled to reach the obstacle became longer than the width of the obstacle, performance dropped again.

“It’s so important to have this extra information available, and when we have it, we use it,” Jazayeri says. “Temporal structure is so important that when you lose it, even at the expense of getting better visual information, people’s performance gets worse.”

Integrating information

The researchers also tested several computer models of how the brain performs this task, and found that the only model that could accurately replicate their experimental results was one in which the brain measures speed and timing in two different areas and then combines them.

Previous studies suggest that the brain performs timing estimates in premotor areas of the cortex, which play a role in planning movement; speed, which usually requires visual input, is calculated in the visual cortex. These inputs are likely combined in the parts of the brain responsible for spatial attention and for tracking objects in space, processes that occur in the parietal cortex, Jazayeri says.
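One common way to think about merging two such cues is to weight each estimate by its reliability. The short sketch below is a hypothetical illustration of inverse-variance-weighted cue combination; the variable names, noise levels, and numbers are assumptions chosen for the example, not details of the model described in the study.

```python
import random

def combine(estimate_a, var_a, estimate_b, var_b):
    """Inverse-variance-weighted average of two noisy estimates."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    return (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)

# True time the ball spends behind the occluder (seconds); made up for the example.
true_occlusion_time = 0.5

# Speed-based estimate: inferred from perceived speed and occluder width.
# Assume it is the noisier of the two cues (higher variance).
speed_var = 0.04
speed_estimate = random.gauss(true_occlusion_time, speed_var ** 0.5)

# Timing-based estimate: usable when the visible path length matches the
# occluder width, so the occluded interval should equal the visible interval.
# Assume it is the more reliable cue (lower variance).
timing_var = 0.01
timing_estimate = random.gauss(true_occlusion_time, timing_var ** 0.5)

combined = combine(speed_estimate, speed_var, timing_estimate, timing_var)
print(f"speed-only: {speed_estimate:.3f} s  timing-only: {timing_estimate:.3f} s  "
      f"combined: {combined:.3f} s  (true: {true_occlusion_time} s)")
```

Under these assumed noise levels, the combined estimate has lower variance than either cue alone, which is one way to account for the jump in performance observed when both speed and timing information are available.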

In future studies, Jazayeri hopes to measure brain activity in animals trained to perform the same task that human subjects did in this study. This could shed further light on where this processing takes place and could also reveal what happens in the brain when it makes incorrect estimates.

The research was funded by the McGovern Institute for Brain Research.

Viral tool traces long-term neuron activity

For the past decade, neuroscientists have been using a modified version of the rabies virus to label neurons and trace the connections between them. Although this technique has proven very useful, it has one major drawback: The virus is toxic to cells and can’t be used for studies longer than about two weeks.

Researchers at MIT and the Allen Institute for Brain Science have now developed a new version of this virus that stops replicating once it infects a cell, allowing it to deliver its genetic cargo without harming the cell. Using this technique, scientists should be able to study the infected neurons for several months, enabling longer-term studies of neuron functions and connections.

“With the first-generation vectors, the virus is replicating like crazy in the infected neurons, and that’s not good for them,” says Ian Wickersham, a principal research scientist at MIT’s McGovern Institute for Brain Research and the senior author of the new study. “With the second generation, infected cells look normal and act normal for at least four months — which was as long as we tracked them — and probably for the lifetime of the animal.”

Soumya Chatterjee of the Allen Institute is the lead author of the paper, which appears in the March 5 issue of Nature Neuroscience.

Viral tracing

Rabies viruses are well-suited for tracing neural connections because they have evolved to spread from neuron to neuron through junctions known as synapses. The viruses can also spread from the terminals of axons back to the cell body of the same neuron. Neuroscientists can engineer the viruses to carry genes for fluorescent proteins, which are useful for imaging, or for light-sensitive proteins that can be used to manipulate neuron activity.

In 2007, Wickersham demonstrated that a modified version of the rabies virus could be used to trace connections only between directly connected neurons. Before that, researchers had been using the rabies virus for similar studies, but they were unable to keep it from spreading throughout the entire brain.

By deleting one of the virus’ five genes, which codes for a glycoprotein normally found on the surface of infected cells, Wickersham was able to create a version that can only spread to neurons in direct contact with the initially infected cell. This 2007 modification enabled scientists to perform “monosynaptic tracing,” a technique that allows them to identify connections between the infected neuron and any neuron that provides input to it.

This first generation of the modified rabies virus is also used for a related technique known as retrograde targeting, in which the virus can be injected into a cluster of axon terminals and then travel back to the cell bodies of those axons. This can help researchers discover the location of neurons that send impulses to the site of the virus injection.

Researchers at MIT have used retrograde targeting to identify populations of neurons of the basolateral amygdala that project to either the nucleus accumbens or the central medial amygdala. In that type of study, researchers can deliver optogenetic proteins that allow them to manipulate the activity of each population of cells. By selectively stimulating or shutting off these two separate cell populations, researchers can determine their functions.

Reduced toxicity

To create the second-generation version of this viral tool, Wickersham and his colleagues deleted the gene for the polymerase enzyme, which is necessary for transcribing viral genes. Without this gene, the virus becomes less harmful and infected cells can survive much longer. In the new study, the researchers found that neurons were still functioning normally for up to four months after infection.

“The second-generation virus enters a cell with its own few copies of the polymerase protein and is able to start transcribing its genes, including the transgene that we put into it. But then because it’s not able to make more copies of the polymerase, it doesn’t have this exponential takeover of the cell, and in practice it seems to be totally nontoxic,” Wickersham says.

The lack of polymerase also greatly reduces the expression of whichever gene the researchers engineer into the virus, so they need to employ a little extra genetic trickery to achieve their desired outcome. Instead of having the virus deliver a gene for a fluorescent or optogenetic protein, they engineer it to deliver a gene for an enzyme called Cre recombinase, which can delete target DNA sequences in the host cell’s genome.

This virus can then be used to study neurons in mice whose genomes have been engineered to include a gene that is turned on when the recombinase cuts out a small segment of DNA. Only a small amount of recombinase enzyme is needed to turn on the target gene, which could code for a fluorescent protein or another type of labeling molecule. The second-generation viruses can also work in regular mice if the researchers simultaneously inject another virus carrying a recombinase-activated gene for a fluorescent protein.

The new paper shows that the second-generation virus works well for retrograde labeling rather than for tracing synapses between cells, but the researchers have now begun using it for monosynaptic tracing as well.

The research was funded by the National Institute of Mental Health, the National Institute on Aging, and the National Eye Institute.

Edward Boyden named inaugural Y. Eva Tan Professor in Neurotechnology

Edward S. Boyden, a member of MIT’s McGovern Institute for Brain Research and the Media Lab, and an associate professor of brain and cognitive sciences and biological engineering at MIT, has been appointed the inaugural Y. Eva Tan Professor in Neurotechnology. The new professorship has been established at the McGovern Institute by K. Lisa Yang in honor of her daughter Y. Eva Tan.

“We are thrilled Lisa has made a generous investment in neurotechnology and the McGovern Institute by creating this new chair,” says Robert Desimone, director of the McGovern Institute. “Ed’s body of work has already transformed neuroscience and biomedicine, and this chair will help his team to further develop revolutionary tools that will have a profound impact on research worldwide.”

In 2017, Yang co-founded the Hock E. Tan and K. Lisa Yang Center for Autism Research at the McGovern Institute. The center catalyzes interdisciplinary and cutting-edge research into the genetic, biological, and brain bases of autism spectrum disorders. In late 2017, Yang grew the center with the establishment of the endowed J. Douglas Tan Postdoctoral Research Fund, which supports talented postdocs in the lab of Poitras Professor of Neuroscience Guoping Feng.

“I am excited to further expand the Hock E. Tan and K. Lisa Yang Center for Autism Research and to support Ed and his team’s critical work,” says Yang. “Novel technology is the driving force behind much-needed breakthroughs in brain research — not just for individuals with autism, but for those living with all brain disorders. My daughter Eva and I are greatly pleased to recognize Ed’s talent and to contribute toward his future successes.”

Yang’s daughter agrees. “I’m so pleased this professorship will have a significant and lasting impact on MIT’s pioneering work in neurotechnology,” says Tan. “My family and I have always believed that advances in technology are what make all scientific progress possible, and I’m overjoyed that we can help enable amazing discoveries in the Boyden Lab through Ed’s appointment to this chair.”

Boyden has pioneered the development of many transformative technologies that image, record, and manipulate complex systems, including optogenetics, expansion microscopy, and robotic patch clamping. He has received numerous awards for this work, including the Breakthrough Prize in Life Sciences (2016), the BBVA Foundation Frontiers of Knowledge Award (2015), the Carnegie Prize in Mind and Body Sciences (2015), the Grete Lundbeck European Brain Prize (2013), and the Perl-UNC Neuroscience prize (2011). Boyden is an elected member of the American Academy of Arts and Sciences and the National Academy of Inventors.

“I deeply appreciate the honor that comes with being named the first Y. Eva Tan Professor in Neurotechnology,” says Boyden. “This is a tremendous recognition of not only my team’s work, but the groundbreaking impact of the neurotechnology field.”

Boyden joined MIT in 2007 as an assistant professor at the Media Lab, and later was appointed as a joint professor in the departments of Brain and Cognitive Sciences and Biological Engineering and an investigator in the McGovern Institute. In 2011, he was named the Benesse Career Development Professor, and in 2013 he was awarded the AT&T Career Development Professorship. Seven years after arriving at MIT, he was awarded tenure. Boyden earned his BS and MEng from MIT in 1999 and his PhD in neuroscience from Stanford University in 2005.