The quest to understand intelligence

McGovern investigators study intelligence to answer a question of practical importance to both educators and computer scientists: Can intelligence be improved?

A nine-year-old girl, a contestant on a game show, is standing on stage. On a screen in front of her, there appears a twelve-digit number followed by a six-digit number. Her challenge is to divide the two numbers as fast as possible.

The timer begins. She is racing against three other contestants, two from China and one, like her, from Japan. Whoever answers first wins, but only if the answer is correct.

The show, called “The Brain,” is wildly popular in China, and attracts players who display their memory and concentration skills much the way American athletes demonstrate their physical skills in shows like “American Ninja Warrior.” After a few seconds, the girl slams the timer and gives the correct answer, faster than most people could have entered the numbers on a calculator.

The camera pans to a team of expert judges, including McGovern Director Robert Desimone, who had arrived in Nanjing just a few hours earlier. Desimone shakes his head in disbelief. The task appears to make extraordinary demands on working memory and rapid processing, but the girl explains that she solves it by visualizing an abacus in her mind—something she has practiced intensively.

The show raises an age-old question: What is intelligence, exactly?

The study of intelligence has a long and sometimes contentious history, but recently, neuroscientists have begun to dissect intelligence to understand the neural roots of the distinct cognitive skills that contribute to it. One key question is whether these skills can be improved individually with training and, if so, whether those improvements translate into overall intelligence gains. This research has practical implications for multiple domains, from brain science to education to artificial intelligence.

“The problem of intelligence is one of the great problems in science,” says Tomaso Poggio, a McGovern investigator and an expert on machine learning. “If we make progress in understanding intelligence, and if that helps us make progress in making ourselves smarter or in making machines that help us think better, we can solve all other problems more easily.”

Brain training 101

Many studies have reported positive results from brain training, and there is now a thriving industry devoted to selling tools and games such as Lumosity and BrainHQ. Yet the science behind brain training to improve intelligence remains controversial.

A case in point is the “n-back” working memory task, in which subjects are presented with a rapid sequence of letters or visual patterns and must report whether the current item matches the one presented n items earlier (one back, two back, and so on). The field of brain training received a boost in 2008 when a widely discussed study claimed that a few weeks of training on a challenging version of this task could boost fluid intelligence, the ability to solve novel problems. The report generated excitement and optimism when it first appeared, but several subsequent attempts to reproduce the findings have been unsuccessful.
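
For concreteness, a minimal sketch of how such a task can be scored appears below. The sequence, the choice of n, and the function names are illustrative stand-ins, not details of the study described above, which used a more demanding version of the task.

```python
# Minimal n-back scoring sketch: the subject sees a stream of items and, for each
# item, reports whether it matches the item presented n steps earlier. The sequence,
# n, and the "perfect responder" below are illustrative only.

def nback_targets(sequence, n):
    """True/False for each position: does it match the item n steps back?"""
    return [i >= n and sequence[i] == sequence[i - n] for i in range(len(sequence))]

def score(sequence, responses, n):
    """Fraction of trials on which the yes/no response was correct."""
    targets = nback_targets(sequence, n)
    return sum(r == t for r, t in zip(responses, targets)) / len(sequence)

seq = ["A", "B", "A", "C", "A", "C", "C"]
truth = nback_targets(seq, n=2)   # [False, False, True, False, True, True, False]
print(score(seq, truth, n=2))     # a perfect responder scores 1.0
```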

Among those unable to confirm the result was McGovern Investigator John Gabrieli, who recruited 60 young adults and trained them forty minutes a day for four weeks on an n-back task similar to that of the original study.

Six months later, Gabrieli re-evaluated the participants. “They got amazingly better at the difficult task they practiced. We have great imaging data showing changes in brain activation as they performed the task from before to after,” says Gabrieli. “And yet, that didn’t help them do better on any other cognitive abilities we could measure, and we measured a lot of things.”

The results don’t completely rule out the value of n-back training, says Gabrieli. It may be more effective in children, or in populations with a lower average intelligence than the individuals (mostly college students) who were recruited for Gabrieli’s study. The prospect that training might help disadvantaged individuals holds strong appeal. “If you could raise the cognitive abilities of a child with autism, or a child who is struggling in school, the data tells us that their life would be a step better,” says Gabrieli. “It’s something you would wish for people, especially for those where something is holding them back from the expression of their other abilities.”

Music for the brain

The concept of early intervention is now being tested by Desimone, who has teamed with Chinese colleagues at the recently established IDG/McGovern Institute at Beijing Normal University to explore the effect of music training on the cognitive abilities of young children.

The researchers recruited 100 children at a neighborhood kindergarten in Beijing, and provided them with a semester-long intervention, randomly assigning children either to music training or (as a control) to additional reading instruction. Unlike the so-called “Mozart Effect,” a scientifically unsubstantiated claim that passive listening to music increases intelligence, the new study requires active learning through daily practice. Several smaller studies have reported cognitive benefits from music training, and Desimone finds the idea plausible given that musical cognition involves several mental functions that are also implicated in intelligence.

The study is nearly complete, and results are expected to emerge within a few months. “We’re also collecting data on brain activity, so if we see improvements in the kids who had music training, we’ll also be able to ask about its neural basis,” says Desimone. The results may also have immediate practical implications, since the study design reflects decisions that schools must make in determining how children spend their time. “Many schools are deciding to cut their arts and music programs to make room for more instruction in academic core subjects, so our study is relevant to real questions schools are facing.”

Intelligent classrooms

In another school-based study, Gabrieli’s group recently raised questions about the benefits of “teaching to the test.” In this study, postdoc Amy Finn evaluated over 1,300 eighth-graders in the Boston public schools, some enrolled at traditional schools and others at charter schools that emphasize improving standardized test scores. The researchers wanted to find out whether the higher test scores were accompanied by improvements in the cognitive skills that are linked to intelligence. (Charter school students are selected by lottery, meaning that any results are unlikely to reflect preexisting differences between the two groups of students.)

As expected, charter school students showed larger improvements in test scores (relative to their scores from 4 years earlier). But when Finn and her colleagues measured key aspects of intelligence, such as working memory, processing speed, and reasoning, they found no difference between the students who enrolled in charter schools and those who did not. “You can look at these skills as the building blocks of cognition. They are useful for reasoning in a novel situation, an ability that is really important for learning,” says Finn. “It’s surprising that school practices that increase achievement don’t also increase these building blocks.”

Gabrieli remains optimistic that it will eventually be possible to design scientifically based interventions that can raise children’s abilities. Allyson Mackey, a postdoc in his lab, is studying the use of games to exercise cognitive skills in a classroom setting. As a graduate student at the University of California, Berkeley, Mackey had studied the effects of games such as “Chocolate Fix,” in which players match shapes and flavors, represented by color, to positions in a grid based on hints, such as, “the upper left position is strawberry.”

These games gave children practice at thinking through and solving novel problems, and at the end of Mackey’s study, the students—from second through fourth grades—showed improved measures of skills associated with intelligence. “Our results suggest that these cognitive skills are specifically malleable, although we don’t yet know what the active ingredients were in this program,” says Mackey, who speaks of the interventions as if they were drugs, with dosages, efficacies and potentially synergistic combinations to be explored. Mackey is now working to identify the most promising interventions—those that boost cognitive abilities, work well in the classroom, and are engaging for kids—to try in Boston charter schools. “It’s just the beginning of a three-year process to methodically test interventions to see if they work,” she says.

Brain training…for machines

While Desimone, Gabrieli and their colleagues look for ways to raise human intelligence, Poggio, who directs the MIT-based Center for Brains, Minds and Machines, is trying to endow computers with more human-like intelligence. Computers can already match human performance on some specific tasks such as chess. Programs such as Apple’s “Siri” can interpret human speech, not perfectly but well enough to be useful. Computer vision programs are approaching human performance at rapid object recognition, and one such system, developed by one of Poggio’s former postdocs, is now being used to assist car drivers. “The last decade has been pretty magical for intelligent computer systems,” says Poggio.

Like children, these intelligent systems learn from past experience. But compared to humans or other animals, machines tend to be very slow learners. For example, the vision system used in cars was trained by presenting it with millions of images—traffic lights, pedestrians, and so on—that had already been labeled by humans. “You would never present so many examples to a child,” says Poggio. “One of our big challenges is to understand how to make algorithms in computers learn with many fewer examples, to make them learn more like children do.”
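
As a rough illustration of the labeled-example paradigm Poggio describes, the sketch below fits a classifier to synthetic, pre-labeled data. It is a generic supervised-learning example, not the actual automotive system; every dataset and parameter in it is made up.

```python
# Generic sketch of supervised learning from human-labeled examples.
# The data here are synthetic stand-ins; a real driver-assistance system
# would be trained on millions of labeled road images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_examples, n_features = 5000, 64             # stand-ins for images and their features
X = rng.normal(size=(n_examples, n_features))
w = rng.normal(size=n_features)
y = (X @ w > 0).astype(int)                   # human-provided labels, e.g. "pedestrian" vs. "not"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```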

To accomplish this and other goals of machine intelligence, Poggio suspects that the work being done by Desimone, Gabrieli and others to understand the neural basis of intelligence will be critical. But he is not expecting any single breakthrough that will make everything fall into place. “A century ago,” he says, “scientists pondered the problem of life, as if ‘life’—what we now call biology—were just one problem. The science of intelligence is like biology. It’s a lot of problems, and a lot of breakthroughs will have to come before a machine appears that is as intelligent as we are.”

Ed Boyden receives 2018 Canada Gairdner International Award

Ed Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT, has been named a recipient of the 2018 Canada Gairdner International Award — Canada’s most prestigious scientific prize — for his role in the discovery of light-gated ion channels and optogenetics, a technology to control brain activity with light.

Boyden’s work has given neuroscientists the ability to precisely activate or silence brain cells to see how they contribute to — or possibly alleviate — brain disease. By optogenetically controlling brain cells, it has become possible to understand how specific patterns of brain activity might be used to quiet seizures, cancel out Parkinsonian tremors, and make other improvements to brain health.

Boyden is one of three scientists the Gairdner Foundation is honoring for this work. He shares the prize with Peter Hegemann from Humboldt University of Berlin and Karl Deisseroth from Stanford University.

“I am honored that the Gairdner Foundation has chosen our work in optogenetics for one of the most prestigious biology prizes awarded today,” says Boyden, who is also a member of MIT’s McGovern Institute for Brain Research and an associate professor in the Media Lab, the Department of Brain and Cognitive Sciences, and the Department of Biological Engineering at MIT. “It represents a great collaborative body of work, and I feel excited that my angle of thinking like a physicist was able to contribute to biology.”

Boyden, along with fellow laureate Karl Deisseroth, began brainstorming in 2000, while both were students, about how microbial opsins could be used to mediate optical control of neural activity. They went on to demonstrate the first optical control of neural activity using microbial opsins in the summer of 2004, when Boyden was at Stanford. At MIT, Boyden’s team developed the first optogenetic silencing (2007), the first effective optogenetic silencing in live mammals (2010), noninvasive optogenetic silencing (2014), multicolor optogenetic control (2014), and temporally precise single-cell optogenetic control (2017).

In addition to his work with optogenetics, Boyden has pioneered the development of many transformative technologies that image, record, and manipulate complex systems, including expansion microscopy and robotic patch clamping. He has received numerous awards for this work, including the Breakthrough Prize in Life Sciences (2016), the BBVA Foundation Frontiers of Knowledge Award (2015), the Carnegie Prize in Mind and Body Sciences (2015), the Grete Lundbeck European Brain Prize (2013), and the Perl-UNC Neuroscience prize (2011). Boyden is an elected member of the American Academy of Arts and Sciences and the National Academy of Inventors.

“We are thrilled Ed has been recognized with the prestigious Gairdner Award for his work in developing optogenetics,” says Robert Desimone, director of the McGovern Institute. “Ed’s body of work has transformed neuroscience and biomedicine, and I am exceedingly proud of the contributions he has made to MIT and to the greater community of scientists worldwide.”

The Canada Gairdner International Awards, created in 1959, are given annually to recognize and reward the achievements of medical researchers whose work contributes significantly to the understanding of human biology and disease. The awards provide a $100,000 (CDN) prize to each scientist for their work. Each year, the five honorees of the International Awards are selected after a rigorous two-part review, with the winners chosen by secret ballot by a medical advisory board composed of 33 eminent scientists from around the world.

Study finds early signatures of the social brain

Humans use an ability known as theory of mind every time they make inferences about someone else’s mental state — what the other person believes, what they want, or why they are feeling happy, angry, or scared.

Behavioral studies have suggested that children begin succeeding at a key measure of this ability, known as the false-belief task, around age 4. However, a new study from MIT has found that the brain network that controls theory of mind has already formed in children as young as 3.

The MIT study is the first to use functional magnetic resonance imaging (fMRI) to scan the brains of children as young as age 3 as they perform a task requiring theory of mind — in this case, watching a short animated movie involving social interactions between two characters.

“The brain regions involved in theory-of-mind reasoning are behaving like a cohesive network, with similar responses to the movie, by age 3, which is before kids tend to pass explicit false-belief tasks,” says Hilary Richardson, an MIT graduate student and the lead author of the study.

Rebecca Saxe, an MIT professor of brain and cognitive sciences and an associate member of MIT’s McGovern Institute for Brain Research, is the senior author of the paper, which appears in the March 12 issue of Nature Communications. Other authors are Indiana University graduate student Grace Lisandrelli and Wellesley College undergraduate Alexa Riobueno-Naylor.

Thinking about others

In 2003, Saxe first showed that theory of mind is seated in a brain region known as the right temporo-parietal junction (TPJ). The TPJ coordinates with other regions, including several parts of the prefrontal cortex, to form a network that is active when people think about the mental states of others.

The most commonly used test of theory of mind is the false-belief test, which probes whether the subject understands that other people may have beliefs that are not true. A classic example is the Sally-Anne test, in which a child is asked where Sally will look for a marble that she believes is in her own basket, but that Anne has moved to a different spot while Sally wasn’t looking. To pass, the subject must reply that Sally will look where she thinks the marble is (in her basket), not where it actually is.

Until now, neuroscientists had assumed that theory-of-mind studies involving fMRI brain scans could only be done with children at least 5 years of age, because the children need to be able to lie still in a scanner for about 20 minutes, listen to a series of stories, and answer questions about them.

Richardson wanted to study children younger than that, so that she could delve into what happens in the brain’s theory-of-mind network before the age of 5. To do that, she and Saxe came up with a new experimental protocol, which calls for scanning children while they watch a short movie that includes simple social interactions between two characters.

The animated movie they chose, called “Partly Cloudy,” has a plot that lends itself well to the experiment. It features Gus, a cloud who produces baby animals, and Peck, a stork whose job is to deliver the babies. Gus and Peck have some tense moments in their friendship because Gus produces baby alligators and porcupines, which are difficult to deliver, while other clouds create kittens and puppies. Peck is attacked by some of the fierce baby animals, and he isn’t sure if he wants to keep working for Gus.

“It has events that make you think about the characters’ mental states and events that make you think about their bodily states,” Richardson says.

The researchers spent about four years gathering data from 122 children ranging in age from 3 to 12 years. They scanned the entire brain, focusing on two distinct networks that have been well-characterized in adults: the theory-of-mind network and another network known as the pain matrix, which is active when thinking about another person’s physical state.

They also scanned 33 adults as they watched the movie so that they could identify scenes that provoke responses in either of those two networks. These scenes were dubbed theory-of-mind events and pain events. Scans of children revealed that even in 3-year-olds, the theory-of-mind and pain networks responded preferentially to the same events that the adult brains did.

“We see early signatures of this theory-of-mind network being wired up, so the theory-of-mind brain regions which we studied in adults are already really highly correlated with one another in 3-year-olds,” Richardson says.

The researchers also found that the responses in 3-year-olds were not as strong as in adults but gradually became stronger in the older children they scanned.

Patterns of development

The findings offer support for an existing hypothesis that says children develop theory of mind even before they can pass explicit false-belief tests, and that it continues to develop as they get older. Theory of mind encompasses many abilities, including more difficult skills such as understanding irony and assigning blame, which tend to develop later.

Another hypothesis is that children undergo a fairly sudden development of theory of mind around the age of 4 or 5, reflected by their success in the false-belief test. The MIT data, which do not show any dramatic changes in brain activity when children begin to succeed at the false-belief test, do not support that theory.

“Scientists have focused really intensely on the changes in children’s theory of mind that happen around age 4, when children get a better understanding of how people can have wrong or biased or misinformed beliefs,” Saxe says. “But really important changes in how we think about other minds happen long before, and long after, this famous landmark. Even 2-year-olds try to figure out why different people like different things — this might be why they get so interested in talking about everybody’s favorite colors. And even 9-year-olds are still learning about irony and negligence. Theory of mind seems to undergo a very long continuous developmental process, both in kids’ behaviors and in their brains.”

Now that the researchers have data on the typical trajectory of theory of mind development, they hope to scan the brains of autistic children to see whether there are any differences in how their theory-of-mind networks develop. Saxe’s lab is also studying children whose first exposure to language was delayed, to test the effects of early language on the development of theory of mind.

The research was funded by the National Science Foundation, the National Institutes of Health, and the David and Lucile Packard Foundation.

Study reveals how the brain tracks objects in motion

Catching a bouncing ball or hitting a ball with a racket requires estimating when the ball will arrive. Neuroscientists have long thought that the brain does this by calculating the speed of the moving object. However, a new study from MIT shows that the brain’s approach is more complex.

The new findings suggest that in addition to tracking speed, the brain incorporates information about the rhythmic patterns of an object’s movement: for example, how long it takes a ball to complete one bounce. In their new study, the researchers found that people make much more accurate estimates when they have access to information about both the speed of a moving object and the timing of its rhythmic patterns.

“People get really good at this when they have both types of information available,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences and a member of MIT’s McGovern Institute for Brain Research. “It’s like having input from multiple senses. The statistical knowledge that we have about the world we’re interacting with is richer when we use multiple senses.”

Jazayeri is the senior author of the study, which appears in the Proceedings of the National Academy of Sciences the week of March 5. The paper’s lead author is MIT graduate student Chia-Jung Chang.

Objects in motion

Much of the information we process about objects moving around us comes from visual tracking of the objects. Our brains can use information about an object’s speed and the distance it has to cover to calculate when it will reach a certain point. Jazayeri, who studies how the brain keeps time, was intrigued by the fact that much of the movement we see also has a rhythmic element, such as the bouncing of a ball.

“It occurred to us to ask, how can it be that the brain doesn’t use this information? It would seem very strange if all this richness of additional temporal structure is not part of the way we evaluate where things are around us and how things are going to happen,” Jazayeri says.

There are many other sensory processing tasks for which the brain uses multiple sources of input. For example, to interpret language, we use both the sound we hear and the movement of the speaker’s lips, if we can see them. When we touch an object, we estimate its size based on both what we see and what we feel with our fingers.

In the case of perceiving object motion, teasing out the role of rhythmic timing, as opposed to speed, can be difficult. “I can ask someone to do a task, but then how do I know if they’re using speed or they’re using time, if both of them are always available?” Jazayeri says.

To overcome that, the researchers devised a task in which they could control how much timing information was available, and they measured the performance of human volunteers on this task.

During the task, the study participants watched a ball as it moved in a straight line. After traveling some distance, the ball went behind an obstacle, so the participants could no longer see it. They were asked to press a button at the time when they expected the ball to reappear.

Performance varied greatly depending on how much of the ball’s path was visible before it went behind the obstacle. If the participants saw the ball travel a very short distance before disappearing, they did not do well. As the distance before disappearance became longer, they were better able to calculate the ball’s speed, so their performance improved but eventually plateaued.

After that plateau, there was a significant jump in performance when the distance before disappearance grew until it was exactly the same as the width of the obstacle. In that case, when the path seen before disappearance was equal to the path the ball traveled behind the obstacle, the participants improved dramatically, because they knew that the time spent behind the obstacle would be the same as the time it took to reach the obstacle.
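
A quick back-of-the-envelope check with made-up numbers shows why that geometry helps: at constant speed, equal distances take equal times, so the interval seen before disappearance directly predicts the interval behind the occluder.

```python
# Toy illustration of the occlusion geometry (all numbers are invented).
speed = 10.0          # arbitrary units of distance per second
visible_path = 5.0    # distance the ball travels in view
occluder_width = 5.0  # distance it travels while hidden

t_visible = visible_path / speed    # 0.5 s seen before disappearance
t_hidden = occluder_width / speed   # 0.5 s until reappearance
assert t_visible == t_hidden        # equal distances at constant speed take equal times,
                                    # so reproducing the visible interval times the response
```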

When the distance traveled to reach the obstacle became longer than the width of the obstacle, performance dropped again.

“It’s so important to have this extra information available, and when we have it, we use it,” Jazayeri says. “Temporal structure is so important that when you lose it, even at the expense of getting better visual information, people’s performance gets worse.”

Integrating information

The researchers also tested several computer models of how the brain performs this task, and found that the only model that could accurately replicate their experimental results was one in which the brain measures speed and timing in two different areas and then combines them.

Previous studies suggest that the brain performs timing estimates in premotor areas of the cortex, which play a role in planning movement; speed, which usually requires visual input, is calculated in the visual cortex. These inputs are likely combined in parts of the brain responsible for spatial attention and tracking objects in space, functions associated with the parietal cortex, Jazayeri says.
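
A standard way to formalize this kind of integration is reliability-weighted cue combination, in which each estimate is weighted by its precision. The sketch below is a generic textbook version of that idea, offered only as an illustration; it is not the specific model tested in the paper, and all of the numbers are invented.

```python
# Generic reliability-weighted combination of two noisy estimates of the same
# quantity (here, the time until the ball reappears). Illustrative only; not
# the specific model from the study.
def combine(est_speed, var_speed, est_timing, var_timing):
    w_speed = (1 / var_speed) / (1 / var_speed + 1 / var_timing)
    combined = w_speed * est_speed + (1 - w_speed) * est_timing
    combined_var = 1 / (1 / var_speed + 1 / var_timing)  # never larger than either input variance
    return combined, combined_var

# A noisy speed-based estimate of 0.60 s and a more reliable timing-based
# estimate of 0.50 s yield a combined estimate near 0.52 s with lower variance.
print(combine(0.60, 0.04, 0.50, 0.01))   # approximately (0.52, 0.008)
```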

In future studies, Jazayeri hopes to measure brain activity in animals trained to perform the same task that human subjects did in this study. This could shed further light on where this processing takes place and could also reveal what happens in the brain when it makes incorrect estimates.

The research was funded by the McGovern Institute for Brain Research.

Viral tool traces long-term neuron activity

For the past decade, neuroscientists have been using a modified version of the rabies virus to label neurons and trace the connections between them. Although this technique has proven very useful, it has one major drawback: The virus is toxic to cells and can’t be used for studies longer than about two weeks.

Researchers at MIT and the Allen Institute for Brain Science have now developed a new version of this virus that stops replicating once it infects a cell, allowing it to deliver its genetic cargo without harming the cell. Using this technique, scientists should be able to study the infected neurons for several months, enabling longer-term studies of neuron functions and connections.

“With the first-generation vectors, the virus is replicating like crazy in the infected neurons, and that’s not good for them,” says Ian Wickersham, a principal research scientist at MIT’s McGovern Institute for Brain Research and the senior author of the new study. “With the second generation, infected cells look normal and act normal for at least four months — which was as long as we tracked them — and probably for the lifetime of the animal.”

Soumya Chatterjee of the Allen Institute is the lead author of the paper, which appears in the March 5 issue of Nature Neuroscience.

Viral tracing

Rabies viruses are well-suited for tracing neural connections because they have evolved to spread from neuron to neuron through junctions known as synapses. The viruses can also spread from the terminals of axons back to the cell body of the same neuron. Neuroscientists can engineer the viruses to carry genes for fluorescent proteins, which are useful for imaging, or for light-sensitive proteins that can be used to manipulate neuron activity.

In 2007, Wickersham demonstrated that a modified version of the rabies virus could be used to trace synapses between only directly connected neurons. Before that, researchers had been using the rabies virus for similar studies, but they were unable to keep it from spreading throughout the entire brain.

By deleting one of the virus’ five genes, which codes for a glycoprotein normally found on the surface of infected cells, Wickersham was able to create a version that can only spread to neurons in direct contact with the initially infected cell. This 2007 modification enabled scientists to perform “monosynaptic tracing,” a technique that allows them to identify connections between the infected neuron and any neuron that provides input to it.

This first generation of the modified rabies virus is also used for a related technique known as retrograde targeting, in which the virus can be injected into a cluster of axon terminals and then travel back to the cell bodies of those axons. This can help researchers discover the location of neurons that send impulses to the site of the virus injection.

Researchers at MIT have used retrograde targeting to identify populations of neurons of the basolateral amygdala that project to either the nucleus accumbens or the central medial amygdala. In that type of study, researchers can deliver optogenetic proteins that allow them to manipulate the activity of each population of cells. By selectively stimulating or shutting off these two separate cell populations, researchers can determine their functions.

Reduced toxicity

To create the second-generation version of this viral tool, Wickersham and his colleagues deleted the gene for the polymerase enzyme, which is necessary for transcribing viral genes. Without this gene, the virus becomes less harmful and infected cells can survive much longer. In the new study, the researchers found that neurons were still functioning normally for up to four months after infection.

“The second-generation virus enters a cell with its own few copies of the polymerase protein and is able to start transcribing its genes, including the transgene that we put into it. But then because it’s not able to make more copies of the polymerase, it doesn’t have this exponential takeover of the cell, and in practice it seems to be totally nontoxic,” Wickersham says.

The lack of polymerase also greatly reduces the expression of whichever gene the researchers engineer into the virus, so they need to employ a little extra genetic trickery to achieve their desired outcome. Instead of having the virus deliver a gene for a fluorescent or optogenetic protein, they engineer it to deliver a gene for an enzyme called Cre recombinase, which can delete target DNA sequences in the host cell’s genome.

This virus can then be used to study neurons in mice whose genomes have been engineered to include a gene that is turned on when the recombinase cuts out a small segment of DNA. Only a small amount of recombinase enzyme is needed to turn on the target gene, which could code for a fluorescent protein or another type of labeling molecule. The second-generation viruses can also work in regular mice if the researchers simultaneously inject another virus carrying a recombinase-activated gene for a fluorescent protein.

The new paper shows that the second-generation virus works well for retrograde labeling, rather than for tracing synapses between cells, but the researchers have now begun using it for monosynaptic tracing as well.

The research was funded by the National Institute of Mental Health, the National Institute on Aging, and the National Eye Institute.

Edward Boyden named inaugural Y. Eva Tan Professor in Neurotechnology

Edward S. Boyden, a member of MIT’s McGovern Institute for Brain Research and the Media Lab, and an associate professor of brain and cognitive sciences and biological engineering at MIT, has been appointed the inaugural Y. Eva Tan Professor in Neurotechnology. The new professorship has been established at the McGovern Institute by K. Lisa Yang in honor of her daughter Y. Eva Tan.

“We are thrilled Lisa has made a generous investment in neurotechnology and the McGovern Institute by creating this new chair,” says Robert Desimone, director of the McGovern Institute. “Ed’s body of work has already transformed neuroscience and biomedicine, and this chair will help his team to further develop revolutionary tools that will have a profound impact on research worldwide.”

In 2017, Yang co-founded the Hock E. Tan and K. Lisa Yang Center for Autism Research at the McGovern Institute. The center catalyzes interdisciplinary and cutting-edge research into the genetic, biological, and brain bases of autism spectrum disorders. In late 2017, Yang grew the center with the establishment of the endowed J. Douglas Tan Postdoctoral Research Fund, which supports talented postdocs in the lab of Poitras Professor of Neuroscience Guoping Feng.

“I am excited to further expand the Hock E. Tan and K. Lisa Yang Center for Autism Research and to support Ed and his team’s critical work,” says Yang. “Novel technology is the driving force behind much-needed breakthroughs in brain research — not just for individuals with autism, but for those living with all brain disorders. My daughter Eva and I are greatly pleased to recognize Ed’s talent and to contribute toward his future successes.”

Yang’s daughter agrees. “I’m so pleased this professorship will have a significant and lasting impact on MIT’s pioneering work in neurotechnology,” says Tan. “My family and I have always believed that advances in technology are what make all scientific progress possible, and I’m overjoyed that we can help enable amazing discoveries in the Boyden Lab through Ed’s appointment to this chair.”

Boyden has pioneered the development of many transformative technologies that image, record, and manipulate complex systems, including optogenetics, expansion microscopy, and robotic patch clamping. He has received numerous awards for this work, including the Breakthrough Prize in Life Sciences (2016), the BBVA Foundation Frontiers of Knowledge Award (2015), the Carnegie Prize in Mind and Body Sciences (2015), the Grete Lundbeck European Brain Prize (2013), and the Perl-UNC Neuroscience prize (2011). Boyden is an elected member of the American Academy of Arts and Sciences and the National Academy of Inventors.

“I deeply appreciate the honor that comes with being named the first Y. Eva Tan Professor in Neurotechnology,” says Boyden. “This is a tremendous recognition of not only my team’s work, but the groundbreaking impact of the neurotechnology field.”

Boyden joined MIT in 2007 as an assistant professor at the Media Lab, and later was appointed as a joint professor in the departments of Brain and Cognitive Sciences and Biological Engineering and an investigator in the McGovern Institute. In 2011, he was named the Benesse Career Development Professor, and in 2013 he was awarded the AT&T Career Development Professorship. Seven years after arriving at MIT, he was awarded tenure. Boyden earned his BS and MEng from MIT in 1999 and his PhD in Neuroscience from Stanford University in 2005.

Seeing the brain’s electrical activity

Neurons in the brain communicate via rapid electrical impulses that allow the brain to coordinate behavior, sensation, thoughts, and emotion. Scientists who want to study this electrical activity usually measure these signals with electrodes inserted into the brain, a task that is notoriously difficult and time-consuming.

MIT researchers have now come up with a completely different approach to measuring electrical activity in the brain, which they believe will prove much easier and more informative. They have developed a light-sensitive protein that can be embedded into neuron membranes, where it emits a fluorescent signal that indicates how much voltage a particular cell is experiencing. This could allow scientists to study how neurons behave, millisecond by millisecond, as the brain performs a particular function.

“If you put an electrode in the brain, it’s like trying to understand a phone conversation by hearing only one person talk,” says Edward Boyden, an associate professor of biological engineering and brain and cognitive sciences at MIT. “Now we can record the neural activity of many cells in a neural circuit and hear them as they talk to each other.”

Boyden, who is also a member of MIT’s Media Lab, McGovern Institute for Brain Research, and Koch Institute for Integrative Cancer Research, and an HHMI-Simons Faculty Scholar, is the senior author of the study, which appears in the Feb. 26 issue of Nature Chemical Biology. The paper’s lead authors are MIT postdocs Kiryl Piatkevich and Erica Jung.

Imaging voltage

For the past two decades, scientists have sought a way to monitor electrical activity in the brain through imaging instead of recording with electrodes. Finding fluorescent molecules that can be used for this kind of imaging has been difficult; not only do the proteins have to be very sensitive to changes in voltage, they must also respond quickly and be resistant to photobleaching (fading that can be caused by exposure to light).

Boyden and his colleagues came up with a new strategy for finding a molecule that would fulfill everything on this wish list: They built a robot that could screen millions of proteins, generated through a process called directed protein evolution, for the traits they wanted.

“You take a gene, then you make millions and millions of mutant genes, and finally you pick the ones that work the best,” Boyden says. “That’s the way that evolution works in nature, but now we’re doing it in the lab with robots so we can pick out the genes with the properties we want.”

The researchers made 1.5 million mutated versions of a light-sensitive protein called QuasAr2, which was previously engineered by Adam Cohen’s lab at Harvard University. (That work, in turn, was based on the molecule Arch, which the Boyden lab reported in 2010.) The researchers put each of those genes into mammalian cells (one mutant per cell), then grew the cells in lab dishes and used an automated microscope to take pictures of the cells. The robot was able to identify cells with proteins that met the criteria the researchers were looking for, the most important being the protein’s location within the cell and its brightness.

The research team then selected five of the best candidates and did another round of mutation, generating 8 million new candidates. The robot picked out the seven best of these, and the researchers narrowed those down to a single top performer, which they called Archon1.
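
The overall strategy is an iterative mutate-screen-select loop, caricatured in the sketch below. Everything in it (the scoring function, the mutation step, the library sizes) is a schematic placeholder for the robotic imaging pipeline described above, not the actual screening code.

```python
# Schematic of a directed-evolution screen: mutate, measure, keep the best, repeat.
# score(), mutate(), and the library sizes are toy placeholders for the robotic
# imaging readout (membrane localization and brightness) described above.
import random

def mutate(parent):
    """Toy stand-in for introducing a random mutation."""
    return parent + random.gauss(0, 1)

def score(variant):
    """Toy stand-in for the imaging readout; higher is better."""
    return -abs(variant - 3.0)                # pretend the ideal variant sits at 3.0

def evolve(parents, library_size, keep, rounds):
    for _ in range(rounds):
        library = [mutate(random.choice(parents)) for _ in range(library_size)]
        library.sort(key=score, reverse=True)
        parents = library[:keep]              # the screen keeps only the top performers
    return parents[0]

random.seed(0)
print(evolve(parents=[0.0], library_size=100_000, keep=5, rounds=2))
```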

Mapping the brain

A key feature of Archon1 is that once the gene is delivered into a cell, the Archon1 protein embeds itself into the cell membrane, which is the best place to obtain an accurate measurement of a cell’s voltage.

Using this protein, the researchers were able to measure electrical activity in mouse brain tissue, as well as in brain cells of zebrafish larvae and the worm Caenorhabditis elegans. The latter two organisms are transparent, so it is easy to expose them to light and image the resulting fluorescence. When the cells are exposed to a certain wavelength of reddish-orange light, the protein sensor emits a longer wavelength of red light, and the brightness of the light corresponds to the voltage of that cell at a given moment in time.

The researchers also showed that Archon1 can be used in conjunction with light-sensitive proteins that are commonly used to silence or stimulate neuron activity — these are known as optogenetic proteins — as long as those proteins respond to colors other than red. In experiments with C. elegans, the researchers demonstrated that they could stimulate one neuron using blue light and then use Archon1 to measure the resulting effect in neurons that receive input from that cell.

Cohen, the Harvard professor who developed the predecessor to Archon1, says the new MIT protein brings scientists closer to the goal of imaging millisecond-timescale electrical activity in live brains.

“Traditionally, it has been excruciatingly labor-intensive to engineer fluorescent voltage indicators, because each mutant had to be cloned individually and then tested through a slow, manual patch-clamp electrophysiology measurement. The Boyden lab developed a very clever high-throughput screening approach to this problem,” says Cohen, who was not involved in this study. “Their new reporter looks really great in fish and worms and in brain slices. I’m eager to try it in my lab.”

The researchers are now working on using this technology to measure brain activity in mice as they perform various tasks, which Boyden believes should allow them to map neural circuits and discover how they produce specific behaviors.

“We will be able to watch a neural computation happen,” he says. “Over the next five years or so we’re going to try to solve some small brain circuits completely. Such results might take a step toward understanding what a thought or a feeling actually is.”

The research was funded by the HHMI-Simons Faculty Scholars Program, the IET Harvey Prize, the MIT Media Lab, the New York Stem Cell Foundation Robertson Award, the Open Philanthropy Project, John Doerr, the Human Frontier Science Program, the Department of Defense, the National Science Foundation, and the National Institutes of Health, including an NIH Director’s Pioneer Award.

Researchers advance CRISPR-based tool for diagnosing disease

The team that first unveiled the rapid, inexpensive, highly sensitive CRISPR-based diagnostic tool called SHERLOCK has greatly enhanced the tool’s power, and has developed a miniature paper test that allows results to be seen with the naked eye — without the need for expensive equipment.

The SHERLOCK team developed a simple paper strip to display test results for a single genetic signature, borrowing from the visual cues common in pregnancy tests. After dipping the paper strip into a processed sample, a line appears, indicating whether the target molecule was detected or not.

This new feature helps pave the way for field use, such as during an outbreak. The team has also increased the sensitivity of SHERLOCK and added the capacity to accurately quantify the amount of target in a sample and to test for multiple targets at once. Taken together, these advancements accelerate SHERLOCK’s ability to quickly and precisely detect genetic signatures — including pathogens and tumor DNA — in samples.

Described today in Science, the innovations build on the team’s earlier version of SHERLOCK (shorthand for Specific High Sensitivity Reporter unLOCKing) and add to a growing field of research that harnesses CRISPR systems for uses beyond gene editing. The work, led by researchers from the Broad Institute of MIT and Harvard and from MIT, has the potential for a transformative effect on research and global public health.

“SHERLOCK provides an inexpensive, easy-to-use, and sensitive diagnostic method for detecting nucleic acid material — and that can mean a virus, tumor DNA, and many other targets,” said senior author Feng Zhang, a core institute member of the Broad Institute, an investigator at the McGovern Institute, and the James and Patricia Poitras ’63 Professor in Neuroscience and associate professor in the departments of Brain and Cognitive Sciences and Biological Engineering at MIT. “The SHERLOCK improvements now give us even more diagnostic information and put us closer to a tool that can be deployed in real-world applications.”

The researchers previously showcased SHERLOCK’s utility for a range of applications. In the new study, the team uses SHERLOCK to detect cell-free tumor DNA in blood samples from lung cancer patients and to detect synthetic Zika and dengue virus simultaneously, in addition to other demonstrations.

Clear results on a paper strip

“The new paper readout for SHERLOCK lets you see whether your target was present in the sample, without instrumentation,” said co-first author Jonathan Gootenberg, a Harvard graduate student in Zhang’s lab as well as the lab of Broad core institute member Aviv Regev. “This moves us much closer to a field-ready diagnostic.”

The team envisions a wide range of uses for SHERLOCK, thanks to its versatility in nucleic acid target detection. “The technology demonstrates potential for many health care applications, including diagnosing infections in patients and detecting mutations that confer drug resistance or cause cancer, but it can also be used for industrial and agricultural applications where monitoring steps along the supply chain can reduce waste and improve safety,” added Zhang.

At the core of SHERLOCK’s success is a CRISPR-associated protein called Cas13, which can be programmed to bind to a specific piece of RNA. Cas13’s target can be any genetic sequence, including viral genomes, genes that confer antibiotic resistance in bacteria, or mutations that cause cancer. In certain circumstances, once Cas13 locates and cuts its specified target, the enzyme goes into overdrive, indiscriminately cutting other RNA nearby. To create SHERLOCK, the team harnessed this “off-target” activity and turned it to their advantage, engineering the system to be compatible with both DNA and RNA.

SHERLOCK’s diagnostic potential relies on additional strands of synthetic RNA that are used to create a signal after being cleaved. Cas13 will chop up this RNA after it hits its original target, releasing the signaling molecule, which results in a readout that indicates the presence or absence of the target.

Multiple targets and increased sensitivity

The SHERLOCK platform can now be adapted to test for multiple targets. SHERLOCK initially could only detect one nucleic acid sequence at a time, but now one analysis can give fluorescent signals for up to four different targets at once — meaning less sample is required to run through diagnostic panels. For example, the new version of SHERLOCK can determine in a single reaction whether a sample contains Zika or dengue virus particles, which both cause similar symptoms in patients. The platform uses Cas13 and Cas12a (previously known as Cpf1) enzymes from different species of bacteria to generate the additional signals.

SHERLOCK’s second iteration also uses an additional CRISPR-associated enzyme to amplify its detection signal, making the tool more sensitive than its predecessor. “With the original SHERLOCK, we were detecting a single molecule in a microliter, but now we can achieve 100-fold greater sensitivity,” explained co-first author Omar Abudayyeh, an MIT graduate student in Zhang’s lab at Broad. “That’s especially important for applications like detecting cell-free tumor DNA in blood samples, where the concentration of your target might be extremely low. This next generation of features help make SHERLOCK a more precise system.”
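
For a rough sense of scale (a back-of-the-envelope conversion, not a figure from the paper), one molecule per microliter corresponds to a concentration of roughly two attomolar, so a 100-fold gain in sensitivity reaches into the low zeptomolar range:

```python
# Back-of-the-envelope conversion of "one molecule per microliter" to molarity.
AVOGADRO = 6.022e23        # molecules per mole
volume_liters = 1e-6       # one microliter

molarity = 1 / (AVOGADRO * volume_liters)                   # ~1.7e-18 M, roughly 2 attomolar
print(f"1 molecule/uL ~ {molarity:.2e} M")
print(f"100-fold more sensitive ~ {molarity / 100:.2e} M")  # ~1.7e-20 M, low zeptomolar
```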

The authors have made their reagents available to the academic community through Addgene, and their software tools can be accessed via the Zhang lab website and GitHub.

This study was supported in part by the National Institutes of Health and the Defense Threat Reduction Agency.

Beyond the 30 Million Word Gap

At the McGovern Institute for Brain Research at MIT, John Gabrieli’s lab is studying how exposure to language may influence brain function in children.

Back-and-forth exchanges boost children’s brain response to language

A landmark 1995 study found that children from higher-income families hear about 30 million more words during their first three years of life than children from lower-income families. This “30-million-word gap” correlates with significant differences in tests of vocabulary, language development, and reading comprehension.

MIT cognitive scientists have now found that conversation between an adult and a child appears to change the child’s brain, and that this back-and-forth conversation is actually more critical to language development than the word gap. In a study of children between the ages of 4 and 6, they found that differences in the number of “conversational turns” accounted for a large portion of the differences in brain physiology and language skills observed among the children. This finding applied to children regardless of parental income or education.

The findings suggest that parents can have considerable influence over their children’s language and brain development by simply engaging them in conversation, the researchers say.

“The important thing is not just to talk to your child, but to talk with your child. It’s not just about dumping language into your child’s brain, but to actually carry on a conversation with them,” says Rachel Romeo, a graduate student at Harvard and MIT and the lead author of the paper, which appears in the Feb. 14 online edition of Psychological Science.

Using functional magnetic resonance imaging (fMRI), the researchers identified differences in the brain’s response to language that correlated with the number of conversational turns. In children who experienced more conversation, Broca’s area, a part of the brain involved in speech production and language processing, was much more active while they listened to stories. This brain activation then predicted children’s scores on language assessments, fully explaining the income-related differences in children’s language skills.

“The really novel thing about our paper is that it provides the first evidence that family conversation at home is associated with brain development in children. It’s almost magical how parental conversation appears to influence the biological growth of the brain,” says John Gabrieli, the Grover M. Hermann Professor in Health Sciences and Technology, a professor of brain and cognitive sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Beyond the word gap

Before this study, little was known about how the “word gap” might translate into differences in the brain. The MIT team set out to find these differences by comparing the brain scans of children from different socioeconomic backgrounds.

As part of the study, the researchers used a system called Language Environment Analysis (LENA) to record every word spoken or heard by each child. Parents who agreed to enroll their children in the study were told to have them wear the recorder for two days, from the time they woke up until they went to bed.

The recordings were then analyzed by a computer program that yielded three measurements: the number of words spoken by the child, the number of words spoken to the child, and the number of times that the child and an adult took a “conversational turn” — a back-and-forth exchange initiated by either one.
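
LENA’s own algorithms are proprietary, but the basic bookkeeping behind a “conversational turn” count can be sketched simply: walk through a time-ordered list of speaker-tagged utterances and count adult-child alternations. Everything in the sketch below, including the data format and the five-second response window, is an illustrative assumption rather than the actual LENA implementation.

```python
# Illustrative sketch of counting conversational turns from a time-ordered list
# of (speaker, start_time_in_seconds) utterances. The turn definition and the
# 5-second response window are assumptions, not LENA's actual algorithm.
def count_turns(utterances, max_gap=5.0):
    turns = 0
    for (prev_speaker, prev_t), (speaker, t) in zip(utterances, utterances[1:]):
        alternated = {prev_speaker, speaker} == {"adult", "child"}
        responded_in_time = (t - prev_t) <= max_gap
        if alternated and responded_in_time:
            turns += 1
    return turns

day = [("adult", 0.0), ("child", 2.0), ("adult", 3.5), ("adult", 30.0), ("child", 33.0)]
print(count_turns(day))   # 3 adult-child exchanges counted as turns
```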

The researchers found that the number of conversational turns correlated strongly with the children’s scores on standardized tests of language skill, including vocabulary, grammar, and verbal reasoning. The number of conversational turns also correlated with more activity in Broca’s area, when the children listened to stories while inside an fMRI scanner.

These correlations were much stronger than those between the number of words heard and language scores, and between the number of words heard and activity in Broca’s area.

This result aligns with other recent findings, Romeo says, “but there’s still a popular notion that there’s this 30-million-word gap, and we need to dump words into these kids — just talk to them all day long, or maybe sit them in front of a TV that will talk to them. However, the brain data show that it really seems to be this interactive dialogue that is more strongly related to neural processing.”

The researchers believe interactive conversation gives children more of an opportunity to practice their communication skills, including the ability to understand what another person is trying to say and to respond in an appropriate way.

While children from higher-income families were exposed to more language on average, children from lower-income families who experienced a high number of conversational turns had language skills and Broca’s area brain activity similar to those of children who came from higher-income families.

“In our analysis, the conversational turn-taking seems like the thing that makes a difference, regardless of socioeconomic status. Such turn-taking occurs more often in families from a higher socioeconomic status, but children coming from families with lesser income or parental education showed the same benefits from conversational turn-taking,” Gabrieli says.

Taking action

The researchers hope their findings will encourage parents to engage their young children in more conversation. Although this study was done in children ages 4 to 6, this type of turn-taking can also be done with much younger children, by making sounds back and forth or making faces, the researchers say.

“One of the things we’re excited about is that it feels like a relatively actionable thing because it’s specific. That doesn’t mean it’s easy for less educated families, under greater economic stress, to have more conversation with their child. But at the same time, it’s a targeted, specific action, and there may be ways to promote or encourage that,” Gabrieli says.

Roberta Golinkoff, a professor of education at the University of Delaware School of Education, says the new study presents an important finding that adds to the evidence that it’s not just the number of words children hear that is significant for their language development.

“You can talk to a child until you’re blue in the face, but if you’re not engaging with the child and having a conversational duet about what the child is interested in, you’re not going to give the child the language processing skills that they need,” says Golinkoff, who was not involved in the study. “If you can get the child to participate, not just listen, that will allow the child to have a better language outcome.”

The MIT researchers now hope to study the effects of possible interventions that incorporate more conversation into young children’s lives. These could include technological assistance, such as computer programs that can converse or electronic reminders to parents to engage their children in conversation.

The research was funded by the Walton Family Foundation, the National Institute of Child Health and Human Development, a Harvard Mind Brain Behavior Grant, and a gift from David Pun Chan.